Migrating Looker Dashboards to D23.io Without Losing LookML Logic
Complete guide to migrating Looker dashboards to D23.io while preserving LookML logic, metrics, and semantic layer integrity. Includes metric-parity testing.
Table of Contents
- Why Migrate from Looker to D23.io?
- Understanding LookML and the Semantic Layer
- Pre-Migration Assessment and Planning
- Translating LookML Models to dbt’s Semantic Layer
- Rebuilding Dashboards in Apache Superset
- Metric-Parity Testing: The Critical Step
- Common Pitfalls and How to Avoid Them
- Migration Timeline and Staffing
- Post-Migration Optimisation and Handover
- Next Steps and Getting Support
Why Migrate from Looker to D23.io?
Looker is powerful. Google Cloud’s business intelligence platform handles complex LookML models, derived tables, and semantic layer definitions that sit at the heart of many enterprise analytics stacks. But Looker isn’t right for everyone. Cost, licensing constraints, vendor lock-in, and the desire for open-source flexibility drive organisations—especially in Australia and the Asia-Pacific region—to explore alternatives.
D23.io, built on Apache Superset and dbt’s semantic layer, offers a compelling alternative. You get an open-source analytics platform, transparent pricing, and the ability to own your data stack end-to-end. The semantic layer approach mirrors Looker’s business logic abstraction, but with the flexibility of dbt and the modern data stack.
However, migration isn’t trivial. Your LookML models encode years of business logic: derived tables, measures, dimensions, filters, and row-level security rules. Lose that logic during migration, and your dashboards become unreliable. Metrics diverge. Stakeholders lose trust.
This guide walks you through a battle-tested pattern for translating LookML into dbt’s semantic layer, rebuilding dashboards in Apache Superset (D23.io’s foundation), and validating that every metric matches your source of truth.
Understanding LookML and the Semantic Layer
What LookML Does
LookML is Looker’s declarative modelling language. It sits between your raw database schema and your dashboards, defining:
- Explores: Virtual tables that users query
- Views: Reusable definitions of dimensions and measures
- Derived tables: SQL-based transformations computed at query time or materialised
- Measures: Aggregations (sums, counts, averages) with custom formatting and drill-down paths
- Dimensions: Raw or computed attributes with data types and groupings
- Filters and access grants: Row-level security and query constraints
When a user clicks a dashboard tile or filters a report, Looker translates that interaction into SQL, applies the LookML rules, and returns results. The semantic layer is the abstraction that lets business users query data without writing SQL.
dbt’s Semantic Layer: The Modern Equivalent
dbt’s semantic layer (powered by MetricFlow) provides similar abstraction, but built on open standards. Instead of LookML, you define:
- Models: dbt tables or views (your transformed data)
- Metrics: Reusable business definitions (revenue, customer count, churn rate) with dimensions and filters
- Entities: Relationships between tables (customer, order, product)
- Semantic manifests: YAML-based definitions that expose metrics to downstream tools
The key difference: dbt’s semantic layer is tool-agnostic. Your metrics are defined once and consumed by Superset, Tableau, Looker itself, or any tool that speaks the semantic layer API. You’re not locked into one vendor.
When you migrate from Looker to D23.io, you’re translating LookML models into dbt models, and LookML measures into dbt metrics. The logic remains; the format changes.
Pre-Migration Assessment and Planning
Step 1: Audit Your Looker Instance
Before you write a single line of dbt code, understand what you’re migrating.
Document your LookML:
- Export all .view.lkml and .model.lkml files from your Git repository
- Count your explores, views, derived tables, and measures
- Identify which dashboards are LookML-based (scheduled, embedded, or parameterised) versus user-defined
- Flag row-level security (RLS) rules, access grants, and custom filters
- List all custom derived tables and their refresh schedules
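To get these counts quickly, a small script over a local checkout can do a first pass. This sketch uses simple regex matching, so it will miss `extends` and refinements, but it is enough for sizing the migration:

```python
import re
from pathlib import Path

# Patterns for top-level LookML declarations. Regex matching is a rough
# first pass: it does not resolve `extends` or refinements, but it is
# sufficient for an initial inventory.
PATTERNS = {
    "views": re.compile(r"^\s*view:\s*\w+", re.MULTILINE),
    "explores": re.compile(r"^\s*explore:\s*\w+", re.MULTILINE),
    "dimensions": re.compile(r"^\s*dimension:\s*\w+", re.MULTILINE),
    "measures": re.compile(r"^\s*measure:\s*\w+", re.MULTILINE),
    "derived_tables": re.compile(r"^\s*derived_table:", re.MULTILINE),
}

def audit_lookml(repo_path: str) -> dict:
    """Count LookML declarations across all .lkml files in a repo."""
    counts = {name: 0 for name in PATTERNS}
    for path in Path(repo_path).rglob("*.lkml"):
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            counts[name] += len(pattern.findall(text))
    return counts
```

Run it against your Looker project checkout before planning phases; the derived-table count in particular is a good proxy for migration effort.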
Assess dashboard complexity:
- How many dashboards do you have? (Typical enterprises: 50–500)
- Which dashboards are actively used? (Focus migration effort there)
- How many tiles per dashboard? (Complex dashboards with 20+ tiles need careful planning)
- Are dashboards scheduled or embedded in applications?
- Do dashboards rely on Looker’s drill-down or custom actions?
Identify data sources:
- Which databases does Looker query? (Snowflake, BigQuery, Redshift, PostgreSQL, etc.)
- Are you using Looker’s persistent derived tables (PDTs), or are transformations in dbt already?
- What’s your current data refresh cadence?
Step 2: Define Success Criteria
Before migration, agree on what “success” looks like:
- Metric parity: Every measure in Looker produces identical results in Superset (within rounding)
- Dashboard feature parity: All filters, drill-downs, and interactivity work in Superset
- Performance: Query times don’t degrade (or improve)
- User adoption: Training time and learning curve for your team
- Timeline: How many weeks can you afford for this project?
Most organisations we work with at PADISO define success as: “90% of dashboards migrated, all critical metrics validated, zero data discrepancies, and team trained within 8 weeks.”
Step 3: Choose Your Migration Path
There are three common approaches:
Path A: Big Bang (All at Once)
- Migrate all dashboards simultaneously
- Pros: Clean break, no dual maintenance
- Cons: High risk, requires more upfront planning, longer downtime
- Best for: Small instances (< 20 dashboards), low user count, or when Looker is new
Path B: Phased (Wave-Based)
- Migrate by department or dashboard group
- Pros: Reduced risk, time to learn and iterate, staged user adoption
- Cons: Longer project duration, dual-system maintenance
- Best for: Most organisations; balances risk and pragmatism
Path C: Parallel (Dual Running)
- Run Looker and Superset side-by-side for weeks or months
- Pros: Zero risk, users can validate before cutover
- Cons: Highest cost, requires maintaining two systems, data sync challenges
- Best for: Mission-critical analytics, highly regulated industries
We typically recommend Path B (Phased) for most organisations. Start with 2–3 non-critical dashboards, validate the process, then scale.
Translating LookML Models to dbt’s Semantic Layer
Step 1: Map Views to dbt Models
Every LookML view becomes a dbt model. If your LookML view is a simple reference to a table, the dbt model is straightforward:
# LookML
view: users {
  sql_table_name: public.users ;;

  dimension: id {
    primary_key: yes
    type: number
    sql: ${TABLE}.id ;;
  }

  dimension: email {
    type: string
    sql: ${TABLE}.email ;;
  }
}
Becomes:
# dbt models/users.yml
models:
  - name: users
    columns:
      - name: id
        description: "Primary key"
        tests:
          - unique
          - not_null
      - name: email
        description: "User email"
The dbt model is simpler because dbt assumes you’re working with clean, well-structured tables. If your LookML view includes derived logic (SQL calculations), move that into a dbt model’s {{ ref() }} chain.
Step 2: Translate Derived Tables to dbt Models
This is where complexity lives. LookML’s derived tables are SQL queries that create intermediate views. In dbt, these become staging models or marts.
# LookML derived table
view: order_summary {
  derived_table: {
    sql: SELECT
        user_id,
        COUNT(*) as order_count,
        SUM(amount) as total_spent
      FROM public.orders
      GROUP BY user_id ;;
  }

  dimension: user_id { type: number }
  measure: order_count { type: count }
  measure: total_spent { type: sum }
}
Becomes:
-- dbt models/staging/stg_order_summary.sql
SELECT
  user_id,
  COUNT(*) as order_count,
  SUM(amount) as total_spent
FROM {{ ref('orders') }}
GROUP BY user_id
Then define the semantic layer on top:
# dbt models/staging/stg_order_summary.yml
models:
  - name: stg_order_summary
    columns:
      - name: user_id
      - name: order_count
      - name: total_spent
Step 3: Convert Measures to dbt Metrics
This is critical. LookML measures are the business logic your dashboards depend on. dbt metrics expose those definitions to Superset.
# LookML
measure: revenue {
  type: sum
  sql: ${orders.amount} ;;
  filters: {
    field: orders.status
    value: "completed"
  }
  drill_fields: [orders.id, users.email, orders.created_date]
}
Becomes:
# dbt metrics.yml
metrics:
  - name: revenue
    description: "Total revenue from completed orders"
    type: sum
    sql: amount
    timestamp: created_date
    time_grains: [day, week, month, quarter, year]
    dimensions:
      - order_id
      - user_id
      - status
    filters:
      - field: status
        operator: "="
        value: "completed"
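Note that the `metrics:` block above follows the older dbt metrics format. On dbt 1.6+ with MetricFlow, the same definition splits into a semantic model plus a metric; a hedged sketch (field names per the dbt semantic layer docs, and the `stg_orders` model name is illustrative):

```yaml
# dbt models/semantic/orders.yml (dbt 1.6+ / MetricFlow style)
semantic_models:
  - name: orders
    model: ref('stg_orders')
    defaults:
      agg_time_dimension: created_date
    entities:
      - name: order
        type: primary
        expr: id
    dimensions:
      - name: status
        type: categorical
      - name: created_date
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_amount
        agg: sum
        expr: amount

metrics:
  - name: revenue
    description: "Total revenue from completed orders"
    label: Revenue
    type: simple
    type_params:
      measure: order_amount
    filter: |
      {{ Dimension('order__status') }} = 'completed'
```

Check your dbt version before committing to a format; the legacy `type`/`sql` metric spec is not accepted by MetricFlow.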
Key translations:
- type: sum → type: sum
- type: count_distinct → type: count_distinct
- type: average → type: average
- Filters in LookML become filter definitions in dbt metrics
- Drill fields become dimensions in dbt metrics
Step 4: Handle Row-Level Security (RLS)
Looker’s access grants apply row-level filters based on user attributes. dbt’s semantic layer doesn’t natively handle RLS, but Superset does.
In dbt: Define your metrics without RLS logic. Let Superset handle access control.
In Superset: Create row-level security rules in the UI or via the REST API. Map Superset users to database roles or attributes, then filter results accordingly.
For example, if your LookML has:
access_grant: sales_region {
  user_attribute: region
  allowed_values: ["EMEA", "APAC", "Americas"]
}

explore: sales {
  sql_always_where: ${sales.region} = '{{ _user_attributes["region"] }}' ;;
}
In Superset, you’d create a rule:
- For users with attribute region = "APAC", filter dashboard results to sales.region = 'APAC'
This approach is actually cleaner—RLS is explicit in Superset, not hidden in LookML.
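If you manage many rules, scripting them against Superset’s REST API beats clicking through the UI. Recent Superset releases expose a `/api/v1/rowlevelsecurity/` endpoint; the payload fields below match current documentation but have changed between versions, so treat this payload builder as a sketch and verify against your instance’s `/swagger` docs:

```python
# Sketch: assembling the JSON body for POST /api/v1/rowlevelsecurity/
# in a recent Superset release. Field names vary by version -- confirm
# against your instance's API docs before relying on this.

def build_rls_payload(name: str, table_ids: list[int], role_ids: list[int],
                      clause: str) -> dict:
    """Assemble a row-level security rule payload for the Superset API."""
    return {
        "name": name,
        "filter_type": "Regular",  # "Regular" restricts rows; "Base" exempts roles
        "tables": table_ids,       # Superset dataset IDs the rule applies to
        "roles": role_ids,         # Superset role IDs the rule targets
        "clause": clause,          # SQL predicate appended to every query
        "group_key": "",           # rules sharing a group_key are OR'd together
    }

payload = build_rls_payload(
    name="apac_sales_only",
    table_ids=[12],               # illustrative dataset ID
    role_ids=[5],                 # illustrative role ID
    clause="region = 'APAC'",
)
# POST it with an authenticated session, e.g.:
# requests.post(f"{SUPERSET_URL}/api/v1/rowlevelsecurity/",
#               json=payload, headers={"Authorization": f"Bearer {token}"})
```

Keeping rule definitions in a script also gives you version control over your RLS policy, which auditors appreciate.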
Step 5: Validate dbt Semantic Layer
Before you touch a dashboard, ensure your dbt models and metrics are correct.
# Test your dbt project
dbt test
# Generate and inspect the semantic manifest
dbt parse
Run sample queries against your dbt models to confirm they produce the same results as your LookML views. This is your first metric-parity checkpoint.
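The checkpoint itself can be a simple diff: run the Looker-generated SQL and the dbt model’s SQL for one metric, then compare rows keyed by dimension. A minimal sketch, assuming each query returns `(dimension, value)` tuples:

```python
def diff_results(looker_rows, dbt_rows):
    """Return (key, looker_value, dbt_value) for every dimension whose
    value differs between the two result sets, including rows present
    in only one side."""
    looker = dict(looker_rows)
    dbt = dict(dbt_rows)
    mismatches = []
    for key in sorted(set(looker) | set(dbt)):
        if looker.get(key) != dbt.get(key):
            mismatches.append((key, looker.get(key), dbt.get(key)))
    return mismatches
```

An empty return value is your parity pass; anything else goes straight into the discrepancy log.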
Rebuilding Dashboards in Apache Superset
Now that your semantic layer is in place, rebuild your dashboards in Superset. This isn’t a one-click export; it’s a deliberate process.
Step 1: Set Up Superset and Connect dbt
First, ensure Superset can query your dbt semantic layer. Superset supports dbt through the semantic layer API or direct database connections.
Option A: dbt Semantic Layer API
- Requires dbt Cloud (paid tier)
- Superset queries metrics and dimensions through the API
- Most flexible, future-proof approach
Option B: Direct Database Connection
- Superset connects to your data warehouse (Snowflake, BigQuery, etc.)
- You expose dbt models as tables or views
- Simpler to set up, but less semantic abstraction in Superset
We recommend Option A for organisations with dbt Cloud. For details on setting up Superset with dbt, refer to the $50K D23.io consulting engagement guide, which covers architecture, SSO, semantic layer integration, and dashboard delivery in a 6-week fixed-fee engagement.
Step 2: Recreate Dashboards Tile by Tile
For each Looker dashboard:
- Open the Looker dashboard and note every tile: chart type, dimensions, measures, filters, drill-downs
- Create a new Superset dashboard with the same name and layout
- Recreate each tile:
- Identify the LookML explore and measures used
- Find the corresponding dbt metric or model in Superset
- Choose the chart type (bar, line, table, etc.)
- Apply the same dimensions and filters
- Validate the results match Looker (see metric-parity testing below)
Example:
Looker dashboard tile: “Monthly Revenue by Region”
- Explore: orders
- Measure: revenue (sum of amount, filtered to completed orders)
- Dimension: orders.created_date (grouped by month), users.region
- Filter: orders.created_date >= 2024-01-01
In Superset:
- Create a new chart
- Select the orders table (or dbt model)
- Metrics: SUM(amount) with filter status = 'completed'
- Group by: DATE_TRUNC(created_date, 'month'), region
- Filter: created_date >= 2024-01-01
- Visualisation: Bar chart, grouped by region
Step 3: Replicate Filters and Interactivity
Looker dashboards often have global filters (date range, region, product) that apply to multiple tiles. Superset supports this through:
- Native filters: Dropdowns, date pickers, multi-select lists
- Filter binding: Link filters to chart columns
- Cascading filters: Filter A populates options for Filter B
For each Looker filter:
- Create a native filter in Superset
- Bind it to the relevant chart columns
- Test that filtering works across all tiles
Step 4: Handle Drill-Downs and Custom Actions
Looker’s drill-down paths let users click a value and dive deeper. Superset supports this through:
- Cross-filtering: Click a bar to filter other charts
- Drill-down links: Click a value to open a detail dashboard
- Custom links: Use Superset’s URL template feature to link to external systems
For each Looker drill-down:
- Identify the target dashboard or detail view
- In Superset, create a custom link with URL templates
- Test the navigation
Step 5: Optimise Performance
Superset queries your data warehouse directly. If your Looker dashboards used PDTs (persistent derived tables) for performance, you may need to materialise dbt models in Superset.
- Materialised views: Create database views from dbt models for frequently queried data
- Caching: Enable Superset’s query cache for expensive queries
- Aggregation tables: Pre-compute common aggregations in your warehouse
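In dbt, materialisation is a one-line config change. For example, promoting the staging model shown earlier from a view to a table so Superset queries hit precomputed results:

```sql
-- dbt models/staging/stg_order_summary.sql
{{ config(materialized='table') }}

SELECT
  user_id,
  COUNT(*) as order_count,
  SUM(amount) as total_spent
FROM {{ ref('orders') }}
GROUP BY user_id
```

Rebuilds then happen on your dbt schedule rather than at dashboard load time, which is the closest analogue to Looker’s PDT refresh behaviour.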
For a deeper dive into Superset architecture and optimisation, see agentic AI plus Apache Superset, which covers how to integrate intelligent agents with Superset for natural-language queries.
Metric-Parity Testing: The Critical Step
This is where most migrations fail. You rebuild dashboards, but metrics don’t match Looker. Users lose trust. You’re forced to revert.
Metric-parity testing is non-negotiable. For every measure in Looker, you must validate that the same calculation in Superset produces identical results.
Step 1: Create a Metric-Parity Test Suite
Build a spreadsheet or database table that documents every metric:
| Metric Name | LookML Explore | Measure | Dimensions | Filters | Expected Result (Looker) | Actual Result (Superset) | Match? | Notes |
|---|---|---|---|---|---|---|---|---|
| Revenue | orders | revenue | created_date (month), region | status = completed | $1,234,567 | $1,234,567 | ✓ | |
| Customer Count | customers | count | region | status = active | 5,432 | 5,432 | ✓ | |
| Churn Rate | customers | churn_rate | cohort_month | | 0.08 | 0.08 | ✓ | |
Step 2: Run Queries in Both Systems
For each metric, run the query in Looker and Superset side-by-side.
In Looker:
- Open the explore
- Add the measure and dimensions
- Apply filters
- Note the result (and the SQL Looker generates)
In Superset:
- Create a chart with the same dimensions and measures
- Apply the same filters
- Note the result (and the SQL Superset generates)
Compare:
- Do the numbers match exactly?
- If not, are the differences due to rounding, NULL handling, or logic differences?
- Check the SQL both systems generated. Are the GROUP BY clauses identical? Are the WHERE clauses the same?
Step 3: Debug Discrepancies
If results don’t match, investigate:
NULL handling: LookML and dbt may treat NULLs differently in aggregations. Check your dbt model’s NULL handling and Superset’s aggregation settings.
Joins: Verify that dbt models join tables the same way as LookML explores. A one-to-many join can cause double-counting if not handled carefully.
Filtering: Ensure filters are applied at the right stage of the query. In LookML, some filters apply before aggregation (WHERE), others after (HAVING).
Data types: If a dimension is a string in Looker but numeric in dbt, comparisons may fail. Ensure data types match.
Rounding and formatting: Looker may round to 2 decimals; Superset may show 4. Agree on precision upfront.
Step 4: Automate Parity Testing
Once you’ve validated a few metrics manually, automate the rest. Write a SQL query that compares Looker results to dbt/Superset results:
-- Query Looker's exported results (via API or CSV export)
WITH looker_results AS (
SELECT metric_name, dimension_value, result_value
FROM looker_export_table
),
-- Query dbt/Superset results (run through dbt so {{ ref() }} compiles)
dbt_results AS (
SELECT 'revenue' as metric_name, DATE_TRUNC(created_date, 'month') as dimension_value, SUM(amount) as result_value
FROM {{ ref('orders') }}
WHERE status = 'completed'
GROUP BY DATE_TRUNC(created_date, 'month')
)
SELECT
COALESCE(l.metric_name, d.metric_name) as metric_name,
COALESCE(l.dimension_value, d.dimension_value) as dimension_value,
l.result_value as looker_result,
d.result_value as dbt_result,
CASE
WHEN l.result_value = d.result_value THEN 'MATCH'
WHEN ABS(l.result_value - d.result_value) / NULLIF(l.result_value, 0) < 0.01 THEN 'MATCH (within 1%)'
ELSE 'MISMATCH'
END as status
FROM looker_results l
FULL OUTER JOIN dbt_results d
ON l.metric_name = d.metric_name
AND l.dimension_value = d.dimension_value
ORDER BY status DESC, metric_name;
Run this query after each dbt model change. If any metrics diverge, investigate immediately.
Step 5: Document Acceptable Variance
Some variance is acceptable:
- Rounding differences: If Looker rounds to 2 decimals and Superset to 4, that’s fine
- Floating-point precision: Very large numbers may differ in the last digit
- Timestamp precision: If one system uses UTC and the other local time, results may differ slightly
Document what variance is acceptable for your organisation. Typically: results must match to 4 significant figures or within 0.1% of the Looker value.
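That policy can be encoded directly in your parity harness. A sketch of the “4 significant figures or within 0.1%” rule suggested above (tune the thresholds to your own policy):

```python
import math

def within_tolerance(looker_value: float, superset_value: float) -> bool:
    """Pass if the two values agree to 4 significant figures
    or are within 0.1% of the Looker value."""
    if looker_value == superset_value:
        return True
    if looker_value == 0:
        return superset_value == 0
    # Relative-difference check covers the 0.1% rule.
    rel_diff = abs(looker_value - superset_value) / abs(looker_value)
    if rel_diff <= 0.001:
        return True
    # Agreement to 4 significant figures: round both and compare.
    def round_sig(x: float, sig: int = 4) -> float:
        return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)
    return round_sig(looker_value) == round_sig(superset_value)
```

Wire this into the FULL OUTER JOIN comparison query’s output (or run it over exported CSVs) so every metric gets the same pass/fail standard.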
Common Pitfalls and How to Avoid Them
Pitfall 1: Forgetting Hidden Filters in LookML
Looker allows hidden filters on explores—filters applied automatically without user visibility. These often encode critical business logic (e.g., “always exclude test orders”).
How to avoid:
- Audit every LookML explore for hidden filters
- Document them explicitly in dbt model comments
- Test with and without these filters to ensure dbt models apply them correctly
Pitfall 2: Mishandling Many-to-Many Joins
If your LookML explores join tables with many-to-many relationships, derived tables may double-count. dbt doesn’t hide this complexity—you must handle it explicitly.
How to avoid:
- Review all LookML join relationships
- In dbt, deduplicate explicitly before joining (DISTINCT, window-function dedup, or a pre-aggregated staging model on the many side)
- Test aggregations with and without duplicates
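The double-counting failure mode is easy to reproduce. In this sqlite3 sketch (hypothetical orders and order_items tables), joining the many side before aggregating inflates revenue; pre-aggregating or deduplicating first, as a dbt staging model would, fixes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amount REAL);
    CREATE TABLE order_items (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 100.0);               -- one order...
    INSERT INTO order_items VALUES (1, 'A'), (1, 'B');  -- ...two items
""")

# Naive join: the order row fans out to two item rows, doubling revenue.
naive = conn.execute("""
    SELECT SUM(o.amount) FROM orders o
    JOIN order_items i ON i.order_id = o.id
""").fetchone()[0]   # 200.0 -- wrong

# Fix: collapse the many side first (a dbt staging model would do this),
# so each order contributes its amount exactly once.
correct = conn.execute("""
    SELECT SUM(o.amount) FROM orders o
    JOIN (SELECT DISTINCT order_id FROM order_items) i
      ON i.order_id = o.id
""").fetchone()[0]   # 100.0 -- right
```

Looker’s symmetric aggregates hide this fan-out automatically; in dbt you must handle it yourself, which is exactly why this pitfall bites during migration.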
Pitfall 3: Losing Drill-Down Context
Looker’s drill-down paths preserve context. If a user clicks “Q1 Revenue” and drills into daily revenue, the date range is automatically filtered. Superset requires explicit configuration.
How to avoid:
- Document every drill-down path in Looker
- In Superset, set up cross-filtering and drill-down links
- Test drill-downs end-to-end
Pitfall 4: Ignoring Timezone Issues
Looker and dbt may interpret timestamps differently, especially if your warehouse is in UTC and your users are in AEST (Australian Eastern Standard Time).
How to avoid:
- Standardise all timestamps to UTC in dbt
- In Superset, configure the user timezone
- Test date-based metrics (e.g., “orders today”) in both systems
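The boundary case is concrete: with Python’s standard zoneinfo, an order placed on the morning of 2 June in Sydney still belongs to 1 June in UTC, so “orders today” differs depending on which convention each system truncates in:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# An order placed at 8am on 2 June in Sydney (AEST, UTC+10 in June)...
order_local = datetime(2024, 6, 2, 8, 0, tzinfo=ZoneInfo("Australia/Sydney"))

# ...is still 1 June in UTC.
order_utc = order_local.astimezone(timezone.utc)

local_day = order_local.date().isoformat()   # '2024-06-02'
utc_day = order_utc.date().isoformat()       # '2024-06-01'

# If dbt truncates dates in UTC while Looker truncated in local time,
# this order lands in different "days" in the two systems -- and daily
# metrics will disagree at every midnight boundary.
```

This is why the parity suite should always include at least one metric grouped by day, validated near a timezone boundary.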
Pitfall 5: Underestimating Training Time
Your team knows Looker. Superset is different. Filters work differently, chart types are different, the UI is different.
How to avoid:
- Budget 2–4 hours of training per user
- Create a “Superset for Looker users” guide
- Run a pilot with 2–3 power users before full rollout
For guidance on training and change management, see AI automation agency Sydney, which covers how to onboard teams to new tools and processes.
Pitfall 6: Not Planning for Maintenance
After migration, who owns the dbt semantic layer? Who updates metrics when business logic changes? Who handles performance issues?
How to avoid:
- Assign a “metrics owner” (typically a data engineer or analytics engineer)
- Create a runbook for common tasks: adding metrics, updating filters, debugging query performance
- Set up monitoring and alerting for query failures
Migration Timeline and Staffing
Typical Project Structure
A phased migration for a mid-market organisation (50–100 dashboards, 100+ metrics) typically takes 8–12 weeks with the following team:
Core Team:
- 1 Analytics Engineer (dbt, SQL)
- 1 Data Engineer (data warehouse, infrastructure)
- 1 Product Manager or Analytics Lead (requirements, prioritisation)
- 1 QA or Data Analyst (metric validation, testing)
Part-Time Support:
- 2–3 Dashboard Owners (from your business teams, 5–10 hours/week)
- 1 Security/Compliance Lead (if SOC 2 or ISO 27001 certification is required)
Week-by-Week Breakdown
Weeks 1–2: Discovery & Planning
- Audit Looker instance (dashboards, explores, measures, filters)
- Define migration phases and success criteria
- Set up dbt project structure and Superset environment
Weeks 3–4: Semantic Layer Development
- Build dbt models from LookML views
- Convert LookML measures to dbt metrics
- Create metric-parity test suite
Weeks 5–6: Phase 1 Dashboard Migration
- Rebuild 5–10 non-critical dashboards in Superset
- Validate metrics (parity testing)
- Gather user feedback
Weeks 7–8: Phase 2 Dashboard Migration
- Rebuild next batch of dashboards
- Refine processes based on Phase 1 learnings
- Scale to remaining dashboards
Weeks 9–10: Validation & Optimisation
- Complete metric-parity testing across all dashboards
- Optimise query performance
- Address edge cases and bugs
Weeks 11–12: Training & Cutover
- Train users on Superset
- Establish support process
- Decommission Looker (or run in parallel if needed)
Budget Estimate
For a mid-market migration (50–100 dashboards):
- Internal team: 2–3 FTE for 10 weeks = 80–120 person-days
- External partner (recommended): $80K–$150K for full delivery
- Infrastructure: Superset hosting, dbt Cloud, data warehouse costs (varies)
Total: $150K–$250K for a complete, validated migration.
If you’re in Sydney or Australia and need a partner to lead this engagement, PADISO’s fractional CTO and AI strategy services can handle the technical heavy lifting, including dbt semantic layer design, Superset architecture, and metric-parity validation.
Post-Migration Optimisation and Handover
Step 1: Performance Tuning
After migration, measure query performance. If dashboards are slower than Looker, optimise:
In dbt:
- Materialise frequently-used models as tables (not ephemeral)
- Add indexes to dimension columns
- Pre-compute common aggregations
In Superset:
- Enable query caching
- Use Superset’s native query feature for complex SQL
- Reduce chart refresh frequency
In your data warehouse:
- Add clustering or partitioning to large tables
- Create materialised views for common queries
Step 2: Set Up Monitoring and Alerting
Monitor the health of your analytics stack:
- dbt: Track model run times, test failures, and refresh schedules
- Superset: Monitor query performance, failed queries, and user activity
- Data warehouse: Track query costs, slow queries, and storage growth
Set up alerts for:
- dbt test failures (indicates data quality issues)
- Superset query timeouts (indicates performance problems)
- Unexpected metric changes (indicates potential data issues)
Step 3: Document the Migration
Create a living document:
- LookML to dbt translation guide: How each LookML pattern maps to dbt
- Metric definitions: Business logic for each metric in the semantic layer
- Dashboard inventory: Which Looker dashboards map to which Superset dashboards
- Runbook: How to add new metrics, update filters, debug issues
- Training materials: Screenshots, videos, FAQs for Superset users
Step 4: Establish Governance
Define who owns what:
- Metrics owner: Owns dbt semantic layer, approves new metrics
- Dashboard owner: Owns Superset dashboards, responds to user requests
- Data owner: Owns data quality, investigates discrepancies
- Security/compliance owner: Ensures RLS, audit logging, and compliance requirements are met
For organisations pursuing SOC 2 or ISO 27001 certification, ensure your Superset instance is configured for audit logging and access control. PADISO’s security audit and compliance services cover Vanta implementation and audit-readiness for analytics platforms.
Step 5: Plan for Ongoing Maintenance
After handover, budget for:
- Metric updates: When business logic changes (e.g., new customer segments, pricing models)
- Dashboard updates: When users request new visualisations or filters
- Performance optimisation: As data grows, queries may slow down
- Tool upgrades: Keep dbt, Superset, and your data warehouse up-to-date
Next Steps and Getting Support
If You’re Ready to Migrate
- Start with discovery: Audit your Looker instance using the checklist in the pre-migration section
- Define your scope: How many dashboards? What’s your timeline?
- Assemble your team: Data engineer, analytics engineer, product manager, QA
- Build your dbt semantic layer: Start with 3–5 critical metrics and validate parity
- Migrate dashboards in phases: Begin with non-critical dashboards to learn and iterate
- Invest in testing: Metric-parity testing is non-negotiable
- Train your users: Budget time for training and support
Common Questions
Q: Can I migrate without dbt? A: Technically yes, but not recommended. dbt’s semantic layer is the bridge between LookML and Superset. Without it, you lose semantic abstraction and maintainability. If you don’t have dbt already, set it up as part of this project.
Q: How long does migration take? A: For a mid-market organisation, 8–12 weeks with a dedicated team. Small organisations (< 20 dashboards) can do it in 4–6 weeks. Large enterprises may need 16+ weeks.
Q: Do I need to hire external help? A: Not mandatory, but recommended. A partner with migration experience can compress timelines, reduce risk, and ensure best practices. At PADISO, we’ve migrated 50+ analytics stacks; our fractional CTO and platform engineering services can lead this work for you.
Q: What about row-level security (RLS)? A: Superset handles RLS natively. You define rules in Superset’s UI or API; they apply automatically to all dashboards. This is actually cleaner than LookML’s access grants.
Q: Can I keep Looker running during migration? A: Yes, we recommend it. Run Looker and Superset in parallel for 2–4 weeks, validate metrics, then cutover. This reduces risk and lets users validate before you decommission Looker.
Q: What if I find metrics don’t match? A: This is normal. Debug systematically: check SQL, join logic, NULL handling, and filters. Document the discrepancy, fix the dbt model, and retest. Most discrepancies are resolved within a few hours.
Getting Help
If you’re in Sydney or Australia and need a partner to lead your migration:
PADISO offers:
- Fractional CTO support: We’ll design your dbt semantic layer and Superset architecture
- Hands-on co-build: Our engineers work alongside your team to migrate dashboards and validate metrics
- Fixed-fee engagements: Transparent pricing for defined scope (e.g., $50K for a 6-week rollout with training)
- Security and compliance: If you need SOC 2 or ISO 27001 audit-readiness, we handle Vanta implementation and documentation
We’ve successfully migrated analytics stacks for seed-stage startups, Series-B companies, and mid-market enterprises across Australia and the Asia-Pacific region. Our approach is outcome-led: we focus on shipping working dashboards, validating metrics, and getting your team trained—not on process for its own sake.
For a detailed breakdown of what a D23.io migration engagement includes, see the $50K D23.io consulting engagement guide. If you’re exploring agentic AI plus Superset to let non-technical users query dashboards naturally, we can integrate that into your migration plan.
Additional Resources
For deeper context on BI migrations and best practices:
- Google Cloud’s Looker best practices guide covers migration considerations and LookML patterns
- Official Looker documentation on moving dashboards between instances explains Looker-to-Looker migration (useful reference for understanding LookML portability)
- Squareshift’s guide on data integrity during BI migrations covers principles applicable to any BI migration
- EntransAI’s Looker to Power BI migration guide provides step-by-step patterns for translating LookML
- Tasman Analytics’ Looker to Omni migration playbook offers a phased approach and LookML conversion strategies
- Analytics8’s BI migration best practices outline considerations for any BI platform migration
- Thoughtworks’ insights on Looker migration strategy cover enterprise migration approaches
- Looker community discussion on dashboard migration provides peer insights and troubleshooting
Summary
Migrating from Looker to D23.io (Apache Superset with dbt’s semantic layer) is achievable without losing LookML logic—but it requires discipline, planning, and rigorous testing.
The core steps:
- Audit your Looker instance thoroughly. Understand every explore, measure, derived table, and filter.
- Translate LookML to dbt: Views become models, measures become metrics, derived tables become dbt transformations.
- Rebuild dashboards in Superset tile by tile, replicating filters, drill-downs, and interactivity.
- Run metric-parity tests for every measure. If results don’t match Looker, debug immediately.
- Optimise and train: Tune performance, set up monitoring, and get your team comfortable with Superset.
Timeline and cost:
- Mid-market migration: 8–12 weeks, $150K–$250K
- Small migration: 4–6 weeks, $50K–$100K
- Large enterprise: 16+ weeks, $250K+
The payoff:
- Open-source stack you control
- Transparent pricing (no Looker licensing surprises)
- Flexibility to integrate agentic AI, custom automation, and other tools
- Cleaner semantic layer (dbt metrics are more maintainable than LookML measures)
If you’re in Sydney or Australia and need a partner to lead this work, PADISO can help. We’ve migrated 50+ analytics stacks, validated thousands of metrics, and trained hundreds of users. Our fractional CTO and platform engineering services are designed for exactly this kind of technical heavy lifting.
Ready to get started? Reach out to PADISO for a free 30-minute consultation on your migration scope and timeline.