
Apache Superset for Operational Dashboards in Manufacturing

Design and operate real-time manufacturing dashboards with Apache Superset. Data modelling, dashboard design, and proven rollout patterns.

The PADISO Team · 2026-06-01

Table of Contents

  1. Why Apache Superset for Manufacturing Operations
  2. Understanding Your Data: Modelling for Operational Dashboards
  3. Building the Foundation: Data Architecture and Connectivity
  4. Dashboard Design Principles for Manufacturing
  5. Creating Effective Visualisations
  6. Real-Time and Near-Real-Time Data Strategies
  7. Performance Optimisation and Caching
  8. Rollout and Adoption Strategy
  9. Security and Access Control
  10. Common Pitfalls and How to Avoid Them
  11. Next Steps and Getting Started

Why Apache Superset for Manufacturing Operations

Manufacturing organisations operate in an environment where visibility into production metrics, equipment performance, and operational efficiency directly impacts the bottom line. Every minute of downtime, every percentage point of scrap rate, and every delay in order fulfilment translates to lost revenue or increased cost. Yet most manufacturing firms still rely on legacy systems, spreadsheets, or fragmented reporting tools that don’t provide the speed, flexibility, or real-time insight needed to make fast, data-driven decisions on the shop floor.

Apache Superset is an open-source data visualisation and exploration platform that has become the tool of choice for operations teams who need to build fast, interactive dashboards without the complexity and cost of traditional enterprise business intelligence platforms. Unlike legacy BI tools that require months to deploy and specialist skills to maintain, Superset can be deployed in weeks, scaled to handle millions of data points, and operated with modest technical overhead.

Superset is lightweight, intuitive, and built for speed. It connects to virtually any SQL-speaking data source (transactional databases, cloud data warehouses, query engines) and, with an ingestion pipeline in front, can surface data originating from message queues and APIs, letting you build dashboards that refresh in seconds, not hours. For manufacturing operations, this means real-time visibility into production status, equipment performance, quality metrics, supply chain flow, and labour efficiency. The platform is also open-source, so there are no per-user licensing fees, no vendor lock-in, and no surprise costs as your operation scales.

At PADISO, we’ve partnered with manufacturing operators, plant managers, and operations directors to design and deploy Superset dashboards that cut through noise and surface the metrics that matter. The goal is always the same: give your team the data they need, in the format they understand, delivered at the speed they need to act. This guide walks you through how to do it.


Understanding Your Data: Modelling for Operational Dashboards

Before you build a single dashboard, you need to understand your data. This is not a technical exercise in schema design—it is a business exercise in defining what “truth” looks like for your operation.

Define Your Operational Metrics

Start by asking: what decisions do your operators, supervisors, and plant managers need to make every day? The answers will tell you what data you need to measure and how to model it.

For a manufacturing plant, the typical operational metrics fall into a few categories:

  • Production metrics: units produced per shift, cycle time, throughput, scrap rate, rework rate, first-pass yield
  • Equipment metrics: uptime, downtime by reason (planned maintenance, unplanned failure, changeover), mean time between failures (MTBF), mean time to repair (MTTR)
  • Quality metrics: defect rate, parts per million (PPM), inspection pass rate, customer returns, warranty claims
  • Labour metrics: labour efficiency, attendance, training completion, safety incidents
  • Supply chain metrics: material availability, inventory turnover, lead time, on-time delivery from suppliers
  • Cost metrics: labour cost per unit, material cost per unit, overhead absorption, cost of quality

Not all of these will be relevant to your operation. The key is to start with the metrics that drive the most value or the most risk. If your plant loses £50,000 per hour of unplanned downtime, equipment uptime is a priority metric. If you’re losing customers to quality issues, defect rate and first-pass yield are priority metrics.
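To make the definitions concrete, here is a small sketch of the arithmetic behind a few of the metrics above. The shift numbers are invented purely for illustration:

```python
def scrap_rate(scrap_units: int, total_units: int) -> float:
    """Scrap as a fraction of total units produced."""
    return scrap_units / total_units

def first_pass_yield(good_first_time: int, total_units: int) -> float:
    """Units that passed without rework, over total units started."""
    return good_first_time / total_units

def mtbf(total_uptime_hours: float, failure_count: int) -> float:
    """Mean time between failures."""
    return total_uptime_hours / failure_count

def mttr(total_repair_hours: float, failure_count: int) -> float:
    """Mean time to repair."""
    return total_repair_hours / failure_count

# Example shift: 1,000 units started, 30 scrapped, 920 passed first time;
# 2 failures over 22 uptime hours, 1.5 hours of total repair time.
print(f"scrap rate:       {scrap_rate(30, 1000):.1%}")
print(f"first-pass yield: {first_pass_yield(920, 1000):.1%}")
print(f"MTBF:             {mtbf(22, 2):.1f} h")
print(f"MTTR:             {mttr(1.5, 2):.2f} h")
```

Defining these once, in code or in a metrics layer, is what later keeps every dashboard showing the same number for the same metric.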

Map Your Data Sources

Manufacturing data lives in many places: manufacturing execution systems (MES), enterprise resource planning (ERP) systems, programmable logic controllers (PLCs), sensors, time clocks, inspection systems, and often spreadsheets and email. Your data model needs to account for all of these sources and reconcile them into a single version of truth.

For each operational metric, document:

  • Where the data originates (which system, which database table, which API endpoint)
  • How frequently it is updated (real-time, batch hourly, batch daily)
  • What granularity it is captured at (per machine, per production line, per shift, per day)
  • What latency is acceptable for decision-making (does a supervisor need to see equipment status in 10 seconds or is 5 minutes acceptable?)
  • What historical depth you need (do you need 12 months of data or 3 years?)

This mapping exercise will reveal gaps, inconsistencies, and opportunities for data improvement. You may discover that your MES records downtime reasons inconsistently, or that your ERP doesn’t track scrap separately by reason, or that your sensors are sending data but it is not being stored. These discoveries are valuable; they tell you where to invest in data quality before you build dashboards.

Design Your Dimensional Model

Once you have mapped your sources, design a simple dimensional model (or star schema) that brings all of this data together. You do not need a complex, enterprise-grade data warehouse. A well-designed set of tables in a PostgreSQL database or a data warehouse like Snowflake will suffice.

The basic structure is:

  • Fact tables: contain the metrics (production count, downtime hours, defect count) and foreign keys to dimensions
  • Dimension tables: contain the attributes that describe the facts (machine name, shift, operator, production order, product type, reason for downtime)

For example, a simple production fact table might look like:

production_fact
  - fact_id
  - date_key (links to date dimension)
  - machine_key (links to machine dimension)
  - shift_key (links to shift dimension)
  - product_key (links to product dimension)
  - units_produced
  - scrap_units
  - rework_units
  - cycle_time_seconds

And a machine dimension:

machine_dimension
  - machine_key
  - machine_id
  - machine_name
  - production_line
  - plant_location
  - equipment_type
  - manufacturer
  - installation_date

This structure allows you to slice and dice your metrics by any dimension—by machine, by product, by shift, by date—without reloading data or rebuilding queries. It also makes your dashboards faster and more flexible.
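As a miniature illustration of that slicing, the sketch below builds a cut-down version of the fact and dimension tables in an in-memory SQLite database (standing in for your warehouse) and computes scrap rate by production line. Table and column names follow the schema sketch above:

```python
import sqlite3

# In-memory SQLite stands in for the warehouse; the tables mirror the
# production_fact / machine_dimension sketch above (abbreviated).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE machine_dimension (
    machine_key     INTEGER PRIMARY KEY,
    machine_name    TEXT,
    production_line TEXT
);
CREATE TABLE production_fact (
    fact_id        INTEGER PRIMARY KEY,
    date_key       TEXT,
    machine_key    INTEGER REFERENCES machine_dimension(machine_key),
    units_produced INTEGER,
    scrap_units    INTEGER
);
""")
conn.executemany("INSERT INTO machine_dimension VALUES (?, ?, ?)",
                 [(1, "Press A", "Line 1"), (2, "Press B", "Line 1"),
                  (3, "Lathe C", "Line 2")])
conn.executemany("INSERT INTO production_fact VALUES (?, ?, ?, ?, ?)",
                 [(1, "2026-06-01", 1, 400, 12), (2, "2026-06-01", 2, 380, 30),
                  (3, "2026-06-01", 3, 500, 5)])

# Slice by any dimension without reshaping data: scrap rate per line.
for line, rate in conn.execute("""
    SELECT d.production_line,
           ROUND(1.0 * SUM(f.scrap_units) / SUM(f.units_produced), 4)
    FROM production_fact f
    JOIN machine_dimension d USING (machine_key)
    GROUP BY d.production_line ORDER BY d.production_line
"""):
    print(line, rate)
```

Swapping `production_line` for `machine_name`, `date_key`, or a shift dimension is a one-word change, which is exactly the flexibility the star schema buys you.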


Building the Foundation: Data Architecture and Connectivity

With your data model defined, you need to decide where to run Superset and how to connect it to your data sources. This decision will affect deployment time, operational overhead, and long-term scalability.

Deployment Options

You have three main options:

Option 1: Self-hosted on your infrastructure (VMs, Kubernetes, Docker). This gives you full control, no recurring SaaS fees, and the ability to integrate with your internal networks and security controls. It requires you to manage infrastructure, updates, backups, and monitoring. For most manufacturing organisations, this is the right choice if you have an internal IT team or a systems integrator partner.

Option 2: Managed Superset service (e.g., Preset, the commercial cloud service founded by Superset’s original creator). This removes infrastructure management from your plate, includes hosting, backups, and updates, and simplifies deployment. The trade-off is a per-user or per-dashboard subscription cost. This is a good fit if you want to move fast and do not want to manage infrastructure.

Option 3: Cloud data warehouse with built-in BI (Snowflake with Snowsight, BigQuery with Looker, or similar). These are tightly integrated with your data warehouse and often simpler to set up. The trade-off is that you are locked into that vendor’s ecosystem and may pay more for BI features than you need.

For this guide, we assume a self-hosted deployment on your infrastructure or a managed Superset service. Both follow the same design and operational principles.

Connectivity and Data Source Configuration

Superset connects to data sources through SQLAlchemy-compatible database drivers. The most common sources for manufacturing are:

  • PostgreSQL, MySQL, or MariaDB: your transactional databases or data warehouse
  • Snowflake, BigQuery, Redshift, or Azure Synapse: cloud data warehouses
  • SQL Server: if you are running on Microsoft stack
  • Elasticsearch: if you are ingesting sensor or log data
  • REST APIs (indirectly): Superset queries SQL sources, so data that lives in SaaS applications (e.g., MES, ERP) is typically landed in one of the databases above by an ingestion pipeline first

When configuring a data source in Superset, you specify:

  • Database type and connection details (host, port, username, password)
  • Whether to use SSL/TLS (you should, always)
  • Connection pool settings (how many concurrent connections to allow)
  • Which tables or schemas are exposed to dashboard builders

For security-sensitive environments (and manufacturing is often security-sensitive due to IP concerns), use database users with minimal privileges. Create a read-only user for Superset that can only access the tables it needs. Store credentials in your secrets manager (e.g., HashiCorp Vault, AWS Secrets Manager) and inject them at runtime, rather than hardcoding them in configuration files.
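A minimal sketch of the runtime-injection pattern, using environment variables as the hand-off point. The variable names and the read-only user are illustrative; a real deployment would populate the variables from Vault or AWS Secrets Manager when the container starts:

```python
import os
from urllib.parse import quote_plus

def superset_db_uri() -> str:
    """Build a SQLAlchemy URI for a read-only Superset database user from
    the environment, rather than hardcoding credentials in config files.
    Variable names here are illustrative, not a Superset convention."""
    user = os.environ["SUPERSET_DB_USER"]            # e.g. a read-only role
    password = quote_plus(os.environ["SUPERSET_DB_PASSWORD"])  # URL-escape
    host = os.environ.get("SUPERSET_DB_HOST", "localhost")
    port = os.environ.get("SUPERSET_DB_PORT", "5432")
    db = os.environ["SUPERSET_DB_NAME"]
    # sslmode=require enforces TLS on the PostgreSQL connection.
    return f"postgresql://{user}:{password}@{host}:{port}/{db}?sslmode=require"

# Simulate what the secrets manager would inject at runtime.
os.environ.update({"SUPERSET_DB_USER": "superset_ro",
                   "SUPERSET_DB_PASSWORD": "s3cret/with:chars",
                   "SUPERSET_DB_NAME": "mfg_warehouse"})
print(superset_db_uri())
```

Note the `quote_plus` call: passwords with `/`, `:` or `@` in them will silently break a hand-assembled URI otherwise.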

Preparing Your Data for Superset

Before you expose tables to dashboard builders, prepare them. This means:

  • Denormalise where necessary: Superset works best when the tables you query are already close to the shape you want to visualise. If your dashboard needs to join five tables together every time it loads, it will be slow. Pre-join and aggregate data in your data warehouse or create materialised views.

  • Aggregate fact tables: if you have millions of raw transaction records, do not expose them directly to Superset. Instead, create pre-aggregated fact tables (e.g., production by machine by hour, defects by product by day). This makes queries fast and dashboards responsive.

  • Add calculated columns: add commonly used calculations as columns in your tables (e.g., scrap_rate = scrap_units / total_units, downtime_hours = downtime_minutes / 60). This saves time in dashboard building and ensures consistency across dashboards.

  • Index heavily: add database indexes on columns that will be filtered or grouped in dashboards (date, machine, product, shift). This is one of the highest-impact performance optimisations you can make.

The Data Engineer’s Guide to Lightning-Fast Apache Superset Dashboards provides detailed guidance on this data preparation work.
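The pre-aggregation and calculated-column advice can be sketched in a few lines. SQLite again stands in for the warehouse, and the table names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Raw grain: one row per unit produced. Never expose this to dashboards.
conn.execute("""CREATE TABLE production_event (
    machine_id TEXT, produced_at TEXT, is_scrap INTEGER)""")
conn.executemany("INSERT INTO production_event VALUES (?, ?, ?)",
                 [("M1", "2026-06-01 08:05", 0), ("M1", "2026-06-01 08:40", 1),
                  ("M1", "2026-06-01 09:10", 0), ("M2", "2026-06-01 08:15", 0)])

# Pre-aggregate to machine/hour grain and bake in the scrap_rate
# calculated column, so every dashboard uses the same definition.
conn.executescript("""
CREATE TABLE production_by_machine_hour AS
SELECT machine_id,
       strftime('%Y-%m-%d %H:00', produced_at) AS hour,
       COUNT(*)                                AS total_units,
       SUM(is_scrap)                           AS scrap_units,
       1.0 * SUM(is_scrap) / COUNT(*)          AS scrap_rate
FROM production_event
GROUP BY machine_id, hour;
""")
for row in conn.execute(
        "SELECT * FROM production_by_machine_hour ORDER BY machine_id, hour"):
    print(row)
```

In a real warehouse this would be a materialised view or a scheduled transform rather than a one-off `CREATE TABLE AS`, but the shape of the table Superset sees is the same.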


Dashboard Design Principles for Manufacturing

A well-designed dashboard is not just a collection of charts. It tells a story, guides the viewer’s eye to what matters, and enables fast decision-making. For manufacturing operations, this means designing dashboards that answer the questions your operators and managers ask every shift.

The Three-Dashboard Pattern

We recommend building three types of dashboards for a manufacturing operation:

1. The Plant Status Dashboard (for plant managers, production supervisors, operations directors)

This is a single-page, high-level view of plant health. It answers: “What is happening right now, and is it normal?”

Typical elements:

  • Current production rate (units per hour, compared to target)
  • Equipment uptime (percentage, by line or by machine)
  • Current scrap rate (percentage, compared to target)
  • Orders on schedule (percentage)
  • Safety incidents (count this shift, this month)
  • Staffing (planned vs. actual, by shift)
  • Key alerts (equipment down, quality issue, material shortage)

This dashboard should fit on a single screen (or two at most) and refresh every 30–60 seconds. The goal is to give a plant manager a complete picture of operations in 30 seconds.

2. The Operational Deep-Dive Dashboard (for production supervisors, shift leads, quality engineers)

This is a multi-page dashboard that allows the user to drill down into specific areas—production, quality, equipment, labour. It answers: “Why is something not normal, and what do I do about it?”

For production, typical elements:

  • Production by machine (units, rate, vs. target)
  • Downtime by machine (hours, reason, trend)
  • Scrap by product, by machine, by shift
  • Cycle time by machine (actual vs. standard)
  • First-pass yield by product

For quality:

  • Defects by type, by machine, by shift
  • Inspection results (pass rate, by product)
  • Customer returns (count, reason, trend)

For equipment:

  • Equipment status (running, idle, down, maintenance)
  • Maintenance history (planned vs. unplanned, by machine)
  • Equipment age and reliability (MTBF, MTTR)

These dashboards should allow filtering by machine, product, date range, and shift. They refresh every 5–15 minutes and are typically viewed on a monitor at the production floor or in a control room.

3. The Historical Analysis Dashboard (for plant managers, operations directors, continuous improvement teams)

This is for trend analysis, root-cause analysis, and continuous improvement. It answers: “What is the trend, and where should we focus improvement efforts?”

Typical elements:

  • Production trends (daily, weekly, monthly)
  • Equipment reliability trends (MTBF, MTTR, by machine)
  • Scrap and quality trends
  • Labour efficiency trends
  • Cost trends (labour, material, overhead per unit)
  • Pareto analysis (which machines, products, or reasons account for 80% of downtime or scrap?)

These dashboards are viewed daily or weekly, not constantly. They refresh once or twice a day.

Design Principles

Regardless of which dashboard you are building, follow these principles:

Clarity over aesthetics. A manufacturing operator needs to understand a chart in two seconds. Use clear labels, consistent colours, and simple chart types. Avoid pie charts (they are hard to compare), avoid dual-axis charts (they are confusing), and avoid 3D effects (they distort data).

Colour for meaning, not decoration. Use colour to highlight status (green = good, red = problem, yellow = caution) or to distinguish categories. Do not use five different colours just because it looks nice.

Context matters. Always show a metric alongside its target, its historical average, or its acceptable range. “Equipment uptime is 92%” is meaningless without knowing whether the target is 95% or 85%.

Drill-down capability. Allow users to click on a chart to drill down into detail. If a user sees that “Line 3 is at 85% uptime”, they should be able to click on it to see which machines on Line 3 are causing the problem.

Consistent interaction patterns. If a dashboard has a date filter at the top, all charts should respond to it. If you can click on a bar in one chart to filter another, do the same everywhere. Consistency reduces cognitive load.

Mobile-friendly is optional, but mobile-aware is essential. Most manufacturing dashboards are viewed on a desktop monitor or a large tablet in a control room. Design for that. But make sure your dashboards degrade gracefully if someone views them on a phone.


Creating Effective Visualisations

Superset offers a wide range of visualisation types. For manufacturing dashboards, a few are particularly useful.

Chart Types and When to Use Them

Time series (line chart): for trends over time. Use this for production rate, equipment uptime, scrap rate, or any metric that you want to see trending up or down. Include a target line or a reference line (e.g., average) to provide context.

Bar chart: for comparisons across categories. Use this to compare production by machine, downtime by reason, or scrap by product. Order the bars by value (highest first) to make it easy to spot the top performers or worst performers.

Gauge: for a single metric against a target. Use this for “Equipment uptime: 92% (target 95%)” or “Production rate: 450 units/hour (target 500)”. Gauges are visually clear and work well on dashboards that are viewed from a distance.

Table: for detailed data. Use this for a list of machines with their current status, or a list of downtime events with reason and duration. Keep tables to a few key columns and sort them by the most important metric.

Heatmap: for patterns across two dimensions. Use this to show scrap rate by product and by machine, or downtime by machine and by time of day. Heatmaps make patterns jump out.

Number card: for a single key metric (Superset calls this chart type “Big Number”). Use this for “Orders on schedule: 94%” or “Safety incidents this month: 0”. Number cards are simple and clear.

Avoid pie charts, 3D charts, and fancy visualisations that look good but do not communicate clearly. Remember: your audience is operators and managers who need to make fast decisions, not data scientists who have time to study a chart.

Superset-Specific Features

Building a metrics dashboard with Superset and Cube provides a detailed walkthrough of how to structure metrics in Superset for fast, interactive dashboards. The key Superset features to leverage are:

Virtual datasets: instead of writing a SQL query every time you create a chart, create a virtual dataset (a saved query or a view) that represents a logical table. Then use that virtual dataset in multiple charts. This reduces duplication and makes it easier to update the underlying logic.

Saved metrics: if you have a metric that you use in multiple charts (e.g., scrap_rate = scrap_units / total_units), define it once as a saved metric in Superset. Then use it in any chart. If the definition changes, it updates everywhere.

Drill to detail and cross-filtering: configure these on your charts so that clicking on a bar or a point filters the rest of the dashboard or opens the underlying rows. This is powerful for investigation.

Filters: use dashboard-level filters (date, machine, product, shift) that apply to multiple charts at once. This allows users to slice and dice the data without rebuilding charts.

SQL Lab and Ad-Hoc Queries

Superset includes a SQL Lab where users can write custom SQL queries to explore data. This is powerful for ad-hoc analysis, but it is also a potential performance risk if users write inefficient queries. To manage this:

  • Educate users on query best practices (use indexes, aggregate data, avoid Cartesian joins)
  • Set query timeouts to prevent runaway queries
  • Monitor query performance and identify slow queries
  • Encourage users to save frequently-used queries as virtual datasets
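The timeout and row-limit guardrails above are set in superset_config.py. A sketch, with key names as used in recent Superset releases (confirm them against the configuration reference for your version):

```python
# superset_config.py (illustrative sketch; confirm key names against the
# configuration reference for your Superset version)

SQLLAB_TIMEOUT = 30                # seconds before a synchronous SQL Lab query is cut off
SQLLAB_ASYNC_TIME_LIMIT_SEC = 300  # ceiling for queries handed to the async worker
SQL_MAX_ROW = 100_000              # hard cap on rows any query can return
```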

Real-Time and Near-Real-Time Data Strategies

For manufacturing operations, real-time visibility is often critical. A machine failure that goes unnoticed for an hour can cost thousands of pounds. But “real-time” is a relative term, and achieving true real-time dashboards requires careful architecture.

Understanding Latency Requirements

First, define what “real-time” means for your operation:

  • True real-time (seconds): equipment status, production rate, safety alerts. Latency tolerance: 10–30 seconds.
  • Near real-time (minutes): quality metrics, downtime events, labour efficiency. Latency tolerance: 5–15 minutes.
  • Daily or batch (hours): cost analysis, trend analysis, historical comparisons. Latency tolerance: 1–24 hours.

Different metrics have different latency requirements. Do not try to make everything real-time; it is expensive and often unnecessary. Focus on the metrics that drive fast decisions.

Data Ingestion Patterns

Option 1: Direct sensor/PLC connection. If you have sensors or PLCs that output data in real-time (via MQTT, OPC-UA, or HTTP), ingest that data into a message queue (Kafka, RabbitMQ) and stream it into your data warehouse. This is the lowest-latency approach but requires infrastructure to handle streaming data. Building Real-Time Dashboards with Apache Superset - GoCodeo covers this pattern in detail.

Option 2: Frequent batch ingestion. If you are pulling data from your MES or ERP, ingest it every 5–15 minutes instead of once a day. This is simpler to implement than streaming but introduces some latency. Use a tool like Airbyte or a custom Python script to pull data on a schedule.

Option 3: Hybrid approach. Ingest real-time metrics (equipment status, production rate) via streaming, and batch-ingest slower-moving metrics (quality, labour) every 15 minutes. This gives you the best of both worlds.

Caching and Refresh Strategies

Once data is in your data warehouse, Superset queries it to render dashboards. To keep dashboards responsive, use caching:

  • Query caching: Superset caches the results of SQL queries for a configurable time (e.g., 60 seconds). This means if two users run the same query within 60 seconds, the second user gets the cached result instead of hitting the database. For real-time dashboards, set cache TTL to 30–60 seconds.

  • Materialised views: in your data warehouse, create materialised views that pre-aggregate data (e.g., production by machine by hour). These views are refreshed on a schedule (e.g., every 5 minutes) and are much faster to query than raw data.

  • Incremental aggregation: if you have millions of raw records, do not aggregate them every time. Instead, maintain a running aggregate table that is updated incrementally as new data arrives. For example, maintain a production_by_machine_by_hour table that is updated every hour with new data.
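One simple, portable way to implement the incremental pattern is to recompute only the current hour's bucket on each refresh. Delete-and-reinsert keeps the job idempotent, so late-arriving events are handled by simply re-running the same hour. SQLite stands in for the warehouse and the schema is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE production_event (machine_id TEXT, produced_at TEXT, is_scrap INTEGER);
CREATE TABLE production_by_machine_hour (
    machine_id TEXT, hour TEXT, total_units INTEGER, scrap_units INTEGER,
    PRIMARY KEY (machine_id, hour)
);
""")

def refresh_hour(conn: sqlite3.Connection, hour: str) -> None:
    """Recompute one hour bucket from raw events. Run on a schedule for
    the current (and previous, to catch stragglers) hour; older buckets
    are never touched, so refresh cost stays flat as history grows."""
    conn.execute("DELETE FROM production_by_machine_hour WHERE hour = ?", (hour,))
    conn.execute("""
        INSERT INTO production_by_machine_hour
        SELECT machine_id, strftime('%Y-%m-%d %H:00', produced_at),
               COUNT(*), SUM(is_scrap)
        FROM production_event
        WHERE strftime('%Y-%m-%d %H:00', produced_at) = ?
        GROUP BY machine_id
    """, (hour,))
    conn.commit()

conn.executemany("INSERT INTO production_event VALUES (?, ?, ?)",
                 [("M1", "2026-06-01 08:05", 0), ("M1", "2026-06-01 08:50", 1)])
refresh_hour(conn, "2026-06-01 08:00")
# A late-arriving event lands; re-running the same hour stays correct.
conn.execute("INSERT INTO production_event VALUES ('M1', '2026-06-01 08:59', 0)")
refresh_hour(conn, "2026-06-01 08:00")
print(conn.execute("SELECT * FROM production_by_machine_hour").fetchall())
```

A production version would partition the raw table by time so the `WHERE` clause prunes rather than scans, and would be scheduled by your orchestrator (cron, Airflow, or similar).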

Async Queries

Superset supports asynchronous queries, which allow long-running queries to complete in the background without blocking the user interface. For manufacturing dashboards that need to pull large amounts of historical data, async queries can improve responsiveness. Configure async queries in Superset’s settings and set a reasonable timeout (e.g., 5 minutes).
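The caching and async settings live in superset_config.py. A sketch, with key names as used in recent Superset releases (verify against your version's configuration reference before deploying):

```python
# superset_config.py (illustrative sketch; key names follow recent Superset
# releases and should be checked against your version's documentation)

# Cache chart-data query results in Redis. The TTL is the trade-off knob:
# 30-60 s for near-real-time dashboards, an hour or more for historical ones.
DATA_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 60,               # seconds
    "CACHE_KEY_PREFIX": "superset_data_",
    "CACHE_REDIS_URL": "redis://localhost:6379/1",
}

# Push chart queries to background workers so slow queries don't block the UI.
FEATURE_FLAGS = {"GLOBAL_ASYNC_QUERIES": True}
```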


Performance Optimisation and Caching

A slow dashboard is a useless dashboard. Operators will not wait 10 seconds for a chart to load; they will close the tab and make a decision based on incomplete information. Performance optimisation is not a nice-to-have; it is essential.

Database-Level Optimisations

Start at the database level. This is where 80% of performance gains come from:

  • Indexes: add indexes on columns that are filtered or grouped in your queries (date, machine_id, product_id, shift). A missing index can make a query 100x slower.

  • Partitioning: if your fact tables are large (millions of rows), partition them by date. This allows the database to scan only the relevant partitions instead of the entire table.

  • Aggregation tables: pre-aggregate data at the granularity you need. For example, instead of storing every production event, store production totals by machine by hour. This can cut data volume dramatically (rolling minute-level events up to hours is already a 60-fold reduction) and makes queries much faster.

  • Column selection: in your Superset queries, select only the columns you need. Selecting 50 columns when you only need 5 wastes I/O and memory.

  • Join optimisation: if your queries join multiple tables, ensure the join columns are indexed and that the join order is optimal. Most databases can optimise this automatically, but check the query plan to be sure.
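The impact of indexing is easy to see in a query plan. The sketch below uses SQLite's EXPLAIN QUERY PLAN (your warehouse will have its own equivalent, e.g. EXPLAIN in PostgreSQL) to show the same dashboard filter going from a full table scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE production_fact (
    date_key TEXT, machine_key INTEGER, units_produced INTEGER)""")
# 28 days x 10 machines of toy data.
conn.executemany("INSERT INTO production_fact VALUES (?, ?, ?)",
                 [(f"2026-06-{d:02d}", m, 100)
                  for d in range(1, 29) for m in range(1, 11)])

# The kind of filter every dashboard chart issues.
query = ("SELECT machine_key, SUM(units_produced) FROM production_fact "
         "WHERE date_key = '2026-06-01' GROUP BY machine_key")

def plan(conn, sql):
    """Concatenate the 'detail' column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan(conn, query)
# Index the column the dashboard filters on.
conn.execute("CREATE INDEX idx_fact_date ON production_fact(date_key)")
after = plan(conn, query)
print("before:", before)   # full table scan
print("after: ", after)    # search using idx_fact_date
```

Reading the plan before and after is the habit worth taking away: on a real warehouse with millions of rows, that scan-to-search change is the difference between a 10-second chart and a sub-second one.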

Superset-Level Optimisations

  • Virtual datasets: create virtual datasets (saved queries) that encapsulate complex logic. This makes dashboards easier to build and ensures consistent definitions across dashboards.

  • Saved metrics: define commonly used metrics once and reuse them. This reduces query complexity.

  • Query caching: set appropriate cache TTLs for each dashboard. Real-time dashboards might cache for 30 seconds; historical dashboards might cache for 1 hour.

  • Limit row counts: in your charts, limit the number of rows returned. For example, if you are showing “Top 10 machines by downtime”, set the limit to 10 in the chart configuration. Do not return 1000 rows and rely on the visualisation to show only the top 10.

  • Avoid expensive operations: avoid DISTINCT, UNION, and subqueries when possible. These operations are expensive and can slow down queries.

Monitoring and Alerting

Set up monitoring to track dashboard performance:

  • Query execution time: monitor the average execution time of queries on each dashboard. If a dashboard is taking more than 5 seconds to load, investigate.

  • Database load: monitor CPU, memory, and disk I/O on your database. If a dashboard is causing high load, optimise it.

  • User experience: if users are reporting that dashboards are slow, collect feedback and prioritise optimisation.

Superset records query activity (SQL Lab query history and the action log in the admin interface). Use this to identify slow queries and optimise them.
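If you want a crude client-side check before wiring up full monitoring, a timing wrapper around the queries a dashboard issues is enough to flag offenders. The threshold and the wrapper itself are illustrative, not a Superset feature:

```python
import sqlite3
import time

SLOW_QUERY_THRESHOLD_S = 5.0   # investigate anything slower than this

def timed_query(conn, sql, params=()):
    """Run a query, returning its rows and elapsed time, and flag it
    if it blows the dashboard load budget."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_QUERY_THRESHOLD_S:
        print(f"SLOW ({elapsed:.1f}s): {sql[:80]}")
    return rows, elapsed

conn = sqlite3.connect(":memory:")
rows, elapsed = timed_query(conn, "SELECT 1")
print(rows, f"{elapsed:.4f}s")
```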


Rollout and Adoption Strategy

Building a dashboard is one thing; getting your team to use it consistently is another. A dashboard that sits unused is a waste of time and money. Plan your rollout carefully.

Phased Rollout

Do not try to build and deploy all dashboards at once. Instead, follow a phased approach:

Phase 1: Pilot (4–8 weeks)

  • Build the Plant Status Dashboard (the high-level view)
  • Deploy to a small group of power users (plant manager, shift lead, operations director)
  • Gather feedback, refine, and optimise
  • Measure impact (e.g., “Did this help you spot issues faster?”)

Phase 2: Expand (8–12 weeks)

  • Deploy the Plant Status Dashboard to all supervisors and managers
  • Build and deploy the Operational Deep-Dive Dashboards (production, quality, equipment)
  • Train users on how to use the dashboards
  • Establish a process for users to request new dashboards or changes

Phase 3: Optimise (ongoing)

  • Build the Historical Analysis Dashboard
  • Optimise performance based on usage data
  • Retire dashboards that are not being used
  • Iterate based on feedback

At PADISO, we’ve found that this phased approach reduces risk, builds momentum, and ensures that dashboards actually address real user needs.

Training and Change Management

Dashboards are only useful if people know how to use them:

  • Live training: conduct live training sessions with supervisors and operators. Walk through the dashboards, explain what each chart means, and show how to use filters and drill-through.

  • Documentation: create simple, visual documentation (screenshots, short videos) that users can reference. Avoid long manuals; keep it to one page per dashboard.

  • Support: assign a point person (or a small team) to answer questions and help users troubleshoot. In the first few weeks, expect questions. Answer them quickly and use the feedback to improve documentation.

  • Reinforcement: send out a weekly “dashboard highlight” that shows an interesting insight from the dashboards. This keeps dashboards top-of-mind and shows their value.

Measuring Adoption and Impact

Track how dashboards are being used and what impact they are having:

  • Usage metrics: how many users are accessing each dashboard, how often, and for how long? Superset logs this data; use it to identify which dashboards are popular and which are not.

  • Business impact: are dashboards helping you achieve your goals? For example, if you deployed a quality dashboard, has the scrap rate gone down? If you deployed an uptime dashboard, has equipment downtime been reduced? Measure this.

  • User feedback: regularly ask users for feedback. What is working? What is not? What would they like to see?

Use this data to prioritise improvements and justify continued investment in dashboards.


Security and Access Control

Manufacturing data is often sensitive. Production metrics, equipment failures, and labour data can reveal proprietary information or competitive advantages. Secure your dashboards accordingly.

Authentication and Authorisation

  • Authentication: require users to log in with a username and password (or integrate with your company’s single sign-on / LDAP). Do not allow anonymous access to dashboards.

  • Authorisation: use Superset’s role-based access control (RBAC) to control who can see which dashboards and data. For example, a shift supervisor might see dashboards for their production line but not for other lines. A plant manager might see all dashboards.

  • Row-level security: if you need to restrict data within a dashboard (e.g., a supervisor should only see data for their shift), use Superset’s row-level security feature. This filters data at query time based on the user’s role.
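Superset's row-level security filters are SQL predicates attached to roles and datasets, ANDed onto every query at run time. The sketch below imitates that effect against SQLite so you can see what each role would receive; the role names, clauses, and columns are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production_fact (production_line TEXT, units INTEGER)")
conn.executemany("INSERT INTO production_fact VALUES (?, ?)",
                 [("Line 1", 400), ("Line 2", 500), ("Line 3", 350)])

# Per-role predicates, conceptually what you would enter as RLS clauses
# in Superset (one clause per role, appended to every query's WHERE).
RLS_CLAUSES = {
    "line1_supervisor": "production_line = 'Line 1'",
    "plant_manager": "1 = 1",          # no restriction
}

def query_as(role: str) -> list:
    """Run the same chart query with the role's RLS clause appended."""
    clause = RLS_CLAUSES[role]
    sql = f"SELECT production_line, units FROM production_fact WHERE {clause}"
    return conn.execute(sql).fetchall()

print(query_as("line1_supervisor"))  # only Line 1 rows
print(query_as("plant_manager"))     # all rows
```

The point of doing this in the BI layer rather than in each dashboard is that the restriction follows the user everywhere, including into ad-hoc exploration.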

Data Security

  • Encryption in transit: use HTTPS and TLS for all connections between Superset, the database, and users’ browsers.

  • Encryption at rest: if your data warehouse supports encryption at rest, enable it.

  • Database credentials: store database credentials in a secrets manager (AWS Secrets Manager, HashiCorp Vault) and do not hardcode them in configuration files.

  • Audit logging: enable audit logging in Superset to track who accessed which dashboards and when. This is useful for compliance and troubleshooting.

Compliance

If your manufacturing operation is subject to regulatory compliance (e.g., ISO 9001 for quality management, GDPR for employee data), ensure your dashboards and data practices comply. For organisations pursuing SOC 2 or ISO 27001 compliance, working with a partner like PADISO who understands both security and operational dashboards can accelerate your compliance journey whilst ensuring your dashboards remain effective.


Common Pitfalls and How to Avoid Them

We have seen many Superset deployments succeed and many stumble. Here are the most common pitfalls and how to avoid them.

Pitfall 1: Too Many Metrics, No Focus

The problem: you build a dashboard with 20 charts, each showing a different metric. Users are overwhelmed and do not know what to focus on.

The solution: be ruthless about what goes on a dashboard. Every chart should answer a specific question that a user needs answered. If a chart does not drive a decision or action, remove it. A good plant status dashboard has 5–8 charts, not 20.

Pitfall 2: Slow Dashboards

The problem: a dashboard takes 10 seconds to load. Users get frustrated and stop using it.

The solution: optimise at the database level first (indexes, aggregation tables). Then optimise at the Superset level (caching, virtual datasets). Monitor query performance and set a target (e.g., all dashboards should load in under 3 seconds).

Pitfall 3: Inconsistent Data Definitions

The problem: one dashboard shows “production rate” as units per hour, another shows it as units per shift. Users are confused about which is correct.

The solution: define your metrics once, in a centralised place (a metrics layer in your data warehouse or a saved metric in Superset). Use that definition everywhere.

Pitfall 4: Poor Data Quality

The problem: your dashboards look good, but the data is wrong. Machines are marked as “down” when they are actually running. Scrap counts are inconsistent. Users lose trust in the dashboards.

The solution: invest in data quality before you build dashboards. Audit your data sources. Reconcile data between systems. Create data quality rules and monitor them. If data is wrong, fix it at the source, not in the dashboard.

Pitfall 5: Dashboards That No One Uses

The problem: you build beautiful dashboards, but users do not use them. They continue to rely on spreadsheets and email.

The solution: involve users in the design process from the start. Build dashboards that answer their actual questions, not your assumptions about what they need. Train them, support them, and measure adoption. Iterate based on feedback.

Pitfall 6: No Maintenance Plan

The problem: you build dashboards, deploy them, and then forget about them. Over time, they break (data sources change, metrics change), and users stop using them.

The solution: assign ownership of each dashboard. Create a process for users to request changes or report issues. Review dashboards quarterly and retire ones that are not being used. Plan for maintenance and updates as part of your ongoing operations.


Next Steps and Getting Started

If you are ready to build operational dashboards for your manufacturing operation, here is how to get started:

Step 1: Assess Your Current State (1–2 weeks)

  • Map your current data sources and data quality
  • Define your operational metrics and decision-making needs
  • Identify your power users (the people who will use dashboards most)
  • Assess your infrastructure (do you have a database, a data warehouse, or do you need to build one?)

Step 2: Design Your Data Architecture (2–4 weeks)

  • Design your dimensional model (fact and dimension tables)
  • Decide on your data ingestion strategy (batch, streaming, or hybrid)
  • Set up your data warehouse or prepare your existing database
  • Implement data quality checks

Step 3: Deploy Superset (1–2 weeks)

  • Choose a deployment option (self-hosted or managed)
  • Deploy Superset to your infrastructure
  • Configure data sources and test connectivity
  • Set up authentication and authorisation

Step 4: Build Your First Dashboard (2–4 weeks)

  • Start with the Plant Status Dashboard
  • Build charts that address your top operational questions
  • Optimise performance
  • Deploy to your pilot group

Step 5: Gather Feedback and Iterate (ongoing)

  • Collect feedback from users
  • Measure adoption and impact
  • Refine dashboards based on feedback
  • Plan Phase 2 (expand to more dashboards and users)

Getting Expert Help

Building operational dashboards requires expertise in data engineering, dashboard design, and manufacturing operations. If you do not have that expertise in-house, consider partnering with a specialist. PADISO has helped manufacturing operators design and deploy Superset dashboards that cut through noise and surface the metrics that matter. We can help you with data architecture, dashboard design, performance optimisation, and rollout strategy. We work with you to ensure your dashboards are not just technically sound but also practically useful for your team.

For more detailed guidance on building fast, scalable dashboards, see Build Scalable Apache Superset Dashboards for Logistics Teams, which covers patterns that apply directly to manufacturing. For enterprise-grade dashboard design principles, Building enterprise grade dashboards with Apache Superset is a valuable resource.

Final Thoughts

Operational dashboards are not a luxury; they are a necessity in modern manufacturing. They give your team the visibility and speed they need to make fast, data-driven decisions. But dashboards are only as good as the data they show and the decisions they enable. Start with a clear understanding of your operational metrics and your users’ needs. Build incrementally, measure impact, and iterate. With the right approach, Apache Superset can become the nervous system of your manufacturing operation, connecting data to decisions and decisions to outcomes.

Your next step is to assess your current state and define your operational metrics. Start there, and you will be on your way to dashboards that matter.

Want to talk through your situation?

Book a 30-minute call with Kevin (Founder/CEO). No pitch — direct advice on what to do next.
