Table of Contents
- Why Filter Box Patterns Matter in Production
- Foundational Concepts and Architecture
- Single-Column Filter Patterns
- Multi-Column and Cascading Filters
- Performance Optimisation Strategies
- Real-World Deployment Gotchas
- Advanced Patterns and Edge Cases
- Monitoring and Debugging Filter Performance
- Migration and Rollout Strategy
- Summary and Next Steps
Why Filter Box Patterns Matter in Production {#why-filter-box-patterns-matter}
Filter boxes are the interface layer between your users and your data. In a production Superset cluster, they’re often the first thing that breaks under load, the source of mysterious query slowdowns, and the component that determines whether your dashboards feel responsive or sluggish.
We’ve seen teams at PADISO spend weeks optimising query logic only to discover that the real bottleneck was a poorly configured filter box triggering redundant API calls or scanning millions of rows to populate a dropdown. The difference between a filter that loads in 200ms and one that takes 3 seconds is the difference between a dashboard your team uses daily and one they avoid.
This guide walks through the patterns we’ve refined across dozens of production Superset deployments—from seed-stage startups running single-cluster setups to enterprise teams managing multi-region deployments with hundreds of concurrent users. We’ll cover what works, what breaks, and the specific configuration and code changes that move the needle in real environments.
Filter box patterns matter because they directly affect:
- User adoption: Slow, unreliable filters drive users back to SQL or BI tools they know. Fast, predictable filters become the default.
- Infrastructure cost: Inefficient filters generate unnecessary queries, bloating compute and storage costs.
- Data freshness: Cascading filters that fetch stale reference data undermine trust in your dashboards.
- Operational stability: A single misconfigured filter can cascade into cluster-wide slowdowns or connection exhaustion.
Foundational Concepts and Architecture {#foundational-concepts}
How Superset Filter Boxes Work
A filter box in Superset is a chart that renders a set of form controls (dropdowns, date pickers, search inputs) and exposes their values to other charts on the dashboard via dashboard-level variables. When a user changes a filter, Superset broadcasts that change to all dependent charts, which re-execute their queries with the new filter values.
Under the hood, the filter box is a special chart type that doesn’t render data—it renders a form. The form controls are defined in the chart’s JSON config, and the form values are stored in the dashboard’s state. When a dependent chart refreshes, it reads those state values and injects them into its query via templating or native filter syntax.
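That substitute step can be sketched in miniature. The snippet below is an illustrative stand-in for Superset's templating, not its actual implementation; the `render_filter_query` helper and its regex are assumptions for demonstration only:

```python
import re

# Toy stand-in for Superset's templating step: the dashboard's filter state
# is a dict of filter-name -> selected value, and {{ name }} placeholders in
# a dependent chart's SQL are replaced with those values before it re-queries.
def render_filter_query(template: str, filter_state: dict) -> str:
    def replace(match):
        key = match.group(1)
        return str(filter_state.get(key, ""))
    return re.sub(r"\{\{\s*([A-Za-z_][A-Za-z0-9_]*)\s*\}\}", replace, template)

state = {"region": "APAC"}  # what the filter box broadcast to the dashboard
sql = render_filter_query(
    "SELECT * FROM sales_fact WHERE region = '{{ region }}'", state
)
# sql == "SELECT * FROM sales_fact WHERE region = 'APAC'"
```

Real Superset templating (Jinja) also handles quoting and multi-value lists; the point here is only the broadcast-then-substitute data flow.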
This architecture has important implications:
- Filter boxes don’t query data themselves (usually). A filter box with a dropdown of countries can either hardcode the list or fetch it from a query. Hardcoding is fast but stale; querying is fresh but slower.
- Filter values are broadcast to all charts. If you have 20 charts on a dashboard and change a filter, all 20 charts re-query (unless you configure dependencies to prevent unnecessary refreshes).
- Filter boxes are charts. They carry all the overhead of charts: caching, permission checks, and front-end rendering. A misconfigured filter box can be as expensive as a slow data visualisation.
The Role of Native Filters vs. Filter Boxes
Superset now has two filtering systems: the legacy filter box chart and the newer native filters feature. Native filters are faster, more flexible, and recommended for new deployments. However, many production clusters still use filter boxes because they predate native filters or because migrations are risky.
This guide focuses on filter box patterns because they’re still common in production and because the patterns (query optimisation, cascading logic, state management) apply to native filters too.
Where Filter Performance Breaks Down
In our experience, filter performance degrades in three scenarios:
- Dropdown population at scale: A filter that queries a table with 10M rows to populate a dropdown will block the UI while it scans the table.
- Cascading filters with tight coupling: Filter A depends on Filter B depends on Filter C. If Filter B’s query is slow, the cascade stalls and users see loading spinners everywhere.
- Dashboard-wide broadcasts: A single filter change triggers re-queries on 50 charts. If even one chart is slow, the user sees a stalled dashboard.
Each of these has a solution, and we’ll walk through them in detail.
Single-Column Filter Patterns {#single-column-patterns}
Pattern 1: Static Dropdown with Hardcoded Values
The fastest filter is one that doesn’t query anything. If your filter values are stable (e.g., a list of regions, product categories, or cost centres), hardcode them in the filter box JSON config.
When to use: Values that change infrequently (quarterly or less) and are small enough to fit in the config (< 100 values).
Implementation:
{
  "filter_box": {
    "region": {
      "label": "Region",
      "description": "Select region",
      "defaultValue": "APAC",
      "multiple": false,
      "values": [
        { "text": "APAC", "value": "APAC" },
        { "text": "EMEA", "value": "EMEA" },
        { "text": "Americas", "value": "Americas" }
      ]
    }
  }
}
Performance: Dropdown renders in < 50ms. No database queries.
Gotcha: When values change, you must update the config and redeploy. Teams often forget to do this, and users see stale values. Set a calendar reminder to audit hardcoded values quarterly.
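A calendar reminder works, but a small audit script that diffs the config against the live column is harder to forget. A minimal sketch, using an in-memory SQLite table as a stand-in for the real warehouse (table, column, and value names are hypothetical):

```python
import sqlite3

HARDCODED_REGIONS = {"APAC", "EMEA", "Americas"}  # values baked into the filter config

def audit_hardcoded_values(conn, table, column, hardcoded):
    """Compare hardcoded filter values against live DISTINCT values.
    Note: table/column are interpolated, so they must come from trusted config."""
    rows = conn.execute(
        f"SELECT DISTINCT {column} FROM {table} WHERE {column} IS NOT NULL"
    )
    live = {r[0] for r in rows}
    return {
        "missing_from_config": sorted(live - hardcoded),   # users can't select these
        "stale_in_config": sorted(hardcoded - live),       # these no longer exist
    }

# Demo against an in-memory database standing in for sales_fact.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_fact (region TEXT)")
conn.executemany("INSERT INTO sales_fact VALUES (?)",
                 [("APAC",), ("EMEA",), ("LATAM",)])
report = audit_hardcoded_values(conn, "sales_fact", "region", HARDCODED_REGIONS)
# report flags LATAM as missing from the config and Americas as stale
```

Run it on a schedule and alert on any non-empty diff, and the quarterly audit takes care of itself.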
Pattern 2: Query-Based Dropdown with Aggressive Caching
If your filter values change frequently, you need to query them. But querying on every dashboard load is expensive. Use Superset’s query cache to fetch values once per cache period (e.g., 1 hour) and reuse them.
When to use: Values that change daily or weekly, and you can tolerate stale values for a few hours.
Implementation:
Create a simple query that returns distinct values:
SELECT DISTINCT region AS value, region AS text
FROM sales_fact
WHERE region IS NOT NULL
ORDER BY region
Create a chart from this query, set the cache TTL to 3600 seconds (1 hour), and configure the filter box to use this chart as the source.
In the filter box JSON:
{
  "filter_box": {
    "region": {
      "label": "Region",
      "description": "Select region",
      "defaultValue": "APAC",
      "multiple": false,
      "isRequired": false,
      "query": "SELECT DISTINCT region FROM sales_fact WHERE region IS NOT NULL ORDER BY region"
    }
  }
}
Then, in Superset’s UI, set the cache timeout on the underlying dataset to 3600 seconds.
Performance: First load queries the database and caches the result (200–500ms). Subsequent loads within the cache window serve from cache (< 50ms).
Gotcha: If values are cached and a new region is added to the database, users won’t see it until the cache expires. This is often acceptable, but document the lag for your team. If freshness is critical, reduce the TTL to 300 seconds (5 minutes), accepting the trade-off of more database queries.
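The trade-off is easy to see in a toy cache. This is not Superset's cache (Superset delegates to Flask-Caching); it simply illustrates why a 3600-second TTL means at most one database query per hour per filter, and why a new value stays invisible until the entry expires:

```python
import time

class TTLCache:
    """Minimal TTL cache: values fetched within `ttl_seconds` are served
    from memory; after expiry the next read triggers a fresh fetch."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch_fn):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]            # cache hit: no database query
        value = fetch_fn()           # cache miss: query the database
        self._store[key] = (now + self.ttl, value)
        return value
```

With `TTLCache(3600)`, a region added to the database right after a fetch is invisible to users for up to an hour; with `TTLCache(300)` the lag shrinks to five minutes at the cost of twelve times as many queries.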
Pattern 3: Search-Based Dropdown for High-Cardinality Columns
If your filter column has thousands of distinct values (e.g., customer names, product SKUs), a dropdown is unusable—it will have 5,000 options and render slowly. Instead, use a search box that queries as the user types.
When to use: Columns with > 500 distinct values where users know what they’re searching for.
Implementation:
Configure the filter box to use a search input instead of a dropdown:
{
  "filter_box": {
    "customer_name": {
      "label": "Customer",
      "description": "Search by name",
      "defaultValue": "",
      "multiple": false,
      "isRequired": false,
      "filterType": "search",
      "query": "SELECT DISTINCT customer_name FROM customers WHERE customer_name LIKE '{{search_term}}%' ORDER BY customer_name LIMIT 100"
    }
  }
}
The {{search_term}} variable is replaced with what the user types. The LIMIT 100 prevents the query from returning thousands of rows.
Performance: Each keystroke triggers a query (50–200ms depending on index quality). Users see results as they type.
Gotcha: If your table doesn’t have an index on the filter column, the LIKE query will do a full table scan. Add an index:
CREATE INDEX idx_customer_name ON customers(customer_name);
Also, users might type a partial name that matches thousands of rows (e.g., “A”). The LIMIT 100 prevents the query from returning all of them, but users might not see the value they want. Consider increasing the limit to 500 or adding a message like “showing first 100 matches”.
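Where you control the query layer, it is also safer to parameterise the search term and escape LIKE wildcards, so a user typing `%` or `_` cannot accidentally (or deliberately) widen the match. A sketch; the function name and query shape are assumptions, not Superset internals:

```python
def prefix_search_sql(search_term, limit=100):
    """Build a parameterized prefix-search query for a high-cardinality
    column. LIKE wildcards in user input are escaped so they match
    literally rather than acting as patterns."""
    escaped = (search_term.replace("\\", "\\\\")
                          .replace("%", r"\%")
                          .replace("_", r"\_"))
    sql = ("SELECT DISTINCT customer_name FROM customers "
           "WHERE customer_name LIKE ? ESCAPE '\\' "
           "ORDER BY customer_name LIMIT ?")
    return sql, (escaped + "%", limit)

sql, params = prefix_search_sql("Acme")
# params == ("Acme%", 100); pass both to your DB driver's execute()
```

Prefix matching (`term%`) also keeps the index usable; a leading wildcard (`%term%`) defeats a B-tree index and forces a scan.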
Multi-Column and Cascading Filters {#multi-column-patterns}
Pattern 4: Independent Multi-Column Filters
When you have multiple filters that are independent (e.g., region, product category, date range), the simplest pattern is to define each filter separately without cascading logic. Each filter is independent and can be changed without affecting the others.
When to use: Filters that don’t have dependencies (e.g., filtering by region AND product category, where any region can have any category).
Implementation:
{
  "filter_box": {
    "region": {
      "label": "Region",
      "values": ["APAC", "EMEA", "Americas"]
    },
    "category": {
      "label": "Category",
      "values": ["Electronics", "Clothing", "Home"]
    },
    "date_range": {
      "label": "Date Range",
      "type": "date_range",
      "defaultValue": ["2024-01-01", "2024-12-31"]
    }
  }
}
Each filter is rendered independently. When a user changes any filter, all dependent charts re-query using the new values.
Performance: Each filter renders independently (< 100ms total). No cascading delays.
Gotcha: If filters are not truly independent, users will select combinations that have no data. For example, if you have region and warehouse filters, and a particular warehouse only serves one region, users might select “APAC” and “Warehouse-US” and see no results. Use cascading filters (next pattern) to prevent this.
Pattern 5: Cascading Filters with Conditional Queries
When filters have dependencies, use cascading logic: the values in Filter B depend on the value selected in Filter A. For example, after selecting a region, the warehouse dropdown should only show warehouses in that region.
When to use: Filters with hierarchical or dependent relationships (region → warehouse, country → state → city).
Implementation:
Define filters in dependency order and use template variables to pass values downstream:
{
  "filter_box": {
    "region": {
      "label": "Region",
      "description": "Select region",
      "defaultValue": "APAC",
      "multiple": false,
      "values": ["APAC", "EMEA", "Americas"]
    },
    "warehouse": {
      "label": "Warehouse",
      "description": "Select warehouse",
      "defaultValue": "",
      "multiple": false,
      "isRequired": false,
      "query": "SELECT DISTINCT warehouse_name FROM warehouses WHERE region = '{{region}}' ORDER BY warehouse_name"
    }
  }
}
When the user selects a region, the {{region}} variable is substituted into the warehouse query, and the warehouse dropdown is re-populated with only warehouses in that region.
Performance: First filter renders instantly (hardcoded values). Second filter queries on change (100–300ms). Cascading prevents invalid combinations.
Gotcha: If the upstream filter (region) is changed, the downstream filter (warehouse) might show stale values. Superset should refresh the downstream filter automatically, but in some versions, it doesn’t. Test thoroughly. If it fails, you may need to use native filters instead, which handle cascading more reliably.
Pattern 6: Cascading with Default Values and Null Handling
When a cascading filter’s upstream value is changed, the downstream filter might become invalid. For example, if the user selects warehouse “Sydney” (in APAC) and then changes the region to “Americas”, “Sydney” is no longer valid. Handle this gracefully.
Implementation:
Clear the downstream filter when the upstream filter changes:
{
  "filter_box": {
    "region": {
      "label": "Region",
      "defaultValue": "APAC",
      "multiple": false,
      "values": ["APAC", "EMEA", "Americas"],
      "onChange": "clearDownstream(['warehouse'])"
    },
    "warehouse": {
      "label": "Warehouse",
      "defaultValue": "",
      "multiple": false,
      "query": "SELECT DISTINCT warehouse_name FROM warehouses WHERE region = '{{region}}' ORDER BY warehouse_name"
    }
  }
}
The clearDownstream function clears the warehouse filter when region changes, forcing the user to re-select a warehouse. This prevents invalid filter combinations.
Gotcha: Not all Superset versions support onChange callbacks. Check your version’s documentation. If not supported, consider using native filters, which have better event handling.
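Where onChange callbacks are unavailable, the same invalidation rule can be enforced in whatever layer manages filter state. A minimal sketch with a hypothetical dependency map:

```python
# Hypothetical dependency map: changing a key clears everything listed
# downstream of it (transitively), mirroring clearDownstream's behaviour.
DOWNSTREAM = {"region": ["warehouse"], "warehouse": []}

def apply_filter_change(state, name, value):
    """Set a filter value and recursively clear all downstream filters,
    preventing invalid combinations like region=Americas + warehouse=Sydney."""
    new_state = dict(state)
    new_state[name] = value
    stack = list(DOWNSTREAM.get(name, []))
    while stack:
        child = stack.pop()
        new_state[child] = None          # cleared: user must re-select
        stack.extend(DOWNSTREAM.get(child, []))
    return new_state

state = {"region": "APAC", "warehouse": "Sydney"}
state = apply_filter_change(state, "region", "Americas")
# state == {"region": "Americas", "warehouse": None}
```

The transitive walk matters for deeper chains (country, then state, then city): changing the top filter must clear every level below it, not just the immediate child.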
Performance Optimisation Strategies {#performance-optimisation}
Optimisation 1: Index Your Filter Columns
The most common performance issue is that filter queries do full table scans. Add database indexes to filter columns to make queries fast.
For PostgreSQL:
CREATE INDEX idx_region ON sales_fact(region);
CREATE INDEX idx_warehouse ON warehouses(region, warehouse_name);
For cascading filters, create a composite index on the upstream column first, then the downstream column. This allows the database to use a single index scan instead of a full table scan.
For Amazon Redshift (common in AWS setups), note that Redshift does not support CREATE INDEX. Use distribution and sort keys instead to get the same scan-pruning effect:
ALTER TABLE sales_fact ALTER DISTKEY region;
ALTER TABLE sales_fact ALTER COMPOUND SORTKEY (region);
ALTER TABLE warehouses ALTER COMPOUND SORTKEY (region, warehouse_name);
Performance impact: A query that scans 100M rows without an index (5–10 seconds) might scan 10K rows with an index (50–100ms). This is the single biggest performance win.
Gotcha: Indexes have a cost: they slow down writes and consume storage. For tables that are updated frequently, balance read performance against write performance. For analytical tables that are updated nightly, indexes are almost always worth it.
Optimisation 2: Materialised Views for Complex Filter Queries
If your filter query is complex (e.g., joining multiple tables, aggregating data), materialise it into a table or view that’s refreshed nightly. This way, the filter query is fast (it’s just a SELECT from the materialised view) and fresh (it’s updated every night).
Implementation:
Instead of querying the raw tables:
SELECT DISTINCT warehouse_name
FROM warehouses w
JOIN shipments s ON w.warehouse_id = s.warehouse_id
JOIN sales_fact sf ON s.shipment_id = sf.shipment_id
WHERE w.region = '{{region}}'
AND sf.date >= CURRENT_DATE - INTERVAL '90 days'
ORDER BY warehouse_name;
Create a materialised view:
CREATE MATERIALIZED VIEW mv_active_warehouses AS
SELECT DISTINCT w.warehouse_name, w.region
FROM warehouses w
JOIN shipments s ON w.warehouse_id = s.warehouse_id
JOIN sales_fact sf ON s.shipment_id = sf.shipment_id
WHERE sf.date >= CURRENT_DATE - INTERVAL '90 days';
CREATE INDEX idx_mv_region ON mv_active_warehouses(region);
Then, in the filter box, query the materialised view:
SELECT DISTINCT warehouse_name
FROM mv_active_warehouses
WHERE region = '{{region}}'
ORDER BY warehouse_name;
Refresh the materialised view nightly:
REFRESH MATERIALIZED VIEW mv_active_warehouses;
Performance: The filter query now scans a small, indexed table instead of joining three large tables. Query time drops from 2–5 seconds to 50–100ms.
Gotcha: The materialised view is stale until the next refresh. If you need real-time freshness, this won’t work. Also, maintaining materialised views adds operational overhead—you need to monitor refresh jobs and handle failures.
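Because nightly refresh jobs occasionally fail on transient errors (lock contention, dropped connections), it is worth wrapping the refresh in a simple retry. A sketch, where `run_sql` is any callable that executes SQL against your warehouse; the view name matches the example above:

```python
def refresh_with_retry(run_sql, view="mv_active_warehouses", attempts=3):
    """Run REFRESH MATERIALIZED VIEW, retrying on failure.
    Returns the 1-based attempt number that succeeded; re-raises the
    last error if every attempt fails (so your scheduler can alert)."""
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            run_sql(f"REFRESH MATERIALIZED VIEW {view};")
            return attempt
        except Exception as err:
            last_err = err   # transient failure: try again
    raise last_err
```

Returning the attempt count makes the retry behaviour easy to log and assert in tests. In production you would also add a delay between attempts and alert when the final attempt fails.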
Optimisation 3: Limit Result Sets and Pagination
When a filter query returns thousands of rows, rendering them in a dropdown is slow. Use LIMIT to cap the result set and add pagination or search to let users find what they need.
Implementation:
SELECT DISTINCT customer_name
FROM customers
WHERE customer_name LIKE '{{search_term}}%'
ORDER BY customer_name
LIMIT 100;
If the user doesn’t find what they’re looking for in the first 100 results, they can refine their search.
Performance: Limiting to 100 rows means the query returns quickly and the dropdown renders instantly.
Gotcha: Users might not see the value they want if it’s beyond the limit. Consider increasing the limit to 500 or adding a message like “showing first 100 results, refine your search to see more”. Also, make sure the ORDER BY clause is on an indexed column, or the query will still scan all rows to find the top 100.
Optimisation 4: Cache Filter Results Aggressively
Filter values change infrequently. Cache them for hours or days to avoid querying the database every time the dashboard loads.
In Superset, set the cache TTL on the dataset that powers the filter:
- Go to the dataset settings.
- Find the cache timeout setting (usually in the “Advanced” section).
- Set it to 3600 seconds (1 hour) or higher, depending on how fresh the values need to be.
For Superset clusters with Redis, you can also set cache policies at the cluster level:
# In superset_config.py
CACHE_CONFIG = {
    'CACHE_TYPE': 'RedisCache',  # Flask-Caching backend ('redis' is the deprecated alias)
    'CACHE_REDIS_URL': 'redis://localhost:6379/0',
    'CACHE_DEFAULT_TIMEOUT': 3600,  # seconds
}
Performance: After the first load, filter values are served from cache (< 10ms) instead of querying the database.
Gotcha: Cached values can become stale. If a new region is added to the database, users won’t see it until the cache expires. Document the cache lag and set up alerts to refresh the cache when values change, if needed. For critical filters, you might want a manual “refresh cache” button in the dashboard.
Real-World Deployment Gotchas {#real-world-gotchas}
Gotcha 1: Filter Queries Blocking the Dashboard
In some Superset versions, if a filter query is slow, it blocks the entire dashboard from loading. Users see a loading spinner and can’t interact with anything until the filter finishes.
Symptom: Dashboard takes 30+ seconds to load, even though the underlying data queries are fast.
Root cause: A filter box query is slow, and Superset waits for it to complete before rendering the dashboard.
Solution:
- Identify the slow filter query by looking at Superset’s logs or using your database’s query monitor.
- Optimise the query using the strategies above (add indexes, materialise views, limit results).
- If the query still can’t be made fast enough, cap the result set (e.g., SELECT ... LIMIT 100000) and set a statement-level timeout of around 5 seconds. If the query exceeds the timeout, it returns an error and the filter is left empty, which is better than blocking the dashboard.
Gotcha 2: Cascading Filters with Stale Values
When a cascading filter’s upstream value changes, the downstream filter should refresh. In some Superset versions, it doesn’t, and users see stale values.
Symptom: User selects region “APAC”, then changes to “Americas”. The warehouse dropdown still shows warehouses from APAC.
Root cause: Superset didn’t re-query the downstream filter when the upstream filter changed.
Solution:
- Upgrade Superset: cascading behaviour improved in later releases. Note that the legacy filter box was removed entirely in Superset 4.0, so upgrading that far means migrating to native filters anyway.
- If you can’t upgrade, use native filters instead of filter boxes. Native filters have more reliable cascading logic.
- If neither is an option, add a manual “refresh” button to the dashboard that users can click to refresh all filters.
Gotcha 3: Filter Values Disappearing After Cluster Failover
If your Superset cluster has multiple instances behind a load balancer, and one instance fails, users might be routed to a different instance. If that instance doesn’t have the filter values cached, the filter will be empty.
Symptom: Dashboard loads fine, but the filter dropdown is empty. Refreshing the page sometimes fixes it.
Root cause: Filter values are cached in memory on the instance, not in a shared cache like Redis. When the user is routed to a different instance, the cache is missing.
Solution:
- Use a shared cache like Redis for all Superset instances. This ensures all instances have the same cached values.
- Configure Superset to use Redis; the official Apache Superset installation documentation covers the setup in detail.
Gotcha 4: Filter Queries Breaking After Database Schema Changes
If a column is renamed or removed from the database, filter queries that reference it will break. Users will see an error, and the filter won’t load.
Symptom: Filter was working, now it shows an error. Database team renamed a column.
Root cause: Filter query references a column that no longer exists.
Solution:
- Set up alerts to notify you when queries fail. Most database monitoring tools (Datadog, New Relic) can do this.
- When a schema change happens, immediately update the filter query to use the new column name.
- If you can’t update immediately, temporarily disable the filter or set it to a hardcoded list of values.
Advanced Patterns and Edge Cases {#advanced-patterns}
Pattern 7: Dynamic Filter Lists Based on User Permissions
In some cases, you want to show different filter values to different users. For example, a regional manager should only see warehouses in their region, while a global manager sees all warehouses.
Implementation:
Use Superset’s user context variables in the filter query:
SELECT DISTINCT warehouse_name
FROM warehouses
WHERE region = '{{ current_user.custom_attributes.region }}'
ORDER BY warehouse_name;
Superset will substitute the current user’s custom attribute (region) into the query. You need to set up custom user attributes in Superset first.
Gotcha: Custom user attributes need to be set up in Superset’s user management interface or via API. This adds operational overhead. Also, if a user’s region changes, their custom attributes might not update immediately.
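The substitution itself is mechanical; the important design choice is to fail closed: if the user has no region attribute, refuse to run an unscoped query rather than silently showing all rows. A sketch with a plain dict standing in for Superset's current_user object (the helper name is hypothetical):

```python
def render_user_scoped_query(template, user):
    """Substitute a user attribute into a filter query, failing closed
    when the attribute is missing so no user ever sees unscoped data."""
    region = user.get("custom_attributes", {}).get("region")
    if not region:
        raise ValueError("user has no region attribute; refusing to run unscoped query")
    return template.replace("{{ current_user.custom_attributes.region }}", region)

template = ("SELECT DISTINCT warehouse_name FROM warehouses "
            "WHERE region = '{{ current_user.custom_attributes.region }}'")
query = render_user_scoped_query(template, {"custom_attributes": {"region": "APAC"}})
```

Failing closed turns a mis-provisioned user into a visible error instead of a silent data leak, which is the safer default for permission-scoped filters.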
Pattern 8: Multi-Select Filters with AND vs. OR Logic
When a filter allows multiple selections, the default logic is usually OR (show data where region = “APAC” OR region = “EMEA”). Sometimes you need AND logic, which is more complex.
Implementation for OR (default):
{
  "filter_box": {
    "region": {
      "label": "Region",
      "multiple": true,
      "values": ["APAC", "EMEA", "Americas"]
    }
  }
}
In the dependent chart, use:
SELECT * FROM sales_fact WHERE region IN ({{ region }});
If the user selects [“APAC”, “EMEA”], this becomes WHERE region IN ('APAC', 'EMEA'), which is OR logic.
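If you assemble the IN clause yourself (for example in a custom API layer in front of the warehouse), build it with placeholders rather than string concatenation. A sketch; treating an empty selection as "no filter" is a design choice here, not a Superset rule:

```python
def in_clause(column, values):
    """Build a parameterized IN (...) predicate for a multi-select filter.
    Returns (sql_fragment, params). An empty selection yields a no-op
    predicate so the query matches all rows."""
    if not values:
        return "1 = 1", ()
    placeholders = ", ".join("?" for _ in values)
    return f"{column} IN ({placeholders})", tuple(values)

fragment, params = in_clause("region", ["APAC", "EMEA"])
# fragment == "region IN (?, ?)", params == ("APAC", "EMEA")
```

Placeholders keep user-selected values out of the SQL text entirely, which sidesteps both injection and quoting bugs with values that contain apostrophes.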
Implementation for AND (less common, more complex):
If you need AND logic (e.g., “show sales where region = APAC AND category = Electronics”), use separate filters for each dimension:
{
  "filter_box": {
    "region": {
      "label": "Region",
      "multiple": true,
      "values": ["APAC", "EMEA", "Americas"]
    },
    "category": {
      "label": "Category",
      "multiple": true,
      "values": ["Electronics", "Clothing", "Home"]
    }
  }
}
In the dependent chart:
SELECT * FROM sales_fact WHERE region IN ({{ region }}) AND category IN ({{ category }});
This is AND logic: the query filters by region AND category.
Gotcha: AND logic with multiple selections can return no results if the user selects incompatible values. For example, if Electronics is only sold in APAC, and the user selects [“EMEA”] and [“Electronics”], the query returns no rows. This can confuse users. Consider using cascading filters to prevent incompatible selections.
Pattern 9: Date Range Filters with Rolling Windows
Date range filters are common, but hardcoding them (“last 30 days”) means they need to be updated every day. Use rolling windows to make them dynamic.
Implementation:
Instead of hardcoding dates, use relative dates:
{
  "filter_box": {
    "date_range": {
      "label": "Date Range",
      "type": "date_range",
      "defaultValue": ["{{ dateSubtract(today(), 30) }}", "{{ today() }}"]
    }
  }
}
Superset will substitute the actual dates, so the filter always shows the last 30 days.
Gotcha: Not all Superset versions support dynamic date functions in filter configs. Check your version’s documentation. If not supported, you can use SQL to generate the dates:
SELECT CURRENT_DATE - INTERVAL '30 days' AS start_date, CURRENT_DATE AS end_date;
Then use these values in the filter.
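The rolling-window computation is trivial and worth keeping in one place if you generate filter defaults programmatically. A sketch using only Python's standard library; the injectable `today` parameter exists to make the function testable:

```python
from datetime import date, timedelta

def rolling_window(days, today=None):
    """Compute a rolling [start, end] date range, e.g. 'last 30 days',
    as ISO-8601 strings suitable for a date-range filter default."""
    end = today or date.today()   # injectable for testing; defaults to now
    start = end - timedelta(days=days)
    return start.isoformat(), end.isoformat()

start, end = rolling_window(30)   # always "the last 30 days", never stale
```

Regenerating the default at render time is what keeps the window rolling; hardcoded dates go stale the day after you deploy them.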
Monitoring and Debugging Filter Performance {#monitoring-debugging}
How to Identify Slow Filters
Step 1: Check Superset Logs
Superset logs all queries, including filter queries. Look for queries with high execution times:
grep "filter" /var/log/superset/superset.log | grep "duration"
Look for entries like:
2024-01-15 10:30:45 - superset.sql_lab - INFO - Query duration: 5.234s - SELECT DISTINCT region FROM sales_fact
If a filter query takes > 1 second, it’s slow.
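A few lines of Python can turn the grep above into a report. The regex below assumes the log format shown in the sample line; adjust it to match your actual logging configuration:

```python
import re

# Matches lines like:
#   ... - Query duration: 5.234s - SELECT DISTINCT region FROM sales_fact
LOG_RE = re.compile(r"Query duration: (?P<secs>[\d.]+)s - (?P<sql>.+)$")

def slow_queries(lines, threshold_secs=1.0):
    """Yield (duration_seconds, sql) for every log line whose query
    duration exceeds the threshold."""
    for line in lines:
        m = LOG_RE.search(line)
        if m and float(m.group("secs")) > threshold_secs:
            yield float(m.group("secs")), m.group("sql")

sample = [
    "2024-01-15 10:30:45 - superset.sql_lab - INFO - Query duration: 5.234s - SELECT DISTINCT region FROM sales_fact",
    "2024-01-15 10:30:46 - superset.sql_lab - INFO - Query duration: 0.050s - SELECT 1",
]
hits = list(slow_queries(sample))
# hits == [(5.234, "SELECT DISTINCT region FROM sales_fact")]
```

Pipe your log file through this on a schedule and you have a crude but effective slow-filter report before users start complaining.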
Step 2: Check Database Query Performance
Run the filter query directly in your database client and time it:
EXPLAIN ANALYZE SELECT DISTINCT region FROM sales_fact;
If the query plan shows a full table scan, add an index.
Step 3: Monitor Dashboard Load Time
Use your browser’s developer tools to measure dashboard load time:
- Open the dashboard.
- Open the browser’s Network tab (F12 → Network).
- Reload the page.
- Look for API calls to /api/v1/dataset/... that correspond to filter queries.
- Check the time taken for each request.
If a single request takes > 2 seconds, that filter is slow.
Debugging Tools and Commands
Enable Superset Debug Logging:
# In superset_config.py
LOG_LEVEL = 'DEBUG'
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'standard',
            'stream': 'ext://sys.stdout'
        },
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console']
    }
}
Restart Superset and watch the logs as you interact with filters.
Use Superset’s Query History:
- In Superset, open SQL Lab → Query History.
- Look for slow queries.
- Click on a query to see its details, including execution time and database.
Profile Database Queries:
For PostgreSQL:
EXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT region FROM sales_fact;
This shows the query plan and actual execution stats. Look for:
- Full table scans (Seq Scan) on large tables.
- Index scans that return many rows (high actual rows).
- Hash joins or nested loops that might be slow.
Migration and Rollout Strategy {#migration-strategy}
If you’re deploying new filter patterns to a production Superset cluster, follow this strategy to minimise risk.
Phase 1: Test in Development
- Set up a development Superset instance with a copy of production data.
- Implement the new filter pattern on a test dashboard.
- Load test the dashboard with a tool like Apache JMeter or Locust to simulate concurrent users.
- Measure filter load time, dashboard load time, and database query time.
- Identify bottlenecks and optimise.
Phase 2: Test in Staging
- Deploy to a staging Superset instance that mirrors production.
- Have a few power users test the dashboard and provide feedback.
- Run performance tests again to ensure results match development.
- Monitor logs and database performance.
Phase 3: Gradual Rollout to Production
- Deploy to production during a low-traffic window (e.g., early morning).
- Monitor dashboard load time and filter query performance for the first hour.
- If performance is good, announce the change to users.
- If performance degrades, roll back immediately.
Rollback Plan
If something goes wrong, you need to roll back quickly. Keep the old filter configuration in version control and be ready to redeploy it:
git checkout HEAD~1 -- dashboards/sales_dashboard.json
kubectl apply -f dashboards/sales_dashboard.json
Summary and Next Steps {#summary}
Filter box patterns are critical to dashboard performance and user adoption. The patterns in this guide—from simple hardcoded dropdowns to complex cascading filters—are battle-tested in production environments.
Key takeaways:
- Start simple: Hardcoded values are fastest. Use them when possible.
- Cache aggressively: Filter values change infrequently. Cache them for hours or days.
- Index your columns: Add database indexes to filter columns. This is the single biggest performance win.
- Cascade carefully: Cascading filters prevent invalid combinations but add complexity. Test thoroughly.
- Monitor relentlessly: Set up logging and alerts to catch slow filters before users notice.
Next steps:
- Audit your current Superset dashboards. Identify filter queries that take > 1 second.
- Apply the optimisation strategies above (add indexes, materialise views, cache results).
- Set up monitoring to track filter performance over time.
- Document your filter patterns for your team. This guide can serve as a reference.
If you’re building a production data platform and need help optimising Superset clusters, designing filter architectures, or implementing platform engineering best practices, PADISO specialises in helping teams ship scalable analytics infrastructure. We’ve optimised Superset deployments for teams ranging from seed-stage startups to enterprise organisations, and we bring the operational rigour needed to keep dashboards fast and reliable. Reach out if you’d like to discuss your specific setup.
For more details on Superset architecture and best practices, check out Preset’s Dashboard Filtering documentation and the Apache Superset GitHub repository, which contains insights into Superset’s design and development patterns.
Filter performance is not a one-time fix—it’s an ongoing practice. Monitor your dashboards, measure your queries, and iterate. The patterns in this guide will serve you well as your data platform scales.