
Public Health Surveillance Dashboards: Patterns From State Health Departments

Learn how state health departments build effective surveillance dashboards for notifiable diseases, immunisation, and outbreak detection. Real patterns and technical insights.

The PADISO Team · 2026-05-03


Table of Contents

  1. Why Public Health Surveillance Dashboards Matter
  2. Core Data Patterns in State Health Surveillance
  3. Notifiable Disease Tracking: Architecture and Real-Time Reporting
  4. Immunisation Dashboards: Coverage, Equity, and Timeliness
  5. Building for Outbreak Detection and Response
  6. Technical Implementation: Superset, D23.io, and Modern Stack
  7. Data Quality, Governance, and Audit-Readiness
  8. User Experience and Non-Technical Access Patterns
  9. Scaling Across Multiple Jurisdictions
  10. Next Steps: Building or Upgrading Your Surveillance Infrastructure

Why Public Health Surveillance Dashboards Matter {#why-matter}

Public health surveillance dashboards are no longer optional infrastructure—they are the operational backbone of modern disease monitoring, outbreak response, and population health management. When a state health department can see notifiable disease trends, immunisation coverage gaps, and anomalous case clusters in real time, response time collapses from weeks to hours. Lives depend on speed.

The stakes are concrete. A study evaluating usability of federal and state public health dashboards found that health departments using integrated, real-time dashboards detected outbreaks 40–60% faster than those relying on manual reporting and weekly aggregations. The same research showed that dashboards designed for epidemiological clarity—not just data volume—improved equity assessments by allowing teams to disaggregate cases by geography, demographics, and vulnerability.

In Australia, state health departments face similar pressures. Notifiable disease lists span dozens of conditions: measles, whooping cough, COVID-19, dengue, and foodborne pathogens. Immunisation programs track millions of doses across age cohorts, vaccination schedules, and equity targets. Surveillance must be real-time, granular, and actionable. Yet many state systems still rely on legacy databases, manual Excel workflows, and weekly batch reports. The gap between what’s possible and what’s deployed is enormous.

This guide distils patterns from state health surveillance deployments, including recent work with Australian state health departments on notifiable disease, immunisation, and syndromic surveillance analytics, delivered as a Superset deployment on D23.io’s managed stack. We’ll walk through architecture, data patterns, user experience design, and the operational practices that make dashboards actually work in high-pressure public health environments.


Core Data Patterns in State Health Surveillance {#core-patterns}

The Notifiable Disease Model

Notifiable disease surveillance operates on a simple but critical model: when a case is diagnosed, it must be reported to the state health department within a defined window (often 24–48 hours). The dashboard ingests these reports and surfaces them for epidemiological investigation, contact tracing, and outbreak response.

The data model looks like this (a SQL sketch follows the list):

  • Case-level records: Patient identifier (de-identified), disease code, date of onset, date of report, location (postcode, LGA, region), demographics (age, sex, vaccination status where relevant), exposure information, and outcome.
  • Aggregation layers: Daily case counts by disease, location, and demographics; rolling 7-day averages to smooth noise; cumulative counts for the reporting year; comparison to historical baseline (same period last year).
  • Timeliness metrics: Days from onset to report, days from report to dashboard visibility, percentage of cases reported within 48 hours.
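
To make the model concrete, here is a minimal sketch of how the case-level and aggregation layers might look in PostgreSQL. The table, view, and column names are illustrative rather than a prescribed standard.

```sql
-- Case-level records: one row per notified case, already de-identified.
CREATE TABLE notifiable_case (
    case_id            TEXT PRIMARY KEY,      -- encrypted, de-identified identifier
    disease_code       TEXT NOT NULL,
    onset_date         DATE,
    report_date        DATE NOT NULL,
    postcode           TEXT,
    lga                TEXT,
    region             TEXT,
    age_group          TEXT,                  -- age bands only, never full date of birth
    sex                TEXT,
    vaccination_status TEXT,
    outcome            TEXT,
    source_system      TEXT NOT NULL,         -- provenance: which LIS/EHR sent the record
    ingested_at        TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Aggregation layer: daily counts with a 7-day rolling average per disease and LGA.
-- (Days with zero cases need a calendar join in practice; omitted here for brevity.)
CREATE VIEW daily_case_counts AS
SELECT
    disease_code,
    lga,
    report_date,
    COUNT(*) AS cases,
    AVG(COUNT(*)) OVER (
        PARTITION BY disease_code, lga
        ORDER BY report_date
        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
    ) AS rolling_7d_avg
FROM notifiable_case
GROUP BY disease_code, lga, report_date;
```

Timeliness metrics fall out of the same table: report_date minus onset_date gives days from onset to report.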

Key insight: the dashboard must show both raw counts and timeliness. A state reporting 100 measles cases looks different if 80 arrived within 48 hours versus 20. Timeliness determines response capacity.

Immunisation Coverage Dashboards

Immunisation dashboards track population-level coverage against targets. The data model includes:

  • Dose administration records: Vaccine type, date administered, location (clinic, LGA, region), patient age and demographics, manufacturer, batch number.
  • Coverage calculations: Percentage of population in each age cohort who have received each required dose; comparison to national targets (e.g., 95% for routine childhood vaccines).
  • Equity layers: Coverage disaggregated by geography (urban vs. rural), socioeconomic status (via postcode indexing), Aboriginal and Torres Strait Islander status, and migrant communities.
  • Timeliness: Percentage of cohort vaccinated on schedule; days from eligibility to administration.

The challenge: immunisation data lives in fragmented systems. GPs report to state registries. Community health clinics report separately. Pharmacies report vaccine supply but not always administration. Dashboards must aggregate across these silos and handle reporting delays (some clinics report weekly, others monthly).

Syndromic Surveillance and Outbreak Detection

Syndromic surveillance captures early warning signals before formal diagnosis. Data sources include:

  • Emergency department chief complaints: Fever, cough, diarrhoea, rash (coded by syndrome).
  • Telehealth call volumes: Calls mentioning specific symptoms.
  • Pharmacy over-the-counter sales: Cough medicine, antidiarrhoeal, thermometers.
  • Wastewater pathogen detection: SARS-CoV-2, poliovirus, and mpox nucleic acid levels.

The CDC’s National Syndromic Surveillance Program dashboards exemplify this approach. These signals feed anomaly detection algorithms that flag unusual increases in specific syndromes or locations, triggering investigation before confirmed cases accumulate.


Notifiable Disease Tracking: Architecture and Real-Time Reporting {#notifiable-disease}

Data Ingestion and Timeliness

Notifiable disease dashboards must ingest reports within hours, not days. The technical pattern:

  1. Source systems: LIS (laboratory information systems), EHRs (electronic health records), and standalone notification forms feed case data to a central intake point.
  2. Validation layer: Automated checks confirm required fields, flag duplicates, validate disease codes and locations, and route invalid records for manual review.
  3. De-identification: PII (names, dates of birth, full addresses) is stripped; patients are identified by encrypted ID, postcode, and age group only.
  4. Real-time indexing: Validated records land in a searchable database (PostgreSQL or similar) within 30 minutes of submission.
  5. Dashboard refresh: Superset queries the database and refreshes every 15–30 minutes, so epidemiologists see new cases near-real-time.

This pipeline requires robust error handling. If a lab system crashes and a backlog of 500 cases accumulates, the pipeline must ingest them in bulk without creating duplicates. If a postcode lookup fails, the record must queue for manual geocoding rather than blocking the entire pipeline.
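
One common way to make bulk re-ingestion safe is an idempotent upsert keyed on the case identifier. The sketch below assumes an illustrative staging_case table alongside the notifiable_case layout sketched earlier.

```sql
-- Re-ingest a backlog from staging without creating duplicates.
-- Records already present (same case_id) are skipped; new ones are inserted.
INSERT INTO notifiable_case (
    case_id, disease_code, onset_date, report_date,
    postcode, lga, region, age_group, sex,
    vaccination_status, outcome, source_system
)
SELECT
    s.case_id, s.disease_code, s.onset_date, s.report_date,
    s.postcode, s.lga, s.region, s.age_group, s.sex,
    s.vaccination_status, s.outcome, s.source_system
FROM staging_case s
WHERE s.validation_status = 'passed'      -- only records that cleared validation
ON CONFLICT (case_id) DO NOTHING;         -- idempotent: the batch can be replayed safely
```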

Epidemiological Visualisation Patterns

Notifiable disease dashboards typically show:

  • Trend lines: Daily or weekly case counts for each disease, with 95% confidence intervals and comparison to baseline (historical average for the same period).
  • Geographic heat maps: Cases per 100,000 population by LGA or postcode, updated daily. Hot spots trigger investigation.
  • Case trees: For outbreak investigation, dashboards show case networks: who had contact with whom, in what locations, over what time period. This enables rapid contact tracing.
  • Demographics: Age pyramids, sex distribution, and vaccination status for each disease. If measles clusters in unvaccinated 5–9-year-olds in a specific suburb, the dashboard makes this visible instantly.
  • Timeliness metrics: Percentage of cases reported within 24 and 48 hours; median days from onset to report. If timeliness drops, the health department knows response capacity is degrading.

Key principle: every chart must answer an operational question. “How many cases today?” is less useful than “Are cases arriving on time? Are they concentrated in one location? What’s the vaccination status of cases?” The dashboard guides investigation and response.

Handling Reporting Delays and Corrections

Notifiable disease data is messy. Cases reported today might have had onset three weeks ago. A case initially reported as measles might be reclassified as rubella after serology. The dashboard must handle this:

  • Epidemic curves by date of onset, not date of report: This shows the true disease trajectory, not reporting artifacts.
  • Revision flags: When a case is corrected, the dashboard notes the change and allows users to toggle between “current” and “historical” views.
  • Timeliness buckets: Cases are categorised as “reported within 48 hours,” “reported 3–7 days late,” and “reported >7 days late.” This reveals systemic reporting delays and guides process improvement.

In practice, a state health department might see 50 measles cases reported on a Tuesday, but epidemiological analysis reveals only 30 had true onset in the past week; 20 are historical cases caught up in the reporting queue. The dashboard must distinguish these clearly.
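
In query terms, the distinction is simply which date column the epidemic curve groups by. The sketch below uses the illustrative schema from earlier: group by onset date, and bucket each case by reporting delay.

```sql
-- Epidemic curve by date of onset, with reporting-delay buckets.
SELECT
    onset_date,
    COUNT(*) AS cases,
    COUNT(*) FILTER (WHERE report_date - onset_date <= 2)            AS reported_within_48h,
    COUNT(*) FILTER (WHERE report_date - onset_date BETWEEN 3 AND 7) AS reported_3_to_7_days,
    COUNT(*) FILTER (WHERE report_date - onset_date > 7)             AS reported_over_7_days
FROM notifiable_case
WHERE disease_code = 'MEASLES'             -- illustrative code value
  AND onset_date >= CURRENT_DATE - 90
GROUP BY onset_date
ORDER BY onset_date;
```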


Immunisation Dashboards: Coverage, Equity, and Timeliness {#immunisation}

Coverage Calculation and Denominators

Immunisation dashboards measure coverage as: (number of people in age cohort who received vaccine) / (total population in age cohort) × 100.

The challenge: denominators are hard. A state health department might know it administered 50,000 doses of vaccine A to 2-year-olds last year. But how many 2-year-olds live in the state? Census data is annual; births are reported monthly with lag. Some families move interstate mid-year. Denominators must be estimated, and estimates must be refreshed quarterly.
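
A coverage metric is then a join between dose records and an externally maintained denominator table. The table names below, and the assumption that the denominator is refreshed quarterly from ABS estimates, are illustrative.

```sql
-- Coverage by vaccine and LGA for the 12-month cohort.
-- population_estimate is refreshed quarterly from ABS estimated resident population.
SELECT
    d.vaccine_code,
    d.lga,
    COUNT(DISTINCT d.patient_key)                                  AS vaccinated,
    p.population                                                   AS cohort_population,
    ROUND(100.0 * COUNT(DISTINCT d.patient_key) / p.population, 1) AS coverage_pct
FROM dose_administration d
JOIN population_estimate p
  ON p.lga = d.lga
 AND p.age_cohort = '12_months'
WHERE d.age_cohort = '12_months'
GROUP BY d.vaccine_code, d.lga, p.population;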

Best practice dashboards show:

  • Coverage by vaccine and age cohort: e.g., “DPT dose 1 coverage in 12-month-olds: 94.2% (target: 95%).”
  • Denominator source and date: “Denominator: ABS estimated resident population, June 2024. Last updated: 30 August 2024.”
  • Confidence intervals: Because denominators are estimates, coverage estimates have uncertainty. Showing 94.2% ± 2% is more honest than 94.2%.
  • Trend: Coverage over time. If coverage was 96% in 2023 and 94% in 2024, something changed. The dashboard flags this.

Equity Disaggregation

Immunisation coverage varies dramatically by geography and demographics. A state might have 95% coverage overall but only 78% in remote Aboriginal communities and 82% in recently arrived migrant neighbourhoods. Dashboards must surface these disparities:

  • Geographic disaggregation: Coverage by LGA, by postcode, by remoteness classification (urban, regional, remote). Colour-code LGAs by coverage level to spot cold spots.
  • Socioeconomic disaggregation: Postcode-level socioeconomic indices (SEIFA) correlate strongly with coverage. Dashboards can overlay coverage against SEIFA deciles.
  • Cultural and linguistic disaggregation: Where data is available, disaggregate by Aboriginal status, CALD (culturally and linguistically diverse) status, and language spoken at home. These are strong equity markers.
  • Timeliness disaggregation: On-time vaccination (within 4 weeks of eligibility) often varies by geography. Remote areas might have 85% on-time coverage because families must travel for clinics.

Equity dashboards require careful framing. Showing “Aboriginal coverage is 78%” risks stigma unless paired with context: “Lower coverage in remote areas reflects access barriers (distance to clinic, cost of travel). Funding mobile clinics in these regions increased coverage by 8 percentage points last year.”

Timeliness and Schedule Adherence

Vaccines must be given at specific ages to be effective. A dose given 6 months late is less protective than one given on schedule. Dashboards track:

  • On-schedule vaccination: Percentage of cohort vaccinated within the recommended window (e.g., DPT dose 1 by 4 months of age).
  • Delayed vaccination: Percentage vaccinated 1–3 months late, >3 months late.
  • Unvaccinated: Percentage not yet vaccinated (may be due soon, or may be refusing).

Trend analysis is critical. If on-time vaccination drops from 92% to 87% over three months, the dashboard should flag this. Causes might include: clinic closures, staff shortages, supply chain issues, or vaccine hesitancy. Early detection enables rapid response.
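
Schedule adherence is essentially date arithmetic: compare the administration date with the date the child became eligible. The sketch below assumes an illustrative dose_administration table that records both dates; the “not yet vaccinated” bucket additionally needs the cohort denominator, as in the coverage query earlier.

```sql
-- On-time versus delayed vaccination for a given dose, by LGA.
SELECT
    lga,
    ROUND(100.0 * COUNT(*) FILTER (WHERE administered_date - eligible_date <= 28)
                / COUNT(*), 1) AS on_time_pct,
    ROUND(100.0 * COUNT(*) FILTER (WHERE administered_date - eligible_date BETWEEN 29 AND 90)
                / COUNT(*), 1) AS delayed_1_to_3_months_pct,
    ROUND(100.0 * COUNT(*) FILTER (WHERE administered_date - eligible_date > 90)
                / COUNT(*), 1) AS delayed_over_3_months_pct
FROM dose_administration
WHERE vaccine_code = 'DTPa_1'              -- illustrative dose identifier
GROUP BY lga;
```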


Building for Outbreak Detection and Response {#outbreak-detection}

Real-Time Anomaly Detection

Outbreak detection dashboards must flag unusual increases in cases before epidemiologists manually notice them. The technical pattern:

  1. Baseline establishment: For each disease and location, calculate the expected number of cases for each week based on historical data (same week in past 3–5 years).
  2. Anomaly scoring: Each week, compare actual cases to baseline. If actual > baseline × 1.5 (or other threshold), flag as anomaly.
  3. Filtering: Many anomalies are noise (small-number variations in low-incidence diseases). Apply statistical tests (e.g., Poisson regression) to distinguish signal from noise.
  4. Alerting: Anomalies above threshold trigger automated alerts to epidemiologists (email, SMS, dashboard notification).
  5. Investigation workflow: Dashboard links anomalies to case details, enabling rapid investigation.

Example: A state’s syndromic surveillance system detects a 40% increase in “fever + cough” presentations in emergency departments in the northern region on a Tuesday. The dashboard flags this as anomalous (baseline is 120 presentations; actual is 168). Epidemiologists click through and see cases are concentrated in one suburb and correlate with a cluster of confirmed influenza cases. They initiate outbreak response within 2 hours of detection.
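
A simple threshold version of steps 1 and 2 can be expressed directly in SQL. The weekly_baseline table and the fixed 1.5× multiplier below are illustrative; in practice a statistical test replaces the hard threshold.

```sql
-- Flag disease/region combinations where this week's cases exceed 1.5x the historical baseline.
WITH this_week AS (
    SELECT disease_code, region, COUNT(*) AS actual_cases
    FROM notifiable_case
    WHERE report_date >= date_trunc('week', CURRENT_DATE)
    GROUP BY disease_code, region
)
SELECT
    t.disease_code,
    t.region,
    t.actual_cases,
    b.expected_cases,                       -- mean for the same week over the past 3-5 years
    ROUND(t.actual_cases::numeric / NULLIF(b.expected_cases, 0), 2) AS ratio
FROM this_week t
JOIN weekly_baseline b
  ON b.disease_code = t.disease_code
 AND b.region = t.region
 AND b.week_of_year = EXTRACT(week FROM CURRENT_DATE)
WHERE t.actual_cases > 1.5 * b.expected_cases;
```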

Contact Tracing and Case Networks

For outbreaks requiring contact tracing (measles, COVID-19, monkeypox), dashboards must visualise case networks:

  • Case nodes: Each confirmed case is a node, coloured by disease status (confirmed, probable, suspected).
  • Contact edges: Lines connect cases with known contacts or shared exposures (same event, same workplace, same household).
  • Timeline: Nodes are positioned horizontally by date of onset, showing disease progression over time.
  • Metadata: Hovering over a case shows demographics, vaccination status, symptom onset, date confirmed, and investigation status.

This visualisation enables rapid identification of transmission chains, high-risk contacts, and outbreak sources. A measles outbreak might show a single case (index case) with 12 secondary cases, half unvaccinated, clustered in one school. The dashboard makes this pattern instantly visible.
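
Behind a network chart like this sit two simple result sets: a node list and an edge list. A sketch, assuming the case table also carries illustrative outbreak_id and classification fields and that contacts are recorded in a case_contact link table.

```sql
-- Nodes: cases in the current outbreak, coloured by classification in the chart.
SELECT case_id, classification, onset_date, vaccination_status
FROM notifiable_case
WHERE outbreak_id = 'OB-2026-014';          -- illustrative outbreak identifier

-- Edges: pairs of cases with a recorded contact or shared exposure.
SELECT
    cc.case_id         AS source_case,
    cc.contact_case_id AS target_case,
    cc.exposure_setting                      -- household, school, workplace, event
FROM case_contact cc
JOIN notifiable_case n ON n.case_id = cc.case_id
WHERE n.outbreak_id = 'OB-2026-014';
```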

Situational Awareness for Response Teams

Outbreak response teams need executive summaries:

  • Current status: Total confirmed/probable cases, new cases in past 24 hours, hospitalisations, deaths.
  • Geographic spread: Cases by LGA; map showing outbreak epicentre.
  • Timeline: Daily case count for past 30 days, with trend arrow (increasing, stable, decreasing).
  • Bottlenecks: Cases pending investigation, contacts pending follow-up, test results pending.
  • Resource allocation: Testing capacity utilised, vaccination clinic locations, contact tracing team availability.

During the COVID-19 pandemic, state COVID-19 data dashboards became critical tools for public health leadership. States that updated dashboards daily and made them publicly accessible gained public trust and enabled faster response. Dashboards that were opaque or updated slowly eroded confidence.


Technical Implementation: Superset, D23.io, and Modern Stack {#technical-implementation}

Why Apache Superset for Public Health Surveillance

Apache Superset is a widely used open-source BI tool for public health surveillance because it balances power, cost, and operational simplicity. Key advantages:

  • Multi-source data: Superset connects to PostgreSQL, MySQL, Snowflake, and other databases. Public health data lives in multiple systems; Superset aggregates seamlessly.
  • SQL layer: Epidemiologists and analysts write SQL queries to define metrics. No vendor lock-in; SQL is portable.
  • Semantic layer: Define reusable metrics (“cases per 100,000,” “coverage %,” “days to report”) once; use across all dashboards (see the sketch after this list).
  • Row-level security: Different users see different data. A regional health officer sees only their region; a state epidemiologist sees all regions.
  • Embedded dashboards: Dashboards can be embedded in state health department websites, making data public.
  • Cost: Open-source; can be self-hosted or run on D23.io’s managed stack. No per-user licensing.
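
One way to implement the semantic layer is as SQL views (or Superset virtual datasets) that encode each metric exactly once. The view below sketches “cases per 100,000” using the illustrative table names from earlier sections.

```sql
-- Reusable metric: cases per 100,000 population, by disease, LGA, and week.
-- Defined once; every dashboard that charts incidence reads from this view.
CREATE VIEW cases_per_100k AS
SELECT
    c.disease_code,
    c.lga,
    date_trunc('week', c.report_date)::date             AS week_starting,
    COUNT(*)                                             AS cases,
    ROUND(100000.0 * COUNT(*) / p.population, 1)         AS cases_per_100k
FROM notifiable_case c
JOIN population_estimate p
  ON p.lga = c.lga
 AND p.age_cohort = 'all_ages'
GROUP BY c.disease_code, c.lga, date_trunc('week', c.report_date), p.population;
```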

Recent work with Australian state health departments deployed Superset on D23.io’s managed stack for notifiable disease, immunisation, and surveillance analytics. The deployment included:

  • Database layer: PostgreSQL instance containing notifiable disease records, immunisation administration records, and syndromic surveillance data.
  • Semantic layer: Pre-built metrics for case counts, coverage %, timeliness, and anomaly flags.
  • Dashboard suite: 15+ dashboards covering notifiable diseases, immunisation, and outbreak response.
  • SSO integration: Single sign-on via state health department identity provider (Azure AD or similar).
  • Refresh schedule: Dashboards refresh every 15 minutes for real-time data; some metrics refresh hourly.
  • Training and handover: 2 days of analyst training; documentation for dashboard maintenance and metric updates.

This deployment took 6 weeks from kickoff to production. Cost was fixed-fee; outcomes included dashboards live, analysts trained, and audit-ready documentation delivered. The $50K D23.io consulting engagement breakdown details what’s included in a typical Superset rollout: architecture, SSO, semantic layer, dashboards, and training.

Data Warehouse Architecture

Public health surveillance dashboards require a data warehouse that handles:

  • High write volume: Thousands of notifiable disease cases ingested daily; millions of immunisation records.
  • Low-latency reads: Dashboards must query and refresh in seconds, not minutes.
  • Data quality: Invalid or duplicate records must be caught and quarantined, not silently corrupted.
  • Audit trail: Every record must have provenance: when it was ingested, from which source, and what corrections were made.

Architecture pattern:

Source Systems (LIS, EHR, Immunisation Registry)
  ↓
Data Intake Layer (Validation, De-identification, Deduplication)
  ↓
Staging Database (PostgreSQL, raw data)
  ↓
Transform Layer (dbt or Airflow, business logic)
  ↓
Analytics Database (PostgreSQL or Snowflake, optimised for queries)
  ↓
Superset (BI tool, dashboards)

Key principle: separate staging (raw, untransformed data) from analytics (clean, business-logic data). This allows reprocessing if logic changes without losing raw data.

Agentic AI and Natural Language Querying

One emerging pattern: agentic AI + Apache Superset integration allows non-technical users to query dashboards using natural language. An epidemiologist might ask: “Show me measles cases by postcode for the past 30 days, ordered by case count.” An agentic AI system (e.g., Claude) translates this to SQL, runs the query in Superset, and returns results.

This is valuable for public health because epidemiologists are domain experts, not SQL experts. Agentic AI reduces the analyst bottleneck. However, it requires careful implementation: agents must not hallucinate data, must respect row-level security, and must flag when queries are ambiguous.
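
For the example above, the SQL the agent needs to produce is not complicated; the value, and the risk, lies in generating it reliably and only within the user’s row-level permissions. A sketch using the illustrative schema from earlier sections:

```sql
-- "Show me measles cases by postcode for the past 30 days, ordered by case count."
SELECT postcode, COUNT(*) AS cases
FROM notifiable_case
WHERE disease_code = 'MEASLES'
  AND report_date >= CURRENT_DATE - 30
GROUP BY postcode
ORDER BY cases DESC;
```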


Data Quality, Governance, and Audit-Readiness {#data-governance}

Data Quality Dimensions

Public health surveillance data must be high-quality. Poor data leads to missed outbreaks, wasted resources, and public health failures. Quality dimensions:

  • Completeness: Are all required fields present? A notifiable disease record without a location is useless for geographic analysis.
  • Accuracy: Are values correct? A case reported as 2-year-old when the patient is 22 is a data entry error.
  • Timeliness: Are records available when needed? If cases arrive 3 weeks late, outbreak response is crippled.
  • Consistency: Are values consistent across systems? If one LIS reports measles as “Measles” and another as “Measles (confirmed),” aggregation breaks.
  • Uniqueness: Are duplicates detected and removed? A case reported twice is not two cases.

Dashboards must surface data quality metrics:

  • Completeness scorecard: Percentage of records with all required fields, by data source.
  • Timeliness trend: Median days from onset to report, by disease and data source.
  • Duplicate rate: Percentage of records flagged as potential duplicates (same patient, same disease, same date of onset).
  • Validation failure rate: Percentage of inbound records that fail validation checks (invalid disease code, invalid postcode, missing required field).

If a data source suddenly has 20% missing postcodes, the dashboard alerts the team. Investigation might reveal the source system changed its export format, or staff are skipping the postcode field. Early detection enables rapid remediation.
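
These quality metrics are themselves just queries over the case data. Below is a sketch of a completeness scorecard and a duplicate check, assuming an illustrative de-identified patient_key column.

```sql
-- Completeness scorecard: share of recent records with all required fields, by source system.
SELECT
    source_system,
    ROUND(100.0 * COUNT(*) FILTER (
        WHERE disease_code IS NOT NULL
          AND onset_date   IS NOT NULL
          AND postcode     IS NOT NULL
          AND age_group    IS NOT NULL
    ) / COUNT(*), 1) AS complete_pct
FROM notifiable_case
WHERE report_date >= CURRENT_DATE - 30
GROUP BY source_system;

-- Potential duplicates: same de-identified patient, same disease, same onset date.
SELECT patient_key, disease_code, onset_date, COUNT(*) AS record_count
FROM notifiable_case
GROUP BY patient_key, disease_code, onset_date
HAVING COUNT(*) > 1;
```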

Governance and Access Control

Notifiable disease data is sensitive. A state health department must implement strict access control:

  • Role-based access: State epidemiologists see all data. Regional health officers see only their region. GPs see only their own notifications.
  • Row-level security: Implemented at the database level, not the BI tool. A regional health officer’s query for “all measles cases” returns only cases in their region, automatically (see the sketch after this list).
  • Audit logging: Every dashboard view, every query, every data download is logged. Who accessed what, when, and from where.
  • Data retention: Notifiable disease records are kept for 7 years (regulatory requirement). After 7 years, records are de-identified further or deleted.
  • Breach response: If unauthorised access is detected, the dashboard disables the account and alerts security.
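
PostgreSQL’s built-in row-level security is one way to enforce the regional restriction at the database layer. A minimal sketch, assuming the application or SSO integration sets the user’s region and role as session variables:

```sql
-- Enable row-level security on the case table.
ALTER TABLE notifiable_case ENABLE ROW LEVEL SECURITY;

-- Regional officers see only their own region; state epidemiologists see everything.
CREATE POLICY region_access ON notifiable_case
    FOR SELECT
    USING (
        region = current_setting('app.user_region', true)
        OR current_setting('app.user_role', true) = 'state_epidemiologist'
    );
```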

AI automation for government: public services and administrative tasks discusses governance patterns for government AI systems. The same principles apply to surveillance dashboards: transparency, auditability, and human oversight.

Compliance and Audit-Readiness

Public health surveillance systems must pass security audits. Common frameworks:

  • SOC 2 Type II: Attestation that the system has appropriate controls for security, availability, and confidentiality. Required by many health departments.
  • ISO 27001: International standard for information security. Some state health departments require this.
  • HIPAA equivalent (Australia): The Privacy Act 1988 (Cth) and state legislation such as the Health Records Act 2001 (Vic) require privacy-by-design. Dashboards must implement de-identification, access control, and audit logging.

Key audit-ready practices:

  • Encryption in transit and at rest: Data travelling from source systems to dashboard is encrypted (TLS). Data stored in database is encrypted (at-rest encryption).
  • Access control documentation: Every user account, every role, every permission is documented. Auditors can verify that access is appropriate.
  • Change management: Every change to dashboard SQL, every schema change, every access rule change is logged with timestamp, author, and approval.
  • Incident response plan: If a dashboard query fails or a data quality issue is detected, the team has a documented response procedure.

Implementing audit-readiness via Vanta (or similar compliance automation tools) streamlines this. These tools integrate with cloud infrastructure, databases such as PostgreSQL, and identity providers to automatically verify controls and generate audit evidence. A state health department can demonstrate compliance to auditors without manual evidence gathering.


User Experience and Non-Technical Access Patterns {#user-experience}

Dashboard Design for Epidemiologists

Epidemiologists are domain experts but often not data analysts. Dashboards must be intuitive:

  • Executive summary first: Top of dashboard shows key metrics (total cases, new cases, trend). Epidemiologists get situational awareness in 10 seconds.
  • Drill-down capability: Clicking on a metric (e.g., “45 cases”) drills into details (which diseases, which locations, which dates). Exploration is guided, not free-form.
  • Consistent colour coding: Red = alert (above baseline, unusual increase). Yellow = caution (approaching threshold). Green = normal. Consistency across all dashboards reduces cognitive load.
  • Tooltips and metadata: Hovering over a chart shows the underlying data, the calculation method, and the data source. Epidemiologists can verify data integrity.
  • Mobile-friendly: Epidemiologists need dashboards on tablets during outbreak response meetings. Responsive design is essential.

Supporting Non-Technical Users

Not all users are analysts. Health promotion officers, clinic managers, and community health workers need dashboards too. Support patterns:

  • Pre-built filters: Instead of writing SQL, users select from dropdowns: “Show me immunisation coverage in [LGA], for [vaccine], for [age cohort].” Filters are pre-built and validated.
  • Export to Excel: Users can export dashboard data to Excel for further analysis or reporting. Exports include metadata (date generated, data source, caveats).
  • Guided reports: Dashboards include pre-built reports (e.g., “Weekly notifiable disease summary,” “Monthly immunisation coverage report”). Users can generate these with one click.
  • Help documentation: Every dashboard has a help button linking to documentation. What does this metric mean? How is it calculated? What should I do if I see an anomaly?

Training and Change Management

Rolling out a new surveillance dashboard requires training:

  • Group training: 2–3 hour session covering dashboard navigation, metric interpretation, and common use cases.
  • Role-specific training: Epidemiologists learn outbreak investigation workflows. Managers learn to interpret coverage metrics. Clinic staff learn to verify their own data.
  • Office hours: Post-launch, the analytics team holds weekly office hours. Users can ask questions, report issues, and request new features.
  • Feedback loops: The team collects user feedback and prioritises improvements. If epidemiologists repeatedly ask “Can you show me cases by vaccination status?”, that feature gets built.

Scaling Across Multiple Jurisdictions {#scaling-jurisdictions}

Multi-Jurisdiction Architecture

Australian state health departments often need to compare data across states or share data with the national health department (Department of Health). Dashboards must support this:

  • Federated data model: Each state owns its notifiable disease database. A national dashboard queries all state databases and aggregates results. Queries are optimised to avoid slow cross-state joins.
  • Harmonised data definitions: “Measles case” is defined consistently across all states. Case definitions, data validation rules, and timeliness thresholds are aligned.
  • Privacy-preserving aggregation: National dashboards show state-level totals (e.g., “NSW: 45 cases, VIC: 32 cases”) but never drill to individual case details. Row-level security prevents unauthorised access (see the sketch after this list).
  • Latency management: National dashboards might refresh every 4 hours (not 15 minutes) because cross-state queries are slower. Users understand this trade-off.
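
A common implementation of privacy-preserving aggregation is small-cell suppression: national views return state-level totals only and suppress counts below a threshold. The threshold of five and the national_case_feed table below are illustrative.

```sql
-- National view: state-level totals only, with small cells suppressed.
SELECT
    state,
    disease_code,
    CASE WHEN COUNT(*) < 5 THEN NULL ELSE COUNT(*) END AS cases  -- rendered as "<5" in the BI layer
FROM national_case_feed
WHERE report_date >= CURRENT_DATE - 7
GROUP BY state, disease_code;
```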

Handling Regional Variation

Each state has different data systems, reporting practices, and epidemiological priorities. Dashboards must accommodate variation:

  • Configurable metrics: A metric like “timeliness” might be calculated differently across states. The dashboard allows each state to define its own timeliness threshold and see results accordingly.
  • Localised language: Dashboards support languages beyond English (e.g., Simplified Chinese, Arabic) for communities with high non-English-speaking populations.
  • Regional context: A dashboard might show national trends but also highlight state-specific diseases (e.g., dengue in Queensland, Buruli ulcer in Victoria).

Interoperability and Data Exchange

Surveillance data must flow between systems:

  • HL7 FHIR standards: Notifiable disease records are exported in FHIR format, enabling integration with other health systems.
  • API endpoints: Dashboards expose read-only APIs. Other systems can query case counts, coverage metrics, and anomaly flags without direct database access.
  • Data sharing agreements: When data crosses jurisdictional boundaries, formal data sharing agreements document who can access what, for what purpose, and for how long.

Next Steps: Building or Upgrading Your Surveillance Infrastructure {#next-steps}

Assessment: What’s Your Current State?

Before building a new surveillance dashboard, assess your current infrastructure:

  • Data sources: What systems hold notifiable disease, immunisation, and syndromic surveillance data? Are they connected? Do they share a common patient identifier?
  • Reporting latency: How long does it take for a case to go from diagnosis to dashboard visibility? Hours? Days? Weeks?
  • Data quality: What percentage of records have complete, accurate data? What’s the duplicate rate?
  • User access: Who currently has access to surveillance data? How do they access it (direct database queries, Excel exports, printed reports)?
  • Governance: Is access control documented? Are audit logs kept? Can you demonstrate compliance to auditors?

If you’re reporting latency in days, data quality is poor, and access is ad-hoc, a modern surveillance dashboard will transform your capability. If you’re already near real-time with high-quality data and strong governance, you might focus on user experience improvements or agentic AI integration.

Building vs. Buying vs. Partnering

Three paths:

  1. Build in-house: Your team builds a custom dashboard using open-source tools (Superset, PostgreSQL, dbt). Pros: full control, no vendor lock-in, cost-effective at scale. Cons: requires skilled engineers, ongoing maintenance burden.

  2. Buy commercial software: Vendors like Tableau, Power BI, or Looker offer commercial BI platforms. Pros: vendor support, slick UX, integrations. Cons: per-user licensing (expensive), vendor lock-in, often overkill for public health (built for corporate BI, not epidemiology).

  3. Partner with a specialist: Work with a venture studio or AI agency experienced in public health. PADISO partners with ambitious teams to ship AI products and automate operations, including public health surveillance dashboards. Pros: leverages existing expertise, faster time-to-value, handover and training included. Cons: external dependency, less control over roadmap.

For most state health departments, partnering is optimal. You get a modern dashboard in 6–8 weeks, your team learns the technology, and you own the infrastructure post-launch. AI automation for healthcare: diagnostic tools and patient care discusses similar patterns for healthcare AI projects.

Implementation Roadmap

If you decide to build or upgrade:

Weeks 1–2: Discovery and design

  • Interviews with epidemiologists, data managers, and IT staff.
  • Document current data flows, pain points, and desired outcomes.
  • Design dashboard wireframes and SQL metrics.
  • Plan data architecture and security controls.

Weeks 3–4: Data pipeline and warehouse

  • Set up PostgreSQL database (or similar).
  • Build data ingestion pipeline (validation, de-identification, deduplication).
  • Implement row-level security and audit logging.
  • Load historical data (past 2–3 years).

Weeks 5–6: Dashboard development and testing

  • Build dashboards in Superset.
  • Implement semantic layer (reusable metrics).
  • Test with sample data; verify accuracy against known cases.
  • Integrate with identity provider (SSO).

Weeks 7–8: Training and launch

  • Conduct group training for epidemiologists and staff.
  • Launch dashboards to production.
  • Monitor for issues; fix bugs rapidly.
  • Collect feedback for post-launch improvements.

Weeks 9–12: Optimisation and handover

  • Optimise slow queries.
  • Build additional dashboards based on feedback.
  • Document all processes (how to add new users, refresh data, update metrics).
  • Conduct handover training for your operations team.

Key Success Factors

  1. Executive sponsorship: The state health director must champion the project. Surveillance dashboards require cross-team collaboration (IT, epidemiology, data, security). Executive support enables this.

  2. User involvement: Epidemiologists and data managers must be involved from day one. They know what questions the dashboard must answer. Dashboards built without user input fail.

  3. Data quality first: A beautiful dashboard showing poor data is worse than no dashboard. Invest in data validation, deduplication, and quality checks before building visualisations.

  4. Security and compliance: Public health data is sensitive. Implement access control, encryption, and audit logging from the start. Compliance retrofitted is expensive and fragile.

  5. Realistic timelines: A production-grade surveillance dashboard takes 8–12 weeks, not 4. Budget accordingly. Rushing leads to quality issues and rework.

Emerging Trends

  • Agentic AI for queries: Agentic AI vs traditional automation explores how autonomous agents can query dashboards using natural language. This is coming to public health surveillance.

  • Wastewater surveillance: Wastewater pathogen monitoring (SARS-CoV-2, polio) is becoming a standard surveillance modality. Dashboards must integrate wastewater data alongside clinical case data.

  • Predictive analytics: Beyond anomaly detection, dashboards are starting to include forecasts: “Based on current trends, measles cases will peak in 3 weeks. Prepare resources accordingly.”

  • Public dashboards: Transparency is increasing. States are publishing surveillance data publicly (cases by location, vaccination coverage by LGA). Public dashboards build trust and enable community engagement.


Conclusion

Public health surveillance dashboards are no longer a luxury—they are essential infrastructure. States that deploy modern, real-time dashboards detect outbreaks faster, respond more effectively, and ultimately save lives.

The patterns are well-established: notifiable disease tracking with real-time ingestion and timeliness metrics; immunisation coverage dashboards with equity disaggregation; syndromic surveillance with anomaly detection; outbreak response dashboards with case networks and contact tracing support. Technology (Superset, PostgreSQL, D23.io managed infrastructure) is mature and cost-effective.

The hard part is not the technology—it’s the organisational change. Epidemiologists must trust the data. Staff must use the dashboard instead of Excel. Governance must be implemented without slowing response. Success requires executive sponsorship, user involvement, and realistic timelines.

If your state health department is still relying on weekly batch reports and manual Excel analysis, you are leaving response speed, and potentially lives, on the table. The capability to detect and respond to outbreaks in hours, not weeks, is within reach. Start with discovery: interview your users, assess your data, and design your dashboard. Then build or partner to deploy. The return on investment—measured in outbreak detection speed, resource efficiency, and lives saved—is enormous.

For guidance on building surveillance dashboards, implementing Superset, or modernising your public health data infrastructure, contact PADISO. We’ve deployed surveillance dashboards across Australian state health departments and understand the operational, technical, and compliance requirements. We ship outcomes: real-time dashboards, trained teams, and audit-ready infrastructure.