Catastrophe Modelling Outputs in Apache Superset Dashboards
Table of Contents
- Why Catastrophe Modelling Outputs Matter for Australian Insurers
- Core Components of Catastrophe Models
- Understanding RMS, AIR, and Verisk Outputs
- Apache Superset as Your Cat Model Dashboard Layer
- D23.io Deployment Architecture for Cat Models
- Building Effective Cat Model Dashboards
- Exposing Outputs to Underwriters and Capacity Teams
- Performance Optimisation and Scaling
- Security, Audit-Readiness, and Compliance
- Implementation Roadmap and Next Steps
Why Catastrophe Modelling Outputs Matter for Australian Insurers
Catastrophe modelling is no longer a back-office actuarial exercise. For Australian insurers and reinsurers, the ability to rapidly surface catastrophe model outputs—loss distributions, exceedance probability curves, event-level simulations, and accumulation analytics—directly to underwriting teams and capacity planners is now a competitive necessity.
Australia’s exposure to natural perils is substantial. Cyclones, hail, bushfire, and flood events drive significant reinsurance costs and shape underwriting appetite across the market. When underwriters cannot access real-time, interactive views of catastrophe model outputs, they make decisions on incomplete data. This leads to mispriced risk, missed opportunities, and inefficient capital deployment.
The traditional workflow—actuaries run models in proprietary software (RMS, AIR, Verisk), export CSVs, email spreadsheets, and underwriters manually build charts in Excel—is slow, error-prone, and creates version control chaos. By the time an underwriter sees a chart, the data may be stale or the assumptions outdated.
Apache Superset changes this. It is an open-source, modern data exploration and visualisation platform that lets you connect directly to your catastrophe model outputs, build interactive dashboards in minutes, and expose those dashboards securely to any stakeholder. When integrated with a D23.io deployment—a managed Superset platform built for enterprise reliability—you get a production-grade system that underwriters trust and actuaries can iterate on without touching code.
This guide walks through how to design, build, and operate catastrophe modelling output dashboards in Apache Superset, with a focus on the Australian insurance and reinsurance context. We’ll cover architecture, best practices, and the real-world deployment patterns that work.
Core Components of Catastrophe Models
Before you can visualise catastrophe model outputs effectively, you need to understand what those outputs actually represent.
What Catastrophe Models Generate
Catastrophe modelling for a resilient future—powered by AI outlines the foundational components of modern catastrophe models. Every cat model—whether RMS, AIR, or Verisk—simulates thousands or millions of synthetic events and calculates financial loss for each event based on your exposure data.
The core pipeline is:
- Hazard Module: Generates stochastic event sets. For Australian catastrophe models, this includes cyclone tracks, wind speeds, rainfall intensities, and hail sizes. The hazard module uses historical data, climate science, and statistical distributions to create a catalogue of plausible events that may never have occurred historically but are physically possible.
- Exposure Data: Your portfolio of insured properties, their locations, coverage limits, deductibles, and policy terms. Exposure is the foundation of loss calculation. Garbage in, garbage out—if exposure data is incomplete or mislabelled, model outputs are unreliable.
- Vulnerability (Damage) Functions: Translate hazard intensity (e.g., wind speed) into damage ratio (percentage of replacement value lost). Vulnerability curves are engineering-based, calibrated to historical loss data, and vary by peril and building type. Australian cat models use different vulnerability curves for residential timber homes versus concrete-frame commercial structures.
- Financial Module: Applies your policy terms (deductibles, sublimits, coverage exclusions, reinsurance) to the calculated damage to produce net loss. This is where your underwriting and claims rules live.
- Output Generation: The model produces loss distributions, exceedance probability (EP) curves, average annual loss (AAL), probable maximum loss (PML) at various return periods, and event-level results. These outputs are the raw material for your dashboards.
Understanding and managing damage uncertainty in catastrophe models delves deeper into how hazard, engineering, and economic data interact to produce robust financial risk assessments. This uncertainty is critical—underwriters need to see not just point estimates but confidence intervals and sensitivity analyses.
Why Model Uncertainty Matters
Catastrophe models are inherently uncertain. A 1-in-250-year loss estimate from RMS might differ from AIR’s estimate by 20–40%, and both can shift significantly if exposure data is updated or assumptions change. Underwriters must understand this uncertainty to make sound decisions.
Your dashboards must surface uncertainty explicitly: show ranges, not just medians; highlight which assumptions drive sensitivity; flag when exposure data was last validated. This is not about hedging—it’s about transparent, informed underwriting.
Understanding RMS, AIR, and Verisk Outputs
Three vendors dominate the Australian catastrophe modelling landscape: Risk Management Solutions (RMS), Applied Insurance Research (AIR), and Verisk Analytics. Each produces outputs in slightly different formats and with different naming conventions. Your dashboard layer must abstract these differences.
RMS Output Structure
RMS models produce:
- Event Set: A catalogue of synthetic events, typically 10,000–100,000 events per peril per region, each with a frequency (annual probability) and a loss distribution across your portfolio.
- EP Curve: Exceedance probability curves showing the probability that loss will exceed a given threshold. RMS typically exports these as ASCII or binary files.
- Aggregate Loss Distribution: Histogram of total portfolio loss across all simulated events.
- Event Loss Tables (ELT): Loss for each event, broken down by coverage, sublimit, and deductible.
- Loss Metrics: AAL, PML at 1-in-100, 1-in-250, 1-in-500, 1-in-1000 return periods.
RMS outputs are often large (gigabytes for a large portfolio) and require specialised parsing to load into a data warehouse.
AIR Output Structure
AIR models produce similar outputs but with different naming:
- Stochastic Event Catalogue: Equivalent to RMS’s event set.
- Loss Exceedance Curves (LEC): Equivalent to EP curves.
- Aggregate Loss Distribution (ALD): Histogram of losses.
- Deterministic Scenarios: Specific named historical events (e.g., “1974 Cyclone Tracy”) with their modelled losses.
- Mean Loss Estimates: AAL and return-period losses.
AIR’s outputs are often more granular by peril and geography, which is useful for Australian regional analysis (NSW coast vs. Queensland cyclone zone, for example).
Verisk Output Structure
Verisk produces:
- Stochastic Event Loss Set: Equivalent to RMS/AIR event sets.
- Loss Probability Distribution: Aggregate loss histogram.
- Return Period Loss Estimates: PML at standard return periods.
- Deterministic Event Losses: Named historical scenarios.
- Accumulation Analysis: Loss concentration by geography, peril, and coverage.
Verisk’s outputs tend to emphasise accumulation risk—how much loss is concentrated in specific regions or perils—which is valuable for reinsurance placement and catastrophe bond structuring.
Normalising Outputs for Dashboards
The challenge: these three vendors use different schemas, units, and naming conventions. Your dashboard cannot require underwriters to learn three different interfaces.
The solution: build a normalisation layer in your data warehouse. When you ingest RMS, AIR, or Verisk outputs, transform them into a canonical schema:
CREATE TABLE cat_model_outputs_canonical (
    model_id            VARCHAR,    -- e.g. RMS_2024_Q2, AIR_2024_H1, Verisk_2024_v3
    peril               VARCHAR,    -- cyclone, hail, bushfire, flood
    region              VARCHAR,    -- NSW, QLD, VIC, WA, SA, TAS
    return_period_years INTEGER,    -- 1, 10, 25, 50, 100, 250, 500, 1000
    loss_type           VARCHAR,    -- gross, net, reinsured
    loss_estimate       NUMERIC,
    loss_percentile_5   NUMERIC,
    loss_percentile_95  NUMERIC,
    aal                 NUMERIC,    -- average annual loss
    last_updated        TIMESTAMP
);
This canonical schema is what your Superset dashboards query. When a new model version arrives, your ETL pipeline transforms it into this schema, and your dashboards automatically reflect the new data without requiring design changes.
We’ve seen this pattern work well for clients integrating multiple model vendors. The normalisation layer adds a week of engineering upfront but saves months of downstream confusion.
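As a sketch of what that transform looks like for one vendor and one output type, assuming the vendor's results have already been landed in a staging table, it can be as simple as an INSERT ... SELECT. The staging table staging_rms_ep, its columns, and the peril_lookup mapping table are illustrative names, not RMS's actual export format:
-- Sketch: map one vendor's EP output from staging into the canonical schema.
-- staging_rms_ep, its columns, and peril_lookup are illustrative assumptions.
INSERT INTO cat_model_outputs_canonical (
    model_id, peril, region, return_period_years, loss_type,
    loss_estimate, loss_percentile_5, loss_percentile_95, aal, last_updated
)
SELECT
    'RMS_2024_Q2',
    p.canonical_peril,       -- vendor peril codes mapped via a lookup table
    s.state,
    s.return_period,
    'gross',
    s.mean_loss,
    s.loss_p5,
    s.loss_p95,
    s.aal,
    CURRENT_TIMESTAMP
FROM staging_rms_ep s
JOIN peril_lookup p ON p.vendor_code = s.peril_code;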
Apache Superset as Your Cat Model Dashboard Layer
Apache Superset is an open-source data visualisation and business intelligence platform. It is not a catastrophe modelling tool—it does not run models or calculate losses. Instead, it is a query and visualisation layer that sits on top of your data warehouse (PostgreSQL, Snowflake, BigQuery, etc.) and lets users explore and visualise catastrophe model outputs interactively.
Why Superset for Cat Models?
Speed: Dashboards can be built in hours, not weeks. Underwriters can iterate on what they want to see without waiting for IT or data teams.
No Coding Required: The Superset UI lets business users (actuaries, underwriters) build charts via drag-and-drop. If you want SQL-level control, you can write queries directly, but you don’t have to.
Interactivity: Filters, drill-downs, cross-filtering. An underwriter can click on “Cyclone” to see only cyclone losses, then click on “QLD” to narrow further. This exploratory workflow is critical for underwriting.
Open Source: No vendor lock-in. Your dashboards are portable. If you decide to migrate to another platform, your data is not trapped.
Semantic Layer: Superset’s semantic layer lets you define business metrics (e.g., “AAL as % of Premium”) once, and they are consistent across all dashboards. This prevents underwriters from seeing different numbers in different reports.
Performance: The Data Engineer’s Guide to Lightning-Fast Apache Superset Dashboards covers optimisation techniques. With proper caching, database indexing, and query design, Superset dashboards load in under 2 seconds even with millions of rows of model output data.
What Superset Is Not
Superset is not a replacement for actuarial software. It does not run catastrophe models, calculate losses, or validate exposure data. It visualises outputs from models that have already been run. If you need to re-run a model with different assumptions, you still use RMS, AIR, or Verisk directly.
Superset is also not a real-time streaming platform. If your model outputs change every second, Superset is not the right tool. But for typical insurance workflows—models run weekly or monthly, outputs are static until the next run—Superset is ideal.
Core Superset Concepts for Cat Models
Datasets: A dataset in Superset is a query or table in your data warehouse. For cat models, you might have datasets like:
- cat_model_outputs_canonical (the normalised schema mentioned earlier)
- event_loss_table (individual event losses)
- exposure_by_region (portfolio breakdown)
- reinsurance_placement (your reinsurance contracts)
Charts: A chart is a single visualisation (line chart, bar chart, heatmap, etc.) built on a dataset. A chart for “AAL by Peril” queries the canonical schema, groups by peril, and renders a bar chart.
Dashboards: A dashboard is a collection of charts and filters. A dashboard for “Cyclone Risk Overview” might have charts for AAL by region, EP curve, event frequency distribution, and a map of highest-exposure postcodes. Filters at the dashboard level (e.g., “Model Version”, “Return Period”) apply to all charts.
Alerts: Superset can monitor your data and alert stakeholders if, for example, AAL for a specific peril exceeds a threshold. Useful for flagging when a new model version produces materially different outputs.
Exploring Data in Superset covers the full feature set. For cat models, the most important features are:
- SQL query builder (for complex loss calculations)
- Filters and parameters (for underwriter-driven exploration)
- Caching (for performance)
- Alerts (for governance)
- Export (to Excel, PDF for board reports)
D23.io Deployment Architecture for Cat Models
D23.io is a managed platform built on Apache Superset. It handles the infrastructure, security, and operations so you don’t have to. For Australian insurers, D23.io offers several advantages:
What D23.io Provides
Managed Infrastructure: D23.io runs Superset on AWS, with automatic scaling, backups, and disaster recovery. You do not manage servers, patches, or infrastructure.
Enterprise SSO: Integration with Azure AD, Okta, or other identity providers. Users log in with their corporate credentials, and role-based access control (RBAC) is enforced. This is critical for regulated environments where you need audit trails of who accessed what.
Semantic Layer: D23.io includes a semantic layer (similar to dbt Core or Looker’s LookML) where you define business metrics once. This ensures that when an underwriter sees “AAL” in one dashboard and “Average Annual Loss” in another, they are looking at the same calculation.
Data Governance: Lineage tracking, data dictionary, and audit logs. When a dashboard shows a loss estimate, you can trace it back to the source model run, the exposure data version, and the assumptions used.
Performance: D23.io optimises Superset for speed. Query results are cached, database connections are pooled, and the platform scales to handle thousands of concurrent users.
The $50K D23.io Consulting Engagement: What’s Inside breaks down a typical D23.io rollout: architecture design, SSO integration, semantic layer definition, and dashboard and training delivery. For an Australian insurer with 50–100 underwriters, this engagement typically costs $40K–$60K fixed-fee and delivers a production system in six weeks.
Architecture Pattern
A typical D23.io deployment for catastrophe models looks like:
[RMS/AIR/Verisk Model Runs]
↓
[ETL Pipeline (Airflow/Prefect)]
↓ (Normalises outputs to canonical schema)
[Data Warehouse (Snowflake/BigQuery/Redshift)]
↓
[D23.io / Apache Superset]
↓ (Semantic layer, caching, RBAC)
[Underwriter/Capacity Dashboards]
The ETL pipeline is the critical piece. It must:
- Ingest model outputs from RMS, AIR, Verisk (often via SFTP or API)
- Validate data quality (check for missing values, outliers, schema mismatches)
- Transform to canonical schema
- Load into data warehouse
- Trigger Superset cache refresh
- Log metadata (model version, run date, row counts, data quality checks)
For Australian insurers, the ETL pipeline typically runs nightly after model runs complete. If a model run fails or produces suspicious outputs, the pipeline logs an alert, and the data team investigates before the data reaches underwriters’ dashboards.
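The validation step can often be expressed directly in SQL against the staging area before data is promoted. A minimal sketch, assuming a staging_model_output table (the table name, columns, and checks are illustrative); the pipeline promotes the load only when this query returns zero rows:
-- Each branch reports one failed check; zero rows means the load passes
SELECT 'missing_loss_estimate' AS check_name, COUNT(*) AS failures
FROM staging_model_output
WHERE loss_estimate IS NULL
HAVING COUNT(*) > 0
UNION ALL
SELECT 'negative_loss_estimate', COUNT(*)
FROM staging_model_output
WHERE loss_estimate < 0
HAVING COUNT(*) > 0
UNION ALL
SELECT 'unknown_region', COUNT(*)
FROM staging_model_output
WHERE region NOT IN ('NSW', 'QLD', 'VIC', 'WA', 'SA', 'TAS')
HAVING COUNT(*) > 0;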
Security and Compliance in D23.io
D23.io is built for regulated environments. It supports:
- SOC 2 Type II: D23.io can be deployed with SOC 2 compliance, meaning your infrastructure, access controls, and audit logs meet the standards required by Australian regulators and reinsurers.
- ISO 27001: For organisations pursuing ISO 27001 certification, D23.io provides the security controls and audit trails needed. When you implement Security Audit (SOC 2 / ISO 27001) via Vanta, D23.io integrates seamlessly.
- Data Residency: D23.io can run on AWS Sydney (ap-southeast-2), keeping your catastrophe model outputs in Australia. This is important for data sovereignty and latency.
- Encryption: Data in transit (TLS) and at rest (AES-256). Sensitive fields (e.g., specific loss values) can be masked for users without underwriting authority.
When designing your D23.io deployment, work with your security and compliance teams early. The platform supports the controls you need, but you must configure them correctly.
Building Effective Cat Model Dashboards
Not all dashboards are created equal. A dashboard that looks impressive but does not answer underwriters’ questions is a waste of time. Here is how to build dashboards that drive decisions.
Dashboard Design Principles
1. Start with Questions, Not Data
Before you open Superset, ask: what questions do underwriters need to answer? For catastrophe models, typical questions are:
- What is our AAL by peril and region?
- How does our current portfolio compare to the model’s baseline?
- What is the 1-in-250-year loss for each major region?
- Where is our accumulation risk (geographic or peril concentration)?
- How sensitive is our loss estimate to exposure data changes?
- How do our model outputs compare across RMS, AIR, and Verisk?
Each question should drive a dashboard or a section of a dashboard. If you build a dashboard without a clear question, it becomes a data dump that confuses rather than clarifies.
2. Hierarchy: Overview → Detail → Drill-Down
Design dashboards with layers:
- Executive Summary (top of page): One or two key metrics. For cat models: AAL, 1-in-250-year loss, highest-risk peril, highest-risk region. A CEO should understand the key risk in 10 seconds.
- Underwriting Detail (middle): Charts by peril, region, and coverage type. Underwriters spend 5–10 minutes here, exploring sensitivities.
- Drill-Down (bottom): Links to detailed event-level data, exposure validation, model assumptions. If an underwriter spots an anomaly, they can dig in.
3. Colour and Context
Use colour purposefully. For loss metrics:
- Green: Within expected range
- Yellow: Elevated, warrants review
- Red: Exceeds risk appetite, escalate
Define thresholds in your semantic layer. For example, if your risk appetite is AAL < 2% of premium, any region exceeding that threshold shows red. This makes anomalies jump out.
4. Comparative View
Underwriters care about change. Show:
- Current model vs. previous quarter
- RMS vs. AIR vs. Verisk
- Your portfolio vs. industry benchmark (if available)
A loss estimate in isolation is meaningless. In context—“AAL is up 15% from last quarter because exposure increased in high-risk postcodes”—it becomes actionable.
Example Dashboard: Cyclone Risk Overview
Here is a concrete example of a dashboard that works:
Title: Cyclone Risk by Region (RMS 2024 Q2)
Filters (top, sticky):
- Model Version (dropdown: RMS 2024 Q2, AIR 2024 H1, Verisk 2024)
- Return Period (dropdown: 1-in-100, 1-in-250, 1-in-500)
- Region (multi-select: NSW, QLD, VIC, WA)
Row 1 (Executive Summary):
- Metric 1: AAL (Cyclone, All Regions) — Large number, colour-coded
- Metric 2: 1-in-250-year Loss — Large number
- Metric 3: Highest-Risk Region — Text, with colour
- Metric 4: % of Portfolio in High-Risk Zones — Percentage, colour-coded
Row 2 (Underwriting Detail):
- Chart 1: AAL by Region (bar chart, colour-coded by risk level)
- Chart 2: Return Period Loss Curve (line chart, 1-in-10 to 1-in-1000 years)
Row 3 (Accumulation Risk):
- Chart 3: Loss by Postcode (map, heat map of highest-exposure postcodes)
- Chart 4: Loss by Building Type (bar chart, residential vs. commercial vs. industrial)
Row 4 (Sensitivity):
- Chart 5: AAL Sensitivity to Exposure Data Version (line chart, showing how AAL changes as exposure is updated)
- Chart 6: Model Comparison (AAL RMS vs. AIR vs. Verisk, bar chart)
Row 5 (Drill-Down):
- Link: “View Event Loss Table” (opens a detailed table of losses for top-10 events)
- Link: “Download Data” (Excel export of AAL by region, return period)
This dashboard answers the key questions. An underwriter can:
- See the headline risk in 10 seconds (Row 1)
- Understand regional breakdown and trends (Row 2–3)
- Assess model uncertainty (Row 4)
- Drill into details if needed (Row 5)
Building in Superset
To build this dashboard in Superset:
- Create Datasets: Build or import datasets for cyclone_aal_by_region, cyclone_loss_curve, exposure_by_postcode, etc.
- Create Charts: For each chart, use Superset’s visual builder or SQL query editor. Example SQL for AAL by region:
SELECT
    region,
    peril,
    model_id,
    SUM(aal) AS total_aal,
    CASE
        WHEN SUM(aal) / (SELECT SUM(premium) FROM exposure) > 0.02 THEN 'HIGH'
        WHEN SUM(aal) / (SELECT SUM(premium) FROM exposure) > 0.01 THEN 'MEDIUM'
        ELSE 'LOW'
    END AS risk_level
FROM cat_model_outputs_canonical
-- aal is repeated on every return-period row, so pin one row per group
WHERE peril = 'cyclone'
  AND loss_type = 'gross'
  AND return_period_years = 250
GROUP BY region, peril, model_id
ORDER BY total_aal DESC
- Add Filters: At the dashboard level, add filters for Model Version, Return Period, Region. Superset automatically maps these to the underlying queries.
- Configure Caching: Set the dashboard to cache for 1 hour. After a new model run, manually refresh the cache. This ensures underwriters see fresh data without the dashboard being slow.
- Test with Underwriters: Before launching, have 3–5 underwriters use the dashboard and give feedback. Common feedback: “I need to see this metric”, “This chart is confusing”, “Can I export this?”. Iterate based on feedback.
Exposing Outputs to Underwriters and Capacity Teams
Building a dashboard is half the battle. The other half is getting underwriters and capacity teams to actually use it.
User Access and Permissions
In D23.io, you define roles:
- Admin: Can create and edit dashboards, manage users, configure the semantic layer.
- Underwriter: Can view dashboards, apply filters, export data. Cannot edit dashboards.
- Capacity Team: Can view dashboards, with restrictions (e.g., cannot see individual loss values, only aggregates).
- Actuarial: Can view all dashboards including detailed event-level data.
RBAC is enforced at the dashboard and chart level. You can restrict specific charts to specific roles. For example, a chart showing individual large losses might be visible to actuaries but not to underwriters (to prevent information leakage to brokers).
Training and Adoption
A dashboard is only useful if people know how to use it. Plan for:
Live Training Session (1 hour): Walk underwriters through the dashboard, explain each chart, show how to apply filters and drill down. Record this session for onboarding new hires.
Quick Reference Guide (1 page): PDF showing the dashboard layout, what each chart means, and how to interpret colours and thresholds. Distribute via email and pin in Slack.
Office Hours (30 min/week): For the first month, hold a weekly “office hours” where underwriters can ask questions. This surfaces confusion early and builds confidence.
Feedback Loop: After 2 weeks, survey underwriters: “Is this dashboard useful? What’s missing? What’s confusing?” Iterate based on feedback. A dashboard that does not get used is a failure, even if it is technically correct.
Alerting and Governance
Superset can send alerts when data changes. For cat models, useful alerts include:
- New Model Version Loaded: “RMS 2024 Q3 model outputs now available in Superset”
- AAL Exceeds Threshold: “Cyclone AAL in QLD exceeds risk appetite threshold. Review required.”
- Data Quality Issue: “Event loss table has 500 missing values. Data validation failed.”
Alerts are sent via email or Slack. This keeps stakeholders informed without requiring them to check the dashboard daily.
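Under the hood, a Superset alert evaluates a SQL query on a schedule and fires on a value condition. A minimal sketch of the AAL threshold check, reusing the canonical table and the hypothetical exposure table from the chart example; the alert would be configured to fire when the returned value is greater than zero:
-- Counts regions where cyclone AAL breaches the example 2%-of-premium appetite
SELECT COUNT(*) AS regions_in_breach
FROM (
    SELECT region
    FROM cat_model_outputs_canonical
    WHERE peril = 'cyclone'
      AND loss_type = 'gross'
      AND return_period_years = 250  -- aal repeats per row; pin one row per region
    GROUP BY region
    HAVING MAX(aal) > 0.02 * (SELECT SUM(premium) FROM exposure)
) breaches;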
Integration with Underwriting Workflows
For maximum impact, integrate the dashboard into your underwriting workflow. For example:
- Underwriting Submission Template: Include a link to the relevant cat model dashboard. When an underwriter evaluates a new submission, they click the link and see the latest model outputs for that region and peril.
- Risk Committee Agenda: Before the monthly risk committee meeting, the actuarial team pulls a snapshot of the dashboard (screenshot or PDF export) and includes it in the agenda. This ensures the committee is working from the same data.
- Reinsurance Placement: When placing reinsurance, the underwriting team references the dashboard to justify retentions and limits to brokers and reinsurers.
Performance Optimisation and Scaling
A beautiful dashboard that loads in 10 seconds is useless. Here is how to keep your cat model dashboards fast.
Database Optimisation
Indexing: Index the columns you filter on most frequently. For cat models, this typically means:
- Index on peril (filter by cyclone, hail, etc.)
- Index on region (filter by NSW, QLD, etc.)
- Index on model_id (filter by model version)
- Index on return_period_years (filter by return period)
A composite index on (peril, region, model_id) can dramatically speed up queries.
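In a warehouse with conventional indexes (PostgreSQL, for example; Snowflake and BigQuery rely on clustering and partitioning instead), the composite index is one line of DDL:
-- Composite index matching the most common dashboard filter pattern
CREATE INDEX idx_cat_outputs_peril_region_model
    ON cat_model_outputs_canonical (peril, region, model_id);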
Partitioning: If your cat model output table is large (billions of rows), partition it by peril or region. Superset can then prune partitions that are not relevant to a query, making queries faster.
Aggregation: Pre-calculate aggregates. Instead of querying raw event-level data and aggregating in Superset, pre-aggregate in your data warehouse. For example, create a table cat_model_agg_by_region_peril with AAL, return-period losses, etc., pre-calculated. Queries against this table are often one to two orders of magnitude faster than scanning event-level data.
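A minimal sketch of such a rollup against the canonical table from earlier; loss_type is pinned so the AAL column, which repeats on every return-period row, is not double-counted:
-- Pre-aggregated rollup; rebuild as the final ETL step after each model run
CREATE TABLE cat_model_agg_by_region_peril AS
SELECT
    model_id,
    peril,
    region,
    MAX(aal) AS aal,  -- aal repeats per return-period row; MAX collapses it
    MAX(CASE WHEN return_period_years = 100 THEN loss_estimate END) AS pml_100,
    MAX(CASE WHEN return_period_years = 250 THEN loss_estimate END) AS pml_250,
    MAX(CASE WHEN return_period_years = 500 THEN loss_estimate END) AS pml_500
FROM cat_model_outputs_canonical
WHERE loss_type = 'gross'
GROUP BY model_id, peril, region;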
The Data Engineer’s Guide to Lightning-Fast Apache Superset Dashboards covers these techniques in detail, including benchmarks. With proper indexing and aggregation, a dashboard with 10 charts should load in under 2 seconds.
Caching Strategy
Superset has multiple caching layers:
- Query Cache: Results of SQL queries are cached for a configurable duration (e.g., 1 hour). If two underwriters run the same query within 1 hour, the second query is served from cache, not re-executed.
- Dashboard Cache: The entire dashboard is cached. When an underwriter opens the dashboard, it loads from cache unless a filter is changed.
- Database Query Cache: Some databases (Snowflake, BigQuery) have their own query result caching. Superset leverages this automatically.
For cat models, set query cache to 1 hour. After a new model run, manually refresh the cache so underwriters see fresh data. If you run models continuously (e.g., multiple times per day), consider caching for 15–30 minutes and accepting slightly stale data.
Scaling to Thousands of Users
D23.io handles scaling. As you add more underwriters, D23.io automatically scales the Superset application and database connections. You should not need to do anything.
However, monitor:
- Query Latency: Track the 95th percentile query time. If it exceeds 5 seconds, investigate. Usually, the culprit is a missing index or an inefficient query.
- Cache Hit Rate: Monitor how often queries are served from cache vs. executed fresh. A high hit rate (>80%) means your caching strategy is working.
- Concurrent Users: Monitor peak concurrent users. If you have 100 underwriters all opening the dashboard at 9 AM, D23.io should handle it. If not, scale up.
Security, Audit-Readiness, and Compliance
Catastrophe model outputs are sensitive. They inform underwriting decisions, reinsurance placement, and capital allocation. You must protect them.
Data Security
Encryption in Transit: All data between the underwriter’s browser and D23.io is encrypted with TLS 1.2+. This is standard and automatic.
Encryption at Rest: D23.io stores dashboards, queries, and cached results in encrypted databases. The encryption key is managed by AWS and rotated automatically.
Field-Level Masking: For sensitive fields (e.g., specific loss values), you can mask values for users without appropriate permissions. For example, a broker-facing dashboard might show only aggregate losses, not individual policy losses.
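In D23.io the masking itself is configured in the platform, but the same effect can be sketched at the database layer with a restricted view (the view name and rounding are illustrative):
-- Illustrative broker-facing view: aggregates only, rounded to the nearest $1m,
-- with no event- or policy-level loss columns exposed
CREATE VIEW broker_cat_summary AS
SELECT
    peril,
    region,
    return_period_years,
    ROUND(SUM(loss_estimate), -6) AS loss_estimate_rounded
FROM cat_model_outputs_canonical
WHERE loss_type = 'gross'
GROUP BY peril, region, return_period_years;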
Access Control
Single Sign-On (SSO): D23.io integrates with Azure AD, Okta, or other identity providers. Users log in with their corporate credentials. When an employee leaves, you disable their AD account, and their access to D23.io is automatically revoked.
Role-Based Access Control (RBAC): Define roles (Admin, Underwriter, Capacity, Actuarial) and assign permissions at the dashboard and chart level. This ensures underwriters see only the data relevant to their role.
Audit Logging: Every action in D23.io is logged: who accessed which dashboard, when, what filters they applied, what data they downloaded. These logs are immutable and retained for 2+ years. When you run an audit or investigation, you can trace exactly who accessed what.
Compliance and Audit-Readiness
When pursuing Security Audit (SOC 2 / ISO 27001) compliance, D23.io supports the required controls:
- Change Management: Dashboards and queries are versioned. You can see who changed what and when.
- Segregation of Duties: Admins create dashboards; underwriters view dashboards. No underwriter can modify dashboards or access the underlying data warehouse directly.
- Data Retention: Logs are retained for 2+ years. Cached data is cleared on schedule.
- Incident Response: D23.io has a security incident response process. If a vulnerability is discovered, patches are deployed within 24 hours.
When your auditor (internal or external) reviews your cat model dashboard controls, you can show:
- Architecture diagram (who accesses what)
- Access control matrix (who has permissions to which dashboards)
- Audit logs (proof of access, changes, exports)
- Data lineage (where does each metric come from)
This evidence demonstrates that you have appropriate controls over sensitive cat model outputs.
Regulatory Considerations
For Australian insurers, relevant regulators include:
- APRA (Australian Prudential Regulation Authority): Requires that insurers have adequate governance and risk management. Your cat model dashboards support this by making risk data transparent and accessible to decision-makers.
- ASIC (Australian Securities and Investments Commission): For listed companies, requires disclosure of material risks. Cat model outputs inform these disclosures.
- State Regulators: Vary by state, but generally require that insurers understand their exposures and manage catastrophe risk.
D23.io does not guarantee regulatory compliance—that is your responsibility. But it provides the infrastructure and audit trails to support compliance.
Implementation Roadmap and Next Steps
Implementing cat model dashboards is not a weekend project. Here is a realistic roadmap.
Phase 1: Foundation (Weeks 1–4)
Goal: Get your first model outputs into a data warehouse and build a basic dashboard.
Tasks:
- Audit your current cat model outputs. Where do they live? RMS, AIR, Verisk? SFTP, API, USB drive? Document the current state.
- Choose a data warehouse (Snowflake, BigQuery, Redshift, PostgreSQL). If you already have one, use it. Otherwise, Snowflake is a good default for Australian companies.
- Build an ETL pipeline to ingest model outputs. Use Airflow, Prefect, or a simpler tool like Talend. The pipeline should:
- Download model outputs from RMS/AIR/Verisk
- Validate data quality
- Transform to canonical schema
- Load into data warehouse
- Log metadata
- Deploy D23.io or self-host Apache Superset. For most companies, D23.io is easier (no infrastructure management).
- Build 1–2 basic dashboards: AAL by peril, loss curve, exposure map.
- Train 5–10 pilot users (actuaries, senior underwriters). Gather feedback.
Outcome: Pilot users can access cat model outputs via dashboards. Data is fresh (updated daily or weekly). You have a foundation to build on.
Phase 2: Expansion (Weeks 5–12)
Goal: Roll out dashboards to all underwriters and capacity teams. Add advanced features.
Tasks:
- Expand to all three model vendors (RMS, AIR, Verisk). Ensure the canonical schema and ETL pipeline handle all three.
- Build additional dashboards:
- Peril-specific (Cyclone, Hail, Bushfire, Flood)
- Region-specific (NSW, QLD, VIC, WA, SA, TAS)
- Accumulation Risk (concentration by postcode, building type, coverage)
- Model Comparison (RMS vs. AIR vs. Verisk)
- Sensitivity Analysis (how does AAL change with exposure updates)
- Add alerts: notify stakeholders when AAL exceeds thresholds or new models are loaded.
- Integrate with underwriting workflow: add links to dashboards in underwriting submission templates.
- Train all underwriters and capacity teams. Conduct live sessions, create documentation, hold office hours.
- Collect feedback and iterate. Underwriters will ask for new metrics, charts, and views. Prioritise based on impact.
Outcome: All underwriters and capacity teams use dashboards daily. Dashboards are the single source of truth for cat model outputs. Decision-making is faster and more consistent.
Phase 3: Optimisation (Weeks 13+)
Goal: Optimise performance, add advanced analytics, pursue compliance.
Tasks:
- Optimise database performance: add indexes, partition tables, pre-calculate aggregates. Monitor query latency and cache hit rates.
- Add advanced analytics:
- Machine learning models to predict losses based on exposure characteristics
- Scenario analysis: “What if we increase exposure in QLD by 10%?”
- Reinsurance optimisation: recommend retentions and limits based on model outputs
- Pursue SOC 2 or ISO 27001 compliance. Work with your security and audit teams. D23.io supports the required controls.
- Integrate with other systems: connect dashboards to your underwriting system, claims system, and reinsurance placement system. This creates a closed loop: model outputs inform underwriting decisions, which inform claims experience, which feeds back into model calibration.
- Expand to external stakeholders: reinsurers, brokers, and regulators may want access to (redacted) cat model outputs. Set up read-only dashboards for external users.
Outcome: Cat model dashboards are fully integrated into your operations. They are fast, secure, and compliant. Underwriters, capacity teams, reinsurers, and regulators all use them. Decision-making is data-driven and auditable.
Budget and Timeline
For a typical Australian insurer (50–100 underwriters, 3 model vendors, 1 data warehouse):
- Phase 1: 4 weeks, 2 FTE (1 data engineer, 1 actuarial analyst), $30K–$50K (if using D23.io) or $50K–$100K (if self-hosting).
- Phase 2: 8 weeks, 2 FTE, $20K–$40K (additional dashboards, training, support).
- Phase 3: Ongoing, 1 FTE, $10K–$20K/month (maintenance, optimisation, new features).
Total first-year cost: $100K–$200K (excluding data warehouse, which you likely already have). ROI is typically realised in 6–12 months through faster underwriting, better risk selection, and more efficient reinsurance placement.
Common Pitfalls to Avoid
- Building Without Requirements: Do not start building dashboards without understanding what underwriters actually need. Spend a week interviewing users first.
- Normalisation as an Afterthought: Do not ingest RMS, AIR, and Verisk outputs into separate tables. Build a canonical schema from day one. It saves months of rework.
- Ignoring Performance: Do not assume Superset will be fast. Benchmark your queries. If a dashboard loads in 10 seconds, underwriters will not use it. Optimise early.
- Underestimating Training: Do not assume underwriters will figure out how to use dashboards on their own. Invest in training, documentation, and support. Adoption is your success metric.
- Neglecting Security: Do not treat security as an afterthought. Involve your security and compliance teams from day one. D23.io makes security easier, but you must configure it correctly.
Getting Started
If you are ready to implement cat model dashboards, here are the first steps:
- Audit Your Current State: Document where cat model outputs currently live, who uses them, and what pain points exist.
- Define Your Scope: Which model vendors? Which underwriters? Which metrics are most important?
- Choose Your Platform: D23.io (managed, easier) or self-hosted Superset (more control, more ops burden).
- Engage a Partner: Building cat model dashboards requires both technical and actuarial expertise. Consider engaging PADISO, a Sydney-based venture studio and AI digital agency, or another partner with experience in insurance and Apache Superset. PADISO has delivered similar projects for Australian insurers and reinsurers and can accelerate your timeline significantly.
- Start Small: Do not try to build 20 dashboards in month 1. Build 2–3 core dashboards, validate with users, then expand.
We’ve worked with insurers across Australia—Sydney, Melbourne, Brisbane—and seen firsthand how transparent, interactive cat model dashboards transform underwriting. The underwriters become more confident in their decisions. Reinsurers trust your risk assessments more. Capital is deployed more efficiently. The investment pays for itself.
Conclusion
Catastrophe modelling outputs are the foundation of modern insurance underwriting. But outputs locked in proprietary software or buried in spreadsheets are useless. By exposing RMS, AIR, and Verisk outputs through Apache Superset dashboards—via a platform like D23.io—you unlock the value in those models.
Underwriters get real-time, interactive access to loss estimates, exceedance curves, and accumulation risk. Capacity teams can make better decisions about reinsurance placement. Actuaries can iterate on models and see the impact immediately. Regulators and auditors can trace the data lineage and verify controls.
The implementation is straightforward: normalise your model outputs, load them into a data warehouse, build dashboards in Superset, and train your users. The first dashboards are live in 4–6 weeks. Full rollout takes 3–4 months. The ROI is typically realised within 6–12 months.
If you are an Australian insurer or reinsurer looking to modernise your cat model workflows, now is the time to act. The technology is mature, the tools are proven, and the benefits are clear.
For more insights into AI-driven decision-making and data platform engineering, explore PADISO’s blog on AI automation for insurance, which covers how AI and automation transform claims processing and risk assessment. You might also find value in reading about how agentic AI integrates with Apache Superset to enable non-technical users to query dashboards naturally, or exploring AI automation for supply chain to understand how similar patterns apply across industries.
Ready to get started? Contact PADISO for a consultation. We’ll assess your current cat model workflows, design a Superset deployment tailored to your underwriting team, and guide you through implementation. Our goal: dashboards that underwriters love and regulators trust.
Quick Reference: Key Metrics and Definitions
Average Annual Loss (AAL): The expected loss in any given year, calculated as the sum of all event losses weighted by their probability.
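In event loss table terms, assuming columns annual_rate and mean_loss, that is:
-- AAL is the frequency-weighted sum of event losses
SELECT SUM(annual_rate * mean_loss) AS aal
FROM event_loss_table;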
Exceedance Probability (EP) Curve: A curve showing the probability that loss will exceed a given threshold. Used to estimate return-period losses (1-in-100, 1-in-250, etc.).
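An occurrence EP curve can be approximated from the same event loss table with a window function, assuming Poisson event frequencies (column names as above):
-- P(annual max loss exceeds x) is approximately
-- 1 - exp(-(sum of annual rates of events with loss >= x))
SELECT
    mean_loss AS loss_threshold,
    1 - EXP(-SUM(annual_rate) OVER (ORDER BY mean_loss DESC)) AS exceedance_probability
FROM event_loss_table
ORDER BY loss_threshold DESC;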
Probable Maximum Loss (PML): The worst-case loss at a given return period (e.g., 1-in-250-year loss).
Event Loss Table (ELT): A table listing individual simulated events and their losses.
Accumulation Risk: The concentration of exposure (and potential loss) in specific geographies or perils. High accumulation risk means a single event could cause outsized losses.
Canonical Schema: A standardised data structure that normalises outputs from multiple model vendors (RMS, AIR, Verisk) into a single format for easier analysis and visualisation.
Semantic Layer: A business logic layer that defines metrics (e.g., “AAL”, “Return Period Loss”) once and makes them consistent across all dashboards.
RBAC (Role-Based Access Control): A security model where users are assigned roles (Admin, Underwriter, Actuarial) and permissions are based on roles, not individual users.