Insurance Run-Off Portfolios: AI-Assisted Reserving Reviews
Master AI-assisted reserving reviews for insurance run-off portfolios. Learn how Claude Opus 4.7 and agentic AI improve accuracy, speed, and actuarial sign-off discipline.
Table of Contents
- What Are Insurance Run-Off Portfolios?
- The Reserving Challenge in Run-Off Books
- Why Traditional Reserving Methods Fall Short
- AI-Assisted Reserving: The Claude Opus 4.7 Approach
- Building Agentic AI Workflows for Claim File Analysis
- Actuarial Sign-Off and Governance
- Implementation Patterns and Real-World Examples
- Security, Compliance, and Data Governance
- Measuring Success: KPIs and ROI
- Getting Started with Your AI Reserving Partner
What Are Insurance Run-Off Portfolios?
Insurance run-off portfolios represent one of the most complex and capital-intensive challenges in modern financial services. A run-off portfolio is a collection of legacy insurance policies—typically from closed or acquired business lines—where no new business is written, but claims continue to be paid out over time. These portfolios can span decades, involving millions of historical claim files, inconsistent data formats, and significant uncertainty about ultimate claim development.
The global insurance run-off market has grown substantially. According to the Global Insurance Run-Off Survey 2025 from PwC, the run-off market now encompasses trillions of dollars in liabilities across major insurance groups worldwide. Run-off portfolios arise from several sources: business lines closed to new underwriting, insurance companies acquired and integrated into larger groups, policies transferred to dedicated run-off entities, and legacy books from restructured or divested operations.
Managing these portfolios requires constant vigilance. Actuaries and reserving teams must regularly review claim files, assess development patterns, and adjust reserves to reflect emerging claims experience. The stakes are high: inaccurate reserves can misstate financial position, trigger regulatory scrutiny, and erode shareholder value. Yet the volume of data—often hundreds of thousands or millions of claim files scattered across legacy systems—makes manual review impractical.
This is where AI-assisted reserving enters the picture. Rather than replacing actuarial judgment, modern AI tools augment human expertise, accelerating the analysis of vast claim datasets while maintaining the rigour that regulators and auditors demand.
The Reserving Challenge in Run-Off Books
Reserving for run-off portfolios is fundamentally different from reserving for active underwriting businesses. In active books, actuaries can observe current claims frequency and severity, calibrate models against recent experience, and project forward with reasonable confidence. In run-off books, the picture is murkier.
Long Development Tails and Tail Risk
Claims in run-off portfolios often develop over 10, 20, or even 30+ years. A liability claim filed in 1995 might not be fully resolved until 2025 or beyond. This extended tail creates several problems. First, inflation compounds uncertainty—a reserve set in 2000 may be wildly inadequate by 2025 if medical cost inflation or construction cost escalation has outpaced original assumptions. Second, legal and regulatory changes can reopen seemingly settled claims or alter liability interpretations. Third, with so much time elapsed, key documentation can be lost, witnesses become unavailable, and the original underwriting context fades.
Actuaries must grapple with tail risk—the possibility of large, unexpected claims emerging years or decades after a policy period. Traditional reserving methods often underestimate tail risk because the most recent experience doesn’t fully capture it. AI can help by identifying patterns in historical claim development that human analysts might miss, flagging unusual claim characteristics that suggest elevated tail risk, and cross-referencing claim files against external data (court records, regulatory databases, medical inflation indices) to refine tail assumptions.
Data Fragmentation and Legacy Systems
Most run-off portfolios have been shuffled between systems multiple times. A single claim file might exist in three different formats across three different databases, with conflicting information. One system records the original reserve in 1998 dollars; another records subsequent adjustments in 2010 dollars; a third stores scanned documents with no structured data at all.
Extracting clean, consistent data from this mess is labour-intensive. Actuaries spend weeks or months on data validation before they can even begin substantive reserving analysis. AI tools like Claude Opus 4.7 excel at this task. They can ingest messy, semi-structured claim data—PDFs, scanned documents, legacy database exports, spreadsheets—and extract key facts (claim date, claimant name, injury type, reserve history, payment history, outstanding exposure) with high accuracy. This frees actuaries to focus on judgment calls rather than data wrangling.
Consistency and Reproducibility
Manual reserving reviews are vulnerable to inconsistency. One actuary might interpret a claim’s development pattern one way; another might interpret it differently. Over a portfolio of 500,000 claims, these inconsistencies compound. AI-assisted workflows enforce consistency: the same algorithm applies the same logic to every claim file, reducing subjective variance and making the reserving process more reproducible and auditable.
This consistency is crucial for regulatory and audit purposes. When auditors challenge reserves, they want to see a clear, defensible methodology applied uniformly across the portfolio. AI workflows provide exactly that.
Why Traditional Reserving Methods Fall Short
Traditional reserving approaches—actuarial judgment, chain-ladder models, Bornhuetter-Ferguson methods—remain essential. But they have limitations when applied to massive run-off portfolios with complex, heterogeneous claim histories.
Manual Review Doesn’t Scale
A portfolio of 1 million claims cannot be manually reviewed by a team of 20 actuaries in any reasonable timeframe. Even spot-checking 1% of claims (10,000 files) might require months of work. This means most claims in most portfolios are never individually examined; reserves are set using aggregate statistical methods that may miss important outliers or emerging trends.
AI changes this equation. A large language model like Claude Opus 4.7 can process thousands of claim files per day, extracting key facts, flagging anomalies, and summarising development patterns. This enables actuaries to review a much larger sample—potentially 10%, 25%, or even 100% of the portfolio—in the same timeframe that manual methods would allow for 1% or less.
Aggregate Models Miss Outliers
Chain-ladder and other aggregate methods work well for homogeneous claim populations. But run-off portfolios are often heterogeneous: they contain claims from different underwriting eras, different geographies, different product lines, and different claim types, each with distinct development characteristics.
A single chain-ladder model applied to the entire portfolio may obscure important differences. Claims from the 1980s might develop very differently from claims from the 2000s. Medical malpractice claims develop on a different timeline than workers’ compensation claims. Claims in Australia might develop differently than claims in the UK.
AI-assisted workflows can segment the portfolio more granularly, identifying natural clusters of similar claims and applying tailored analysis to each cluster. This improves accuracy by respecting the heterogeneity of the data.
Documentation and Context Are Lost
Aggregate models work with numerical data: claim counts, claim amounts, development factors. But claim files contain rich narrative context: medical reports, legal correspondence, adjuster notes, expert opinions. This context often holds crucial information about why a claim developed as it did, whether further development is likely, and what risks remain.
Manual review captures some of this context, but inconsistently. AI tools can systematically extract and summarise this narrative information, making it available to actuaries in structured form. This helps actuaries make more informed judgments about individual claim reserves and portfolio-level assumptions.
AI-Assisted Reserving: The Claude Opus 4.7 Approach
Claude Opus 4.7 represents a significant leap forward in AI capability for document-heavy tasks like claim file analysis. Unlike earlier language models, Opus 4.7 combines several capabilities that are essential for reserving work: superior document understanding, extended context windows, strong reasoning ability, and reliability in structured output generation.
Why Claude Opus 4.7 for Insurance Run-Off Reserving
Claude Opus 4.7 has been specifically designed to handle complex, real-world documents at scale. For insurance run-off reserving, this means:
Document Understanding: Opus 4.7 can ingest PDFs, scanned documents, and images with high fidelity. It understands tables, figures, handwritten notes, and mixed-format documents—exactly what you find in decades-old claim files.
Extended Context: With a 200,000-token context window, Opus 4.7 can process entire claim files—including the original policy, all claim correspondence, medical reports, expert opinions, and payment history—in a single pass. This enables it to understand the full claim lifecycle and identify patterns that might be missed if the file were processed in fragments.
Structured Reasoning: Opus 4.7 excels at complex, multi-step reasoning. It can read a claim file, extract relevant facts, cross-reference those facts against prior reserves and development patterns, and produce a structured summary with confidence levels and flagged uncertainties. This structured output is exactly what actuaries need for sign-off.
Reliability: Opus 4.7 has been trained to minimise hallucination and to be transparent about uncertainty. When it doesn’t have enough information to make a judgment, it says so. This is critical for actuarial work, where false confidence is worse than honest uncertainty.
These capabilities align closely with what actuaries need when reviewing run-off claim files. Rather than trying to force insurance data into a generic AI framework, Opus 4.7 is purpose-built for this kind of work.
Core Capabilities for Reserving Analysis
When applied to insurance run-off reserving, Claude Opus 4.7 can perform several key tasks:
Claim File Extraction: Read a claim file (or set of files) and extract key structured data: claim number, claimant, injury/loss type, date of loss, date reported, policy period, reserve history, payment history, outstanding exposure, key dates (statute of limitations, medical discharge, etc.), and development narrative.
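The target of such an extraction pass can be made concrete with a schema. A minimal sketch in Python, assuming the model returns JSON and that the field names below are illustrative (they would need to match your own extraction prompt, not a fixed standard):

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimExtract:
    """Structured facts extracted from one claim file (field names are illustrative)."""
    claim_number: str
    loss_type: str
    date_of_loss: str          # ISO 8601
    date_reported: str
    reserve_history: list      # [[date, amount], ...]
    cumulative_paid: float
    outstanding_reserve: float
    confidence: float          # model-reported confidence, 0.0-1.0
    notes: Optional[str] = None

def parse_extraction(raw_json: str) -> ClaimExtract:
    """Validate the model's JSON output before it enters the reserving database."""
    data = json.loads(raw_json)
    extract = ClaimExtract(**data)
    if not 0.0 <= extract.confidence <= 1.0:
        raise ValueError(f"confidence out of range: {extract.confidence}")
    if extract.outstanding_reserve < 0:
        raise ValueError("outstanding reserve cannot be negative")
    return extract
```

Validating against a schema like this is what gives the extracted data the provenance and consistency that later audit steps depend on; malformed or low-confidence outputs can be routed to human review rather than silently stored.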
Anomaly Detection: Flag claims that deviate from expected patterns—claims with unusually long development tails, claims with large reserve movements, claims with conflicting information across systems, claims approaching statute of limitations, claims with significant unpaid exposure relative to reserve.
Development Pattern Analysis: Analyse how a claim has developed over time. Has it followed the expected development curve for its class? Are there unusual spikes or plateaus? What does the payment history suggest about ultimate settlement?
Reserve Adequacy Assessment: Based on the claim’s history and current status, assess whether the current reserve appears adequate, inadequate, or excessive. Provide reasoning and flag uncertainties.
Tail Risk Flagging: Identify claims with characteristics associated with elevated tail risk—claims with open medical treatment, claims with ongoing litigation, claims with potential for reopening, claims in jurisdictions with unfavourable legal trends.
Comparative Analysis: Compare a claim’s development against peers in the same class, underwriting year, and geography. Identify claims that are developing faster or slower than expected, suggesting either reserve inadequacy or over-reservation.
These capabilities, applied systematically across a large portfolio, give actuaries a much richer picture of reserving adequacy than aggregate models alone can provide.
Building Agentic AI Workflows for Claim File Analysis
While Claude Opus 4.7 is powerful on its own, the real power emerges when you build agentic workflows around it. An agentic AI system is one that can break down complex tasks into steps, gather information, make decisions, and iterate toward a goal—all with minimal human intervention.
For insurance run-off reserving, an agentic workflow might look like this:
Step 1: Portfolio Stratification
The agent begins by understanding the portfolio structure. It accesses metadata about the portfolio: claim counts by underwriting year, claim counts by class of business, claim counts by reserve status (open, closed, reopened), distribution of claim amounts, distribution of reserve ages, and development patterns by cohort.
Based on this analysis, the agent recommends a sampling strategy. For a large, heterogeneous portfolio, it might recommend stratified sampling: selecting a higher proportion of large claims, a higher proportion of old claims, a higher proportion of claims with significant reserve movements, and a representative sample of typical claims. This ensures the sample is both statistically robust and focused on high-risk areas.
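The stratified sampling logic described above can be sketched in a few lines. The strata thresholds and claim fields below are illustrative assumptions, not recommendations:

```python
import random

def stratified_sample(claims, rules, base_rate=0.02, seed=42):
    """Sample claim IDs, with higher sampling rates for high-risk strata.

    claims: list of dicts with 'id', 'reserve', 'age_years', 'reserve_movement'.
    rules: list of (predicate, sampling_rate) pairs, checked in order;
           the first matching rule wins, otherwise base_rate applies.
    """
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible for audit
    sampled = []
    for claim in claims:
        rate = base_rate
        for predicate, stratum_rate in rules:
            if predicate(claim):
                rate = stratum_rate
                break
        if rng.random() < rate:
            sampled.append(claim["id"])
    return sampled

# Illustrative strata: oversample large, old, and volatile claims.
rules = [
    (lambda c: c["reserve"] > 500_000, 1.0),            # review every large claim
    (lambda c: c["age_years"] > 20, 0.25),              # oversample old claims
    (lambda c: abs(c["reserve_movement"]) > 0.5, 0.25), # oversample volatile claims
]
```

The fixed seed matters: a reproducible sample is much easier to defend to auditors than one that changes on every run.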
Step 2: Batch Processing and Extraction
Once the sample is defined, the agent retrieves claim files and feeds them to Claude Opus 4.7 for extraction. Because Opus 4.7 can handle large batches efficiently, the agent might process 50, 100, or even 500 claim files in parallel, extracting structured data from each.
The agent stores the extracted data in a structured database, with clear provenance: which file was processed, when, by which version of the extraction prompt, and what confidence level was assigned to each extracted fact.
Step 3: Comparative Analysis and Flagging
Once extraction is complete, the agent performs comparative analysis. It calculates development factors for each claim (how much has it paid relative to reserve, how long has it been open, what’s the ratio of cumulative paid to current reserve). It compares each claim against peers in the same class and underwriting year. It flags claims that deviate significantly from expected patterns.
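One simple way to implement the flagging step is a z-score comparison of paid-to-incurred ratios within peer cohorts. A minimal sketch, assuming each claim carries a `cohort` label combining class and underwriting year (the field names and threshold are illustrative):

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_outliers(claims, threshold=2.0):
    """Flag claims whose paid-to-incurred ratio deviates from cohort peers.

    claims: list of dicts with 'id', 'cohort', 'paid', 'incurred'.
    Returns the IDs of claims whose z-score exceeds the threshold.
    """
    groups = defaultdict(list)
    for c in claims:
        c["ratio"] = c["paid"] / c["incurred"] if c["incurred"] else 0.0
        groups[c["cohort"]].append(c)

    flagged = []
    for cohort, members in groups.items():
        if len(members) < 3:
            continue  # too few peers for a meaningful comparison
        ratios = [m["ratio"] for m in members]
        mu, sigma = mean(ratios), stdev(ratios)
        if sigma == 0:
            continue
        for m in members:
            if abs(m["ratio"] - mu) / sigma > threshold:
                flagged.append(m["id"])
    return flagged
```

In practice the flagging rules would be richer (age, litigation status, statute dates), but the pattern is the same: compute per-claim statistics, compare within cohorts, surface the deviants.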
This step produces a ranked list of claims that warrant actuarial attention: the top 100 claims by reserve amount, the top 50 claims by reserve movement, the top 50 claims by development tail risk, the top 50 claims by litigation or medical status, etc.
Step 4: Actuarial Review and Sign-Off
The agent presents this prioritised list to the actuary. Rather than reviewing 500,000 claims, the actuary reviews perhaps 500—the claims most likely to materially affect reserves. For each claim, the agent has already extracted key facts and provided comparative context.
The actuary reviews the agent’s analysis, applies professional judgment, and either accepts the agent’s assessment or overrides it with their own. This override is recorded, allowing the model to learn from actuarial corrections.
Step 5: Reserve Adjustment and Reporting
Based on actuarial sign-off, the agent calculates reserve adjustments. If actuarial review suggests that claims in a particular class are under-reserved by 5%, the agent adjusts reserves for that class accordingly. If specific claims are identified as over-reserved, individual reserve adjustments are made.
The agent then generates a comprehensive reserving report, documenting the methodology, the sample reviewed, the adjustments made, the rationale for each adjustment, and the actuarial sign-off. This report is audit-ready: it shows clear methodology, documented decisions, and professional sign-off at each step.
This agentic workflow is fundamentally different from simply running Claude Opus 4.7 on individual claim files. It creates a structured, repeatable, auditable process that combines AI efficiency with actuarial judgment and governance.
Actuarial Sign-Off and Governance
Here’s the critical point: AI-assisted reserving is not about replacing actuarial judgment. It’s about augmenting it. The actuary remains the decision-maker. The AI is a tool that makes the actuary more efficient and more thorough.
This distinction is crucial for regulatory compliance and audit defensibility. Regulators and auditors want to see that reserves are set by qualified actuaries applying professional judgment. AI can support that process, but it cannot replace it.
Governance Framework
A robust governance framework for AI-assisted reserving should include:
Clear Roles and Responsibilities: Define who is responsible for what. The AI system is responsible for data extraction, comparative analysis, and flagging anomalies. The actuary is responsible for professional judgment, reserve decisions, and sign-off. The compliance/audit function is responsible for overseeing the process and ensuring it meets regulatory standards.
Documented Methodology: The AI system’s methodology should be fully documented. How are claims extracted? What rules are used to flag anomalies? How are development factors calculated? What assumptions underlie comparative analysis? This documentation should be available for audit and regulatory review.
Validation and Testing: Before deploying an AI-assisted reserving system on the full portfolio, validate it on a subset. Have actuaries review the AI’s extractions and assessments, compare them against manual review, and measure accuracy. Only deploy at scale once validation is complete.
Audit Trail: Every decision should be traceable. If a reserve is adjusted, the audit trail should show: the original reserve, the AI’s analysis, the actuary’s judgment, the final reserve, and the date and sign-off. This trail is essential for regulatory review and for learning from outcomes over time.
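An audit trail entry can be as simple as an append-only JSON record per reserve decision. A sketch; the field names are illustrative, not a regulatory standard:

```python
import json
from datetime import datetime, timezone

def audit_entry(claim_id, original_reserve, ai_assessment,
                actuary_decision, final_reserve, actuary_id):
    """Build one append-only audit record for a reserve decision."""
    return json.dumps({
        "claim_id": claim_id,
        "original_reserve": original_reserve,
        "ai_assessment": ai_assessment,        # the model's summary and flags
        "actuary_decision": actuary_decision,  # e.g. "accepted" or "overridden"
        "final_reserve": final_reserve,
        "signed_off_by": actuary_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
```

The essential properties are that every record captures both the AI's analysis and the actuary's decision, that records are never mutated after sign-off, and that the chain from original reserve to final reserve is reconstructible.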
Exception Handling: Define how exceptions are handled. If the AI flags a claim as anomalous, but the actuary disagrees, how is that disagreement resolved and documented? If the AI’s analysis is unclear, how does the actuary escalate for human review? These processes should be clear and documented.
Sign-Off Discipline: The actuary must sign off on the reserves. This sign-off means the actuary has reviewed the AI’s work, applied professional judgment, and is willing to stand behind the reserves. The sign-off should be explicit and dated.
Regulatory Considerations
Different regulators have different requirements for reserving. In Australia, APRA requires that reserves be set by qualified actuaries and be reviewed annually. The reserving process should be documented, and the assumptions should be justified. AI-assisted reserving fits within this framework—it’s a tool that helps actuaries do their job more thoroughly.
However, there are some specific considerations:
Actuary Qualification: The actuary who signs off on reserves must be appropriately qualified (e.g., a Fellow of the Institute of Actuaries of Australia). The AI tool is an aid; it doesn’t change the qualification requirement.
Methodology Documentation: The reserving methodology must be documented and justified. If AI is part of the methodology, the AI’s role should be clearly described.
Assumption Justification: Reserves are based on assumptions (development factors, inflation rates, litigation costs, etc.). These assumptions must be justified. If AI is used to inform assumptions (e.g., by identifying development patterns), the justification should explain how the AI was used and why the resulting assumptions are reasonable.
Change Management: If the reserving methodology changes (e.g., by introducing AI), this change should be documented and justified. The impact of the change should be assessed.
Working with a partner like PADISO, which understands both AI and insurance regulation, can help navigate these considerations. PADISO’s AI & Agents Automation service is specifically designed to help financial services firms implement AI in a way that meets regulatory requirements and audit standards. Their experience with SOC 2 compliance and ISO 27001 implementation via Vanta ensures that data handling and security meet the standards that insurance regulators expect.
Implementation Patterns and Real-World Examples
Let’s walk through some concrete patterns for implementing AI-assisted reserving with Claude Opus 4.7.
Pattern 1: Legacy Run-Off Book with Mixed Data Sources
Scenario: A large Australian insurer has acquired a legacy run-off book with 500,000 claims. The claims data exists in three systems: a legacy mainframe system with structured data but no documents, a mid-2000s claims management system with some documents and some structured data, and a modern system with full documentation but only claims from the last 10 years.
Approach:
- Extract structured data from all three systems and reconcile it. Use Claude Opus 4.7 to identify conflicts (e.g., claim 12345 has reserve $50,000 in System A but $75,000 in System B) and flag them for investigation.
- For claims with documents, feed the documents to Claude Opus 4.7 and extract narrative information: claim summary, key events, medical status, litigation status, reserve history, and development narrative.
- For claims without documents, use Claude Opus 4.7 to synthesise a narrative from the structured data alone, noting where data is sparse or conflicting.
- Stratify the portfolio by claim age, claim amount, and development status. Identify the highest-risk claims (oldest, largest, or most unusual).
- Have actuaries review the top 5% of claims (25,000 claims) in detail, using the AI-extracted information as a starting point. The AI has already done the heavy lifting of data extraction and organisation; the actuary can focus on judgment.
- For the remaining 95% of claims, use the AI’s comparative analysis to assess reserve adequacy. If claims in a particular cohort are developing faster than expected, adjust reserves accordingly.
- Document the entire process: what data sources were used, how conflicts were resolved, which claims were reviewed, what adjustments were made, and who signed off.
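The reconciliation step in this pattern can be sketched as a simple conflict check across source systems. The tolerance and record shape below are assumptions for illustration:

```python
def find_conflicts(records, tolerance=0.01):
    """Reconcile reserve figures for the same claim across source systems.

    records: list of (claim_id, system_name, reserve) tuples.
    Returns {claim_id: {system: reserve}} for claims whose reserves
    disagree by more than `tolerance` (relative to the largest figure).
    """
    by_claim = {}
    for claim_id, system, reserve in records:
        by_claim.setdefault(claim_id, {})[system] = reserve

    conflicts = {}
    for claim_id, figures in by_claim.items():
        values = list(figures.values())
        if len(values) < 2:
            continue  # only one system holds this claim; nothing to reconcile
        hi, lo = max(values), min(values)
        if hi and (hi - lo) / hi > tolerance:
            conflicts[claim_id] = figures
    return conflicts
```

Conflicts surfaced this way become the investigation queue for the extraction pass, rather than silently propagating whichever system happened to be queried first.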
Outcome: What might have taken 18 months of manual work is completed in 3-4 months. The actuary has reviewed a much larger sample than would have been possible manually, and the reserves are more defensible because the methodology is clear and systematic.
Pattern 2: Continuous Reserving Reviews with Agentic Workflows
Scenario: An insurer wants to move from annual reserving reviews to quarterly or monthly reviews, but doesn’t have the resources to scale manual review processes.
Approach:
- Set up an agentic AI workflow that runs monthly. The workflow retrieves all claims that have changed since the last review (new claims, claims with payments, claims with reserve adjustments).
- For each changed claim, the agent extracts current status and compares it against the previous month’s status. It calculates development factors and flags claims that are developing faster or slower than expected.
- The agent produces a monthly report highlighting:
  - New claims requiring initial reserve assessment
  - Claims with significant reserve movements
  - Claims approaching statute of limitations
  - Claims with unusual development patterns
  - Cohort-level insights (e.g., medical claims from 2015 are developing 10% faster than expected)
- An actuary reviews this report (30 minutes to 1 hour) and makes any necessary adjustments. The workflow is designed to flag only material issues, so the actuary’s time is spent on high-value decisions, not routine data processing.
- The workflow documents all changes and obtains actuarial sign-off.
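The monthly delta-and-flag logic reduces to diffing two snapshots. The thresholds and snapshot shape below are illustrative:

```python
def monthly_flags(previous, current, movement_threshold=0.25):
    """Compare two monthly snapshots and flag claims needing review.

    previous/current: {claim_id: {'reserve': float, 'paid': float}}.
    Returns a dict mapping flag category to a list of claim IDs.
    """
    report = {"new_claims": [], "large_movements": []}
    for claim_id, now in current.items():
        before = previous.get(claim_id)
        if before is None:
            report["new_claims"].append(claim_id)  # needs an initial reserve assessment
            continue
        if before["reserve"]:
            movement = abs(now["reserve"] - before["reserve"]) / before["reserve"]
            if movement > movement_threshold:
                report["large_movements"].append(claim_id)
    return report
```

A production workflow would add further categories (statute-of-limitations proximity, unusual development patterns, cohort-level drift), but each is the same shape: a per-claim predicate over the two snapshots.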
Outcome: The insurer achieves more frequent reserving reviews without proportionally increasing headcount. Reserves are more responsive to actual claims experience. The monthly reports also provide valuable insights into claims trends and emerging risks.
Pattern 3: Litigation and Exposure Management
Scenario: A run-off portfolio includes 50,000 claims with active litigation. The insurer needs to track litigation status, assess litigation risk, and ensure reserves are adequate for potential adverse judgments.
Approach:
- Feed all litigation correspondence (pleadings, discovery documents, expert reports, settlement discussions) to Claude Opus 4.7. The model extracts: litigation status (discovery, trial, appeal, settlement discussion), key legal issues, expert opinions, settlement offers, and estimated litigation timeline.
- The agent aggregates this information and identifies patterns: certain types of claims are more likely to be litigated, certain legal theories are emerging as problematic, certain experts are consistently pessimistic or optimistic.
- For each litigated claim, the agent produces a litigation summary with risk assessment: low, medium, or high risk of adverse judgment; estimated range of potential exposure; and recommended reserve level.
- Actuaries review the risk assessments and adjust reserves accordingly. For high-risk claims, they may increase reserves to account for litigation risk.
- The insurer uses these assessments to inform settlement strategy: which claims should be settled early to avoid litigation risk, which claims should be defended, which claims should be reassessed if new evidence emerges.
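The risk-banding step could be sketched as a simple rule map over the extracted litigation facts. The statuses and thresholds below are illustrative assumptions, not an actuarial standard:

```python
def litigation_risk_band(status, settlement_offer, reserve):
    """Map extracted litigation facts to a coarse risk band.

    status: extracted litigation stage, e.g. 'discovery', 'trial', 'appeal'.
    settlement_offer: latest offer amount, or None if no offer has been made.
    reserve: the current reserve held for this claim.
    """
    if status in ("trial", "appeal"):
        return "high"   # late-stage litigation carries adverse-judgment risk
    if settlement_offer is not None and settlement_offer > reserve:
        return "high"   # offers above the reserve suggest exposure is understated
    if status == "discovery":
        return "medium"
    return "low"
```

The value of even a coarse map like this is consistency: the same facts always produce the same band, so actuarial overrides stand out clearly in the audit trail.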
Outcome: The insurer has much better visibility into litigation risk. Reserves are more tailored to actual litigation exposure rather than using broad litigation risk factors. Settlement strategy is more informed.
These patterns illustrate how AI-assisted reserving can be adapted to different portfolio characteristics and business objectives. The common thread is that AI handles data extraction and comparative analysis, freeing actuaries to focus on judgment and decision-making.
Security, Compliance, and Data Governance
Insurance claim files contain highly sensitive information: personal health information, financial details, litigation strategy, and proprietary underwriting data. Any AI system used to process these files must meet stringent security and compliance standards.
Data Security
When using Claude Opus 4.7 or any external AI service for sensitive insurance data, you need to:
Implement Data Minimisation: Don’t send entire claim files to the AI if you can extract key information first. If a claim file contains 100 pages but only 5 pages are relevant to reserving, send only those 5 pages. This reduces the exposure of sensitive information.
Use Secure APIs: Ensure all communication with the AI service uses encrypted channels (HTTPS/TLS). Verify that the API endpoint is legitimate and that data in transit is protected.
Implement Access Controls: Restrict who can submit data to the AI system. Use role-based access control to ensure only authorised personnel can process claim files.
Audit Logging: Log all interactions with the AI system. Who submitted what data, when, and what was the output. This audit trail is essential for security investigations and regulatory review.
Anonymisation: Where possible, anonymise or pseudonymise data before sending it to the AI. For example, replace claimant names with claim IDs, remove specific medical details that aren’t necessary for reserving analysis, and redact personal identifiers.
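A minimal pseudonymisation pass might replace known claimant names with salted hashes and redact obvious identifiers before any text leaves your environment. A sketch, not a complete de-identification solution:

```python
import hashlib
import re

def pseudonymise(text, known_names):
    """Replace known claimant names with stable pseudonyms and redact DOBs.

    known_names: iterable of claimant names present in your claims database.
    The pseudonym is a truncated salted hash, so the same name always maps
    to the same token but the mapping is not reversible downstream.
    """
    salt = "rotate-this-salt-per-engagement"  # illustrative; manage via a secrets store
    out = text
    for name in known_names:
        token = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
        out = out.replace(name, f"CLAIMANT-{token}")
    # Redact common direct identifiers, e.g. dates of birth in dd/mm/yyyy form.
    out = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DOB REDACTED]", out)
    return out
```

Stable pseudonyms matter for reserving work: the AI can still correlate references to the same claimant across documents without ever seeing the real name.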
PADISO’s experience with SOC 2 compliance and ISO 27001 implementation is directly relevant here. SOC 2 Type II certification demonstrates that a service provider has implemented appropriate security controls and maintains them over time. If you’re using an external AI service for sensitive data, verify that the service has SOC 2 Type II certification.
Compliance and Regulatory Considerations
Insurance regulators care about data security and privacy. In Australia, APRA requires that insurers implement appropriate risk management and governance for outsourced functions. If you’re outsourcing claim file analysis to an AI service, you need to:
Assess the Service Provider: Evaluate the AI service provider’s security controls, compliance certifications, financial stability, and track record. Have they handled similar work for other insurers? What controls do they have in place?
Establish a Service Agreement: Have a clear written agreement with the service provider that specifies: what data will be processed, how it will be secured, how long it will be retained, what happens if there’s a breach, and how the agreement can be terminated. The agreement should include audit rights: you should be able to audit the service provider’s controls.
Implement Oversight: Don’t simply hand off claim files to the AI service and trust the results. Implement oversight: spot-check the AI’s work, validate its accuracy, and monitor for anomalies. This oversight is your responsibility, not the service provider’s.
Document Your Process: Document how you use the AI service, what controls you have in place, and how you ensure the service provider meets your requirements. This documentation is essential for regulatory review.
Privacy and Personal Information
Claim files contain personal information protected by privacy laws (Privacy Act 1988 in Australia, GDPR in Europe, etc.). When processing claim files with AI:
Understand Your Privacy Obligations: You must comply with privacy laws regardless of whether you use AI or manual processes. Privacy laws typically require that personal information be used only for the purpose for which it was collected, that it be kept secure, and that individuals have rights to access and correct their information.
Assess the AI Service Provider’s Privacy Controls: Does the service provider have a privacy policy? How do they handle personal information? Do they use data for any purpose other than processing your requests? Do they share data with third parties?
Consider Privacy by Design: Design your AI-assisted reserving process with privacy in mind. Use pseudonymisation where possible. Minimise the amount of personal information sent to the AI service. Ensure data is securely deleted after processing.
Maintain Transparency: If you’re using AI to process claim files, consider whether claimants should be informed. Privacy laws often require transparency about how personal information is used. This doesn’t mean you need to ask permission every time you use AI for routine reserving analysis, but you should be transparent about your use of AI in your privacy policy.
For organisations handling sensitive data, PADISO’s approach to AI automation for financial services includes built-in security and compliance considerations. Their team understands the regulatory landscape for financial services and can help you implement AI in a way that meets both security and privacy requirements.
Measuring Success: KPIs and ROI
Implementing AI-assisted reserving requires investment: time to set up the system, cost of the AI service, training for staff, and ongoing governance. How do you measure whether the investment is paying off?
Efficiency Metrics
Time to Review: How long does it take to complete a reserving review? With AI assistance, the time should drop significantly. Measure time per claim reviewed, time per claim file processed, and total time for the quarterly or annual review.
Example: Manual review of a 500,000-claim portfolio takes 6 months and 20 actuaries. AI-assisted review takes 2 months and 5 actuaries. That’s a two-thirds reduction in elapsed time and a 75% reduction in headcount required.
Sample Size: How many claims are reviewed? With AI assistance, you should be able to review a much larger sample. Measure the percentage of the portfolio reviewed in detail.
Example: Manual review might cover 1% of claims (5,000 claims). AI-assisted review might cover 10% or more (50,000+ claims). A larger sample is more statistically robust and more likely to catch issues.
Cost per Review: What’s the cost to complete a reserving review? Calculate total cost (actuary time, AI service cost, infrastructure, oversight) divided by number of claims reviewed.
Example: Manual review costs $1,000 per claim reviewed ($5M for 5,000 claims). AI-assisted review costs $100 per claim reviewed ($5M for 50,000 claims). That’s a 90% reduction in cost per claim.
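The three efficiency metrics above reduce to simple arithmetic. A minimal sketch in Python, using the illustrative figures from the examples in this section (all numbers are this article's assumptions, not industry benchmarks):

```python
def efficiency_metrics(claims_reviewed: int, total_cost: float,
                       elapsed_months: float, actuaries: int) -> dict:
    """Summarise a reserving review: cost per claim and throughput per actuary-month."""
    return {
        "cost_per_claim": total_cost / claims_reviewed,
        "claims_per_actuary_month": claims_reviewed / (actuaries * elapsed_months),
    }

# Illustrative figures from the examples above.
manual = efficiency_metrics(claims_reviewed=5_000, total_cost=5_000_000,
                            elapsed_months=6, actuaries=20)
assisted = efficiency_metrics(claims_reviewed=50_000, total_cost=5_000_000,
                              elapsed_months=2, actuaries=5)

cost_reduction = 1 - assisted["cost_per_claim"] / manual["cost_per_claim"]
print(f"{cost_reduction:.0%}")  # 90%
```

Tracking these figures per review cycle makes the efficiency trend visible quarter over quarter, rather than relying on a one-off before/after comparison.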
Accuracy Metrics
Reserve Accuracy: How accurate are the reserves? This is measured by comparing actual claims development against reserved amounts. If reserves are accurate, actual payments should track closely to reserved amounts. If reserves are too high, actual payments come in below the reserved amounts; if reserves are too low, actual payments exceed them.
Measure the reserve adequacy ratio: actual cumulative paid divided by original reserve. A ratio of 1.0 means the reserve was exactly right. A ratio of 0.8 means the claim paid out 20% less than reserved (over-reserved). A ratio of 1.2 means the claim paid out 20% more than reserved (under-reserved).
Example: Before implementing AI-assisted reserving, the portfolio's reserve adequacy ratio is 1.15 (on average, claims pay out 15% more than reserved, i.e. reserves are 15% too low). After implementation, the ratio improves to 1.05 (reserves are only 5% too low). This represents a significant improvement in reserve accuracy.
Variance Reduction: How consistent are reserves? Measure the variance in reserve adequacy ratio across the portfolio. Lower variance means reserves are more consistent and more defensible.
Example: Before AI, the reserve adequacy ratio ranges from 0.7 to 1.5, with a standard deviation of 0.25. After AI, the range is 0.9 to 1.2, with a standard deviation of 0.08. This shows that AI-assisted reserving produces more consistent reserves.
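Both accuracy metrics fall out of per-claim paid and reserved amounts. A sketch, assuming that data is available as parallel lists keyed by claim (the sample figures below are invented for illustration):

```python
import statistics

def adequacy_ratios(paid: list[float], reserved: list[float]) -> list[float]:
    """Reserve adequacy ratio per claim: actual cumulative paid / original reserve."""
    return [p / r for p, r in zip(paid, reserved)]

def adequacy_summary(ratios: list[float]) -> dict:
    """Mean shows bias (>1.0 under-reserved, <1.0 over-reserved);
    standard deviation shows consistency across the portfolio."""
    return {"mean": statistics.fmean(ratios),
            "stdev": statistics.stdev(ratios)}

# Invented sample: five claims, actual paid vs originally reserved.
paid = [110.0, 95.0, 130.0, 102.0, 88.0]
reserved = [100.0, 100.0, 100.0, 100.0, 100.0]
summary = adequacy_summary(adequacy_ratios(paid, reserved))
# summary["mean"] == 1.05 (claims paid out 5% above reserve on average)
```

In practice the same summary would be computed by cohort (accident year, product line) so that offsetting over- and under-reserving in different cohorts is not masked by a portfolio-level average.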
Audit Findings: How many audit findings or regulatory concerns are raised about reserves? Fewer findings suggest that reserves are more defensible and better documented.
Example: Previously, the annual audit raised 3-5 findings about reserve adequacy. After implementing AI-assisted reserving, audit findings drop to 0-1. This suggests that reserves are more robust and better supported.
Business Impact Metrics
Reserve Movement: What’s the impact on reported reserves? When you implement AI-assisted reserving, do reserves go up or down? This depends on your portfolio, but the direction and magnitude of movement should align with actual claims experience.
Example: AI-assisted review identifies that a particular claim cohort is under-reserved by 5%. Reserves are increased by $50M. Over the next 2 years, actual claims development validates this increase: claims develop faster than originally assumed, and the additional reserve is necessary.
Regulatory Capital: What’s the impact on regulatory capital? In some jurisdictions, more accurate reserves mean lower regulatory capital requirements. Measure any change in capital requirements resulting from improved reserve accuracy.
Example: More accurate reserves reduce the need for conservative capital buffers. Regulatory capital requirement drops by $100M, freeing up capital for dividends or growth investments.
Shareholder Communication: Can you communicate reserves more confidently to shareholders? Better-documented, more-defensible reserves give management confidence to explain reserve movements and justify reserve levels.
Return on Investment
Calculate ROI as follows:
Benefits:
- Reduced actuary headcount required (salary savings)
- Improved reserve accuracy (reduced risk of adverse development)
- Reduced audit findings (reduced audit cost and management attention)
- Freed-up actuary time (can be redeployed to higher-value work)
- Better decision-making (more thorough analysis informs better business decisions)
Costs:
- AI service cost (Claude Opus 4.7 API calls)
- Infrastructure cost (systems to manage data, store results, interface with AI)
- Implementation cost (time to set up, validate, train staff)
- Ongoing governance cost (oversight, audit, compliance)
Example ROI calculation:
- Reduced headcount: 5 actuaries × $200,000 salary = $1M annual savings
- Improved reserve accuracy: Reduced adverse development worth $500K annually
- Audit cost reduction: $100K annually
- AI service cost: $200K annually
- Infrastructure and governance: $150K annually
- Net benefit: $1M + $500K + $100K - $200K - $150K = $1.25M annually
- Implementation cost: $500K (one-time)
- Payback period: 5 months
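The calculation above is worth making repeatable, so the business case can be re-run as assumptions change. A sketch using the illustrative figures from this section:

```python
def roi_summary(annual_benefits: dict[str, float],
                annual_costs: dict[str, float],
                one_time_cost: float) -> dict:
    """Net annual benefit and payback period (months) for a one-time implementation cost."""
    net_annual = sum(annual_benefits.values()) - sum(annual_costs.values())
    return {
        "net_annual_benefit": net_annual,
        "payback_months": 12 * one_time_cost / net_annual,
    }

summary = roi_summary(
    annual_benefits={"headcount": 1_000_000, "accuracy": 500_000, "audit": 100_000},
    annual_costs={"ai_service": 200_000, "infra_governance": 150_000},
    one_time_cost=500_000,
)
# net annual benefit: $1.25M; payback: 4.8 months (rounded to 5 above)
```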
These metrics demonstrate that AI-assisted reserving is not just a nice-to-have; it’s a financially compelling investment that delivers measurable ROI.
Getting Started with Your AI Reserving Partner
If you’re considering implementing AI-assisted reserving for your insurance run-off portfolio, here’s how to get started.
Step 1: Assess Your Portfolio
Understand your portfolio: How many claims? What’s the age distribution? What’s the data quality? Where are the biggest uncertainties? What are your biggest reserving challenges?
This assessment will inform your approach. A portfolio with 100,000 well-documented claims from a single product line might use a different approach than a portfolio with 1 million claims from multiple products with mixed data quality.
Step 2: Define Your Objectives
What do you want to achieve? Are you trying to reduce reserving review time? Improve reserve accuracy? Increase sample size? Reduce audit findings? Improve regulatory capital efficiency?
Clear objectives will guide your implementation and help you measure success.
Step 3: Pilot on a Subset
Don’t implement AI-assisted reserving across your entire portfolio immediately. Start with a pilot: select a subset of claims (perhaps 10,000-50,000 claims representing 5-10% of the portfolio) and implement the process on that subset.
Use the pilot to:
- Validate the AI’s accuracy against manual review
- Identify and fix any data quality issues
- Refine the process based on what you learn
- Build confidence with actuaries and management
- Measure initial ROI and refine your business case
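The first pilot objective, validating the AI against manual review, benefits from a pre-agreed pass criterion. One simple sketch: compare AI-suggested reserves against the manual benchmark claim by claim, with a tolerance band agreed with the actuarial team (the tolerance, claim IDs, and figures here are all hypothetical):

```python
def validation_report(ai_reserves: dict[str, float],
                      manual_reserves: dict[str, float],
                      tolerance: float = 0.05) -> dict:
    """Compare AI-suggested reserves against the manual benchmark on the pilot subset.
    A claim 'agrees' when the AI figure is within +/- tolerance of the manual figure;
    everything else is an exception for an actuary to review."""
    common = ai_reserves.keys() & manual_reserves.keys()
    agree = {c for c in common
             if abs(ai_reserves[c] - manual_reserves[c]) <= tolerance * manual_reserves[c]}
    return {
        "claims_compared": len(common),
        "agreement_rate": len(agree) / len(common),
        "exceptions": sorted(common - agree),  # escalate to actuarial review
    }

# Invented pilot figures for illustration.
report = validation_report(
    ai_reserves={"C1": 100.0, "C2": 210.0, "C3": 55.0},
    manual_reserves={"C1": 102.0, "C2": 250.0, "C3": 54.0},
)
# report["exceptions"] == ["C2"]: the one claim outside the 5% band
```

Reviewing the exception list is often more informative than the headline agreement rate: it shows where the AI and the actuaries disagree, and whether the disagreements cluster in particular claim types.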
Step 4: Partner with Experts
You don’t need to build this from scratch. Partner with a firm that has experience implementing AI in insurance and understands both the technical and actuarial dimensions.
PADISO, as a Sydney-based AI automation agency, has experience implementing agentic AI workflows for complex business processes. Their AI & Agents Automation service is specifically designed to help organisations like insurance firms automate document-heavy processes while maintaining governance and compliance.
More broadly, look for partners who:
- Have experience in insurance or financial services
- Understand actuarial requirements and regulatory compliance
- Have implemented similar AI projects successfully
- Can provide references from similar organisations
- Offer ongoing support and continuous improvement
Step 5: Implement Governance and Oversight
Before going live, establish governance:
- Who is responsible for what?
- How will actuarial sign-off work?
- How will exceptions be handled?
- What’s the audit trail?
- How will the system be monitored and improved over time?
These governance structures are not bureaucratic overhead; they’re essential for ensuring the system works as intended and meets regulatory requirements.
Step 6: Monitor and Iterate
Once the system is live, monitor its performance. Are efficiency metrics improving as expected? Are accuracy metrics improving? Are actuaries confident in the AI’s work?
Use this feedback to iterate: refine extraction prompts, adjust flagging rules, improve the user interface, and expand the system to cover more claims or more use cases.
AI-assisted reserving is not a “set it and forget it” implementation. It’s an ongoing process of refinement and improvement.
Conclusion
Insurance run-off portfolios represent one of the most challenging reserving environments: massive data volumes, long development tails, legacy systems, and high stakes. Traditional manual reserving approaches struggle to scale to these challenges.
AI-assisted reserving, powered by Claude Opus 4.7 and agentic AI workflows, offers a path forward. By automating data extraction, comparative analysis, and anomaly flagging, AI frees actuaries to focus on professional judgment and decision-making. The result is faster reviews, larger samples, more consistent reserves, and better defensibility.
Critically, this approach maintains actuarial sign-off discipline. The actuary remains the decision-maker; the AI is a tool that makes the actuary more effective. This is essential for regulatory compliance and audit defensibility.
Implementing AI-assisted reserving requires investment in technology, process design, and governance. But the ROI is compelling: reduced headcount, improved accuracy, faster reviews, and better business decisions. For large run-off portfolios, the payback period is typically measured in months, not years.
If you’re managing a run-off portfolio and struggling with reserving reviews, AI-assisted reserving is worth exploring. Start with a pilot, partner with experts who understand both AI and insurance, and build governance structures that maintain actuarial discipline.
The future of insurance reserving is not about replacing actuaries; it’s about augmenting their expertise with AI tools that multiply their effectiveness. That future is here now.