Life Insurance Underwriting: Where Agents Beat Underwriters and Where They Don't
Honest assessment of where AI agents outperform human underwriters in life insurance and where they fall short. Straight-through accept rates, compliance, edge cases.
Table of Contents
- The Underwriting Landscape Today
- Where AI Agents Win: High-Volume, Straight-Through Processing
- Where Human Underwriters Still Dominate
- The Data Problem: What Agents Can and Cannot See
- Compliance and Regulatory Reality
- Building a Hybrid Model That Actually Works
- Real Numbers: Acceptance Rates and Processing Speed
- Implementation: The Sydney Perspective
- Beyond Underwriting: The Broader AI Opportunity
- Next Steps for Your Organisation
The Underwriting Landscape Today
Life insurance underwriting has changed dramatically. A decade ago, underwriters made decisions based on medical exams, blood tests, and telephone interviews. Today, the industry is experimenting with AI agents that can assess risk using driving records, credit histories, medical claims data, and algorithmic scoring—sometimes without any human contact at all.
The promise is compelling: faster decisions, lower costs, higher volume. The reality is messier.
This guide cuts through the hype. We’ve worked with Australian life insurers navigating this transition, and we’ve seen where AI agents genuinely outperform underwriters—and where they create more problems than they solve.
The core tension is this: AI agents are exceptionally good at processing high-volume, low-complexity cases at scale. They’re terrible at handling edge cases, interpreting context, and making judgment calls on ambiguous medical or financial information. Most life insurance portfolios contain far more edge cases than anyone initially admits.
Where AI Agents Win: High-Volume, Straight-Through Processing
Instant Decisioning on Clean Cases
AI agents excel when the decision is obvious. A 28-year-old with no medical history applying for a $250,000 policy? An agent can assess and approve this in milliseconds. No underwriter needed.
Straight-through processing (STP) is the metric that matters. When we talk about “where agents beat underwriters,” we’re primarily talking about STP rates—the percentage of applications that can be approved without human review.
Industry benchmarks suggest STP rates of 40–60% are achievable with well-trained agents on clean data. Some insurers claim higher, but those claims often exclude declined applications from the denominator, which inflates the figure. A proper STP rate counts automated approvals as a percentage of all applications received, declines included.
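A toy calculation (hypothetical volumes) shows how excluding declines from the denominator inflates the headline number:

```python
# Illustration with hypothetical numbers: why the STP denominator matters.
# "STP" here means applications auto-approved with no human review.

def stp_rate(auto_approved: int, total_applications: int) -> float:
    """STP rate over all applications received, declines included."""
    return auto_approved / total_applications

auto_approved = 4_500   # approved straight through by the agent
human_reviewed = 3_500  # routed to an underwriter
auto_declined = 2_000   # declined without human review

total = auto_approved + human_reviewed + auto_declined  # 10,000

honest = stp_rate(auto_approved, total)                  # 0.45
skewed = stp_rate(auto_approved, total - auto_declined)  # 0.5625

print(f"Honest STP rate: {honest:.1%}")  # 45.0%
print(f"Skewed STP rate: {skewed:.1%}")  # 56.2%
```

The same approval count reads eleven points higher once declines quietly drop out of the denominator, which is why the calculation method matters as much as the number itself.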
Cost Per Decision
An underwriter costs roughly £40,000–£60,000 per year in salary and overhead. They process roughly 10–15 cases per day, assuming mixed complexity. That’s £16–£24 per decision.
An AI agent running on cloud infrastructure costs roughly £0.02–£0.10 per decision, including compute, storage, and model updates. The cost difference is not marginal—it’s three orders of magnitude.
For high-volume, low-complexity portfolios, this matters enormously. A regional insurer processing 10,000 applications per month can save £150,000–£200,000 annually by automating the straightforward cases.
Speed to Decision
Underwriters take 3–7 days on average to make a decision, even on simple cases. There’s review time, documentation time, back-and-forth with applicants, and batch processing.
AI agents can decide in minutes. For applicants, this is transformative—especially in competitive markets where speed influences purchase behaviour.
Australian insurers have noted that faster decisions correlate with higher conversion rates, particularly in the under-40 demographic. The psychological weight of waiting a week for approval is real.
Pattern Recognition at Scale
An underwriter reviews perhaps 3,000–4,000 applications per year. An AI agent can review millions. This means agents can identify patterns that humans miss: subtle correlations between occupation codes and mortality risk, seasonal patterns in claim submission, or emerging fraud signatures.
When trained on sufficient historical data, agents can sometimes outperform human underwriters on pattern recognition alone. The catch: they need clean, consistent historical data. Most insurers don’t have it.
Where Human Underwriters Still Dominate
Edge Cases and Ambiguity
Life insurance is full of edge cases. A 45-year-old with a history of depression who’s now been stable for five years. A diabetic applicant with excellent control. Someone with a family history of early heart disease but exceptional personal health metrics.
These cases require judgment. They require weighing competing risk factors, understanding context, and making calls that aren’t obvious from the data alone.
AI agents struggle here. They’re trained to classify, not to synthesize competing narratives. When faced with ambiguous information, they either default to rejection (conservative) or apply rules that don’t fit the situation.
Human underwriters, by contrast, are trained to handle ambiguity. They ask follow-up questions. They request additional medical records. They make judgment calls based on experience. On edge cases, human underwriters outperform agents by a significant margin.
Medical Interpretation
An AI agent can flag that an applicant has a history of atrial fibrillation. But can it distinguish between paroxysmal AFib (intermittent, often benign) and permanent AFib (higher risk)? Can it assess whether the applicant’s rate control is adequate? Can it interpret whether a recent ECG showing “nonspecific ST changes” is clinically significant or a red herring?
These questions require medical training and judgment. Underwriters often have nursing backgrounds or work closely with medical directors. They understand the nuance.
AI agents trained on claims data can approximate this understanding, but approximation isn’t good enough in medicine. A missed nuance can mean either over-accepting risk or over-rejecting applicants who should be approved.
Fraud Detection in Complex Cases
AI agents are excellent at detecting obvious fraud: forged documents, inconsistent application data, applicants who appear in fraud databases.
But complex fraud—an applicant with a hidden terminal diagnosis, someone concealing a dangerous hobby, an applicant with undisclosed substance use—often requires investigation. It requires talking to the applicant, reviewing medical records in detail, and making judgments about credibility and consistency.
Human underwriters are better at this. They have intuition built on experience. They know what “doesn’t add up.”
Underwriting Judgment on Moral Hazard
Moral hazard is the risk that an applicant has an incentive to make a claim. If someone applies for a large policy shortly after a terminal diagnosis, that’s moral hazard. If someone applies for a policy on a spouse they’ve just married, that’s potential moral hazard.
AI agents can flag obvious patterns. But they struggle with context-dependent judgment. Is a 65-year-old applying for a large policy a reasonable estate-planning decision or a sign of hidden knowledge? An underwriter can often tell. An agent usually can’t.
Relationship and Trust
Underwriters build relationships with brokers and applicants. They understand individual circumstances. They can explain decisions and work through concerns.
AI agents are black boxes. They don’t explain. They don’t negotiate. They don’t build trust.
For high-net-worth applicants and complex cases, this relationship matters. It’s not just about the decision—it’s about the applicant’s confidence that the decision was fair.
The Data Problem: What Agents Can and Cannot See
The Information Asymmetry
AI agents are only as good as their input data. And here’s the uncomfortable truth: most life insurance applications lack the data agents actually need.
An AI agent trained on medical claims data, credit history, driving records, and public records can make reasonable decisions on perhaps 50–60% of applications. But what about the other 40–50%?
They lack:
- Detailed medical history: Many applicants have seen doctors multiple times but have no claims data. No claims data means the agent has no visibility into past health events.
- Lifestyle context: Does the applicant smoke? Agents might infer this from claims or medical records, but inference isn’t certainty. A smoker applying for life insurance should be rated differently, but if the agent doesn’t know, it can’t adjust.
- Occupational risk: An applicant’s occupation matters enormously. A commercial pilot faces different risks than an accountant. But occupation codes are often vague or incomplete in application data.
- Family history: AI agents struggle with family medical history. They might see that an applicant’s parent died at 55, but without knowing the cause, they can’t properly assess genetic risk.
- Psychological and social context: Depression, anxiety, financial stress, relationship stability—these all affect mortality risk. But they’re rarely captured in structured data that agents can process.
The Bias Problem
When data is incomplete, AI agents fill gaps with proxies. And proxies are where bias creeps in.
If an agent doesn’t have smoking status, it might infer it from credit score, neighbourhood, or medical claims patterns. These inferences are often correlated with protected characteristics like race or socioeconomic status. The result: the agent makes decisions that appear neutral but are actually discriminatory.
This is a real regulatory risk. The National Association of Insurance Commissioners (NAIC) has published guidance on algorithmic bias in insurance. Regulators are watching.
Human underwriters aren’t perfect on bias, but they’re more transparent about their reasoning. An underwriter can explain why they’re asking for additional medical records. An agent just says “declined.”
The Recency Problem
AI agents trained on historical data make decisions based on historical patterns. But underwriting standards change. Medical understanding evolves. Risk factors that were considered high-risk ten years ago might be manageable today.
Human underwriters adapt. They read industry publications like Independent Agent Magazine and Insurance Journal to stay current on underwriting trends. They attend conferences. They update their mental models.
AI agents don’t. They apply the patterns they learned during training. If the training data is more than two years old, the agent is already outdated.
Compliance and Regulatory Reality
The Explainability Requirement
When an underwriter declines an application, they must explain why. The applicant has a right to understand the decision.
With AI agents, this is harder. The decision might be correct, but the reasoning is opaque. The agent might have weighted 47 different factors in a nonlinear way. How do you explain that to an applicant?
Regulators are increasingly requiring explainability. The Insurance Information Institute and various state insurance commissioners have published guidance on algorithmic decision-making. The theme is consistent: if you can’t explain it, you can’t use it for individual decisions.
This doesn’t mean agents can’t be used. But it means agents should be used for initial screening and triage, not final decisions on declined applications. Declined cases need human review.
The Adverse Action Requirement
In many jurisdictions, if an AI agent declines an application based on information in a consumer report (credit report, driving record, etc.), the insurer must provide the applicant with:
- Notice that the decision was based on the report
- The name and contact information of the reporting agency
- The right to dispute the information
This is straightforward for human underwriters. For agents, it’s more complex. If the agent’s decision was based on a composite score derived from multiple reports, how do you attribute the decision to a specific report?
Australian insurers also need to comply with the Privacy Act 1988 and Australian Consumer Law. The principles are similar: transparency, fairness, and the right to know why a decision was made.
The Model Governance Problem
When an underwriter makes a mistake, it’s usually obvious. You review the file, identify the error, and fix it.
When an AI agent makes systematic mistakes, it’s much harder to detect. The agent might be consistently over-accepting risk on a specific demographic, or consistently over-rejecting diabetic applicants, and you won’t know until you analyse months of decisions.
This requires robust model monitoring. You need to track:
- Approval rates by demographic: Are approval rates consistent across age, gender, and postcode?
- Approval rates by risk category: Are approval rates for diabetic applicants in line with historical underwriter decisions?
- False positive and false negative rates: How many applicants declined by the agent would have been approved by an underwriter? How many approved by the agent later filed claims that should have been declined?
- Drift detection: Is the agent’s decision-making changing over time as new data is fed into the model?
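As a minimal sketch of the first check, approval-rate parity can be computed directly from decision logs; the field names here are illustrative assumptions, not a real schema:

```python
# Minimal monitoring sketch (hypothetical field names and toy data).
# Each decision record notes a demographic group and the agent's outcome.
from collections import defaultdict

def approval_rate_by_group(decisions, group_key):
    """Approval rate per demographic group, for parity checks."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for d in decisions:
        counts[d[group_key]][1] += 1
        if d["approved"]:
            counts[d[group_key]][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

decisions = [
    {"age_band": "<40", "approved": True},
    {"age_band": "<40", "approved": True},
    {"age_band": "40-60", "approved": True},
    {"age_band": "40-60", "approved": False},
]

rates = approval_rate_by_group(decisions, "age_band")
print(rates)  # {'<40': 1.0, '40-60': 0.5}
```

In production the same aggregation would run over months of decisions and feed an alert when any group's rate drifts outside an agreed tolerance band.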
This monitoring is expensive. It requires data science expertise. Many insurers underestimate this cost when they implement agents.
Regulatory Guidance on AI in Insurance
The NAIC paper AI-Enabled Underwriting Brings New Challenges for Life Insurance outlines specific concerns:
- Model validation: Regulators want to see evidence that the model performs as intended across different populations.
- Bias testing: Insurers must demonstrate that the model doesn’t discriminate against protected classes.
- Explainability: Insurers must be able to explain individual decisions.
- Governance: There must be clear accountability for model performance and decisions.
- Consumer protection: Applicants must have recourse if they believe a decision was unfair.
Most insurers implementing agents aren’t meeting these requirements. This is a compliance risk.
Building a Hybrid Model That Actually Works
The Triage Approach
The most successful hybrid models we’ve seen use AI agents for triage, not final decisions.
Here’s the workflow:
- Initial screening: The agent reviews the application and flags obvious issues (incomplete information, fraud signals, data inconsistencies).
- Risk stratification: The agent assigns a risk score and categorises the application (low risk, medium risk, high risk, decline).
- Routing: Based on risk category, the application is routed:
  - Low risk (STP): Approved automatically. No human review needed.
  - Medium risk: Sent to an underwriter for review. The agent’s risk score and reasoning inform the underwriter’s decision.
  - High risk: Sent to a senior underwriter or medical director for detailed review.
  - Decline: Sent to a compliance officer for review before final decline.
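The routing step above can be sketched as a simple function; the score thresholds are illustrative assumptions, not calibrated values:

```python
# Sketch of the triage routing described above.
# Thresholds are illustrative assumptions, not calibrated cut-offs.

def route(risk_score: float) -> str:
    """Map the agent's risk score to one of the four routes."""
    if risk_score < 0.2:
        return "auto_approve"        # low risk: straight-through
    if risk_score < 0.6:
        return "underwriter_review"  # medium risk: human decides
    if risk_score < 0.85:
        return "senior_review"       # high risk: senior underwriter / medical director
    return "compliance_review"       # proposed decline: compliance reviews first

print(route(0.10))  # auto_approve
print(route(0.50))  # underwriter_review
print(route(0.70))  # senior_review
print(route(0.95))  # compliance_review
```

The key design choice is that the agent never emits a final decline; the highest-risk band routes to a human, which is what keeps the explainability and adverse-action obligations manageable.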
This approach captures the benefits of agents (speed, cost, pattern recognition) while preserving human judgment on complex cases.
STP rates in this model are typically 40–50%, not 80–90%. But the cases that go to underwriters are genuinely complex, so underwriter productivity doesn’t suffer.
The Augmentation Approach
Another successful model uses agents to augment underwriter decision-making.
Instead of the agent making the decision, it prepares a detailed risk summary:
- Medical history extracted from claims data
- Lifestyle risk factors inferred from available data
- Comparison to similar applicants (“applicants with this risk profile have a 15% higher mortality rate”)
- Flagged inconsistencies or red flags
- Recommended underwriting actions (“request additional medical records,” “order an ECG,” etc.)
The underwriter reviews this summary and makes the final decision. The agent doesn’t decide; it informs.
This approach is slower than pure automation (underwriters still review every case), but it dramatically increases underwriter productivity. Instead of spending 30 minutes researching and preparing a case, the underwriter spends 5 minutes reviewing the agent’s summary.
Underwriter throughput increases 3–4x. Decision quality improves because underwriters have better information.
The Segmentation Approach
Some insurers segment their portfolio and apply agents selectively.
For example:
- Young, healthy applicants under 40: Use agents for full decisioning. STP rates are typically 70–80%.
- Applicants 40–60 with no medical history: Use agents for triage; underwriters decide.
- Applicants with complex medical histories: Bypass agents entirely; go straight to underwriters.
- High-value policies (>$1M): Always underwriter-reviewed, regardless of risk profile.
This approach is more operationally complex (you need to manage multiple workflows), but it optimises resource allocation. Agents handle the cases where they’re good. Underwriters handle the cases where they’re essential.
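A sketch of that segmentation, using the cut-offs from the examples above (illustrative only):

```python
# Segmentation sketch mirroring the example rules above.
# Cut-offs (age 40/60, $1M sum insured) are illustrative assumptions.

def segment_workflow(age: int, has_medical_history: bool, sum_insured: float) -> str:
    """Pick a workflow for an application based on portfolio segment."""
    if sum_insured > 1_000_000:
        return "underwriter"        # high-value: always human-reviewed
    if has_medical_history:
        return "underwriter"        # complex history: bypass the agent
    if age < 40:
        return "agent_decisioning"  # young, healthy: full automation
    if age <= 60:
        return "agent_triage"       # agent triages; underwriter decides
    return "underwriter"

print(segment_workflow(30, False, 250_000))    # agent_decisioning
print(segment_workflow(50, False, 500_000))    # agent_triage
print(segment_workflow(35, False, 2_000_000))  # underwriter
```

In practice rules like these live in the policy-administration workflow rather than inside the model, which keeps them easy to audit and change.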
Real Numbers: Acceptance Rates and Processing Speed
Straight-Through Processing Rates
Based on our work with Australian life insurers and published benchmarks, here’s what realistic STP rates look like:
Scenario 1: Young, healthy applicants, no medical underwriting required
- Agent STP: 75–85%
- Underwriter STP: 85–95% (but much slower)
- Hybrid (agent + underwriter): 80–90% (fast and accurate)
Scenario 2: Mixed portfolio (all ages, some medical history)
- Agent STP: 40–50%
- Underwriter STP: 60–70% (slower)
- Hybrid (agent triage + underwriter): 45–55% (balanced)
Scenario 3: Complex portfolio (older applicants, significant medical history)
- Agent STP: 20–30%
- Underwriter STP: 40–50%
- Hybrid (agent augmentation): 35–45% (underwriter-assisted)
Note: These figures assume clean data and well-trained models. Many insurers achieve lower rates because their data is messy or their models are poorly calibrated.
Processing Speed Comparison
| Scenario | Agent Only | Underwriter Only | Hybrid Model |
|---|---|---|---|
| Low-risk case | 2 minutes | 2–3 days | 2 minutes (STP) |
| Medium-risk case | 2 minutes | 3–5 days | 4–6 hours (underwriter review) |
| High-risk case | 2 minutes (usually decline) | 5–7 days | 1–2 days (senior review) |
| Declined case | 2 minutes | 7–10 days | 1–2 days (compliance review) |
The key insight: agents are fast, but they often make conservative decisions that send cases to underwriters anyway. The real speed gain comes from automating the genuinely simple cases and accelerating underwriter review on complex cases.
Cost Per Decision
Agent-only model
- Infrastructure cost: £0.02–£0.10 per decision
- Compliance and monitoring: £5,000–£20,000 per month (for model governance)
- Total for 10,000 applications per year: £200–£1,000 in per-decision costs plus £60,000–£240,000 in annual monitoring = £60,200–£241,000
- Cost per decision: £6.02–£24.10
Underwriter-only model
- Salary and overhead: £40,000–£60,000 per underwriter per year
- Each underwriter processes ~3,000 applications per year
- Cost per decision: £13.33–£20
- For 10,000 applications: 3–4 underwriters = £120,000–£240,000
Hybrid model (50% STP via agent, 50% underwriter-reviewed)
- Agent infrastructure: £100–£500 per month, plus £5,000–£20,000 per year for monitoring
- Underwriter salary: 1.5–2 underwriters = £60,000–£120,000
- Total for 10,000 applications: £65,000–£140,000
- Cost per decision: £6.50–£14
The hybrid model is typically the most cost-effective, especially if the agent STP rate is 40–50%. You save on underwriter salary (fewer underwriters needed) and the agent cost is minimal.
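A quick back-of-the-envelope check of the three models, using rough midpoints of the ranges above (all figures illustrative; the hybrid governance cost is assumed to be the lighter annual figure implied by the hybrid total):

```python
# Worked cost comparison using rough midpoints of the figures above.
# 10,000 applications per year; all amounts in pounds and illustrative.

APPLICATIONS = 10_000

# Agent-only: ~£0.06 per decision infrastructure + ~£12,500/month governance.
agent_only = 0.06 * APPLICATIONS + 12_500 * 12

# Underwriter-only: ~3,000 decisions per underwriter per year, ~£50,000 each.
underwriter_only = (APPLICATIONS / 3_000) * 50_000

# Hybrid: 50% straight-through; the rest goes to underwriters. Governance is
# assumed lighter (~£12,500/year), consistent with the hybrid total above.
hybrid = 12_500 + (0.5 * APPLICATIONS / 3_000) * 50_000

for name, total in [("Agent-only", agent_only),
                    ("Underwriter-only", underwriter_only),
                    ("Hybrid", hybrid)]:
    print(f"{name}: £{total:,.0f} total (£{total / APPLICATIONS:.2f}/decision)")
```

On these midpoints the hybrid model comes out cheapest per decision, mostly because it avoids both the heavy governance burden of full automation and the full underwriter headcount of manual review.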
Quality Metrics
Quality is harder to measure, but here are the metrics that matter:
Acceptance accuracy: Of the cases the agent approved, what percentage would an underwriter also approve? Realistic target: 85–95%. If lower, the agent is over-accepting risk.
Decline accuracy: Of the cases the agent declined, what percentage would an underwriter also decline? Realistic target: 70–85%. Agents tend to over-decline (conservative bias), so this number is often lower.
Fraud detection rate: Of the fraudulent applications in your portfolio, what percentage did the agent flag? Realistic target: 60–80%. Agents are good at obvious fraud but miss sophisticated fraud.
Complaint rate: What percentage of applicants complained about their decision? Realistic target: <2% for STP cases, 5–10% for declined cases. Agents often have higher complaint rates because decisions are unexplained.
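Acceptance and decline accuracy can both be measured the same way from a shadow-review sample, where underwriters re-decide a random slice of the agent’s cases; the field names below are illustrative:

```python
# Sketch: agreement between the agent and underwriters on a shadow-review
# sample (hypothetical field names and toy data).

def agreement_rate(cases, agent_decision):
    """Share of the agent's decisions an underwriter would also make."""
    relevant = [c for c in cases if c["agent"] == agent_decision]
    agree = sum(1 for c in relevant if c["underwriter"] == agent_decision)
    return agree / len(relevant) if relevant else None

sample = [
    {"agent": "approve", "underwriter": "approve"},
    {"agent": "approve", "underwriter": "approve"},
    {"agent": "approve", "underwriter": "decline"},
    {"agent": "decline", "underwriter": "decline"},
    {"agent": "decline", "underwriter": "approve"},
]

print(f"Acceptance accuracy: {agreement_rate(sample, 'approve'):.0%}")  # 67%
print(f"Decline accuracy: {agreement_rate(sample, 'decline'):.0%}")     # 50%
```

Shadow review is the cheapest way to get these numbers before go-live: the agent decides, but an underwriter still decides too, and only the disagreements need investigating.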
Implementation: The Sydney Perspective
The Australian Regulatory Environment
Australian life insurers face specific regulatory requirements that affect AI implementation:
- Australian Prudential Regulation Authority (APRA): APRA has issued guidance on operational risk and technology governance. Insurers must demonstrate that AI systems are properly governed and monitored.
- Australian Securities and Investments Commission (ASIC): ASIC has issued guidance on algorithmic decision-making and consumer protection. Insurers must be able to explain decisions to consumers.
- Privacy Act 1988 (Cth): Insurers must comply with the Australian Privacy Principles (APPs), including transparency about how personal information is used in decision-making.
- Australian Consumer Law: Insurers must not engage in misleading or deceptive conduct. This includes how AI decisions are communicated to consumers.
These requirements mean that Australian insurers implementing agents need to invest in:
- Model governance: Clear documentation of how the model works, who’s accountable, and how performance is monitored.
- Explainability infrastructure: Systems to explain individual decisions to applicants.
- Compliance monitoring: Regular audits to ensure the model doesn’t discriminate and complies with regulatory requirements.
- Consumer communication: Clear, transparent communication about how AI is used in underwriting.
Sydney-Based Implementation Considerations
Sydney insurers have some advantages and challenges:
Advantages:
- Access to technology talent (Sydney has a strong AI and software engineering community)
- Proximity to ASX-listed insurers and large financial institutions with resources for implementation
- Growing venture capital funding for insurtech startups
Challenges:
- Regulatory scrutiny (ASIC and APRA are active in Sydney)
- Data quality issues (many Australian insurers have legacy systems with inconsistent data)
- Talent cost (Sydney engineers are expensive, which affects implementation cost)
For Sydney insurers, the hybrid model is often the best starting point. It’s lower risk than pure automation, it delivers meaningful cost savings, and it’s easier to implement with limited data science resources.
Working with an AI Partner
Most insurers shouldn’t build AI underwriting agents in-house. The expertise required—data science, medical knowledge, regulatory compliance, model governance—is specialised.
Working with an experienced AI partner is more cost-effective. A partner can:
- Assess your data: Identify what data you have, what you’re missing, and how to fill gaps
- Design the workflow: Recommend whether triage, augmentation, or segmentation makes sense for your portfolio
- Build and train the model: Use your historical underwriting decisions to train an agent that mimics your underwriting standards
- Implement governance: Set up monitoring, bias testing, and compliance reporting
- Train your team: Ensure your underwriters and compliance team understand how the agent works and how to use it
This is where PADISO’s AI & Agents Automation services come in. We work with Australian insurers to design and implement AI agents that complement human underwriters, not replace them. We focus on realistic outcomes: 40–50% STP rates, clear compliance frameworks, and sustainable cost savings.
We’ve also published detailed guidance on how AI automation works in financial services. Our article on AI Automation for Financial Services: Fraud Detection and Risk Management covers the broader context of AI in insurance and financial services.
Beyond Underwriting: The Broader AI Opportunity
While underwriting is important, it’s just one part of the insurance value chain. We’ve worked with insurers on AI automation across the entire operation:
- Claims processing: AI agents can triage claims, extract information, and route to appropriate handlers. Read more in our guide on AI Automation for Insurance: Claims Processing and Risk Assessment.
- Customer service: AI chatbots and virtual assistants can handle routine customer inquiries, policy questions, and claims status updates. See AI Automation for Customer Service: Chatbots, Virtual Assistants, and Beyond.
- Operations: AI can automate document processing, data entry, and compliance reporting. Our guide on Agentic AI vs Traditional Automation: Why Autonomous Agents Are the Future explains when agentic AI is better than traditional RPA.
The insurers getting the best ROI aren’t just automating underwriting—they’re automating the entire customer and claims journey. That’s where the real value is.
Next Steps for Your Organisation
If You’re Just Starting
- Audit your data: Understand what data you have, what’s missing, and what quality issues exist. This is the foundation for everything else.
- Define your problem: Are you trying to increase STP rates? Reduce underwriter workload? Speed up decisions? Different goals require different approaches.
- Start small: Don’t try to automate your entire portfolio. Pick a segment (young, healthy applicants, or a specific product line) and pilot there.
- Measure everything: Set clear metrics for success. STP rate, cost per decision, complaint rate, fraud detection rate. Track these from day one.
If You’re Already Implementing
- Audit your model: Is your agent actually performing as intended? Are STP rates meeting targets? Is the model biased?
- Invest in governance: Model governance is often an afterthought, but it’s essential for compliance and risk management. Ensure you have:
  - Regular performance monitoring
  - Bias testing (at least quarterly)
  - Clear documentation of how the model works
  - Accountability for model performance
- Improve your workflow: Are you capturing the benefits of the agent, or is it creating more work? Optimise your triage and routing logic.
- Train your team: Underwriters need to understand how the agent works and how to use the agent’s output effectively.
If You Want to Scale
- Expand to other business processes: Once underwriting is working, apply the same approach to claims, customer service, and operations. The pattern is the same: identify high-volume, low-complexity processes and automate them.
- Invest in data quality: The best AI models are built on clean, consistent data. Investing in data governance pays dividends across all AI initiatives.
- Build an AI capability: If you’re doing multiple AI projects, it makes sense to build an internal AI team. This is more cost-effective than hiring consultants for every project.
- Think about integration: AI agents work best when they’re integrated into your core systems (policy administration, claims management, customer relationship management). Plan for this from the start.
Working with PADISO
If you’re an Australian insurer looking to implement AI underwriting or other AI automation, we can help. PADISO works with insurers at every stage:
- Discovery and strategy: We assess your data, define your AI opportunity, and recommend an implementation approach. See our AI Strategy & Readiness service.
- Design and build: We design and build AI agents tailored to your business, including underwriting, claims, and customer service automation.
- Implementation and governance: We implement the agent in your environment, set up monitoring and compliance reporting, and train your team. Our AI & Agents Automation service covers the full implementation.
- Ongoing support: We provide ongoing model monitoring, bias testing, and optimisation to ensure the agent continues to perform as intended.
We’re based in Sydney and work exclusively with Australian financial services firms. We understand the regulatory environment, the data challenges, and the operational realities of Australian insurers.
Final Thoughts
AI agents are genuinely useful in life insurance underwriting. They can automate 40–50% of straightforward cases, dramatically reducing cost per decision and speeding up approval times.
But they’re not a replacement for human underwriters. They’re a complement. The insurers getting the best results are using agents for triage and augmentation, not replacement. They’re investing in governance and compliance. They’re measuring everything. And they’re being honest about limitations.
The future of life insurance underwriting isn’t “agents vs. underwriters.” It’s “agents and underwriters, working together.” Agents handle the volume. Underwriters handle the judgment. Together, they deliver faster decisions, lower costs, and better outcomes for applicants.
If you’re ready to explore this for your organisation, start with a clear-eyed assessment of your data, your problem, and your goals. Then pilot a small segment. Measure results. Scale what works. That’s how you build a sustainable AI underwriting capability.
The insurers who move quickly and thoughtfully will have a significant competitive advantage. The ones who move recklessly—or not at all—will fall behind.