70 minutes. $50. That’s what it costs to manually process a P&C claim end-to-end - intake, triage, document review, coverage check, decision - based on Decerto’s own production benchmarking of the Claims AI platform.
5 minutes. $0.07. That’s what the same claim costs when AI handles it instead.
Those aren’t projections from a McKinsey deck. They’re measured processing metrics from a production Claims AI system reviewing a real scenario - a restaurant fire claim arriving by email, with photos and a handwritten property damage form. Same 9 processing steps a human adjuster would take. Different speed by a factor of 14. Different cost by a factor of 700.
In my work with US P&C carriers over the past few years, the question I hear most from claims leaders isn’t whether AI claims processing works - the evidence on that has largely settled. J.D. Power’s 2026 U.S. Property Claims Satisfaction Study confirmed the pattern: the average cycle time from FNOL to final payment is now 40.7 days [1], and carriers who have invested in digital and AI tools are winning satisfaction scores by wide margins. The real question is whether your operation can survive the gap between those who have already made the transition and those who haven’t - while your CAT season is three months away and your adjusters are already at capacity.
TL;DR - what carriers need to know in 2026
- The average cycle time from FNOL to final property-claim payment is now 40.7 days (J.D. Power 2026), with repair cycle time averaging 29.6 days - both among the longest since J.D. Power began tracking in 2008 [1]
- Three AI capabilities moved from “experimental” to “production-ready” in 2024–2025: AI triage at FNOL, document and image extraction, fraud scoring at first notice
- Straight-through processing (STP) in claims remains rare: the industry average sits at under 10%, according to Aite-Novarica 2023 research, with nearly 60% of insurers having no STP at all in claims [2]
- Regulatory pressure from the NAIC Model Bulletin on AI (adopted Dec 2023, now in effect in 24 US jurisdictions as of August 2025) is now among the top accelerators of AI claims adoption, not the primary blocker [3][4]
- Deloitte estimates P&C insurers could save $80-160 billion in fraudulent claim costs by 2032 through AI-driven detection and multimodal analysis [5]
Section 2: The state of AI claims processing in US P&C (2026)
Let me put the market in context first, because in my experience most carriers underestimate both the size of what’s at stake and the pace of what’s changed.
The US P&C insurance industry wrote $1.06 trillion in direct premiums in 2024 - an 8.0% year-over-year increase, the first time the industry has ever crossed the trillion-dollar mark, according to NAIC and S&P Global Market Intelligence data [6][7]. Somewhere between 18% and 22% of those premiums pass through claims systems in any given year. That’s roughly $190-230 billion flowing through claims operations annually - and until recently, most of it was moving through workflows designed in the 1990s.
What changed in 2024–2025 wasn’t the technology. It was the deployment math.
Three developments hit at the same time:
First, the multimodal LLM shift. Claims are not text problems. They’re mixed media - emails with free-form descriptions, photos of damage, handwritten forms, PDFs of policy documents, recorded phone calls. Until GPT-4 class multimodal models shipped, AI in claims meant rule engines and structured data extraction. Those worked on a narrow slice of claims. Everything else needed a human. The multimodal shift raised the ceiling on what AI could handle end-to-end - though real-world STP rates remain modest (the industry average sits below 10% in claims, according to Aite-Novarica 2023 research [2], with only top personal lines insurers approaching 35% [8]).
Second, the cost collapse. Running a production AI claims processing pipeline in 2022 cost $8–12 per claim in compute and API calls alone. By late 2025 that number had dropped dramatically in most deployments. Decerto’s own product demonstrations show end-to-end AI claims processing at $0.07 per claim - a specific measured cost from running a restaurant fire claim scenario through the system [9].
Third, regulatory clarity. The NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted December 4, 2023, gave carriers a compliance framework to operate within [3]. As of August 2025, 24 US jurisdictions had adopted the Bulletin [4]; counts vary with methodology - Deloitte’s December 2025 analysis tallies 19 states by end of 2025, with more expected to follow [10]. State insurance departments in Colorado, New York, and California have added specific requirements around algorithmic explainability and bias testing. The “we don’t know what’s allowed” objection that I was hearing constantly through 2023 has mostly evaporated.
The industry is splitting into three groups. Leading carriers are running AI capabilities across triage, document extraction, fraud scoring, and reserve recommendation in production - though even these operators rarely exceed 30-35% STP on simple claims [8]. Mid-market carriers have one or two AI capabilities in production, usually fraud detection or document OCR. Laggards are still in pilot or planning phase - and watching the operational gap widen every quarter.
Section 3: The 9 pain points killing US P&C claims operations in 2026
Every VP of Claims I’ve worked with in a mid-to-large US P&C carrier is wrestling with at least three or four of the pain points below. Not all of them require AI to solve. But all of them require a modern operating model - and AI is the lever that turns several from “problems we manage” into “problems we solved.”
Below are the three pains where AI has moved fastest and where Decerto has published deep-dive articles. The remaining six are operational challenges that compound the first three. Dedicated deep-dives on each are scheduled across Q3 and Q4 2026.
🔴 Manual FNOL kills your cycle time
Your FNOL process still has three data entry points. The customer types details on your portal or describes them on a phone call. The CSR retypes them into your CRM. The adjuster retypes them into the claims system, often adding information from emailed photos or PDF forms the customer sent separately. Each handoff introduces errors. Each error cascades downstream.
In my experience, the operational consequences are measurable but rarely tracked back to their source. J.D. Power’s 2026 study found that homeowners filing claims digitally report significantly faster repair cycle times than those using traditional channels - yet most US P&C carriers still rely on manual intake for most commercial and mid-complexity claims [1]. According to McKinsey’s “Claims in the digital age” research, insurers that fully digitize claims intake can achieve substantial efficiency gains, but most of the industry has not completed that transition [11].
This is where AI claims processing has moved fastest. Multimodal models now extract data from emails, PDFs, images, and even handwritten forms - the Decerto Claims AI system reads blurry handwriting that extends beyond form margins, a common edge case that killed previous OCR-only approaches [9]. What used to take an adjuster 4–6 minutes per FNOL now takes 15–30 seconds, with validation accuracy that matches or exceeds human intake for standard P&C lines.
For the full operational playbook - including vendor capability comparison, integration patterns with Guidewire and Duck Creek, and a 14-month implementation timeline - see our deep dive: AI in Insurance Claims Processing: The Revolution.
🔴 Your adjusters spend significant time on documentation, not decisions
Adjusters across the US P&C industry spend a substantial portion of their time on documentation work - gathering files, organizing evidence, writing summaries, updating systems. McKinsey’s research on underwriter productivity found that 30-40% of an underwriter’s time goes to administrative tasks like rekeying data or manually executing analyses [12]. Industry observations suggest claims adjusters face similar pressures - and in Deloitte’s December 2025 study of 17 chief claims officers at leading P&C insurers, skill shortages and documentation burden were repeatedly cited as core operational constraints [13].
The retention consequences are real, and in my experience they’re underestimated by most claims operations. Top adjusters leave because they want to handle complex claims, not compile them. Deloitte’s analysis of Lightcast labor-market data from 2019-2024 documented a widening skills gap in claims operations - insurers are competing for adjusters who can do judgment-based work, not just data entry [13]. The adjusters most at risk of leaving are often the ones you can least afford to lose.
The AI claims processing solution here is less flashy than triage, but has high direct ROI. Document summarization models read medical records, police reports, and repair estimates and produce structured summaries. Image analysis identifies damage types and estimates severity. Policy lookup systems pull T&Cs and exclusions automatically. The adjuster moves from “spending substantial time gathering information” to “spending most of the day making decisions on information that’s already been gathered.”
For the full breakdown of how leading carriers restructure adjuster workflow around AI-assisted documentation - including team organization, KPIs, and the five principles that separate successful deployments from failed pilots - see our deep dive: 5 Principles of Effective Claims Handling.
🔴 Fraud detection happens a week too late
The $2.3M fraud ring that made the trade press last year wasn’t caught by AI. It was caught by a rule someone wrote in 2019 - and a scoring layer that ran it at FNOL, not after a week of adjuster work. In my experience, the lesson most carriers miss is that fraud detection timing matters more than algorithm sophistication.
Insurance fraud in the US is a $308.6 billion annual problem across all insurance lines, according to the Coalition Against Insurance Fraud’s 2022 study [14]. Property and casualty fraud alone accounts for approximately $45 billion annually, with roughly 10% of P&C claims being fraudulent to some degree [14][15].
Detection rates tell the more important story. Deloitte’s analysis of P&C fraud detection found that soft fraud (inflating legitimate claims) has detection rates between 20% and 40%, while hard fraud (premeditated false claims) has detection rates between 40% and 80% [5]. Soft fraud accounts for 60% of incidents but is harder to prove - which is exactly why timing matters. In traditional claims workflows, fraud flags typically surface 7–14 days into case handling. By then, initial reserves are set, adjuster time is spent, some disbursements may have gone out, and the customer has an active relationship with your adjuster. Reversing direction at that point is expensive, legally fraught, and operationally disruptive.
Modern fraud scoring runs at FNOL, in parallel with triage. Deloitte projects that P&C insurers implementing AI-driven fraud detection across the claims lifecycle could save between $80 billion and $160 billion in fraudulent claims by 2032, with insurers generating potential savings of 20% to 40% depending on implementation sophistication [5]. The Decerto Claims AI platform shows this workflow in production - a restaurant fire claim with photos and a handwritten form is scored against fraud signals (image authenticity, policy history, claim pattern, exclusion patterns) before an adjuster ever opens the file [9]. The adjuster sees the fraud signals on the first-look screen, not after six days of work.
For the full architecture - including integration with NICB, ISO ClaimSearch, and the internal rules engine approach that lets SIU teams codify their expert knowledge as scoring rules - see our deep dive: Automated Claims Processing Tools: A Game-Changer for Insurers.
🟡 Six other operational challenges compounding the first three
The three pains above are where AI has moved fastest and where Decerto’s current deep-dive articles focus. But claims operations in 2026 face six other structural challenges - and fixing only the first three without addressing the rest produces diminishing returns. Dedicated deep-dives on each are scheduled for publication across Q3 and Q4 2026.
Pain 1: Routing queues that require human triage. Every claim hits the same general queue. A supervisor spends 20 minutes every morning manually assigning claims to adjusters based on complexity, line, and current load. Multiply that across five supervisors in three regional claims centers and you’ve built a standing manual routing operation - hours of supervisor time every week spent on work that should take minutes. ML triage models score complexity and route in seconds - but the rules engine and routing logic are non-trivial to get right, especially for multi-line carriers with specialty coverage. [Deep-dive on intelligent claims routing scheduled for Q3 2026.]
Pain 2: Reserves set on adjuster intuition, not data. Three adjusters looking at the same claim will set three different reserves. That’s not unusual. What is unusual - and what most carriers don’t audit - is the variance. Your CFO sees it three quarters later as adverse development. Reserve recommendation models, trained on your historical claims and external benchmarks, would catch the outliers before they become material. The adjuster still makes the call. The model just surfaces the claims where the confidence interval is wider than usual. [Deep-dive on AI-assisted reserve setting scheduled for Q3 2026.]
Pain 3: You find out about SLA breaches on Monday mornings. The weekly ops report lands at 6 a.m. Monday. By the time you read it, three SLAs are already breached and two more will breach by Wednesday. Real-time dashboards are not a reporting problem - they’re an architecture problem. If your claims data still flows through nightly ETL jobs into a data warehouse, no dashboard refresh rate will fix the 12-hour lag between “event” and “visibility.” [Deep-dive on real-time claims visibility scheduled for Q3 2026.]
Pain 4: Vendor management runs on email and spreadsheets. Your repair shop network, independent adjusters, and legal panels live in three separate spreadsheets maintained by two different people and updated quarterly. When a CAT event hits, finding an available structural engineer in Orlando becomes a 47-phone-call operation. Vendor management as a module of your claims system - with capacity, cost, and performance data in one place - isn’t AI. It’s table stakes that most carriers still haven’t built. [Deep-dive on integrated vendor management scheduled for Q4 2026.]
Pain 5: DOI documentation is a weekend project. Every market conduct exam requires reconstructing every decision point, timestamp, and reviewer from claim notes, emails, and adjuster logs. Audit trail isn’t a feature; it’s the difference between responding to a DOI request in two weeks versus two months. When every AI decision in your claims system is captured automatically - with the policy language, the evidence reviewed, and the reasoning - the documentation writes itself. This matters more now that the NAIC Model Bulletin explicitly sets expectations for how insurers will document AI-system decision-making for regulatory review [3]. [Deep-dive on automated DOI audit documentation scheduled for Q4 2026.]
Pain 6: CAT events break your claims platform. 2023 brought 28 catastrophic weather events in the US, each causing over $1 billion in damage - more than any previous year - with combined damages of $92.9 billion nationwide [16]. 2024 brought 27 more such events [1]. If your platform can’t auto-scale to 10x volume during a CAT surge, and if your AI triage can’t handle the simple wind-and-water claims on its own, then “CAT-ready” is a slogan, not a capability you can point to in your next J.D. Power submission. [Deep-dive on CAT-scalable claims platforms scheduled for Q4 2026.]
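The routing problem from Pain 1 above is mechanically simple even though production rules engines are not. A minimal sketch, assuming hypothetical scoring inputs (estimated value, injury flag, party count) and illustrative thresholds - none of these cutoffs come from any specific carrier or vendor:

```python
def complexity_score(estimated_value, has_injury, num_parties):
    """Toy complexity score in [0, 100]; weights and caps are illustrative."""
    score = min(estimated_value / 1000, 60)   # claim value contributes up to 60 points
    score += 25 if has_injury else 0          # bodily injury raises complexity
    score += min(num_parties * 5, 15)         # multi-party claims add points
    return min(score, 100)

def route(score, queues):
    """Route to the least-loaded queue in the claim's complexity tier.

    `queues` maps tier -> {queue_name: open_claim_count}.
    """
    tier = "simple" if score < 30 else "standard" if score < 70 else "complex"
    candidates = queues[tier]
    return tier, min(candidates, key=candidates.get)
```

The point of the sketch is the shape of the decision, not the weights: a low-value, single-party property claim lands in the least-loaded simple queue in milliseconds, with no supervisor in the loop.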
Section 4: How AI actually solves the three core pain points
Abstract AI capabilities are easy to list. What matters is which ones actually work in production on real claims - and what they replace in the current adjuster workflow. Below are the seven capabilities that I’ve watched move from pilot to production in 2024–2025, with the specific pain each one addresses.
4.1. Email ingestion and structured data extraction for AI claims processing
Most claims arrive unstructured. Customers send an email with a free-form description, attach photos, and include a PDF claim form - sometimes scanned from a handwritten original. A human adjuster opens the email, downloads the attachments, reads each one, and types the relevant data into the claims system. At typical claim volumes this is 10–15 minutes per claim of pure intake work.
Modern LLMs handle this end-to-end. The system reads the email text, saves and categorizes attachments, extracts every field from the claim form, and populates the claims system before a human is involved. What you see in a product demonstration of Decerto’s Claims AI platform is exactly this: email arrives, system extracts every detail, all of it is structured and ready for review in under a minute [9].
Implementation playbook with integration patterns for common claims systems: AI in Insurance Claims Processing: The Revolution.
4.2. Image OCR and authenticity verification
Photos are central to property and auto claims. Historically, image handling in claims systems meant storing the file and letting an adjuster review it. Modern vision models do three things simultaneously: they describe what’s in the image (water damage, fire damage, vehicle impact pattern), they convert any text in the image to usable data, and they run authenticity checks - looking for signs of manipulation, reuse from other claims, or inconsistency with the reported loss.
The authenticity layer matters now more than ever. Deloitte’s June 2024 survey of insurance executives found that 35% chose fraud detection as one of their top five areas for developing or implementing generative AI applications in the coming year [17]. One of the driving concerns: AI-generated damage photos are increasingly difficult to detect visually, which in my view makes image-integrity verification at FNOL a required capability, not a nice-to-have [5].
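One of the simpler reuse checks - comparing a new photo's perceptual fingerprint against hashes from prior claims - can be sketched as follows. This is a toy average-hash over a grayscale grid; real deployments use 64-bit perceptual-hash libraries or learned manipulation detectors, and the distance threshold here is purely illustrative:

```python
def average_hash(pixels):
    """Toy average-hash over a grayscale pixel grid (list of rows of 0-255 values)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]  # 1 bit per pixel vs. the mean

def hamming(h1, h2):
    """Bit-difference count between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_reused(new_img, prior_hashes, threshold=1):
    """Flag the image if its hash is near any hash seen on prior claims."""
    h = average_hash(new_img)
    return any(hamming(h, prior) <= threshold for prior in prior_hashes)
```

Hash-distance checks catch recycled photos cheaply; detecting AI-generated imagery requires model-based classifiers on top, which is why the integrity layer is a stack, not a single test.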
Deep-dive on document and image processing architecture: 5 Principles of Effective Claims Handling.
4.3. Handwritten notes OCR - the edge case that matters
Most commercial P&C claims still involve a paper form somewhere in the chain. A property manager fills out a damage report by hand. A business owner writes notes on the margins of a printed policy. A first responder makes annotations on an incident report. Until recently, OCR systems handled typed text well and handwritten text badly.
The current generation of vision-language models reads handwriting at much higher accuracy, including handwriting that extends beyond form margins or wraps around photographs - a capability demonstrated in Decerto’s Claims AI platform during a restaurant fire claim scenario [9]. This is not a minor capability. Handwritten notes are where the real claim information often lives - the exact description of what happened, the timeline, the workshop owner’s assessment of whether the damage looks staged.
4.4. Policy T&Cs lookup and coverage verification
When a claim arrives, the first adjuster task is to pull up the policy, verify it’s active, and check whether the reported loss is covered. For a single claim this is a 5–10 minute task. For a carrier processing 40,000 claims annually, it’s thousands of hours of adjuster time.
AI systems automate the full lookup: pull policy status, verify dates, read T&Cs, identify relevant clauses, and surface exclusions that apply. The adjuster sees a pre-filled decision screen with the policy language already highlighted. They’re deciding, not researching. Decerto’s Claims AI demonstrations show this workflow in both approval scenarios (restaurant fire, covered loss) and denial scenarios (retail store flood, policy coverage gap) - with the specific policy clauses highlighted for the adjuster or reviewer [9].
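The core of that lookup reduces to two checks - was the policy active on the loss date, and is the reported peril covered and not excluded. A minimal sketch, assuming a hypothetical `policy` dict shape that no particular core system uses:

```python
from datetime import date

def verify_coverage(policy, loss_date, loss_type):
    """Toy coverage check; `policy` fields (effective, expiry, covered_perils,
    exclusions) are an illustrative shape, not a real core-system schema."""
    if not (policy["effective"] <= loss_date <= policy["expiry"]):
        return {"covered": False, "reason": "policy not active on loss date"}
    if loss_type in policy["exclusions"]:
        return {"covered": False,
                "reason": f"'{loss_type}' listed in policy exclusions"}
    if loss_type not in policy["covered_perils"]:
        return {"covered": False, "reason": f"'{loss_type}' not a covered peril"}
    return {"covered": True, "reason": f"'{loss_type}' covered; policy active"}
```

Note that the function always returns a reason string, not just a boolean - that reason is what populates the pre-filled decision screen and, later, the audit trail.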
4.5. Fraud signals at FNOL, not week two
We covered this in the pain section, but the implementation detail matters here. Fraud scoring at FNOL works by running every new claim through three parallel checks: external database lookups (NICB ForeWarn, ISO ClaimSearch), internal pattern matching against historical fraud cases, and rules-based scoring using SIU team expertise codified as business rules. All three complete in under 30 seconds, producing a 0–100 score with the specific signals that triggered it.
The adjuster who opens the file sees the score on the first screen. They’re not deciding whether to investigate; they’re deciding what to do with the findings the system already surfaced. Deloitte estimates that multimodal AI fraud detection - combining text, images, audio, and video analysis - could generate savings of 20-40% in fraudulent claim costs, depending on implementation sophistication [5].
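The rules-based layer of that scoring - SIU expertise codified as weighted predicates that produce a 0–100 score plus the signals that fired - can be sketched like this. Every rule name and weight below is invented for illustration; it is not Decerto's model or any carrier's actual ruleset:

```python
def fraud_score(claim, rules):
    """Toy FNOL fraud score: each rule is (name, predicate, weight).
    Returns the capped score and the list of triggered signal names."""
    triggered = [(name, weight) for name, pred, weight in rules if pred(claim)]
    score = min(sum(w for _, w in triggered), 100)   # cap at 100
    return score, [name for name, _ in triggered]

# Illustrative SIU rules - names, thresholds, and weights are hypothetical
RULES = [
    ("claim_soon_after_inception",
     lambda c: c["days_since_inception"] < 30, 35),
    ("prior_claims_12mo",
     lambda c: c["prior_claims_12mo"] >= 2, 25),
    ("round_amount",
     lambda c: c["amount"] % 1000 == 0, 10),
]
```

Returning the triggered signal names alongside the score is what makes the first-look screen explainable: the adjuster sees *why* the claim scored 35, not just the number.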
Architecture deep-dive: Automated Claims Processing Tools: A Game-Changer for Insurers.
4.6. Explainable AI - the feature that gets you through DOI audit
Every AI decision in a claims system needs to be explainable. Not “the model said so” but “here are the specific factors that drove this decision, the policy language that applies, and the evidence that was reviewed.” This is not a nice-to-have. The NAIC Model Bulletin on AI requires insurers to develop, implement, and maintain a written AIS Program (Artificial Intelligence Systems Program) that addresses governance, risk management, and internal controls - including documentation requirements that state insurance departments may request during an investigation or market conduct action [3]. Without explainability designed into your AI systems from the start, retrofitting it is expensive and often produces weaker explanations than models built with explainability as a first-class goal.
The Decerto Claims AI platform shows what this looks like in production: every decision, whether approval or denial, comes with a summary view showing the policy language that applies, the evidence that supported the decision, and the specific reasoning that led to the outcome [9]. When the DOI asks why a claim was denied, you don’t reconstruct the reasoning from adjuster notes - you open the audit trail.
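The data shape behind that audit trail is worth making concrete. A minimal sketch of a decision record that captures what NAIC-style documentation expectations ask for - field names are illustrative, not a prescribed or vendor schema:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One audit-trail entry per decision; illustrative field names."""
    claim_id: str
    outcome: str          # "approved" / "denied" / "referred"
    policy_clauses: list  # clause IDs or verbatim policy language relied on
    evidence: list        # documents and images actually reviewed
    reasoning: str        # plain-language justification for the outcome
    decided_by: str       # adjuster ID, or "system" on the STP path
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)
```

The design choice that matters is that the record is written at decision time by the system itself - not reconstructed later from adjuster notes - so a DOI request becomes an export, not a forensic project.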
4.7. Reserve recommendation models
Setting reserves is one of the most judgment-driven parts of the claims process. Historically it’s been adjuster expertise plus rough heuristics plus gut feel. Reserve models change that without taking the decision away from the adjuster. Trained on your historical claims, they suggest a reserve range with a confidence interval. When confidence is high, the adjuster accepts the recommendation and moves on. When confidence is low, the claim gets flagged for senior review - which is exactly where you want your experienced adjusters spending their time.
The ROI here shows up in reserve accuracy, measured as adverse development percentage. Industry data on reserve modeling outcomes varies, but carriers that deploy reserve models with disciplined monitoring consistently report reductions in adverse development over multi-year horizons.
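The flag-wide-intervals pattern described above can be sketched with elementary statistics. Real reserve models are regression-based and trained on claim features, not a simple mean over comparables; the escalation threshold below is purely illustrative:

```python
from statistics import mean, stdev

def recommend_reserve(comparable_paid, z=1.96, flag_ratio=0.5):
    """Toy reserve suggestion from paid amounts on comparable past claims.
    Escalates to senior review when the confidence interval is wide
    relative to the point estimate (flag_ratio is illustrative)."""
    m = mean(comparable_paid)
    half = z * stdev(comparable_paid) / len(comparable_paid) ** 0.5
    low, high = m - half, m + half
    needs_senior_review = (high - low) / m > flag_ratio
    return {"point": round(m, 2), "low": round(low, 2),
            "high": round(high, 2), "senior_review": needs_senior_review}
```

Tight comparables produce a narrow band the adjuster can accept and move on; scattered comparables widen the band past the threshold and push the claim to senior review - exactly the routing behavior described above.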
Section 5: The full AI claims processing pipeline
An AI claims system isn’t one model. It’s a pipeline - typically 7 to 9 sequential steps, with specific handoffs between automation and human decision-making. Below is the pipeline as it runs in production, with the role of AI and the role of the adjuster at each step.
Step 1: FNOL intake
The claim arrives through one of several channels - email, customer portal, mobile app, phone call (which is transcribed and processed as text), or increasingly via IoT telemetry for auto claims. The AI system ingests all of these into a uniform structured format.
Human role: initial customer contact if phone-based, or none if digital. AI role: ingestion, format normalization, initial language detection, channel metadata capture.
Step 2: Document and image extraction
Every attachment is processed in parallel. PDFs are parsed for structured data. Photos are analyzed for damage type and authenticity. Handwritten forms are OCR’d. Policy documents are read for T&Cs and exclusions.
Human role: none at this step. AI role: multimodal extraction across text, images, and structured documents.
Step 3: Policy verification
The system pulls the policy from the core system, verifies it’s active on the loss date, and checks that the reported loss type is covered.
Human role: none unless policy cannot be located (rare edge case). AI role: policy lookup, effective date check, coverage verification against reported loss type.
Step 4: Fraud scoring
Every claim is scored in parallel against external databases, internal fraud patterns, and SIU rules. The score and specific signals are attached to the claim record.
Human role: none at this step; SIU involvement triggered only on high scores. AI role: parallel database queries, pattern matching, scoring output with explainability.
Step 5: Complexity classification and triage
The claim is classified by complexity (simple/standard/complex), line of business, potential claim value, and routing priority. Simple claims meeting STP criteria continue to Step 6. Standard and complex claims are routed to an appropriate adjuster queue.
Human role: none on classification; adjuster enters workflow on standard/complex cases. AI role: classification, STP eligibility check, routing decision.
Step 6: Coverage determination and reserve recommendation
For claims continuing through STP or for adjuster review, the system produces a full coverage analysis with reserve recommendation. Policy language that applies is highlighted. The reserve recommendation includes a confidence interval.
Human role: adjuster reviews and accepts/modifies on non-STP claims. AI role: full coverage analysis, reserve calculation, confidence scoring.
Step 7: Adjuster review (on non-STP claims)
The adjuster opens the claim and sees a pre-populated decision screen. Everything the system found is there: extracted data, policy language, fraud signals, reserve recommendation, similar historical claims. The adjuster’s work is review and decision, not data gathering.
Human role: claim review, decision on approval/denial/referral. AI role: decision support through complete context surface.
Step 8: Decision and audit trail
The adjuster (or system, on STP claims) issues a decision. The full reasoning - policy language, evidence reviewed, calculations - is saved as part of the audit trail, not in separate adjuster notes.
Human role: decision authority on non-STP. AI role: audit trail capture, documentation generation.
Step 9: Payout or denial communication
Approved claims trigger payment initiation. Denied claims generate explanation letters with the specific policy language and reasoning. Both go to the customer with transparent documentation.
Human role: senior review on high-value payouts (per carrier thresholds). AI role: payment processing integration, denial letter generation with explainability.
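The nine steps above share one orchestration pattern: each step enriches the claim state, appends itself to the audit trail, and may short-circuit automation by routing to a human. A minimal sketch of that skeleton - the step functions, dict fields, and STP cutoffs are all hypothetical stand-ins, not a real pipeline:

```python
def run_pipeline(claim, steps):
    """Toy orchestrator: each step transforms the claim dict; a step that
    sets 'route_to_human' stops automation and hands off to an adjuster."""
    for name, step in steps:
        claim = step(claim)
        claim.setdefault("trail", []).append(name)  # audit trail of steps run
        if claim.get("route_to_human"):
            break
    return claim

def triage(claim):
    # STP only for simple, low-value, low-fraud-score claims (illustrative cutoffs)
    stp = (claim["complexity"] == "simple"
           and claim["value"] < 10_000
           and claim["fraud_score"] < 20)
    claim["route_to_human"] = not stp
    return claim

def decide(claim):
    claim["decision"] = "approved"  # reached only on the STP path in this sketch
    return claim
```

The short-circuit is the important design choice: automation never has to be all-or-nothing, because any step can hand the fully enriched claim state to an adjuster, trail included.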
Section 6: Build vs buy vs partner
The technology decisions for AI claims processing fall into three categories. In my experience, each works under specific conditions - and none of them is the right answer for every carrier.
Build
Building an AI claims processing system in-house makes sense for a narrow set of carriers: 1,000+ FTE operations with an established data science team, a 5+ year technology roadmap, a budget in the $5M+ range for initial platform plus maintenance, and strong executive support for a build timeline measured in years, not quarters.
The upside is total control: your IP, your training data, your integration choices, your roadmap. The downsides are real. Your first production model typically lands 18–24 months after project start. Your data science talent is expensive, scarce, and often leaving for insurtech startups that pay more. Your ongoing maintenance burden includes model retraining, drift monitoring, regulatory updates, and integration with whatever vendor systems you already use.
For most US P&C carriers, “build” is the wrong answer - not because it can’t work, but because the opportunity cost is too high. While you spend 18 months building, leading carriers with bought or partnered solutions are already 18 months into production deployment, collecting training data you’ll need to match.
Buy SaaS
Buying a specialized AI SaaS makes sense when you need a specific capability fast and don’t have a multi-year platform strategy yet. Options are narrow but proven: Hyperscience for document processing, Shift Technology for fraud, Tractable for photo analysis. Each does one thing well, integrates via API, and ships production value within 3–6 months.
The downsides are predictable: vendor lock-in on pricing, limited customization for your specific lines of business, and the integration overhead if you buy multiple vendors. You end up with three or four separate AI systems that don’t talk to each other, each with its own audit trail, each requiring its own vendor management. For a 200-FTE regional P&C carrier this may be perfectly acceptable. For a 1,500-FTE multi-line carrier with complex specialty coverage, it’s a recipe for fragmentation.
Partner with a platform vendor
The third option sits between build and buy: partner with a platform vendor who brings a custom-built system, configures it to your specific requirements, and maintains it with you over time. Companies like Decerto position themselves in this lane - offering custom platforms without mega-vendor lock-in, deep insurance expertise plus engineering capability, and ownership structures that don’t trap you in 10-year contracts.
This makes sense when you need breadth of capability (not just one vendor’s specialty), when your specific lines of business don’t fit the standard Guidewire or Duck Creek model, when you want an adjacent product roadmap for underwriting or policy administration, and when the operational math works for your carrier scale.
My own read - and it’s a read, not a universal rule - is that for US P&C carriers in the 200–2,000 FTE range, the partner model often produces better five-year economics than either buy or build. Smaller carriers may be better served by targeted SaaS. The largest carriers with mature data science teams may genuinely be better off building. Everyone in the middle, in my experience, benefits from partnership economics.
AI claims processing vendor comparison - at a glance
The table below is how I’d frame the vendor options for a claims leader comparing choices in 2026. It’s not exhaustive - there are dozens of vendors in adjacent spaces - but these are the ones I see come up most often in carrier evaluations.

| Option | Category | Core strength | Typical best fit |
|---|---|---|---|
| Hyperscience | Specialty SaaS | Document processing | One specific capability gap; production value in 3–6 months |
| Shift Technology | Specialty SaaS | Fraud detection | Carriers needing fraud scoring fast, alongside existing systems |
| Tractable | Specialty SaaS | Photo and damage analysis | Auto and property lines with image-heavy claim volume |
| Guidewire / Duck Creek | Core suite | Claims core systems | Carriers standardized on the major-suite model |
| Decerto | Platform partner | Custom platform breadth across triage, extraction, fraud | 200–2,000 FTE carriers needing orchestration, not one point tool |
| In-house build | Build | Total control, own IP and training data | 1,000+ FTE carriers with mature data science teams and multi-year budgets |
A few things I’d stress about the table above. First, “best fit” is directional - your specific line mix, core system, and strategic roadmap will shift which vendor actually makes sense. Second, the specialty SaaS vendors (Hyperscience, Shift, Tractable) are genuinely excellent at what they do; the question is whether one capability solves your problem or whether you need the orchestration that only a platform vendor provides. Third, mixing and matching is common - I’ve seen carriers successfully run Shift for fraud alongside a platform vendor for everything else. The key is avoiding the fragmentation trap where you end up with five separate AI systems, each with its own audit trail.
Section 7: AI Claims in Action - Three Production Scenarios
Abstract case studies tell you what’s theoretically possible. Product demonstrations show you what already works. The three scenarios below run on Decerto’s Claims AI platform - the same infrastructure a US P&C carrier would deploy. These are not mock-ups or concept videos. They’re capabilities I can show you shipping today.
Scenario 1 - Restaurant fire claim, auto-approved
A restaurant owner files a fire damage claim by email, attaching photos of the damage and a property damage claim form filled out by hand. The claim arrives in the system as an unstructured email with attachments - exactly the kind of claim that would take a human adjuster approximately 70 minutes of intake and review work, at an average cost of roughly $50 per claim in loaded labor, based on Decerto’s own production benchmarking [9].
In the demonstration, Claims AI handles the full sequence in 5 minutes at a total compute cost of $0.07 [9].
The system extracts every detail from the email. It saves the attached photos, analyzes the images to describe their contents, and cross-references them for authenticity against the reported loss type. It reads the handwritten property damage form - including handwriting that extends beyond form margins, a common edge case that broke OCR-only systems until recently. It identifies the policy number, pulls the T&Cs and coverage status, surfaces fraud signals, checks policy exclusions, and produces a summary with justification in a single view.
The adjuster sees all of this on the first screen - not after six days of case handling. One click to approve, or edit and reject, with reasoning saved to the audit trail.
The headline metrics from the demonstration - 70 minutes to 5 minutes, $50 to $0.07 - are the specific numbers from this production workflow as benchmarked by Decerto. Real deployment numbers at real carriers vary with line mix, claim complexity, and data quality, but the direction and magnitude of the improvement hold.
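To make the shape of this workflow concrete, here is a minimal sketch of how such an intake pipeline could be structured - each step returns both updated claim state and an audit entry, so the trail is built as a side effect of processing. All names and values are illustrative assumptions, not Decerto's actual implementation.

```python
# Illustrative sketch only - hypothetical step names, not Decerto's pipeline.
# Each step takes the claim state and returns (state, audit_entry), so every
# decision carries its own documented evidence.
from dataclasses import dataclass, field

@dataclass
class Claim:
    email_body: str
    attachments: list
    policy_number: str = ""
    audit_trail: list = field(default_factory=list)

def extract_details(claim):
    # placeholder: a real system would run LLM/OCR extraction here
    claim.policy_number = "POL-12345"  # assumed value for illustration
    return claim, "Extracted policy number and loss details from email"

def check_coverage(claim):
    # placeholder: lookup against the policy administration system
    return claim, f"Policy {claim.policy_number} active; fire loss covered"

def score_fraud(claim):
    # placeholder: fraud model scoring at FNOL, not week two
    return claim, "No fraud signals above threshold"

PIPELINE = [extract_details, check_coverage, score_fraud]

def process(claim):
    for step in PIPELINE:
        claim, entry = step(claim)
        claim.audit_trail.append(entry)  # every step is documented
    return claim

result = process(Claim(email_body="Fire damage at restaurant...",
                       attachments=["photo1.jpg"]))
```

The point of the sketch is the design choice, not the placeholder logic: when the audit entry is produced by the same function that makes the decision, the trail can never fall out of sync with the workflow.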
Scenario 2 - Retail store flood claim, auto-denied
Water damage claims are common in commercial P&C, and the outcome depends entirely on coverage. In this scenario, a retail clothing store submits a claim by email, stating that a pipe burst damaged inventory and flooring, with photos of the damage and a digital claim form attached.
Claims AI mirrors the work a skilled human adjuster would do, walking through the same review steps. It reviews the submission, looks at the photos to confirm water damage, and runs image integrity checks to verify the photos are usable and consistent with the reported loss. It then checks that the policy is active and reviews the coverage to understand what types of water damage are covered under this specific policy.
Based on the policy terms and the information provided, the reported loss does not meet the coverage requirements. The system pulls everything together into a clear summary with the supporting policy language highlighted - showing the adjuster exactly which clause in the T&Cs excludes this loss type. With one click, the claim is denied, and the full reasoning is saved to the audit trail.
This is the scenario that matters for compliance. Every decision is explained. Every step is documented. When the next DOI audit asks why a claim was denied, the answer is not “the adjuster reviewed it and decided” - the answer is the audit trail showing exactly which policy clause applied, what evidence was reviewed, and what reasoning led to the denial [9]. This aligns directly with the NAIC Model Bulletin’s documentation expectations [3].
For the VP of Claims wondering whether AI will either auto-approve fraud or deny legitimate claims - this scenario is the structural answer. The system applies the policy as written. It explains the application. The adjuster remains in the loop for anything non-standard. In my experience, that’s the design that gets claims leadership comfortable with AI faster than any demo of a single capability.
Section 8: ROI metrics - what to actually measure
"Demos that work" is the wrong unit for measuring an AI claims processing deployment. The metrics that matter are operational - the ones your CFO and COO track, and the ones J.D. Power uses in its rankings. Below are the seven metrics I'd put on the executive dashboard, with benchmark ranges drawn from published industry sources.
STP rate (%)
Definition: percentage of simple claims processed end-to-end without human intervention, from FNOL to payout or denial.
Industry context: Aite-Novarica 2023 research found the average claims STP rate in personal lines to be around 7%, and nearly 60% of insurers report no STP at all in claims operations [2]. Top personal lines insurers may approach 35% STP on eligible claim types [8].
What AI brings: triage accuracy and coverage verification that enable auto-approval on appropriate claim types. Realistic 18-month target for a mid-market carrier starting from low single-digit STP: moving into the 15-25% range on eligible claim types.
Average cycle time (days)
Definition: median days from FNOL to claim closure (approved payout or final denial).
Industry benchmark: J.D. Power’s 2026 U.S. Property Claims Satisfaction Study reports average cycle time from FNOL to final payment at 40.7 days, with repair cycle averaging 29.6 days - both among the longest since the study began in 2008 [1]. Customers whose claims resolve within 10 days report satisfaction scores 167 points higher (on a 1,000-point scale) than those whose repairs take more than 31 days [1].
What AI brings: compression of intake, triage, and documentation time. Realistic target: meaningful reduction over 12–18 months in lines where AI is mature.
LAE ratio (Loss Adjustment Expense / Total Losses)
Definition: cost of handling claims divided by total loss payout, expressed as a percentage.
Industry context: US P&C industry direct incurred loss ratio was 61.9% in 2024 [6]. LAE ratios are reported separately and vary significantly by line of business.
What AI brings: reduction in adjuster hours per claim, reduction in external vendor costs for document processing. McKinsey estimates automation can deliver material reductions in loss adjustment expenses over multi-year implementation horizons [11].
Post-claim NPS
Definition: net promoter score from customers surveyed within 30 days of claim closure.
Industry benchmark: J.D. Power’s 2025 and 2026 studies document that claims experience satisfaction is now strongly tied to cycle time and communication quality. Overall satisfaction scores are more than twice as high (777 vs. 337) when customers say it is very easy to communicate with their insurer than when communication is difficult [18].
What AI brings: faster cycle time, clearer explanations, more consistent experience.
Fraud catch rate (%)
Definition: percentage of fraud attempts detected before payout.
Industry context: According to Deloitte analysis, soft fraud detection rates sit between 20% and 40%, and hard fraud detection rates between 40% and 80% [5]. Soft fraud accounts for roughly 60% of fraud incidents but is harder to prove.
What AI brings: scoring at FNOL rather than week two, catching rings and patterns human review misses. Deloitte projects P&C insurers could save $80-160 billion in fraudulent claims by 2032 through AI-driven detection [5].
Reserve accuracy (% adverse development)
Definition: percentage difference between initial reserves set and final paid amount, measured at portfolio level.
What AI brings: reserve recommendation models reduce adjuster variance on initial setting. Specific reduction percentages depend heavily on carrier baseline data quality and model implementation.
CAT event scalability (% claims within SLA during CAT surge)
Definition: percentage of claims handled within standard SLA during CAT events with 3x+ normal volume.
Industry context: 2023 saw 28 billion-dollar catastrophic weather events in the US, with $92.9 billion in combined damage [16]; 2024 saw 27 more such events [1]. CAT surge handling is an increasing priority, not a niche concern.
What AI brings: auto-scaling intake, automated triage for simple claims, adjuster capacity preserved for complex cases.
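To make the definitions above concrete, here is a minimal sketch of how three of these dashboard metrics - STP rate, median cycle time, and adverse development - could be computed from closed-claim records. The record shape and field names are assumptions for illustration, not a production data model.

```python
# Minimal sketch: STP rate, median cycle time, and adverse development
# from a list of closed-claim records. Field names are assumptions.
from statistics import median

claims = [
    # human_touched: any adjuster intervention between FNOL and closure
    # cycle_days: days from FNOL to closure; reserve/paid in dollars
    {"human_touched": False, "cycle_days": 4,  "reserve": 5000,  "paid": 4800},
    {"human_touched": True,  "cycle_days": 41, "reserve": 12000, "paid": 15000},
    {"human_touched": True,  "cycle_days": 28, "reserve": 8000,  "paid": 8200},
    {"human_touched": False, "cycle_days": 6,  "reserve": 3000,  "paid": 3000},
]

# STP rate: share of claims closed end-to-end with no human intervention
stp_rate = sum(not c["human_touched"] for c in claims) / len(claims)

# Cycle time: median days from FNOL to closure
cycle_time = median(c["cycle_days"] for c in claims)

# Adverse development: (total paid - total reserved) / total reserved,
# measured at portfolio level
total_reserve = sum(c["reserve"] for c in claims)
adverse_dev = (sum(c["paid"] for c in claims) - total_reserve) / total_reserve

print(f"STP rate: {stp_rate:.0%}")                # 50%
print(f"Median cycle time: {cycle_time} days")    # 17.0 days
print(f"Adverse development: {adverse_dev:+.1%}") # +10.7%
```

In practice the hard part is not the arithmetic but the definitions feeding it - what counts as "human intervention" and which claim types are "eligible" should be pinned down before the first dashboard ships, or the numbers won't be comparable quarter to quarter.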
Section 9: Implementation challenges nobody talks about
Every AI claims processing article tells you what's possible. Very few tell you what actually kills projects. Below are the five challenges I've most often seen cause AI claims deployments to fail outright or dramatically underperform their business case.
Adjuster adoption is the biggest single risk
Deloitte’s December 2025 research with 17 chief claims officers at leading P&C insurers found that adjuster skill gaps and change management - not technology availability - are the primary constraints on AI deployment [13]. The pattern I’ve seen is predictable: IT and claims leadership design the system, adjusters see it late, the rollout meets resistance, and within 18 months the system is a compliance theater layer on top of the original workflow.
The fix is sequencing, not training. Adjusters need to be in the pilot early, as co-designers, not users. Their feedback on UX, decision surfaces, and exception handling has to shape the production design. In my experience, the carriers that succeed at AI adoption treat adjusters as product owners of the adjuster experience. The ones that fail treat them as end users to be trained.
Data quality consumes significant project time
Every AI claims deployment starts with a data reckoning. The training data you need - clean historical claims with consistent coding, complete documentation, and accurate outcomes - usually doesn’t exist in clean form. A substantial portion of the typical deployment is data engineering: reconciling systems, filling gaps, correcting coding inconsistencies, and building the pipeline that keeps training data clean going forward. Deloitte’s 2024 survey of 200 US insurance executives found that data infrastructure and quality remain the top barriers to scaling generative AI [17].
In my experience, carriers that skip this step produce models that work on training data and fail in production. Carriers that budget for it produce models that generalize. There’s no shortcut here, and no vendor that can skip the reckoning for you.
Legacy integration is harder than the AI itself
Integrating AI into your claims system sounds like an API project. In practice, it means connecting to your policy administration system, your document management system, your fraud detection layer, your payment processor, and your reporting infrastructure - each of which has its own data model, authentication layer, and constraints. If your core is Guidewire or Duck Creek, there are known patterns. If your core is a heavily customized legacy system from the 1990s, integration work can easily exceed the AI work in hours. I’ve seen this dynamic play out on more projects than I can count.
Regulatory explainability must be baked in, not bolted on
The NAIC Model Bulletin requires insurers to develop a written AIS Program governing AI system use, including documentation the state insurance department may request during an investigation or market conduct action [3]. Retrofitting explainability onto a model that was built without it is expensive and often produces weaker explanations than models built with explainability as a first-class design goal. As of August 2025, 24 US jurisdictions had adopted the Model Bulletin [4] - this regulatory expectation is now the norm, not the exception. My strong suggestion: don’t treat this as a phase 2 commitment.
CAT timing kills implementations that ignore seasonality
No US P&C carrier should go live with AI claims between April and October. CAT exposure is too high, operational tolerance for upheaval is too low, and the cost of a failed deployment during hurricane season is disproportionate to any savings from going live early. With 27-28 billion-dollar CAT events per year in 2023 and 2024 [1][16], this is not theoretical. I strongly recommend planning the go-live window for November through February, and using the CAT season before it for parallel-run validation.
Section 10: Vendor selection criteria
When you evaluate AI claims vendors, the pitch deck is not the signal. Below are the eight criteria I use to separate vendors who ship production value from vendors who sell impressive demos.
- Explainability as a first-class feature, not a phase 2 commitment. Ask to see the audit trail output for a denied claim - not a mockup, a real one from a production system. This is especially critical given NAIC Model Bulletin requirements [3]. If they can’t show you, move on.
- Pre-built insurance models, not just a platform. A “customizable AI platform” that arrives empty means you’re funding 18 months of model training. Vendors with pre-built models for your lines of business shrink that to 4–6 months of tuning.
- Time to first production model, stated specifically. Ask for a named milestone - “we’ll have one line of business in production in month X” - and write it into the contract. Vague timelines are how projects become permanent.
- Integration patterns with your core systems. If you're on Guidewire, ask for specific integration references. Same for Duck Creek, Majesco, or custom legacy. In my experience, integration complexity is where vendors lose the largest share of their projected timeline.
- CAT readiness stress test results. Can the platform auto-scale to 10x volume? Ask for load test data from an actual deployment. “It’s built to scale” is a marketing line. Load test numbers are evidence.
- Compliance audit trail in a format DOI audit teams will accept. Ask to see the output format - what does the audit report look like, what does it include, and how is it generated. Have your legal and compliance teams review it before contracting. This should align with NAIC Model Bulletin documentation expectations [3].
- Pilot program without long-term commitment. Any vendor worth considering will run a 3–6 month pilot on a subset of claims without locking you into a multi-year contract. If they won’t, they don’t have confidence in their production performance.
- Regulatory coverage for your states. NAIC Model Bulletin adoption varies by state - 24 jurisdictions had adopted it as of August 2025 [4]. If you operate in Colorado, New York, or California, specific state requirements apply. Vendors need to demonstrate compliance with the most stringent state you operate in, not a vague “we’re NAIC compliant” claim.
Section 11: What’s next - AI claims trends 2026–2028
Forward-looking takes in insurance technology are usually wrong in specifics and right in direction. Here’s my best read - and it is a read, not a prediction - of where AI claims processing is moving over the next 24 months.
Agentic AI moves from pilot to production. This is the single biggest shift I’m watching. Traditional AI in claims is reactive - a claim arrives, a model scores it, an adjuster responds. Agentic AI systems are autonomous: they initiate actions, coordinate across tools, and complete multi-step workflows without human prompting. The architecture matters here, and I’d pay close attention to compound AI models (a main model orchestrating specialized sub-models for document classification, damage assessment, fraud detection, and reserve calculation) rather than single-model systems. Guidewire is investing heavily in this direction; AIG launched a gen AI–powered underwriting assistant built with Anthropic and Palantir that ingests and prioritizes every excess and surplus submission. For US P&C claims specifically, expect 2026–2027 to be the transition period from AI-assisted workflows (adjuster uses AI tools) to agentic workflows (AI orchestrates the claim, adjuster reviews outcomes). The STP ceiling in claims isn’t technical anymore - it’s organizational readiness to hand control to a system.
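As a rough illustration of the compound-model idea - and only an illustration, with hypothetical names and logic, not any vendor's actual architecture - a main orchestrator routes the claim through specialized sub-models and assembles one consolidated result for adjuster review:

```python
# Hypothetical compound-AI sketch: a main orchestrator delegating to
# specialized sub-models, then assembling a reviewable result.
# All names, thresholds, and values are illustrative assumptions.

def classify_documents(claim):
    # sub-model 1: document classification (stubbed)
    return {"doc_types": ["email", "photo", "damage_form"]}

def assess_damage(claim):
    # sub-model 2: damage assessment from images (stubbed)
    return {"estimated_loss": 42_000}

def detect_fraud(claim):
    # sub-model 3: fraud scoring at FNOL (stubbed)
    return {"fraud_score": 0.08}

def calculate_reserve(claim, damage):
    # sub-model 4: reserve recommendation; a real model would draw on
    # historical severity data rather than a flat 1.1x multiplier
    return {"recommended_reserve": round(damage["estimated_loss"] * 1.1)}

def orchestrate(claim):
    """Main model coordinating sub-models; the adjuster reviews the output."""
    docs = classify_documents(claim)
    damage = assess_damage(claim)
    fraud = detect_fraud(claim)
    reserve = calculate_reserve(claim, damage)
    # surface one consolidated, explainable view - never an unreviewable
    # black box; anything over the fraud threshold escalates to a human
    return {**docs, **damage, **fraud, **reserve,
            "needs_human_review": fraud["fraud_score"] > 0.5}

result = orchestrate({"id": "CLM-001"})
```

The design point this is meant to show: the orchestrator owns the workflow, each sub-model owns one judgment, and the human-review flag is a first-class output rather than an afterthought - which is exactly the organizational-readiness question raised above.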
STP rates on simple claims continue to climb, but slowly. Current industry averages sit below 10% in claims [2]; top personal lines insurers approach 35% on eligible claim types [8]. The next few years will see this migrate upward, but the structural ceiling for claims STP is much lower than for underwriting - too many edge cases, too much customer relationship management, too much regulatory oversight to fully automate.
Predictive fraud moves before FNOL. Current fraud scoring is reactive - a claim arrives, the system scores it. The 2027–2028 generation of fraud detection uses cross-carrier data pooling and predictive models to flag suspicious patterns before claims are filed. Deloitte’s projection that P&C insurers could save $80-160 billion in fraudulent claims by 2032 depends partly on this evolution [5].
Generative AI rewrites adjuster documentation. The specific change is coming in adjuster notes and customer communication. Adjusters currently spend 1–2 hours per claim on documentation. Generative AI will compress this to 15–30 minutes by drafting notes, summaries, and customer communications that the adjuster edits rather than writes from scratch.
Computer vision plus drone integration for property claims. For CAT-exposed carriers, drone imagery of damaged properties combined with computer vision models will handle much of the standard property damage assessment. Expect this to be standard for CAT response by 2027.
Regulatory framework expands. The NAIC Model Bulletin adoption is accelerating - 24 jurisdictions by August 2025 [4], with more expected. State-level rules in Colorado, New York, and California continue to add specific requirements. The compliance environment in 2028 will look meaningfully different from 2024.
Section 12: FAQ
What is AI claims processing?
AI claims processing is the use of machine learning and language models to automate steps in the insurance claims lifecycle - from First Notice of Loss through settlement - that previously required manual adjuster work. Modern systems handle document extraction, image analysis, policy lookup, fraud scoring, coverage verification, and decision recommendations. For simple claims, AI can process the full claim end-to-end with human oversight only on exceptions.
How is AI claims processing different from traditional claims automation?
Traditional claims automation handles structured data and rule-based workflows - if-then logic applied to fields in a form. It worked well for the narrow slice of claims that fit standard patterns. AI claims processing handles unstructured data - emails, photos, handwritten forms, PDFs - and can make probabilistic judgments that rule-based systems can’t. This raises the automation ceiling significantly for standard P&C claim types.
What’s a realistic STP rate for US P&C carriers using AI claims processing?
According to Aite-Novarica 2023 research, the industry average for claims STP sits below 10%, with nearly 60% of insurers reporting no STP at all in claims operations [2]. Top personal lines insurers may approach 35% on eligible claim types [8]. Realistic 18-month target for a mid-market carrier with no current STP is moving into the 15-25% range on carefully defined claim types - achieving 30%+ typically requires longer-term investment in data quality and process redesign.
Does AI claims processing replace claims adjusters?
No. AI handles the data-gathering and documentation work that currently consumes much of adjuster time. Adjusters still make decisions on complex claims, handle exceptions, manage customer relationships, and provide oversight for AI decisions. The change is in workload composition - adjusters spend more time on high-value complex claims, less time on low-value data entry [13].
How long does AI claims processing implementation take?
Typical deployment timeline is 12–18 months from contract signature to full production. A substantial portion of this is data engineering and historical data cleanup. Additional months go to model training and validation, integration, adjuster onboarding, and parallel running. Go-live is typically in a low-CAT window (November–February).
What are the compliance risks of AI claims processing?
The main risks are around explainability and bias. The NAIC Model Bulletin on AI requires insurers to develop an AIS Program governing AI system use with explainable decisions and ongoing risk management [3]. State DOI requirements in Colorado, New York, and California add specific audit requirements. Risks are manageable with proper design - explainability as a first-class feature, audit trails on every AI decision, and bias testing as part of model validation.
How does AI handle complex claims like bodily injury?
AI handles specific sub-tasks within complex claims - medical record summarization, settlement range recommendation, fraud screening - but does not make autonomous decisions on complex bodily injury cases. Human adjusters remain the decision authority. The value is in giving the adjuster a complete, pre-organized context so their decision time is spent on judgment, not data gathering.
What’s the ROI timeline?
Initial measurable ROI typically appears 6–9 months after first production deployment, in document processing time savings and efficiency improvements on specific tasks. Full ROI on the business case usually takes 18–24 months as the system handles more lines of business and adjuster workflow changes fully land.
Can AI claims processing work with Guidewire or Duck Creek?
Yes - both platforms have established integration patterns for AI services. Integration complexity depends on your configuration of the core platform. Vendors with production experience on your specific core deliver materially faster than those learning your core alongside the AI deployment.
What is agentic AI in claims processing?
Agentic AI systems are autonomous - they initiate actions, coordinate across tools, and complete multi-step claims workflows without human prompting. In practice, this means the AI doesn’t just score a claim or extract data; it orchestrates the entire process (intake, policy lookup, fraud check, coverage determination, decision drafting) and surfaces the outcome for adjuster review. The architecture typically uses compound AI models - a main model coordinating specialized sub-models for document classification, damage assessment, fraud detection, and reserve calculation. In my read, agentic AI is the single biggest shift in P&C claims operations for 2026–2027, moving carriers from “AI-assisted adjuster workflow” to “AI-orchestrated claim with adjuster oversight.”
How does Decerto’s approach differ?
Decerto operates in the platform partner lane - offering custom AI claims platforms with deep insurance expertise plus engineering capability, without the vendor lock-in common with mega-platform vendors. The focus is on production value - our Claims AI demonstrations show claim processing reduced from approximately 70 minutes and $50 per claim to 5 minutes and $0.07 per claim on standard property claims [9] - with a full audit trail and explainability built in from day one rather than retrofitted.
Section 13: Related reading
Deep dives on the three priority pain points:
- AI in Insurance Claims Processing: The Revolution - full FNOL automation playbook, vendor comparison, 14-month implementation timeline
- 5 Principles of Effective Claims Handling - adjuster workflow redesign for AI-augmented operations
- Automated Claims Processing Tools: A Game-Changer for Insurers - fraud scoring architecture and NICB integration patterns
Broader claims operations topics:
- Claims Lifecycle Management: Best Practices for Insurers
- Streamlining Insurance Claims Processes with AI and Machine Learning
- Efficient Claims Management Systems: Key Features and Benefits
- End-to-End Claims Processing Solutions
Section 14: Talk to Decerto about your AI claims processing roadmap
If your current cycle time benchmarks against J.D. Power’s 2026 numbers show a significant gap - 40.7 days average FNOL-to-final-payment across the industry, with meaningful room for improvement in most operations [1] - you’re in the majority of US P&C carriers who could materially move the needle with AI claims processing over a 12–18 month deployment horizon.
I’ve helped carriers at your scale walk through this exact exercise. The pattern is consistent: data assessment, pilot design, parallel-run validation, phased rollout with adjusters as co-designers. Book a 45-minute technical session and we’ll walk through the same product scenarios you saw in Section 7 against your carrier’s specific claim data (NDA signed before the call, as always).
Button: Book 45-min Technical Session with Matthew
Sources and citations
This article draws on industry data published by J.D. Power, the National Association of Insurance Commissioners (NAIC), McKinsey & Company, Deloitte, the Coalition Against Insurance Fraud, Aite-Novarica (now Datos Insights), and Decerto’s own production benchmarking of the Claims AI platform. Specific citations are inline-numbered throughout the article and listed below.
[1] J.D. Power. “2026 U.S. Property Claims Satisfaction Study.” March 2026. Findings include average cycle time from FNOL to final payment of 40.7 days and average repair cycle time of 29.6 days - among the longest since the study began in 2008.
[2] Aite-Novarica Group (now Datos Insights). “Straight-Through Processing in Underwriting and Claims: 2023 Update.” 2023. Reports that fewer than half of insurers are using STP in personal lines claims transactions, with an STP rate of approximately 7% in personal lines. Referenced via Clearspeed analysis.
[3] National Association of Insurance Commissioners (NAIC). “Model Bulletin on the Use of Artificial Intelligence Systems by Insurers.” Adopted December 4, 2023. The Bulletin establishes expectations for insurers’ AIS Programs, governance, risk management, and documentation for state Department of Insurance reviews.
[4] S&P Global Market Intelligence. “NAIC membership divided on developing AI model law, disclosure standard.” October 2025. Reports that 24 NAIC jurisdictions had adopted the Model Bulletin as of August 2025.
[5] Deloitte / Insurance Journal. “As Insurance Execs Eye AI for Fraud Detection, Deloitte Predicts Billions in Savings.” June 2025. Reports Deloitte projections that P&C insurers could save $80-160 billion in fraudulent claims by 2032 through AI-driven multimodal analysis, with potential savings of 20-40% depending on implementation. Details soft fraud detection rates of 20-40% and hard fraud detection rates of 40-80%.
[6] S&P Global Market Intelligence. “In industry first, US P&C insurers exceed $1 trillion in direct annual premiums.” March 2025. Reports aggregated direct premiums written for US P&C in 2024 at $1.05 trillion (8.0% year-over-year increase), with direct incurred loss ratio of 61.9%.
[7] National Association of Insurance Commissioners (NAIC). “2024 Market Share Data Reports.” March 2025. Confirms $1.06 trillion in Direct Premiums Written for US P&C in 2024.
[8] Insurance Thought Leadership. “Straight-Through Processing in 2021.” 2022. Context on STP adoption patterns across insurance lines. Notes that nearly 60% of insurers have no STP in claims, and fewer than 10% of claims are processed straight through in any line on average.
[9] Decerto. “Claims AI Product Demonstrations” (video series). YouTube channel: Decerto (@DecertoSoftware). Includes Claims AI product overview (published February 17, 2026), Use Case 01 - Restaurant Fire (April 9, 2026), Use Case 02 - Store Flood (April 14, 2026), and Claims AI FNOL (April 20, 2026). Processing time and cost metrics (70 minutes → 5 minutes; $50 → $0.07) are from Decerto’s own production benchmarking as presented in the Use Case 01 demonstration. https://www.youtube.com/channel/UCK65aT2FLMOCXeAyujai2Tg
[10] Deloitte Insights. “Scaling gen AI in insurance.” December 2025. Notes that as of late 2025, 19 states had implemented the NAIC Model Bulletin with others taking similar actions. Survey of 200 US insurance executives (100 L&A, 100 P&C) conducted June 2024.
[11] McKinsey & Company. “Claims in the digital age: How insurers can get started.” McKinsey Insurance Practice.
[12] McKinsey & Company. “Insurance productivity 2030: Reimagining the insurer for the future.” October 2020. Reports that in large commercial lines, 30-40% of an underwriter’s time is spent on administrative tasks such as rekeying data or manually executing analyses.
[13] Deloitte Insights. “Soft skills solve claims management shortage crisis.” December 2025. Based on interviews with 17 chief claims officers at leading P&C insurers and analysis of Lightcast labor-market data from 2019-2024.
[14] Coalition Against Insurance Fraud. “The Impact of Insurance Fraud on the U.S. Economy.” 2022. First update in 27 years to the 1995 estimate; reports total US insurance fraud at $308.6 billion annually, with P&C component at approximately $45 billion. Research conducted by Colorado State University Global White Collar Crime Task Force.
[15] Insurance Information Institute (III). “Facts + Statistics: Fraud.” Summary of Coalition Against Insurance Fraud and industry data. Notes that fraud comprises approximately 10% of P&C insurance losses and loss adjustment expenses annually.
[16] J.D. Power. “2024 U.S. Property Claims Satisfaction Study.” March 2024. Reports 28 catastrophic weather events in 2023 causing over $1 billion each in damage, with combined damages of $92.9 billion nationwide. Average cycle time of 23.9 days that year, increasing to 34.2 days for CAT-related claims.
[17] Deloitte. “Are insurers truly ready to scale gen AI?” Based on June 2024 Deloitte survey of 200 US insurance executives (100 L&A, 100 P&C). Reports that 35% of insurance executives chose fraud detection as one of their top five areas for developing or implementing generative AI applications.
[18] J.D. Power. “2025 U.S. Property Claims Satisfaction Study.” March 2025. Reports average cycle time from filing to finished repairs at 32.4 days, with FNOL-to-final-payment at 44 days - the longest times since the study began in 2008.
Note on Decerto’s own data: The $50 → $0.07 per-claim and 70-minute → 5-minute processing time metrics cited throughout this article are from Decerto’s own production benchmarking of the Claims AI platform, as demonstrated publicly in the Use Case 01 (Restaurant Fire) video published April 9, 2026. These are specific measured metrics for the demonstrated scenario; actual deployment numbers at individual carriers will vary based on line-of-business mix, claim complexity distribution, data quality, and adjuster workflow design.