Worried about AI errors? Lloyd’s has insurance for that
The healthcare industry has raised considerable concern over the potential for AI errors in care recommendations and diagnosis. Now, a new insurance product from Lloyd’s of London and its partner Armilla is designed specifically to safeguard against errors made by AI chatbots.
While errors related to AI use in healthcare have most recently fallen under cybersecurity insurance policies, this new area of coverage is specific to chatbots, largely focused on generative AI, and applicable to organizations in any industry, not just healthcare.
Best described as ‘AI warranty protection,’ this comprehensive AI liability coverage has already been cited as a potential game-changer for encouraging AI adoption. Lloyd’s of London insurers introduced the new warranty protection in partnership with Armilla, a specialized AI insurance and assessment solutions provider. The launch was announced in May.
The product actually goes by a couple of different names depending on whether it’s being acquired by end-user customers or AI developers. While some initial media coverage labeled the ‘product’ as an AI insurance tool, a more accurate description is a warranty protection and security agreement.
Clients are guaranteed a certain level of accurate performance for AI tasks and calculations. The policy kicks in when AI chatbot performance drops below that level, increasing organizational risk. The policy then covers incurred costs for damages and legal fees resulting from certain unexpected errors or errors of a significant volume.
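To make those trigger mechanics concrete, here is a minimal sketch, in Python, of how a deployer might track chatbot accuracy against a contractually guaranteed level. The threshold, window size, and class names are illustrative assumptions, not terms of the actual Lloyd’s-Armilla product, where measurement methodology would be set per contract.

```python
from collections import deque

# Illustrative values only -- real warranty terms are negotiated per contract.
GUARANTEED_ACCURACY = 0.95  # hypothetical accuracy level guaranteed to the client
WINDOW_SIZE = 1000          # hypothetical window of recent graded responses

class WarrantyMonitor:
    """Tracks graded chatbot answers and flags when measured accuracy
    falls below the contractually guaranteed level."""

    def __init__(self, guaranteed=GUARANTEED_ACCURACY, window=WINDOW_SIZE):
        self.guaranteed = guaranteed
        self.results = deque(maxlen=window)  # True/False per graded answer

    def record(self, answer_was_correct: bool) -> None:
        """Log whether a reviewed chatbot response was correct."""
        self.results.append(answer_was_correct)

    def accuracy(self) -> float:
        """Measured accuracy over the current window (1.0 before any data)."""
        return sum(self.results) / len(self.results) if self.results else 1.0

    def breaches_warranty(self) -> bool:
        """True once the window is full and performance sits below the
        guaranteed level -- the point at which a policy like this kicks in."""
        return len(self.results) == self.results.maxlen and \
            self.accuracy() < self.guaranteed
```

In practice, the hard part is the grading step: someone, or some automated process, has to decide which answers count as errors, and that benchmarking methodology is exactly what underwriters will scrutinize.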
Armilla CEO Karthik Ramakrishnan said he believes this sort of coverage could encourage businesses to more readily adopt AI tools without fearing large-scale failures that could otherwise cripple them financially. If he is right, security professionals can expect this release to be quickly followed by competing offerings.
Protecting against errors and hallucinations
The new coverage option is actually a combination of two products: Armilla Guaranteed (AI performance warranty), a contractual performance guarantee for AI vendors backed by a consortium of reinsurers, and Armilla Insured (AI liability insurance), an affirmative liability insurance policy specifically designed for risks associated with deploying AI.
Now security and risk professionals can stop debating loss ceilings in the abstract and turn instead to a concrete list that starts with the price of the worst-case scenario, continues with the triggers – such as performance below agreed error rates or bot-caused damage – and ends with the amount of risk transferred off the balance sheet, explains Steve Morris, founder and CEO at digital marketing firm Newmedia.com.
“Think of it less like traditional insurance and more like a performance guarantee,” says Michael Guiliano, head of cyber U.S. at UK-based insurance brokerage McGill and Partners, who says he “knows the Armilla team pretty well.”
One big question: how will underwriters evaluate the AI systems they’re insuring on an ongoing basis, and who exactly is being insured, poses Jim Olsen, chief technology officer of ModelOp. Is it the model or chatbot vendor, the company implementing the chatbot, or a user relying on the information it produces? Each plays a role in the outcome — and right now, it’s not settled how accountability would play out in a legal setting.
Wide interest in AI error protection
While the Lloyd’s and Armilla offering appears to be the first of this type of specialized AI coverage, there has certainly been plenty of interest in just this sort of protection.
“Chatbot warranties aren’t a one-off, or specific to healthcare,” Morris says. “My crew and I have noticed a distinct uptick in demand outside regulated industries. What’s driving it is actual liability, not ‘AI hallucinates’ clickbait. The Air Canada chatbot mess is a big milestone for risk managers. They can start to read the case law and see that a court will treat what’s said by a chatbot as a binding promise. Any automatic guarantee, true or false, creates an automatic legal obligation.”
Morris says his clients are now demanding documented workflows to ensure that what the bot says corresponds precisely to what business policy permits — and so that safeguards are in place to minimize claims. This is already spawning “AI compliance audits” centered on dialogue rather than code.
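What a dialogue-centered audit might look like in practice is sketched below, assuming a simple rule register that maps phrases a bot could emit to whether the underlying promise is actually permitted by policy. The rules, function name, and example reply are hypothetical, for illustration only.

```python
import re

# Hypothetical policy register: pattern a bot might emit -> is the promise permitted?
POLICY_RULES = [
    (re.compile(r"full refund", re.IGNORECASE), False),           # policy allows partial refunds only
    (re.compile(r"refund within \d+ days", re.IGNORECASE), True),
    (re.compile(r"we guarantee", re.IGNORECASE), False),          # no open-ended guarantees allowed
]

def audit_reply(reply: str) -> list[str]:
    """Return the policy violations found in one chatbot reply.

    Per the Air Canada precedent, anything the bot states may be treated
    as a binding promise, so every disallowed claim gets flagged for review.
    """
    violations = []
    for pattern, permitted in POLICY_RULES:
        if pattern.search(reply) and not permitted:
            violations.append(f"disallowed claim matched: {pattern.pattern!r}")
    return violations

# Example: this reply promises something business policy does not permit.
print(audit_reply("Yes, you are entitled to a full refund at any time."))
# -> ["disallowed claim matched: 'full refund'"]
```

Real audits would pair pattern checks like these with human review, since a bot can make a disallowed promise without using any predictable phrasing.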
While this is certainly an emerging issue, it might be too early to call it a trend, Olsen says. There is increasing awareness of the need for some sort of protection against AI errors. That is especially true after recent incidents: Asana’s experimental agentic AI feature, powered by the Model Context Protocol (MCP), was taken offline for nearly two weeks after a bug exposed the potential for cross-organization data leakage, and Air Canada was held liable for misinformation provided by its chatbot.
But those are relatively low-risk, consumer-facing failures, Olsen explains. In healthcare or financial services, where AI chatbots could influence patient health or investment portfolios and increase fraud risk, the stakes are much higher — and the standards will be too. It’s not clear if Lloyd’s and Armilla would offer coverage in this type of high-risk, high-regulation, complex setting.
Nevertheless, it is unlikely to remain a niche area, and competitive pressures will force vendors and enterprises alike to adapt, explains Vinod Goje, a consulting data and AI strategist.
Mature providers with strong governance will embrace warranties as a sales accelerator, while smaller or less disciplined firms may resist this trend. These smaller companies often struggle to meet the technical requirements necessary for qualification, leaving them at a disadvantage in an evolving market. As competitive pressures mount, the ability to offer warranty-backed reliability will become essential for all vendors.
A new strategic ‘hedge’ for organizations
The warranty provided by the Lloyd’s-Armilla partnership highlights how organizations view AI-related risks, signaling that this area is developing into a distinct asset class, Goje says. Rather than simply accepting the costs associated with chatbot inaccuracies and errors, companies can now shift some of that responsibility, which alters the financial dynamics of adopting AI technologies.
This change is particularly crucial for the financial sector, where even a minor error from a chatbot can lead to severe compliance issues or expensive corrective measures. Thus, this warranty is not just a novel offering but serves as a strategic hedge for businesses, Goje explains.
For security professionals, agreements of this type add both a tool and a responsibility, Olsen says. On one hand, they provide a new lever to mitigate AI risk and align with enterprise risk frameworks. On the other, security leaders will need to deeply understand how policies are structured, what triggers coverage, and what evidence insurers will require. This will put a premium on strong AI governance practices: maintaining audit trails, documenting training data and model changes, and monitoring chatbot performance. Without that rigor, coverage could be denied—or priced prohibitively.
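A minimal sketch of what that evidence trail might look like, assuming an append-only JSON-lines log, is shown below. The field names and file path are illustrative; insurers’ actual evidence requirements will vary by policy.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("chatbot_audit.jsonl")  # hypothetical append-only evidence log

def log_interaction(model_version: str, prompt: str, response: str,
                    graded_correct=None) -> None:
    """Append one chatbot exchange to the audit trail.

    Capturing the model version alongside each exchange is what lets an
    organization document model changes and demonstrate ongoing monitoring
    to an insurer when a claim is filed.
    """
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        "graded_correct": graded_correct,  # filled in by later human or automated review
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

The point is less the code than the habit: without a record like this, there is nothing to hand an underwriter who asks how the chatbot was performing at the time of an incident.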
“The immediate effects will be the rise of quantitative vendor diligence,” Morris says. “Such products already shift the burden of technical diligence onto risk teams, in much more granular form than just showing your vendors made some ‘best effort’ at testing to tick a checklist. These warranties represent a hook that gives risk managers a quantifiable point to place on a register, and a safe harbor bounded by a contract. That doesn’t mean buyers can let their guard down, though.”
With warranties, for example, due diligence questions need to get really specific about what constitutes a ‘covered incident,’ how chatbot accuracy is benchmarked on an ongoing basis, and how quickly complaints are triaged.
Insurers, for their part, need a baseline of technical knowledge and analytic processes they can apply to claims whose standards evolve rapidly. On balance, organizations are definitely starting to incorporate warranties into procurement criteria and risk frameworks: not as a substitute for controls, but as an additional tool for risk transference that customers will increasingly expect from anyone offering AI-driven operations, Morris says.
Enabling a new approach to AI adoption
The macro view is that these warranties represent a shift in how organizations approach AI adoption. In the past, companies either accepted AI errors as sunk operational costs or avoided deploying the technology in high-stakes environments, Goje explains. Warranties change that calculus. By transferring some of the risk to insurers, enterprises can justify more ambitious deployments, especially in industries such as financial services and healthcare, where even a single AI-induced error can trigger regulatory scrutiny, reputational damage, or costly remediation.
“Organizations clearly benefit from this development, as it leads to increased confidence to experiment without facing existential risks, procurement teams obtain a reliable insurance-backed trust metric, and risk managers acquire a financial tool to offset potential liabilities,” Goje explains. “This shift enables more ambitious AI deployments, encouraging innovation across various industries. The real value lies not only in the payout when issues arise but also in the discipline companies must cultivate to qualify for coverage. Insurers require organizations to benchmark their practices, establish governance, and monitor performance before they issue policies. This warranty process compels companies to adopt best practices that they might otherwise postpone.”
Finally, a significant benefit of such warranties lies not in the financial compensation they offer, but in the discipline they instill, Goje says. Companies that meet the standards for these warranties demonstrate their reliability to regulators, investors, and customers alike. This trustworthiness may ultimately hold more significance than the insurance policy itself.
“The bottom line: warranties are not a panacea, but they are a necessary instrument for a technology that will never deliver 100% deterministic reliability,” Goje says. “The fact that major insurers are entering the space validates what engineers have long known: AI failures are not edge cases; they are systemic risks that must be priced, managed, and mitigated. If done right, this shift won’t just protect enterprises financially; it will raise the governance baseline for the entire industry.”