GUEST BLOG: Talk Liberation – AI Uninsurable? Major Insurers Run From AI Risks, Refuse Coverage

There is a gap between what you hear about and what you really need to know. We’re here to fill it. Welcome to The Disconnect - our newest publication showcasing public interest stories relating to internet freedom, privacy, AI, tech law, biometrics, surveillance, malware and data breaches, kept simple and delivered straight to your inbox as part of your regular Talk Liberation subscription. This news is too important to paywall - so we have kept it free for all. Please support us by subscribing, sharing and spreading the word!


In 2025, an increasing number of major systemic failures by internet service providers took down swathes of internet-related services across the globe. Among those affected were some of the biggest names in cloud and security services, including CrowdStrike, Microsoft Azure, Amazon Web Services (AWS) and Cloudflare. The impacts ranged from hospitals cancelling operations and emergency services communications networks going down, to airlines, and even entire airports, halting operations, with thousands (possibly millions) of businesses rendered inoperable.

The increasing prevalence of major global internet outages has tracked the widespread corporate integration of AI. While none of the faults were publicly admitted as being explicitly caused by AI, all of the services involved have deeply integrated AI into their businesses within the period that these faults have become more commonplace – and devastating. It is within that larger context that this latest topic – insurance companies not wanting to carry the can for AI-related risks – now arises.


According to revelations in the Financial Times, a startling confrontation is unfolding in the corporate world, as insurance companies built to assess and underwrite risk are declaring advanced artificial intelligence too risky to insure. Major insurers, including Great American, Chubb, and W. R. Berkley, are actively petitioning U.S. regulators for a crucial exemption: the right to explicitly exclude widespread AI-related liabilities from their standard corporate insurance policies. This move is not a speculative future exercise but a direct response to the tangible, multi-million-dollar losses already materializing from AI deployments that appear to be in over their heads, signaling a profound crisis of confidence at the heart of the risk management industry.

The core of the insurers’ apprehension lies in what some underwriters describe as the “black box” nature of AI models. Unlike traditional software, whose line-by-line logic produces predictable outcomes, generative AI is probabilistic, creating outputs that are impossible to fully anticipate or audit. This inherent unpredictability renders traditional actuarial science, which relies on historical data to forecast future losses, effectively useless: insurers cannot accurately price the risk, leading them to the only logical business decision available, a refusal to cover it. The inability to model potential losses transforms AI from a manageable risk into an uninsurable gamble.

This industry-wide skepticism over AI isn’t coming out of nowhere but from a growing list of expensive precedents. As reported, Google’s AI Overview feature falsely accused a Minnesota-based solar company, Wolf River Electric, of legal troubles, sparking a $110 million lawsuit. In another incident, Air Canada was legally forced to honour a refund policy entirely fabricated by its own customer service chatbot. Perhaps most startlingly, fraudsters used a digitally cloned executive in a deepfake video call to trick a London-based firm out of $25 million. These cases provide a clear, documented trail of AI failures leading directly to significant financial liability, validating insurers’ worst concerns.


What truly terrifies the industry, though, is not the prospect of a single, massive payout but the systemic risk embedded in AI technology. An executive from insurance giant Aon articulated this well: the industry can absorb a $400 million loss from one company, but it would be catastrophically overwhelmed by an AI mishap that triggers ten thousand losses at once. This scenario, in which a single flaw in a widely used AI model causes simultaneous, cascading failures across thousands of users, represents an existential threat to the insurance model itself, which is designed to spread risk, not concentrate it.

Contrary to the initial reporting, AIG clarified that it “was not specifically seeking to use these exclusions” and has no plans to implement them at this time, revealing a strategic split within the insurance sector. While some players are rushing to erect barriers against AI liability, others like AIG may be betting that a more collaborative approach to understanding and mitigating AI risk will present the greater business opportunity, positioning themselves as partners rather than adversaries to innovation.

The ultimate consequence of this insurance standoff now hangs over the global economy, raising the spectre of a crisis reminiscent of, or worse than, the 2008 financial crisis. If mainstream insurers successfully wall off AI from general coverage, companies may be forced either to shoulder the massive, unmitigated risk themselves or to slow their AI integration dramatically, with knock-on effects for the many parts of the economy now tied to AI. The situation creates a pivotal crossroads, where the caution of the risk management industry could inadvertently become the primary adversary to AI’s rapid expansion, forcing a moment of reckoning for a technology that has, until now, raced forward with little regard for its own liabilities.

This edition of The Disconnect was written by REAL HUMANS with no AI involved in any part of the production of this article. All the more reason to please support us :). If you love what we’re doing, please consider contributing $5 per month so that we can continue providing you with this vital, unique digest.
