Major insurers such as Great American, Chubb, and W.R. Berkley are seeking permission from U.S. regulators to exclude AI-related liabilities from corporate insurance policies. Their reason? “AI models are too much of a black box.”

This is coming from an industry that has spent centuries pricing hurricanes, terrorism, pandemics, and billion-dollar fires. But they’re unsure how to price… text and code.

The fear is justified. Just look at what the industry has already survived:

⚠️ Google AI’s search answer wrongly implied a solar company had legal issues → $110M lawsuit
⚠️ Air Canada was forced to honour a discount that its chatbot invented
⚠️ A company lost $25M after fraudsters used an ultra-realistic AI clone of a senior executive on a video call

But these are individual incidents. That’s not why insurers are panicking. The real threat? Systemic risk. Not one giant claim, but 10,000 simultaneous claims triggered by the same AI model making the same mistake at scale.

From a venture and ecosystem perspective, here’s what this signals:

1. AI governance and reliability will become investable categories: think safety, stress-testing, and evaluation layers.
2. Startups using frontier models may need entirely new insurance frameworks (or price in operational risk).
3. Regulators will step in sooner than we expect.
4. The shift from ‘move fast’ to ‘move responsibly’ isn’t optional anymore.

The irony is striking: AI is scaling faster than any technology in history, yet the basic safety infrastructure around it hasn’t even been built.

What do you think: should insurers exclude AI-related risks, or should the industry evolve to price them accurately?
This isn’t about AI being risky; it’s about AI being systemically risky. One model, one flaw, thousands of downstream claims. It feels inevitable that AI reliability, audits, and safety layers become a new infrastructure category before regulators force it.
As new innovations become the order of the day, the attendant risks also increase, and ways to handle them are found.
ROHIT BAFNA The 'Black Box' problem is the ultimate hurdle for AI adoption in regulated industries.
Dameron Hospital Association
This debate feels familiar to anyone who’s worked in high-risk systems like healthcare. The issue isn’t rogue chatbots; it’s correlated failure at scale. One flawed process might harm a few people; one flawed model embedded everywhere can trigger thousands of downstream failures simultaneously. Insurers aren’t overreacting. This is exactly how systemic risk shows up before the infrastructure to manage it exists. In medicine, we learned, often painfully, that safety, validation, monitoring, and accountability can’t be retrofitted after widespread adoption. The real opportunity isn’t exclusion, it’s maturation: governance, auditability, real-world performance monitoring, and clear ownership of outcomes. That’s what ultimately made healthcare risk insurable. AI will need to grow up the same way, whether the ecosystem is ready or not.