AI systems are no longer confined to research labs; they’re now making decisions in high-stakes environments like healthcare, finance, and law. Yet despite their growing influence, these systems still operate without a clear liability framework. When AI tools make mistakes, who’s accountable? One industry leader says the lack of a consistent answer is holding the industry back.
Trevor Koverko is the co-founder of Sapien, a company building the infrastructure for safer, more trustworthy AI, starting with high-quality, human-labeled data. But data quality is only one part of the equation. As AI systems mature, Trevor argues, the industry must adopt parallel mechanisms that ensure accountability. One of those is Errors & Omissions (E&O) insurance, a model proven in other sectors that could help close AI's growing liability gap.
Why Accountability Is Elusive in AI
AI’s liability problem isn’t just theoretical. In 2018, an Uber self-driving vehicle struck and killed a pedestrian in Arizona, even though the system had detected her moments earlier. The algorithm misclassified her as an “object,” not a person, so it never hit the brakes. Who was responsible? Uber? The coders? The sensor manufacturers? There was no clear answer.
This diffusion of responsibility, what legal scholars call the “problem of many hands,” is endemic to AI. The technology often functions as a black box, making it nearly impossible to trace how and why certain decisions are made. Companies also shield their models and training data, further complicating any post-incident analysis.
At Sapien, the team works with enterprises that demand transparency and traceability in AI training data. But even with trusted inputs, AI systems will still produce edge-case failures. That’s why broader accountability tools, such as insurance, are essential to protect both developers and end-users.
AI + Humans = A Legal Gray Area
Much of today’s AI functions as a co-pilot rather than a standalone decision-maker. Doctors, lawyers, analysts, and customer service teams increasingly rely on AI for decision support. But when those decisions lead to harm, the question becomes: who’s legally at fault, the human or the machine?
As UCLA law professor Andrew D. Selbst notes, AI introduces “inscrutable, statistically derived, and often secret code” into decision-making. That makes it harder for professionals to understand when to override the system, or when the system has already gone wrong.
Sapien advocates for human-in-the-loop AI systems. But even then, humans can be misled by faulty outputs. Building AI tools that are not only performant but also insurable adds a necessary layer of trust.
How E&O Insurance Can Help
Errors & Omissions (E&O) insurance is a form of professional liability coverage that protects against negligence, misrepresentation, or failure to deliver expected results. In the context of AI, E&O insurance could cover damages caused by:
- AI-generated hallucinations or biased outputs.
- Algorithmic failures that lead to financial or physical harm.
- Contractual breaches caused by underperforming AI systems.
For instance, when Air Canada’s chatbot erroneously promised a discount, a tribunal forced the airline to honor it. If Air Canada had AI-specific E&O coverage, the financial impact might have been mitigated.
The Sapien team believes trusted training data is the foundation, but not the full solution. AI models should be stress-tested, auditable, and, increasingly, insurable. Expect insurance providers to begin underwriting policies that cover only AI models trained on traceable, high-integrity data, precisely what Sapien specializes in providing.
A New Standard for AI Readiness
Some insurers are already bundling AI-related risks into broader Tech E&O policies, but often with minimal coverage limits, sometimes as low as $50,000 on a $10 million policy, a sublimit of just half a percent. Worse, overly narrow definitions of AI in these policies may create loopholes for claim denial.
As a result, the market needs AI-aware E&O policies that balance flexibility with real scrutiny. Insurers won’t cover AI models that degrade quickly or lack transparency. Similar scrutiny is already common in blockchain-based systems, where auditability and data integrity are foundational. This introduces a valuable forcing function: if a model seeks coverage, it must meet baseline standards for quality, reliability, and explainability.
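What might that baseline look like in practice? Below is a minimal sketch of a hypothetical underwriting pre-check, written in Python. Every detail is an assumption for illustration: the ModelProfile fields, the 0.95 accuracy floor, and the 2%-per-quarter drift limit are invented here, not criteria published by any insurer or by Sapien.

```python
from dataclasses import dataclass

# Hypothetical underwriting pre-check for an AI-aware E&O policy.
# All field names and thresholds are illustrative assumptions, not
# criteria from any real insurer or from Sapien.

@dataclass
class ModelProfile:
    data_provenance_documented: bool  # training data is traceable to its sources
    audit_trail_available: bool       # decisions can be reconstructed post-incident
    explainability_report: bool       # model behavior is documented for reviewers
    eval_accuracy: float              # held-out benchmark score, 0.0-1.0
    drift_per_quarter: float          # observed performance degradation per quarter

def insurability_gaps(m: ModelProfile) -> list[str]:
    """Return the baseline standards the model fails to meet."""
    gaps = []
    if not m.data_provenance_documented:
        gaps.append("training data is not traceable")
    if not m.audit_trail_available:
        gaps.append("no post-incident audit trail")
    if not m.explainability_report:
        gaps.append("no explainability documentation")
    if m.eval_accuracy < 0.95:
        gaps.append("benchmark accuracy below the assumed 0.95 floor")
    if m.drift_per_quarter > 0.02:
        gaps.append("degrades faster than the assumed 2%/quarter limit")
    return gaps

if __name__ == "__main__":
    candidate = ModelProfile(True, True, False, 0.97, 0.01)
    gaps = insurability_gaps(candidate)
    print("eligible for coverage" if not gaps else "declined: " + ", ".join(gaps))
```

The point of the sketch is the shape of the incentive, not the specific numbers: a model that cannot document its data provenance or explain its decisions fails the check before price is even discussed.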
Sapien sees this as a natural evolution. Insurance will become part of the AI deployment stack, right alongside performance metrics and ethical guidelines. It will push developers to build not just smarter AI, but safer and more accountable systems.
Conclusion
AI is becoming foundational infrastructure, but like all infrastructure, it needs safeguards. E&O insurance is one way to formalize responsibility, distribute risk, and build trust at scale.
The Sapien team is proud to be building the data layer that helps AI models meet these higher standards. However, as AI continues to impact lives in more critical domains, the industry needs complementary systems, such as E&O insurance, that ensure people are prepared for the moments when things don’t go as planned.