When a self-driving car crashes, who is liable: the manufacturer, the software vendor, or the human safety driver? Now, apply that logic to a financial AI that executes a bad trade or a medical AI that misdiagnoses a patient.
For decades, software has largely been shielded from liability by End User License Agreements (EULAs) that disclaim all warranties. But in the age of AI, this "use at your own risk" era is ending. Regulators are moving to treat AI not as passive code, but as a product with strict liability for defects.
The Shift: From Service to Product
The EU Product Liability Directive (PLD)
Europe is leading the charge by updating its 40-year-old PLD to explicitly include "software" and "AI systems" as products.
- The Change: If an AI causes damage (property, bodily harm, or data loss), the manufacturer is strictly liable.
The US Landscape: Section 230 Under Siege
In the US, Section 230 of the Communications Decency Act has historically protected platforms from liability for user-generated content.
- The Conflict: Is AI output "user-generated"? Courts are increasingly skeptical. If an AI "hallucinates" a libelous statement, it is the *creator* of that content, not a publisher.
- Gorsuch's Warning: In *Gonzalez v. Google*, Justice Gorsuch suggested that generative AI might not enjoy Section 230 protection at all.
The Three Liability Vectors
For enterprises, risk comes from three directions:
| Vector | Scenario | Legal Risk |
|---|---|---|
| Defect | AI malfunctions (e.g., self-driving crash). | Strict Product Liability (EU) / Negligence (US). |
| Hallucination | AI output is false/defamatory. | Defamation / Libel (No Section 230). |
| Bias | AI discriminates in hiring/lending. | Civil Rights Violation / EEOC enforcement. |
Insurance: The New Regulator
Just as fire insurance dictated building codes, AI Insurance will dictate AI safety standards.
- The Reality: Insurers are currently hesitant to write policies for unrestricted execution agents.
- The Requirement: Expect premiums to be tied to compliance with standards like NIST AI RMF or ISO 42001.
Enterprise Action Plan: Insulating the Brand
You cannot eliminate risk, but you can structure it.
1. Human-in-the-Loop (HITL): Retain meaningful human review for high-stakes decisions (hiring, firing, lending). This downgrades the risk from "autonomous defect" to "human error," which legal teams know how to defend (see the review-gate sketch after this list).
2. Update Terms of Service: Explicitly disclose the use of AI and the potential for error. Disclaim reliance for critical advice (medical/legal).
3. Vendor Indemnification: Do not integrate an API (e.g., OpenAI, Anthropic) into your product without a contract that indemnifies you against their model's failures.
4. Bias Testing Documentation: Keep rigorous logs of bias testing. In a discrimination lawsuit, your process is your defense (a logging sketch also follows after this list).
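As a minimal sketch of the HITL pattern in item 1: the model only emits a recommendation, and anything touching hiring, firing, or lending is routed to a human reviewer rather than executed automatically. The `ModelRecommendation` structure, the action names, and the reviewer queue are illustrative assumptions, not a prescribed API.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"    # low-stakes action may proceed automatically
    ESCALATE = "escalate"  # high-stakes action goes to a named human

@dataclass
class ModelRecommendation:
    subject_id: str   # e.g., loan applicant or job candidate ID (hypothetical field)
    action: str       # e.g., "deny_loan", "reject_candidate"
    confidence: float # model's self-reported confidence, 0.0-1.0
    rationale: str    # model-generated explanation shown to the reviewer

# Assumption: these actions are always human-reviewed, regardless of confidence.
HIGH_STAKES_ACTIONS = {"approve_loan", "deny_loan", "reject_candidate", "terminate"}

def route_decision(rec: ModelRecommendation, reviewer_queue: list) -> Decision:
    """Route AI recommendations: high-stakes ones are queued for human sign-off."""
    if rec.action in HIGH_STAKES_ACTIONS:
        reviewer_queue.append(rec)  # a human makes the final call and is logged as the decision-maker
        return Decision.ESCALATE
    return Decision.APPROVE

# Usage example
queue: list[ModelRecommendation] = []
rec = ModelRecommendation("applicant-123", "deny_loan", 0.91, "Debt-to-income ratio above policy limit")
print(route_decision(rec, queue))  # Decision.ESCALATE; the recommendation now sits in the reviewer queue
```

The point of the design is evidentiary as much as operational: the system of record shows a named human, not the model, taking the final high-stakes action.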
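And for item 4, a minimal sketch of an append-only bias-test audit log. The field names, file path, and the four-fifths (80%) rule threshold used here are assumptions for illustration; substitute whatever metrics and thresholds your compliance team has adopted.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_bias_test(model_version: str, test_name: str, group_rates: dict,
                  metric: str, path: str = "bias_audit.log") -> dict:
    """Append one bias-test record to a JSON-lines audit log.

    `group_rates` maps each protected-attribute group to its measured
    selection rate (e.g., share of applicants advanced by the model).
    """
    rates = list(group_rates.values())
    # Four-fifths rule: ratio of lowest to highest selection rate,
    # a common first-pass screen for adverse impact.
    disparate_impact = min(rates) / max(rates) if max(rates) > 0 else 0.0
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "test_name": test_name,
        "metric": metric,
        "group_rates": group_rates,
        "disparate_impact_ratio": round(disparate_impact, 4),
        "passed": disparate_impact >= 0.8,  # assumed threshold
    }
    # Hash the record contents so later edits to the log are detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Usage example: quarterly adverse-impact check on a hiring model (illustrative numbers).
log_bias_test("screening-model-v2.3", "quarterly_adverse_impact",
              {"group_a": 0.42, "group_b": 0.38}, metric="selection_rate")
```

Whatever the exact format, the goal is the same: a timestamped, tamper-evident record that the testing actually happened before the model was deployed.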
To see how this connects to intellectual property, read Copyright & Generative AI. For the tools to track this, see The Compliance Tech Stack.