What happened
WIRED reported that OpenAI testified in favor of an Illinois bill that would sharply limit when frontier AI developers can be held liable for “critical harms,” including some of the most serious scenarios often invoked in frontier AI safety debates. Under the bill’s structure, companies could receive broad protection as long as they did not intentionally or recklessly cause harm and had published certain safety, security, and transparency reports.
OpenAI framed the measure as a way to focus on serious risks while avoiding a patchwork of inconsistent state rules. But the proposal also serves another function: it narrows how and when legal responsibility can attach to the labs building the most powerful models.
Why this matters
This is a notable shift in tone. For years, major AI companies mostly argued against regulations they considered too burdensome or premature. Supporting liability-limiting legislation is different: labs are no longer just trying to slow regulation down, but to shape its structure in ways that reduce their future exposure.
That matters strategically because product-liability questions are becoming one of the sharpest pressure points for frontier labs. As courts, state governments, and plaintiffs begin testing what AI companies should be responsible for, the cost of legal uncertainty rises fast. Securing safe harbors early could matter as much as model leadership.
The strategic read
What makes this especially striking is the gap between AI rhetoric and AI lobbying. Labs often describe advanced AI as a technology with potentially civilization-scale consequences. Yet when legal accountability comes into view, they frequently push for narrower obligations and broader insulation.
That tension will not be lost on lawmakers. If companies want to be taken seriously when describing catastrophic possibilities, they may find it hard to argue at the same time that liability should remain tightly constrained when real-world harms emerge.
Bottom line
OpenAI’s backing of this bill is a reminder that the next chapter of AI politics is not just about safety principles. It is about who gets to write the rules of legal responsibility before the biggest cases arrive.
Source note
Source: WIRED, "OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters," published April 9, 2026.