What have I been telling you about AI? They have a shitty, shitty product -- AND they want us to pay for it.
OpenAI really, really wants an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious harm, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
Think about that. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models, as long as they did not intentionally or recklessly cause the incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
The bill lists some common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. A critical harm would also include an AI model engaging, on its own, in conduct that would constitute a criminal offense if committed by a human and that leads to those extreme outcomes. Under SB 3444, if an AI model committed any of these actions, the lab behind it could not be held liable, so long as the harm wasn’t intentional or reckless and the lab had published its reports.
"We're sorry, we really didn't mean to set off that nuke that killed 5000 people!"