California will require developers of large, advanced AI models to publish how they assess and mitigate catastrophic risks and to report serious incidents to the state within tight deadlines: 15 days, or 24 hours for imminent threats. The law, SB 53, takes effect Jan. 1 and also extends whistleblower protections to employees who evaluate safety hazards, with fines of up to $1 million for violations. Catastrophic risk is defined broadly, from cyberattacks capable of causing mass casualties to incidents causing more than $1 billion in damage, though critics argue the statute omits harms such as disinformation, environmental impact and algorithmic bias. Incident reports filed with the Office of Emergency Services won’t be public, raising transparency questions even as backers, including Stanford’s Rishi Bommasani, say enforcement will determine the law’s heft. A related 2024 law, AB 2013, will compel disclosure of details about AI training data, while a 2027 state report will summarize critical incidents without naming specific models. New York’s governor has cited California’s approach as a template for her state’s emerging AI rules.
Related articles:
AI Risk Management Framework
SB24-205 Consumer Protections for Artificial Intelligence
NYC Automated Employment Decision Tools Law (Local Law 144)
The Bletchley Declaration by countries attending the AI Safety Summit
OECD AI Principles