California Gov. Gavin Newsom signed a first-of-its-kind state law to rein in risks from “frontier” artificial-intelligence models, requiring developers to adopt and publicly disclose safety measures and to report serious incidents within 15 days. The statute defines catastrophic harms as those causing at least $1 billion in damage or more than 50 injuries or deaths, and carries fines of up to $1 million per violation. It also creates whistleblower protections and a public cloud-computing resource for researchers.
Tech companies split on the plan: some argued regulation should be handled federally to avoid a patchwork of state rules, while Anthropic called the requirements practical safeguards that formalize existing industry practices. Newsom vetoed a broader bill last year and convened experts, including AI pioneer Fei-Fei Li, to refine the approach; the final measure eases burdens on startups. The signing comes as President Donald Trump has pushed to roll back what he calls onerous AI rules, and as states pursue their own policies on deepfakes, workplace uses and other AI risks. California remains home to leading AI firms such as OpenAI, Google, Meta and Anthropic.
Related articles:
— Artificial Intelligence Act: MEPs Adopt Landmark Law
— AI Risk Management Framework (AI RMF 1.0)
— The European AI Office