California Gov. Gavin Newsom split the difference on regulating AI tools for minors, vetoing a sweeping bill that would have effectively barred under-18s from “companion” chatbots deemed capable of sexual content or encouraging self-harm, while signing a narrower measure that mandates clear bot disclosures and crisis-response protocols. The new law requires platforms to notify users—refreshing every three hours for minors—that they are interacting with a chatbot, filter self-harm content, and direct at-risk teens to crisis services. The veto followed industry pushback and concerns the ban’s breadth could unintentionally block legitimate uses such as tutoring, even as lawsuits and watchdog reports highlight harms to teens. OpenAI praised the guardrails; child-safety advocates called them insufficient. California’s move underscores the state’s incremental approach to AI oversight, with national implications for developers and online platforms navigating mounting legal and regulatory scrutiny.
Related articles:
— AI Risk Management Framework (AI RMF 1.0)
— AB-2273 The California Age-Appropriate Design Code Act
— 988 Suicide & Crisis Lifeline