California approved a slate of measures tightening guardrails on AI and social-media use by minors, underscoring the state's outsized role in setting tech policy. The package requires AI chatbots to disclose that they are machines, remind minors to take breaks every three hours, and implement safeguards including escalation to crisis hotlines. It also compels app stores run by device makers such as Apple and Google to verify users' ages, mandates mental-health warning labels on platforms like Instagram and Snapchat, and stiffens penalties for distributing deepfake pornography.

OpenAI called the move a meaningful step for safety standards, while affected companies face operational changes that analysts expect to be felt broadly across the sector. With many tech firms headquartered in California, the rules could ripple nationwide, even as Europe's AI Act and state laws in Utah and Texas set parallel baselines. The new mandates add compliance costs and potential liability, but they also clarify expectations for product design and moderation aimed at teens.































