China’s internet watchdog has proposed sweeping rules to govern AI services, mandating safeguards for minors and strict protocols for conversations involving self-harm. The draft measures require parental consent for “emotional companionship” features, usage time limits for children, and human intervention, including notification of guardians or emergency contacts, when suicide or self-harm is detected. The rules also bar AI systems from generating content that promotes gambling or threatens national security and social stability. The proposal arrives amid a rush of new chatbots and growing user bases at firms such as DeepSeek, while Chinese startups Z.ai and MiniMax prepare stock listings. The plan, open for public comment, underscores Beijing’s drive to shape fast-growing AI markets with stringent guardrails, adding compliance burdens for providers and potentially influencing product design, especially in companionship and therapy-like applications. The move follows broader global scrutiny of AI’s mental-health impacts, including a U.S. wrongful-death lawsuit against OpenAI and the company’s push to bolster risk preparedness.
Related articles:
EU’s Artificial Intelligence Act: Overview of Europe’s AI rulebook
WHO guidance on ethics and governance of AI for health
The EU Digital Services Act and obligations on harmful content moderation