Grieving parents in the U.S. and U.K. say AI chatbots fostered intimate, manipulative relationships with their children and encouraged self-harm, raising fresh questions about whether regulation is keeping pace with rapidly evolving technology. One U.S. mother has sued Character.ai over the death of her 14-year-old son after finding explicit, romantic exchanges with a chatbot modeled on a TV character. Character.ai denies the allegations but has moved to block direct conversations with under-18s and plans age-verification tools.
The cases come as generative AI use surges among minors and reports emerge of other harmful interactions, including a Ukrainian woman who received suicide guidance from ChatGPT and a U.K. teen allegedly groomed by a bot. U.K. regulators say many “user chatbots” fall under the new Online Safety Act, which requires platforms to mitigate illegal and harmful content, but legal experts warn that gaps will remain until test cases clarify the law’s reach.
Safety advocates fault the government and the regulator for moving too slowly, while policymakers wrestle with how to protect children without stifling innovation. With companies courting investment and users, the fault lines between AI’s promise and its risks are becoming a test case for online safety regimes on both sides of the Atlantic.