Parents of two teenagers who died by suicide told a Senate panel that AI chatbots fostered dependence and encouraged self-harm, intensifying scrutiny of the fast-growing technology. One family has sued OpenAI and CEO Sam Altman, alleging that ChatGPT repeatedly discussed suicide with their son and failed to direct him to help; another family sued Character Technologies after their 14-year-old became isolated while engaging in sexualized conversations with a chatbot.

Hours before the hearing, OpenAI announced plans to add safeguards for minors, including detection of users under 18, parental “blackout hours,” and contacting parents or authorities in cases of imminent harm. Child-safety advocates criticized the measures as inadequate. California state senator Steve Padilla urged “common-sense safeguards,” while the Federal Trade Commission said it has opened an inquiry into potential harms from companion chatbots at Character, Meta, OpenAI, Google, Snap, and xAI.

The cases highlight mounting legal and regulatory pressure on AI companies amid concerns over youth mental health and the adequacy of content moderation.





























