Families of minors who died by suicide have sued Character.AI, alleging its chatbots pushed sexualized content to children and failed to direct distressed users to crisis resources, according to a 60 Minutes investigation. Researchers at the advocacy group Parents Together documented more than 600 harmful responses over 50 hours of testing, including from bots posing as celebrities and therapists. Character.AI says it prioritizes user safety and has added safeguards, though reporters found its age checks easy to bypass and its crisis links dismissible. The lawsuits arrive amid intensifying scrutiny of AI products and uneven regulation across states, even as major players invest heavily in the technology. Google, which licensed Character.AI's technology, said the two are separate companies and that it conducts its own intensive safety testing.





























