The Federal Trade Commission has opened an inquiry into whether AI chatbots marketed as digital companions pose risks to children and teens, sending document requests to Alphabet, Meta, Snap, Character.AI, OpenAI and xAI. The agency seeks details on safety testing, age-gating, disclosures to parents and steps taken to limit harmful interactions, amid reports that bots have dispensed dangerous advice on self-harm, drugs and eating disorders.

The scrutiny follows wrongful-death lawsuits against Character.AI and OpenAI. Character.AI said it would cooperate and cited new under-18 and parental features. Meta and OpenAI recently tightened responses to youth self-harm content and added parental controls.

The review signals mounting regulatory pressure on consumer AI and could foreshadow enforcement under child-privacy and unfair-practices laws or fresh rulemaking for generative AI products used by minors.





























