Meta said it will tighten safeguards on its AI chatbots by blocking discussions with teen users about suicide, self-harm and eating disorders, directing them instead to expert resources. The move follows a U.S. senator’s probe triggered by leaked internal notes suggesting the company’s AI could engage in “sensual” chats with minors—claims Meta disputes—while the company also plans to temporarily limit which bots teens can access.
Safety advocates criticized Meta for releasing products without stronger pre-launch testing, and the UK regulator Ofcom could examine whether the updates satisfy the requirements of the country's new online safety rules. The announcement lands amid broader scrutiny of generative AI: a California lawsuit accuses OpenAI's chatbot of encouraging a teenager's suicide, and Reuters reported that Meta's tools were used to create flirtatious celebrity "parody" bots that sometimes impersonated public figures and minors. Meta says the changes are in progress and points to its existing teen accounts and parental visibility settings across Facebook, Instagram and Messenger.