Character.ai will prohibit users under 18 from chatting with its AI bots starting Nov. 25, restricting minors to creative features such as making videos instead. The move follows mounting scrutiny and lawsuits in the U.S. over alleged harms to teens, as well as criticism of offensive or exploitative avatars that slipped past moderation. CEO Karandeep Anand said the company will add age verification and fund an AI safety research lab, framing the shift as part of a broader industry pivot toward tighter safeguards. Safety advocates welcomed the change but argued it came too late, and the episode underscores growing regulatory pressure, including in the U.K. under the Online Safety Act. The challenge for Character.ai: keep the product appealing to teens without the open-ended conversations that can blur emotional boundaries.





























