The volume of AI-generated child sexual abuse videos surged to a record in 2025, according to the U.K.-based Internet Watch Foundation, which identified 3,440 such clips—up from 13 a year earlier—with more than half categorized as the most severe. The group warned that increasingly sophisticated tools are enabling offenders with minimal technical skills to produce realistic content at scale. The findings intensify regulatory scrutiny of AI systems after Elon Musk’s xAI faced criticism for Grok enabling sexualized images of women and minors; the European Union said it is monitoring the company’s safeguards, while California’s attorney general opened an investigation. The report underscores mounting pressure on platforms and app stores to police synthetic content and on policymakers to tighten enforcement amid rapid advances in generative AI.