U.S. schools are grappling with a surge in AI-driven deepfake abuse as students use readily available apps to fabricate sexually explicit images of classmates, triggering legal, disciplinary and mental-health crises. A Louisiana middle-school case became an early test of a new state law, part of a wave of statutes passed in at least half of U.S. states in 2025 to curb synthetic sexual content, including simulated child exploitation. Reports of AI-generated child sexual abuse material to the National Center for Missing & Exploited Children jumped from 4,700 in 2023 to 440,000 in the first half of 2025, underscoring a rapidly escalating threat. Experts warn that school policies and parental awareness are lagging; they urge explicit rules, education about deepfakes and response protocols covering reporting, evidence preservation and support for victims. Advocates say the persistent, viral nature of synthetic images amplifies trauma and complicates accountability.