Microsoft has taken legal action to dismantle a global network, named Storm-2139, that abused AI image generation services to create and share abusive, often sexualized, images of celebrities and others. After discovering that stolen API keys were being used to bypass content safeguards, Microsoft’s Digital Crimes Unit escalated the incident to a company-wide security response, leading to the company’s first legal case targeting AI-generated image abuse. The group operated internationally, building tools to circumvent AI content filters and crafting large numbers of illicit prompts. Microsoft has since strengthened its AI safeguards, assisted affected customers, and is cooperating with law enforcement. The company says it remains committed to digital safety and is advocating for broader legal and societal changes to address AI-powered abuse and protect victims.