Reports of AI-related incidents are climbing sharply, driven both by misuse and by the technology’s maturing capabilities. The crowd-sourced AI Incident Database shows incidents up 50% from 2022 to 2024, and by October 2025 the year’s count had already surpassed the prior year’s total. Researchers at MIT FutureTech have built an AI tool that classifies incidents by type of harm and severity, aiming to cut through media “noise” and help regulators spot trends faster.

The mix of harms is shifting. Reports of discrimination and misinformation dipped in 2025, but cases involving human-computer interaction, such as chatbot-induced delusions, rose, and malicious uses like scams and disinformation have increased eightfold since 2022. Deepfake incidents now outnumber those tied to autonomous vehicles, facial recognition, and content moderation combined; recent controversy over sexually explicit image generation by xAI’s Grok has prompted government action abroad and scrutiny in the U.K.

Companies are backing provenance tools such as Content Credentials, though support is uneven. Regulators in the EU and California require reporting of serious incidents, yet many harms may still go uncounted. With AI increasingly potent in domains like cybersecurity (Anthropic flagged a large-scale attack attempt using its code assistant), experts warn that both acute crises and slower-burning societal harms are accumulating.