Militant groups are increasingly experimenting with artificial intelligence to amplify propaganda, recruit followers and sharpen cyber operations, according to national-security experts and U.S. officials. Analysts say Islamic State affiliates and other extremists have used generative tools to produce deepfake imagery, audio and multilingual messaging, and to automate influence campaigns across social platforms.

While sophisticated applications remain largely aspirational compared with those of state actors, officials warn the threat will grow as powerful, inexpensive AI proliferates and is weaponized for phishing, malware generation and, potentially, lowering the barriers to biological or chemical plots.

Lawmakers are moving to respond: a House-passed measure would require annual assessments of the AI risks posed by extremist groups, and Senate Intelligence Committee leaders are pressing AI developers to share more information about misuse. The Department of Homeland Security has also flagged AI-enabled threats in its latest assessment.

Security firms caution that AI reduces the cost and expertise needed for high-impact operations, allowing small actors to scale propaganda and sow confusion at speed.