Microsoft’s latest threat-intelligence report warns that generative AI is accelerating nearly every stage of cyberattacks, from crafting multilingual phishing lures to generating and debugging malware and building attack infrastructure. The company says nation-state actors, including North Korean groups such as Jasper Sleet and Coral Sleet, are using AI to create convincing fake identities and secure remote jobs that grant insider access.

While AI today mainly augments human operators, adversaries are probing model guardrails through “jailbreaking” and running early experiments with autonomous, agentic systems. Microsoft is folding AI into its own defenses to spot anomalous behavior across billions of signals, and it urges organizations to harden identity controls, enable multifactor authentication, and monitor for suspicious credential use.

The broader takeaway: AI is lowering the technical barrier to entry, fueling a cyber arms race in which attackers and defenders alike are rapidly upgrading their toolkits.
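To make the defensive recommendation concrete, here is a minimal, illustrative sketch of one common way to monitor for suspicious credential use: an “impossible travel” check over sign-in logs, which flags an account that appears in two places faster than any plausible trip. The field names and the 900 km/h speed threshold are assumptions for illustration, not anything from Microsoft's report or products.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Illustrative sketch: flag "impossible travel" in sign-in logs.
# Schema and threshold are hypothetical, chosen for this example only.

@dataclass
class SignIn:
    user: str
    time: datetime
    lat: float
    lon: float

def haversine_km(a: SignIn, b: SignIn) -> float:
    """Great-circle distance between two sign-in locations, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km

def impossible_travel(events: list[SignIn], max_kmh: float = 900.0) -> list[tuple[SignIn, SignIn]]:
    """Flag consecutive sign-ins by the same user whose implied speed exceeds max_kmh."""
    flagged: list[tuple[SignIn, SignIn]] = []
    last_seen: dict[str, SignIn] = {}
    for ev in sorted(events, key=lambda e: e.time):
        prev = last_seen.get(ev.user)
        if prev is not None:
            hours = (ev.time - prev.time).total_seconds() / 3600
            if hours > 0 and haversine_km(prev, ev) / hours > max_kmh:
                flagged.append((prev, ev))
        last_seen[ev.user] = ev
    return flagged

# Example: a sign-in from New York followed one hour later by one from
# London implies a speed of several thousand km/h, so it is flagged.
events = [
    SignIn("alice", datetime(2024, 1, 1, 9, 0), 40.7, -74.0),   # New York
    SignIn("alice", datetime(2024, 1, 1, 10, 0), 51.5, -0.1),   # London, 1h later
    SignIn("bob",   datetime(2024, 1, 1, 9, 0), 40.7, -74.0),   # single sign-in, ignored
]
alerts = impossible_travel(events)
```

Real deployments would fold signals like this into a broader anomaly-detection pipeline rather than rely on a single heuristic, but the sketch shows the kind of credential monitoring the report's guidance points toward.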
Related articles:
IBM X-Force Threat Intelligence Index
Guidelines for Secure AI System Development (NCSC)