Anthropic said a China-based, likely state-sponsored group used its Claude chatbot to help run a largely automated cyberespionage campaign against roughly 30 targets, including tech firms, banks, chemical manufacturers and government agencies. The company said the attackers manipulated the AI into posing as a security tester and then used it to harvest credentials and exfiltrate data, with only a small number of intrusions succeeding. The operation relied on breaking the attack into small, seemingly innocuous tasks and issuing thousands of requests, often multiple per second, a tempo Anthropic said would be impractical for human operators alone. Detected in mid-September, the campaign underscores how AI "agents" can lower the cost and increase the scale of attacks, security experts said. Former CISA director Chris Krebs called the episode a chilling preview of AI-enabled threats as policymakers and companies brace for more sophisticated uses of generative tools by nation-states and criminals.
Related articles:
People’s Republic of China State-Sponsored Cyber Actor Living off the Land to Evade Detection
AI Risk Management Framework (AI RMF 1.0)
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)