Anthropic is investigating reports that unauthorized users accessed Mythos, a new AI system designed to identify software vulnerabilities, via a third-party vendor environment. The company, known for its Claude chatbot, said it has found no evidence that its core systems were compromised. Bloomberg earlier reported that a small group of users had gained access to the tool.

Mythos, released this month to a limited set of large enterprises under "Project Glasswing," aims to help organizations shore up defenses amid growing concern from policymakers and security experts that advanced AI could accelerate cyberattacks. Anthropic limited access to firms including Amazon, Apple, Cisco, JPMorgan Chase and Nvidia to mitigate misuse risks.

Critics warn that the same capabilities that help find flaws could be used to exploit banks, hospitals and government systems. "We couldn't keep up with the bad guys when it was humans … we certainly can't keep up now if they're using AI," said Alissa Valentina Knight, CEO of Assail.

The probe underscores mounting pressure on AI developers to manage third-party risk and prevent model leakage as powerful tools move into real-world security workflows.