Google said it detected a previously unknown hacking group deploying an AI-crafted zero-day exploit to defeat two-factor authentication on a widely used open-source system administration tool, the first such case the company has seen in the wild. The Python-based exploit bore hallmarks of code produced by large language models, according to Google’s Threat Intelligence Group, which worked with the unnamed vendor to patch the flaw.
The disclosure underscores how AI is compressing the timeline from vulnerability discovery to weaponization. Security researchers said attackers can now identify logic errors, such as hard-coded trust assumptions, more efficiently, accelerating exploitation even when valid credentials are required.
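To illustrate what a "hard-coded trust assumption" can look like in practice, here is a minimal, entirely hypothetical Python sketch (the actual flaw Google described is undisclosed): a login flow where the server trusts a client-supplied flag claiming two-factor authentication already succeeded, instead of its own session state.

```python
# Hypothetical example of a logic flaw, not the patched vulnerability.
# The bug: the server trusts a field the client controls.

def login_vulnerable(request: dict, valid_creds: bool) -> bool:
    """Grants access if credentials are valid and the CLIENT says 2FA passed."""
    if not valid_creds:
        return False
    # BUG: "mfa_verified" comes from the request body. An attacker holding
    # stolen credentials just sends {"mfa_verified": True} to skip 2FA.
    return request.get("mfa_verified", False)

def login_fixed(request: dict, valid_creds: bool, session_mfa_ok: bool) -> bool:
    """Grants access only if SERVER-SIDE session state records a 2FA success."""
    return valid_creds and session_mfa_ok

# Attacker with valid (stolen) credentials but no second factor:
attacker_request = {"mfa_verified": True}
print(login_vulnerable(attacker_request, valid_creds=True))                 # True: 2FA bypassed
print(login_fixed(attacker_request, valid_creds=True, session_mfa_ok=False))  # False: bypass blocked
```

This class of bug is exactly the kind of logic error researchers say LLMs can surface quickly, since it requires reading the authentication flow rather than fuzzing for memory corruption.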
Google also detailed a surge in AI-enabled operations, including an Android backdoor dubbed PromptSpy that uses Gemini to analyze on-screen content, persist by blocking uninstalls with overlays, and replay biometric gestures. The company cited activity by China-, Russia-, Iran-, and North Korea–linked actors experimenting with LLMs for research and tooling, as well as a gray market of API relay services in China that proxy access to premium models and create new data-exfiltration risks.
While Google found no evidence its own Gemini model was used in the zero-day scheme, it warned that adversaries are professionalizing access to high-tier AI services and targeting AI development environments themselves, broadening the software supply-chain threat landscape for enterprises.
Related articles:
— OWASP Top 10 for Large Language Model Applications
— MITRE ATT&CK: Modify Authentication Process (T1556)
— evilginx2: Standalone Man-in-the-Middle Attack Framework for Login Phishing
— Passkeys (FIDO Alliance)