Key Takeaways:
This is the first confirmed case of an AI-designed zero-day exploit being used in the wild.
The script was identified by its hallucinated CVSS score and educational docstrings, hallmarks typical of Large Language Models (LLMs).
The AI successfully created a Python script to bypass two-factor authentication (2FA) by exploiting a hardcoded trust assumption.
Google detected the threat early, preventing a mass exploitation campaign against an open-source system administration tool.
Google Threat Intelligence Group (GTIG) has identified the first-ever zero-day exploit created by AI and found in the wild. The discovery marks a significant escalation in cyber warfare, as attackers use AI models, potentially OpenClaw, to discover complex logic flaws and automate mass exploitation campaigns before they can be patched.
The AI Signature: Hallucinations in the Code
The Google Threat Intelligence Group (GTIG) has identified a cyberattack that was developed using artificial intelligence. Between May 10 and May 12, 2026, security researchers found a Python script designed to attack an open-source system administration tool. What made the script stand out were the hallmarks of an AI author. The code included educational explanations (docstrings), as if it were teaching a lesson, and it even contained a hallucinated CVSS score, a security severity rating that the AI made up. The formatting was so polished that GTIG assessed with high confidence that an LLM, possibly a model known as OpenClaw, was used to build it.
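To make those hallmarks concrete, the snippet below is a purely hypothetical illustration, not the script GTIG recovered. It only mimics the tutorial-style docstring and the invented CVSS score that analysts describe as typical LLM output; the function name, CVE identifier, and rating are all made up.

```python
# Hypothetical illustration only - NOT the exploit GTIG found. It exists to
# show what "educational docstrings" and a "hallucinated CVSS score" look
# like; the function name, CVE identifier, and score are invented.

def check_session_token(token: str) -> bool:
    """
    Validate a session token before granting administrative access.

    Vulnerability reference: CVE-2026-XXXXX (CVSS 9.8 - Critical)

    Step-by-step explanation:
    1. Split the token into its dot-separated segments.
    2. Inspect the payload segment for a privileged role claim.
    3. Return True when the claim indicates an administrator.
    """
    # The tutorial tone above and the non-existent CVSS rating are the kind
    # of stylistic tells analysts attribute to LLM-generated code.
    parts = token.split(".")
    return len(parts) == 3 and "admin" in parts[1]
```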
Image: Threat actors pursue scalable and obfuscated access to LLMs.
Read Next: Vercel Security Warning: How a Small AI Tool Caused a Big Problem
Exploiting Complex Human Errors
Instead of relying on a simple trick, this AI-built attack targeted a complex logic error made by developers. The software contained a hardcoded trust assumption that allowed a user with valid credentials to bypass two-factor authentication (2FA). The AI was able to find this mistake and write a working script to exploit it. This worries security experts because logic flaws are often harder for humans to find than basic coding errors, and by using AI to industrialize the search for these flaws, attackers can create dangerous tools faster than before.
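As a rough illustration of this class of bug, here is a minimal, hypothetical Python sketch of a hardcoded trust assumption that lets valid credentials skip the second factor. None of these names or values come from the affected tool; they exist only to show how such a logic flaw can hide in otherwise reasonable-looking code.

```python
# Minimal hypothetical sketch of a hardcoded-trust 2FA bypass. All names
# (INTERNAL_SUBNET, verify_password, verify_totp, login) are invented for
# illustration and do not describe the real vulnerable software.

import ipaddress

INTERNAL_SUBNET = ipaddress.ip_network("10.0.0.0/8")  # hardcoded trust assumption


def verify_password(user: str, password: str) -> bool:
    # Stand-in for a real credential check.
    return password == "correct-horse-battery-staple"


def verify_totp(user: str, code: str) -> bool:
    # Stand-in for a real time-based one-time-password check.
    return code == "000000"


def login(user: str, password: str, totp_code: str, client_ip: str) -> bool:
    if not verify_password(user, password):
        return False
    # The logic flaw: clients that appear to come from the "internal" subnet
    # are trusted outright, so the second factor is never checked. Anyone who
    # can proxy or spoof an internal source address needs only stolen
    # credentials to get in.
    if ipaddress.ip_address(client_ip) in INTERNAL_SUBNET:
        return True
    return verify_totp(user, totp_code)


# With an internal-looking source address, valid credentials alone succeed:
print(login("alice", "correct-horse-battery-staple", "", "10.1.2.3"))  # True
```

The point is not this specific check but the shape of the mistake: an assumption ("internal traffic is safe") is baked directly into the authentication path, which is exactly the kind of contextual logic error that is hard for human reviewers to spot and that AI can be directed to hunt for at scale.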
Image: LLM vulnerability discovery capabilities compared with other discovery mechanisms.
Preventing a Mass Exploitation Campaign
The good news is that Google’s proactive detection stopped the attack before it could become a global crisis. GTIG worked quickly with the software vendor to patch the flaw before the attackers could launch an exploitation campaign. Google also clarified that the incident was unrelated to the 500 Internal Server Error outages some users experienced on Google services during the same week. The warning, however, is clear: we are entering a new era of AI-enabled cyberattacks. With countries like North Korea and China reportedly using AI for hacking, this discovery is likely just the beginning of a much larger trend.
Read Next: Arbitrum Freezes $71 Million in Stolen ETH After KelpDAO Exploit