Google has confirmed for the first time that zero-day exploits are being developed with AI in real-world attacks; Chinese, North Korean, and Russian hacking groups are exploiting the technology in different ways.
Google's Threat Intelligence team confirmed for the first time on Monday that cybercriminals used an AI model to discover and weaponize a zero-day vulnerability in a popular open-source webmaster tool. The flaw allowed attackers to bypass two-factor authentication (2FA), and they had prepared a mass exploitation campaign before Google intervened.
The technical highlight is how the AI found a vulnerability that traditional scanning tools missed. Instead of searching for error codes or system crashes, the model analyzed the software's operational logic and detected a conflict between hardcoded conditions and the 2FA protection mechanism, one that lets an attacker bypass the authentication layer without breaking any encryption.
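To make that class of flaw concrete, here is a minimal, hypothetical sketch in Python. Google has not published the vulnerable code, so every name, string, and condition below is invented; the point is only the shape of the bug, a hardcoded condition that short-circuits the second factor.

```python
# Hypothetical sketch of the class of logic flaw described above.
# This is invented illustration code, not the vulnerable tool's source;
# all names, strings, and conditions are assumptions.
import hmac

USERS = {"alice": {"password": "s3cret", "totp": "492817"}}
LEGACY_API_CLIENTS = {"legacy-sync-agent"}  # hardcoded allowlist left behind

def check_password(username: str, password: str) -> bool:
    user = USERS.get(username)
    return user is not None and hmac.compare_digest(user["password"], password)

def verify_totp(username: str, otp: str) -> bool:
    user = USERS.get(username)
    return user is not None and hmac.compare_digest(user["totp"], otp)

def login(username: str, password: str, otp: str | None, user_agent: str) -> bool:
    if not check_password(username, password):
        return False
    # BUG: a hardcoded condition meant for an old integration conflicts
    # with the 2FA requirement. A request presenting this User-Agent skips
    # the second factor entirely -- no cryptography is broken, only logic.
    if user_agent in LEGACY_API_CLIENTS:
        return True
    return otp is not None and verify_totp(username, otp)

# The flaw never crashes and never raises an error, which is why
# crash-driven scanners miss it:
assert login("alice", "s3cret", None, "legacy-sync-agent")  # no OTP needed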
According to Google, this is a form of contextual inference that large language models are increasingly capable of, and that older-generation automated tools essentially cannot perform.
AI amplifies attack capabilities in multiple directions.
Google's report paints a stark picture of how state-sponsored attack groups exploit AI. Actors from China and North Korea focus on using the technology to find software vulnerabilities and develop exploit code, while groups linked to Russia apply AI for other purposes: creating polymorphic malware with automatically generated layers of concealment and deceptive logic to evade defenses.
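The defensive problem this creates can be illustrated without any actual malware. In a simplified model of signature-based detection (an assumption for this sketch, not Google's analysis), two behaviorally identical pieces of code present entirely different byte-level fingerprints, which is exactly what automatically generated concealment layers exploit:

```python
# Benign sketch of why byte-level signatures fail against rewritten code.
# Uses a deliberately simplified model of signature matching.
import hashlib

# Two textually different but behaviorally identical variants.
variant_a = "def double(x):\n    return x * 2\n"
variant_b = "def double(n):\n    total = n + n\n    return total\n"

# A signature-based scanner sees two unrelated byte strings.
print(hashlib.sha256(variant_a.encode()).hexdigest()[:16])
print(hashlib.sha256(variant_b.encode()).hexdigest()[:16])

# Behaviorally, they are the same function.
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
assert ns_a["double"](21) == ns_b["double"](21) == 42
```

Real scanners rely on more than plain hashes, but each automatically rewritten layer pushes defenders away from pattern matching and toward costlier behavioral analysis.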
Google also noted more sophisticated techniques, such as vibe-hacking and the integration of specialized security datasets to improve the accuracy of vulnerability detection.
However, not everyone in the research community agrees with Google's level of alarm. A study led by the University of Cambridge, which analyzed more than 90,000 threads on cybercrime forums, found that most criminals use AI for spam and phishing scams, not to build sophisticated attacks.
The study also argues that the role of cracked AI models is being exaggerated: newcomers to the cybercrime community still value learning from other members as much as the AI tools themselves.
Despite the debate over the threat's actual scale, Google's confirmation comes as the entire industry grapples with the consequences of its own technology. Earlier this year, Anthropic restricted access to Claude Mythos after tests showed the model could identify thousands of previously unknown software vulnerabilities.
Mozilla, after using Mythos to discover 271 vulnerabilities in Firefox in a single test, admitted to being overwhelmed by the number of findings and questioned whether the rate of patching could keep up with the rate of discovery. This is precisely the question that the cybersecurity industry has yet to answer satisfactorily.
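The tension Mozilla describes is at bottom arithmetic: if AI discovers flaws faster than teams can triage and patch them, the backlog grows without bound. A toy Python model makes this visible; only the 271 initial findings come from the article, while both weekly rates are hypothetical assumptions:

```python
# Toy backlog model with invented parameters. Only the 271 initial
# findings come from the article; both weekly rates are hypothetical.
initial_findings = 271   # reported from a single Firefox test run
new_per_week = 40        # assumption: fresh AI-discovered findings per week
patched_per_week = 25    # assumption: triage-and-patch throughput per week

backlog = initial_findings
for week in range(1, 13):
    backlog += new_per_week - patched_per_week
    print(f"week {week:2d}: backlog = {backlog}")

# Because discovery here outpaces patching (40 > 25), the backlog grows
# linearly and never clears -- the unanswered question in the article.
```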