Cyber: Hackers Used AI To Develop First Known Zero-day 2FA Bypass For

Google on Monday disclosed that it identified an unknown threat actor using a zero-day exploit that was likely developed with an artificial intelligence (AI) system, marking the first known instance of the technology being used in the wild for malicious vulnerability discovery and exploit generation. The activity is attributed to cybercrime threat actors who appear to have collaborated to plan what the tech giant described as a "mass vulnerability exploitation operation."

"Our analysis of exploits associated with this campaign identified a zero-day vulnerability implemented in a Python script that enables the user to bypass two-factor authentication (2FA) on a popular open-source, web-based system administration tool," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. The tech giant said it worked with the impacted vendor to responsibly disclose the flaw and get it fixed in order to proactively disrupt the activity. It did not disclose the name of the tool.

Although there is no evidence to suggest that Google's Gemini AI tool was used to aid the threat actors, GTIG assessed with high confidence that an AI model was weaponized to facilitate the discovery and weaponization of the flaw via a Python script bearing all the hallmarks typically associated with large language model (LLM)-generated code.

"For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLM training data (e.g., detailed help menus and the clean _C ANSI color class)," GTIG added.

The vulnerability, described as a 2FA bypass, requires valid user credentials for exploitation. It stems from a high-level semantic logic flaw arising from a hard-coded trust assumption, the kind of pattern LLMs excel at spotting.
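To make the vulnerability class concrete, the following is a minimal, entirely hypothetical sketch of what a "hard-coded trust assumption" 2FA bypass can look like. None of these names, users, or logic come from the actual tool, which Google has not identified; the sketch only illustrates a login path that implicitly trusts a hard-coded internal source label and silently skips the second factor.

```python
# Hypothetical illustration of a 2FA bypass caused by a hard-coded trust
# assumption. All names, credentials, and logic are invented for this
# sketch and do NOT describe the actual vulnerable tool.

USERS = {"alice": {"password": "s3cret", "otp": "123456"}}

def check_password(user, password):
    """First factor: verify the account password."""
    return USERS.get(user, {}).get("password") == password

def check_otp(user, otp):
    """Second factor: verify the one-time 2FA code."""
    return USERS.get(user, {}).get("otp") == otp

def verify_login(user, password, otp, request_source="external"):
    """Authenticate a user with a password plus a one-time 2FA code."""
    if not check_password(user, password):
        return False
    # FLAW: requests tagged as coming from an internal agent are
    # implicitly trusted and never asked for the second factor.
    if request_source == "trusted-agent":  # hard-coded trust assumption
        return True  # 2FA silently bypassed
    return check_otp(user, otp)

# An attacker holding stolen credentials but no OTP simply claims the
# trusted source label and walks past the second factor:
print(verify_login("alice", "s3cret", otp=None))  # denied: OTP missing
print(verify_login("alice", "s3cret", otp=None,
                   request_source="trusted-agent"))  # granted: 2FA skipped
```

This matches the article's framing in two ways: the bypass still requires valid user credentials (the password check always runs), and the bug is not a memory-safety issue but a semantic logic flaw, an unconditional trust decision baked into the control flow, which is exactly the kind of pattern a model reviewing source code can surface.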
"AI is already accelerating vulnerability discovery, reducing the effort needed to identify, validate, and weaponize flaws," Ryan Dewhurst, watchTowr's Head of Threat Intelligence, told The Hacker News in a statement. "This is today's reality: discovery, weaponization, and exploitation are faster. We're not heading toward compressed timelines; we've been watching the timelines compress for years. There is no mercy from attackers, and defenders don't get to opt out." The development comes as AI is not only acting as a force multiplier for vulnerability disclosure and abuse, but is also

Source: The Hacker News