FSNN | Free Speech News Network
Monday, May 11
Cryptocurrency & Free Speech Finance

Hackers Used AI to Build a Zero-Day Exploit That Bypasses Two-Factor Authentication: Google

By News Room · 2 hours ago · 4 min read · 167 views
In brief

  • Google’s Threat Intelligence Group confirmed that cybercriminals used AI to develop a zero-day exploit targeting a popular open-source web administration tool.
  • Google said this is the first time the company has identified AI-assisted zero-day development in the wild.
  • Google worked with the affected vendor to patch the vulnerability before the campaign scaled, but said threat actors linked to China and North Korea are also actively using AI for vulnerability research and exploit development.

Cybercriminals used an AI model to discover and weaponize a zero-day vulnerability in a popular open-source web administration tool, according to Google’s Threat Intelligence Group.

In a report published Monday, Google said the flaw let attackers bypass two-factor authentication, and warned that the attackers were preparing a mass exploitation campaign before the company intervened. It is the first time Google has confirmed AI-assisted zero-day development in the wild.

“As the coding capabilities of AI models advance, we continue to observe adversaries increasingly leverage these tools as expert-level force multipliers for vulnerability research and exploit development, including for zero-day vulnerabilities,” Google wrote. “While these tools empower defensive research, they also lower the barrier for adversaries to reverse-engineer applications and develop sophisticated, AI-generated exploits.”

The report comes as researchers and governments warn that AI models are accelerating cyberattacks by helping hackers find vulnerabilities, generate malware, and automate exploit development.

“Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer’s intent to correlate the 2FA enforcement logic with the contradictions of its hardcoded exceptions,” the report said. “This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective.”

According to Google, the unnamed attackers used AI to identify a logic flaw where the software trusted a condition that bypassed its two-factor authentication protections. Unlike traditional scanners that search for broken code or crashes, the AI analyzed how the software was intended to work and detected the contradiction, allowing attackers to bypass the security check without breaking the encryption itself.
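Google did not publish the vulnerable code, so the following is a purely hypothetical Python sketch of the *class* of flaw the report describes: 2FA is enforced on the main login path, but a hardcoded exception (here, an invented "trusted client" header check) contradicts it. The code passes traditional review because every branch is syntactically correct; the bug is in the developer's intent. All names and the bypass condition are illustrative assumptions, not details from Google's report.

```python
# Hypothetical illustration of a "dormant logic error" that bypasses 2FA.
# Not the actual vulnerable software; all names and logic are invented.

USERS = {"alice": {"password": "s3cret", "otp": "123456"}}

def check_password(name, password):
    user = USERS.get(name)
    return user is not None and user["password"] == password

def verify_otp(name, otp):
    # Intended second factor: a one-time code the user must supply.
    return otp is not None and USERS[name]["otp"] == otp

def login(name, password, otp=None, headers=None):
    """2FA is required on the main path, but a leftover exception for
    a 'legacy admin' client -- keyed on an attacker-controllable header --
    skips it entirely. Functionally correct code, strategically broken."""
    headers = headers or {}
    if not check_password(name, password):
        return False
    if headers.get("X-Internal-Client") == "legacy-admin":
        return True                      # hardcoded exception: 2FA bypassed
    return verify_otp(name, otp)         # intended 2FA enforcement
```

A scanner looking for crashes or malformed code finds nothing here; only by correlating the developer's intent ("2FA is mandatory") with the contradictory exception does the bypass surface, which is the kind of contextual reasoning the report attributes to the model.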

“AI-driven coding has accelerated the development of infrastructure suites and polymorphic malware by adversaries,” Google wrote. “These AI-enabled development cycles facilitate defense evasion by enabling the creation of obfuscation networks and the integration of AI-generated decoy logic in malware that we have linked to suspected Russia-nexus threat actors.”

The report says that threat actors from China and North Korea are using AI to find software weaknesses, while Russian groups are using it to hide their malware.

“These actors have leveraged sophisticated approaches toward AI-augmented vulnerability discovery and exploitation, beginning with persona-driven jailbreaking attempts and the integration of specialized, high-fidelity security datasets to augment their vulnerability discovery and exploitation workflows,” Google wrote.

While Google’s report aimed to warn about the growing risk of AI-powered cyberattacks, some researchers argue that the fear is overblown. A separate study, led by Cambridge University researchers and covering over 90,000 cybercrime forum threads, found that most criminals were using AI for spam and phishing rather than for “vibe coding” sophisticated cyberattacks.

“The role of jailbroken LLMs (Dark AI) as instructors is also overstated, given the prominence of subculture and social learning in initiation – new users value the social connections and community identity involved in learning hacking and cybercrime skills as much as the knowledge itself,” the study said. “Our initial results, therefore, suggest that even bemoaning the rise of the Vibercriminal may be overstating the level of disruption to date.”

Despite Cambridge’s findings, the Threat Intelligence Group’s report also comes as Google has faced security concerns tied to its own AI-powered tools. In April, the company patched a prompt injection flaw in its Antigravity AI coding platform that researchers said could let attackers execute commands on a developer’s machine through manipulated prompts.

“Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” Google researchers wrote.

Earlier this year, Anthropic restricted access to its Claude Mythos model after tests showed it could identify thousands of previously unknown software flaws. The findings also add to growing concerns that AI models are reshaping cybersecurity by helping both defenders and attackers find vulnerabilities faster.

“As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus,” Mozilla wrote in a blog post in April. “For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.”

News Room
The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.
