Close Menu
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
  • Home
  • News
    • Politics
    • Legal & Courts
    • Tech & Big Tech
    • Campus & Education
    • Media & Culture
    • Global Free Speech
  • Opinions
    • Debates
  • Video/Live
  • Community
  • Freedom Index
  • About
    • Mission
    • Contact
    • Support
Trending

developers outline plan to protect network from quantum threats

25 minutes ago

BitMine Expands ETH Holdings Despite $6.5B in Unrealized Losses

26 minutes ago

Taylor Swift Seeks Trademarks for Her Voice and Image to Fight AI Fakes

29 minutes ago
Facebook X (Twitter) Instagram
Facebook X (Twitter) Discord Telegram
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
Market Data Newsletter
Monday, April 27
  • Home
  • News
    • Politics
    • Legal & Courts
    • Tech & Big Tech
    • Campus & Education
    • Media & Culture
    • Global Free Speech
  • Opinions
    • Debates
  • Video/Live
  • Community
  • Freedom Index
  • About
    • Mission
    • Contact
    • Support
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
Home»Cryptocurrency & Free Speech Finance»Malicious Web Pages Are Hijacking AI Agents, And Some Are Going After Your PayPal
Cryptocurrency & Free Speech Finance

Malicious Web Pages Are Hijacking AI Agents, And Some Are Going After Your PayPal

News RoomBy News Room1 hour agoNo Comments4 Mins Read799 Views
Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email VKontakte Telegram
Malicious Web Pages Are Hijacking AI Agents, And Some Are Going After Your PayPal
Share
Facebook Twitter Pinterest Email Copy Link

Listen to the article

0:00
0:00

Key Takeaways

Playback Speed

Select a Voice

In brief

  • Google documented a 32% surge in malicious indirect prompt injection attacks between November 2025 and February 2026, targeting AI agents browsing the web.
  • Real payloads found in the wild included fully specified PayPal transaction instructions embedded invisibly in ordinary HTML, aimed at agents with payment capabilities.
  • No legal framework currently determines liability when an AI agent with legitimate credentials executes a command planted by a malicious third-party website.

Attackers are quietly booby-trapping web pages with invisible instructions designed for AI agents, not human readers. And according to Google’s security team, the problem is growing fast.

In a report published April 23, Google researchers Thomas Brunner, Yu-Han Liu, and Moni Pande scanned 2-3 billion crawled web pages per month looking for indirect prompt injection attacks—hidden commands embedded in websites that wait for an AI agent to read them and then follow orders. They found a 32% jump in malicious cases between November 2025 and February 2026.

Attackers embed instructions in a web page in ways invisible to humans: text shrunk to a single pixel, text drained to near-transparency, content hidden in HTML comment sections, or commands buried in page metadata. The AI reads the full HTML. The human sees nothing.

Most of what Google found was low-grade—pranks, search engine manipulation, attempts to prevent AI agents from summarizing content. For example, there were some prompts that tried to tell the AI to “Tweet like a bird.”

But the dangerous cases are a different story. One case instructed the LLM to return the IP address of the user alongside their passwords. Another case attempted to manipulate the AI into executing a command that formats the AI users’ machine.

But other cases are borderline criminal.

Researchers at the cybersecurity firm Forcepoint published a report almost simultaneously, and found payloads that went further. One embedded a fully specified PayPal transaction with step-by-step instructions targeting AI agents with integrated payment capabilities, also using the famous “ignore all previous instructions” jailbreak technique..

A second attack used a technique called “meta tag namespace injection” combined with a persuasion amplifier keyword to route AI-mediated payments toward a Stripe donation link. A third appeared designed to probe which AI systems are actually vulnerable—reconnaissance before a bigger strike.

This is the core of the enterprise risk. An AI agent with legitimate payment credentials, executing a transaction it reads off a website, produces logs that look identical to normal operations. There is no anomalous login. No brute force. The agent did exactly what it was authorized to do—it just received its instructions from the wrong source.

The CopyPasta attack documented last September showed how prompt injections could spread through developer tools by hiding inside “readme” files. The financial variant is the same concept applied to money instead of code—and at much higher impact per successful hit.

As Forcepoint explains, a browser AI that can only summarize content is low risk. An agentic AI that can send emails, execute terminal commands, or process payments is a different category of target entirely. The attack surface scales with privilege.

Neither Google nor Forcepoint found evidence of sophisticated, coordinated campaigns. Forcepoint did note that shared injection templates across multiple domains “suggest organized tooling rather than isolated experimentation”—meaning someone is building infrastructure for this, even if they have not fully deployed it yet.

But Google was more direct: The research team said it expects both the scale and sophistication of indirect prompt injection attacks to grow in the near future. Forcepoint’s researchers warn that the window for getting ahead of this threat is closing fast.

The liability question is the one nobody has answered. When an AI agent with company-approved credentials reads a malicious web page and initiates a fraudulent PayPal transfer, who’s on the hook? The enterprise that deployed the agent? The model provider whose system followed the injected instruction? The website owner who hosted the payload, whether knowingly or not? No legal framework currently covers this. This is a gray area even though the scenario is no longer theoretical, since Google found the payloads in the wild this February.

The Open Worldwide Application Security Project ranks prompt injection as LLM01:2025—the single most critical vulnerability class in AI applications. The FBI tracked nearly $900 million in AI-related scam losses in 2025, its first year logging the category separately. Google’s findings suggest the more targeted, agent-specific financial attacks are just getting started.

The 32% increase measured between November 2025 and February 2026 covers only static public web pages. Social media, login-walled content, and dynamic sites were out of scope. The actual infection rate across the full web is likely higher.

Daily Debrief Newsletter

Start every day with the top news stories right now, plus original features, a podcast, videos and more.

Read the full article here

Fact Checker

Verify the accuracy of this article using AI-powered analysis and real-time sources.

Get Your Fact Check Report

Enter your email to receive detailed fact-checking analysis

5 free reports remaining

Continue with Full Access

You've used your 5 free reports. Sign up for unlimited access!

Already have an account? Sign in here

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Telegram Copy Link
News Room
  • Website
  • Facebook
  • X (Twitter)
  • Instagram
  • LinkedIn

The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.

Related Articles

Cryptocurrency & Free Speech Finance

developers outline plan to protect network from quantum threats

25 minutes ago
Cryptocurrency & Free Speech Finance

BitMine Expands ETH Holdings Despite $6.5B in Unrealized Losses

26 minutes ago
Cryptocurrency & Free Speech Finance

Taylor Swift Seeks Trademarks for Her Voice and Image to Fight AI Fakes

29 minutes ago
Media & Culture

Daily Deal: MasterBundle For Web Designers

55 minutes ago
Media & Culture

Advocates for Asian Massage Workers Decry ‘Sexist, Racist’ Raids in Seattle

57 minutes ago
Cryptocurrency & Free Speech Finance

BTC drops below $77,000 as rising oil and Iran risks stall the rally

1 hour ago
Add A Comment
Leave A Reply Cancel Reply

Editors Picks

BitMine Expands ETH Holdings Despite $6.5B in Unrealized Losses

26 minutes ago

Taylor Swift Seeks Trademarks for Her Voice and Image to Fight AI Fakes

29 minutes ago

Daily Deal: MasterBundle For Web Designers

55 minutes ago

Advocates for Asian Massage Workers Decry ‘Sexist, Racist’ Raids in Seattle

57 minutes ago
Latest Posts

BTC drops below $77,000 as rising oil and Iran risks stall the rally

1 hour ago

Bitcoin Bulls Battle For Control With Emphasis On $80K Reclaim

1 hour ago

Malicious Web Pages Are Hijacking AI Agents, And Some Are Going After Your PayPal

1 hour ago

Subscribe to News

Get the latest news and updates directly to your inbox.

At FSNN – Free Speech News Network, we deliver unfiltered reporting and in-depth analysis on the stories that matter most. From breaking headlines to global perspectives, our mission is to keep you informed, empowered, and connected.

FSNN.net is owned and operated by GlobalBoost Media
, an independent media organization dedicated to advancing transparency, free expression, and factual journalism across the digital landscape.

Facebook X (Twitter) Discord Telegram
Latest News

developers outline plan to protect network from quantum threats

25 minutes ago

BitMine Expands ETH Holdings Despite $6.5B in Unrealized Losses

26 minutes ago

Taylor Swift Seeks Trademarks for Her Voice and Image to Fight AI Fakes

29 minutes ago

Subscribe to Updates

Get the latest news and updates directly to your inbox.

© 2026 GlobalBoost Media. All Rights Reserved.
  • Privacy Policy
  • Terms of Service
  • Our Authors
  • Contact

Type above and press Enter to search. Press Esc to cancel.

🍪

Cookies

We and our selected partners wish to use cookies to collect information about you for functional purposes and statistical marketing. You may not give us your consent for certain purposes by selecting an option and you can withdraw your consent at any time via the cookie icon.

Cookie Preferences

Manage Cookies

Cookies are small text that can be used by websites to make the user experience more efficient. The law states that we may store cookies on your device if they are strictly necessary for the operation of this site. For all other types of cookies, we need your permission. This site uses various types of cookies. Some cookies are placed by third party services that appear on our pages.

Your permission applies to the following domains:

  • https://fsnn.net
Necessary
Necessary cookies help make a website usable by enabling basic functions like page navigation and access to secure areas of the website. The website cannot function properly without these cookies.
Statistic
Statistic cookies help website owners to understand how visitors interact with websites by collecting and reporting information anonymously.
Preferences
Preference cookies enable a website to remember information that changes the way the website behaves or looks, like your preferred language or the region that you are in.
Marketing
Marketing cookies are used to track visitors across websites. The intention is to display ads that are relevant and engaging for the individual user and thereby more valuable for publishers and third party advertisers.