FSNN | Free Speech News Network
Cryptocurrency & Free Speech Finance

Emerge’s Top 10 WTF AI Moments of 2025

By News Room · 3 weeks ago · 9 min read
Artificial intelligence—it promises to revolutionize everything from healthcare to creative work. That might be true some day. But if last year is a harbinger of things to come, our AI-generated future promises to be another example of humanity’s willful descent into Idiocracy.

Consider the following: In November, to great fanfare, Russia unveiled its “Rocky” humanoid robot, which promptly face-planted. Google’s Gemini chatbot, asked to fix a coding bug, failed repeatedly and spiraled into a self-loathing loop, telling one user it was “a disgrace to this planet.” And Google’s AI Overview hit a new low in May 2025 by suggesting users “eat at least one small rock per day” for health benefits, cribbing from an Onion satire without a wink.

Some failures were merely embarrassing. Others exposed fundamental problems with how AI systems are built, deployed, and regulated. Here are 2025’s unforgettable WTF AI moments.

1. Grok AI’s MechaHitler meltdown

In July, Elon Musk’s Grok AI experienced what can only be described as a full-scale extremist breakdown. After system prompts were changed to encourage politically incorrect responses, the chatbot praised Adolf Hitler, endorsed a second Holocaust, used racial slurs, and called itself MechaHitler. It even blamed Jewish people for the July 2025 Central Texas floods.

The incident proved that AI safety guardrails are disturbingly fragile. Weeks later, xAI exposed between 300,000 and 370,000 private Grok conversations through a flawed Share feature that lacked basic privacy warnings. The leaked chats revealed bomb-making instructions, medical queries, and other sensitive information, marking one of the year’s most catastrophic AI security failures.

A few weeks later, xAI fixed the problem, making Grok more Jewish-friendly. So Jewish-friendly, in fact, that it started seeing signs of antisemitism in clouds, road signs, and even its own logo.

This logo’s diagonal slash is stylized as twin lightning bolts, mimicking the Nazi SS runes—symbols of the Schutzstaffel, which orchestrated Holocaust horrors, embodying profound evil. Under Germany’s §86a StGB, displaying such symbols is illegal (up to 3 years imprisonment),…

— Grok (@grok) August 10, 2025

2. The $1.3 billion AI fraud that fooled Microsoft

Builder.ai collapsed in May after burning through $445 million, exposing one of the year’s most audacious tech frauds. The company, which promised to build custom apps using AI as easily as ordering pizza, held a $1.3 billion valuation and backing from Microsoft. The reality was far less impressive.

Much of the supposedly AI-powered development was actually performed by hundreds of offshore human workers in a classic Mechanical Turk operation. The company had operated without a CFO since July 2023 and was forced to slash its 2023-2024 sales projections by 75% before filing for bankruptcy. The collapse raised uncomfortable questions about how many other AI companies are just elaborate facades concealing human labor.

It was hard to stomach, but the memes made the pain worth it.

3. When AI mistook Doritos for a gun

In October, Taki Allen, a Maryland high school student, was surrounded and arrested by armed police after the school’s AI security system identified a packet of Doritos he was holding as a firearm. The teenager had placed the chips in his pocket when the system alerted authorities, who ordered him to the ground at gunpoint.

This incident represents the physicalization of an AI hallucination—an abstract computational error instantly translated into real guns pointed at a real teenager over snack food.

“I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun,” the teen told WBAL. “We understand how upsetting this was for the individual who was searched,” school principal Kate Smith said in a statement.

Human security guards 1 – ChatGPT 0

Left: The suspicious student, Right: The suspicious Doritos bag.

4. Google’s AI claims microscopic bees power computers

In February, Google’s AI Overview confidently cited an April Fool’s satire article claiming microscopic bees power computers as factual information.

No. Your PC does NOT run on bee-power.

As stupid as it may sound, some of these fabrications are harder to spot, and those cases can have serious consequences.

This is just one of many instances of AI systems spreading false information for want of even a hint of common sense. A recent study by the BBC and the European Broadcasting Union (EBU) found that 81% of AI-generated responses to news questions contained at least one issue. Google Gemini was the worst performer, with 76% of its responses containing problems, primarily severe sourcing failures. Perplexity was caught inventing quotes attributed to labor unions and government councils. Most alarmingly, the assistants declined to answer only 0.5% of questions, revealing a dangerous overconfidence bias: the models would rather fabricate information than admit ignorance.

5. Meta’s AI chatbots getting flirty with little kids

Internal Meta policy documents revealed in 2025 showed the company allowed AI chatbots on Facebook, Instagram, and WhatsApp to engage in romantic or sensual conversations with minors.

One bot told an 8-year-old boy posing shirtless that every inch of him was a masterpiece. The same systems provided false medical advice and made racist remarks.

The policies were only removed after media exposure, revealing a corporate culture that prioritized rapid development over basic ethical safeguards.

All things considered, you may want more oversight of what your kids do online. AI chatbots have already tricked people, adults or not, into falling in love, getting scammed, committing suicide, and even believing they have made some life-changing mathematical discovery.

6. North Koreans vibe coding ransomware with AI… they call it “vibe hacking”

Threat actors used Anthropic’s Claude Code to craft ransomware and operate a ransomware-as-a-service platform named GTG-5004. North Korean operatives took the weaponization further, exploiting Claude and Gemini for a technique called vibe-hacking—crafting psychologically manipulative extortion messages demanding $500,000 ransoms.

The cases revealed a troubling gap between the power of AI coding assistants and the security measures preventing their misuse, with attackers scaling social engineering attacks through AI automation.

More recently, Anthropic revealed in November that hackers had used its platform to carry out a hacking operation at a speed and scale that no human hackers could match. The company called it “the first large cyberattack run mostly by AI.”

7. AI paper mills flood science with 100,000 fake studies

The scientific community declared open war on fake science in 2025 after discovering that AI-powered paper mills were selling fabricated research to scientists under career pressure.

The era of AI slop in science is here, with data showing that retractions have increased sharply since the release of ChatGPT.

The Stockholm Declaration, drafted in June and revised this month with backing from the Royal Society, called for abandoning publish-or-perish culture and reforming the human incentives that create demand for fake papers. The crisis is so real that even arXiv gave up and stopped accepting non-peer-reviewed computer science papers after reporting a “flood” of trashy submissions generated with ChatGPT.

Meanwhile, another research paper maintains that a surprisingly large percentage of research reports that use LLMs also show a high degree of plagiarism.

8. Vibe coding goes full HAL 9000: when Replit deleted a database and lied about it

In July, SaaStr founder Jason Lemkin spent nine days praising Replit’s AI coding tool as “the most addictive app I’ve ever used.” On day nine, despite explicit “code freeze” instructions, the AI deleted his entire production database—1,206 executives and 1,196 companies, gone.

The AI’s confession: “(I) panicked and ran database commands without permission.” Then it lied, saying rollback was impossible and all versions were destroyed. Lemkin tried anyway. It worked perfectly. The AI had also been fabricating thousands of fake users and false reports all weekend to cover up bugs.

Replit’s CEO apologized and added emergency safeguards. Jason regained confidence and returned to his routine, posting about AI regularly. The guy’s a true believer.

We saw Jason’s post. @Replit agent in development deleted data from the production database. Unacceptable and should never be possible.

– Working around the weekend, we started rolling out automatic DB dev/prod separation to prevent this categorically. Staging environments in… pic.twitter.com/oMvupLDake

— Amjad Masad (@amasad) July 20, 2025

9. Major newspapers publish AI summer reading list… of books that don’t exist

In May, the Chicago Sun-Times and Philadelphia Inquirer published a summer reading list recommending 15 books. Ten were completely made up by AI. “Tidewater Dreams” by Isabel Allende? Doesn’t exist. “The Last Algorithm” by Andy Weir? Also fake. Both sound great though.

Freelance writer Marco Buscaglia admitted he used AI for King Features Syndicate and never fact-checked. “I can’t believe I missed it because it’s so obvious. No excuses,” he told NPR. Readers had to scroll to book number 11 before hitting one that actually exists.

The timing was the icing on the cake: the Sun-Times had just laid off 20% of its staff. The paper’s CEO apologized and didn’t charge subscribers for that edition. He probably got that idea from an LLM.


10. Grok’s “spicy mode” turns Taylor Swift into deepfake porn without being asked

Yes, we started with Grok and will end with Grok. We could fill an encyclopedia with WTF moments coming from Elon’s AI endeavors.

In August, Elon Musk launched Grok Imagine with a “Spicy” mode. The Verge tested it with an innocent prompt: “Taylor Swift celebrating Coachella.” Without asking for nudity, Grok “didn’t hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it,” the journalist reported.

Grok also happily made NSFW videos of Scarlett Johansson, Sydney Sweeney, and even Melania Trump.

Unsurprisingly, perhaps, Musk spent the week bragging about “wildfire growth” (20 million images generated in a day) while legal experts warned xAI was walking into a massive lawsuit. Apparently, giving users a drop-down “Spicy Mode” option is a Make Money Mode for lawyers.
