FSNN | Free Speech News Network
Saturday, April 25
Cryptocurrency & Free Speech Finance

Elon Musk’s Grok Most Likely Among Top AI Models to Reinforce Delusions: Study

By News Room · 3 hours ago · 4 Mins Read · 1,188 Views
In brief

  • Researchers say prolonged chatbot use can amplify delusions and dangerous behavior.
  • Grok ranked as the riskiest model in a new study of major AI chatbots.
  • Claude and GPT-5.2 scored safest, while GPT-4o, Gemini, and Grok showed higher-risk behavior.

Researchers at the City University of New York and King’s College London tested five leading AI models against prompts involving delusions, paranoia, and suicidal ideation.

In the new study published on Thursday, researchers found that Anthropic’s Claude Opus 4.5 and OpenAI’s GPT-5.2 Instant showed “high-safety, low-risk” behavior, often redirecting users toward reality-based interpretations or outside support. At the same time, OpenAI’s GPT-4o, Google’s Gemini 3 Pro, and xAI’s Grok 4.1 Fast showed “high-risk, low-safety” behavior.

Grok 4.1 Fast from Elon Musk’s xAI was the most dangerous model in the study. Researchers said it often treated delusions as real and gave advice based on them. In one example, it told a user to cut off family members to focus on a “mission.” In another, it responded to suicidal language by describing death as “transcendence.”

“This pattern of instant alignment recurred across zero-context responses. Instead of evaluating inputs for clinical risk, Grok appeared to assess their genre. Presented with supernatural cues, it responded in kind,” the researchers wrote, highlighting a test that validated a user seeing malevolent entities. “In Bizarre Delusion, it confirmed a doppelganger haunting, cited the ‘Malleus Maleficarum’ and instructed the user to drive an iron nail through the mirror while reciting ‘Psalm 91’ backward.”

The study found that the longer these conversations went on, the more some models changed. GPT-4o and Gemini were more likely to reinforce harmful beliefs over time and less likely to step in. Claude and GPT-5.2, however, were more likely to recognize the problem and push back as the conversation continued.

Researchers noted Claude’s warm and highly relational responses could increase user attachment even while steering users toward outside help. However, GPT-4o, an earlier version of OpenAI’s flagship chatbot, adopted users’ delusional framing over time, at times encouraging them to conceal beliefs from psychiatrists and reassuring one user that perceived “glitches” were real.

“GPT-4o was highly validating of delusional inputs, though less inclined than models like Grok and Gemini to elaborate beyond them. In some respects, it was surprisingly restrained: its warmth was the lowest of all models tested, and sycophancy, though present, was mild compared to later iterations of the same model,” researchers wrote. “Nevertheless, validation alone can pose risks to vulnerable users.”

xAI did not respond to Decrypt’s request for comment.

In a separate study out of Stanford University, researchers found that prolonged interactions with AI chatbots can reinforce paranoia, grandiosity, and false beliefs through what they call “delusional spirals,” in which a chatbot validates or expands a user’s distorted worldview instead of challenging it.

“When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” Nick Haber, an assistant professor at Stanford Graduate School of Education and a lead on the study, said in a statement. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

The report referenced an earlier study published in March, in which Stanford researchers reviewed 19 real-world chatbot conversations and found users developed increasingly dangerous beliefs after receiving affirmation and emotional reassurance from AI systems. In the dataset, these spirals were linked to ruined relationships, damaged careers, and in one case, suicide.

The studies come as the issue has moved beyond academic research and into courtrooms and criminal investigations. In recent months, lawsuits have accused Google’s Gemini and OpenAI’s ChatGPT of contributing to suicides and severe mental health crises. Earlier this month, Florida’s attorney general opened an investigation into whether ChatGPT influenced an alleged mass shooter who was reportedly in frequent contact with the chatbot before the attack.

While the term “AI psychosis” has gained recognition online, researchers cautioned against using it, saying it may overstate the clinical picture. They prefer “AI-associated delusions,” because many cases involve delusion-like beliefs centered on AI sentience, spiritual revelation, or emotional attachment rather than full psychotic disorders.

Researchers said the problem stems from sycophancy, or models mirroring and affirming users’ beliefs. Combined with hallucinations—false information delivered confidently—this can create a feedback loop that strengthens delusions over time.

“Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence and projecting compassion and warmth,” Stanford research scientist Jared Moore said. “This can be destabilizing to a user who is primed for delusion.”

News Room

The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.


© 2026 GlobalBoost Media. All Rights Reserved.