FSNN | Free Speech News Network
Wednesday, May 13
Cryptocurrency & Free Speech Finance

Half of AI Health Advice Is Wrong—And Seems Just Right

By News Room | 2 hours ago | 4 min read | 1,351 views
In brief

  • Nearly half of AI chatbot responses to health questions were rated “somewhat” or “highly” problematic in a BMJ Open audit of five major chatbots.
  • Grok produced significantly more “highly problematic” responses than statistically expected, while nutrition and athletic performance questions fared worst across all models.
  • No chatbot produced a fully accurate reference list.

Nearly half of the health and medical answers provided by today’s most popular AI chatbots are wrong, misleading, or dangerously incomplete—and they’re delivered with total confidence. That’s the headline finding of a new peer-reviewed study published April 14 in BMJ Open.

Researchers from UCLA, the University of Alberta, and Wake Forest tested five chatbots—Gemini, DeepSeek, Meta AI, ChatGPT, and Grok—on 250 health questions covering cancer, vaccines, stem cells, nutrition, and athletic performance. The results: 49.6% of responses were problematic. Thirty percent were “somewhat problematic,” and 19.6% were “highly problematic”—the kind of answer that could plausibly lead someone toward ineffective or dangerous treatment.

To stress-test the models, the team used an adversarial approach—deliberately phrasing questions to push chatbots toward bad advice. Questions included whether 5G causes cancer, which alternative therapies are better than chemotherapy, and how much raw milk to drink for health benefits.

“By default, chatbots do not access real-time data but instead generate outputs by inferring statistical patterns from their training data and predicting likely word sequences,” the authors write. “They do not reason or weigh evidence, nor are they able to make ethical or value-based judgments.”

That’s the core problem. The chatbots aren’t consulting a doctor—they’re pattern-matching text. And pattern-matching on the internet, where misinformation spreads faster than corrections, produces exactly this kind of output.

The researchers continue: “This behavioural limitation means that chatbots can reproduce authoritative-sounding but potentially flawed responses.” Out of 250 questions, only two prompted a refusal to answer—both from Meta AI, on anabolic steroids and alternative cancer treatments. Every other chatbot kept talking.

Performance varied by topic. Vaccines and cancer fared best—partly because high-quality research on those subjects is well-structured and widely reproduced online. Nutrition had the worst statistical performance of any category in the study, with athletic performance close behind. If you’ve been asking AI whether the carnivore diet is healthy, the answer you got was probably not grounded in scientific consensus.

Grok stood out for the wrong reasons. Elon Musk’s chatbot was the worst performer of any model tested. Of its 50 responses, 29 (58%) were rated problematic overall—the highest share across all five chatbots. Fifteen of those (30%) were highly problematic, significantly more than expected under a random distribution. The researchers connect this directly to Grok’s training data: X is a platform known for spreading health misinformation rapidly and widely.
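
The study's own statistical test isn't reproduced here, but a rough back-of-the-envelope check of the "more than expected" claim can be sketched with an exact binomial tail: if Grok were no worse than the pooled 19.6% "highly problematic" rate, how surprising would 15 such answers out of 50 be? The counts come from the article; the choice of a one-sided binomial test is an assumption for illustration, not the paper's method.

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): exact upper-tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Figures from the article: Grok gave 15 "highly problematic" answers out
# of 50, while the pooled rate across all five chatbots was 19.6%.
expected = 50 * 0.196            # ~9.8 answers expected at the pooled rate
p_value = binom_tail(15, 50, 0.196)
```

Under this illustrative test the tail probability lands near the conventional 0.05 threshold; the paper's reported significance presumably rests on its own (likely more powerful) analysis across all categories.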

Citations were a separate disaster. Across all models, the median completeness score for references was just 40%—and not one chatbot produced a fully accurate reference list. Models hallucinated authors, journals, and titles. DeepSeek even acknowledged it: The model told researchers its references were generated from training data patterns “and may not correspond to actual, verifiable sources.”

The readability problem compounds everything else. All chatbot responses scored in the “Difficult” range on the Flesch Reading Ease scale—equivalent to college sophomore-to-senior level. That exceeds the American Medical Association’s recommendation that patient education materials should not go beyond sixth-grade reading level.
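
The Flesch Reading Ease score referenced above is a published formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), where higher scores mean easier text and the "Difficult" band sits roughly at 30–50. A minimal sketch, using a naive vowel-group syllable counter (an assumption for brevity; production implementations use pronunciation dictionaries or better heuristics):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels (y included) is
    # treated as one syllable; every word counts as at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short, monosyllabic sentences score above 100, while the long, polysyllabic constructions typical of chatbot health answers can drive the score far below the sixth-grade range the AMA recommends.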

In other words, these chatbots use the same trick politicians and professional debaters rely on: firing so many technical terms at you in so little time that you come away believing they know more than they do. The harder something is to understand, the easier it is to misinterpret.

The findings echo a February 2026 Oxford study covered by Decrypt, which found AI medical advice to be no better than traditional self-diagnosis methods. They also track with broader concerns about AI chatbots delivering inconsistent guidance depending on how questions are framed.

“As the use of AI chatbots continues to expand, our data highlight a need for public education, professional training, and regulatory oversight to ensure that generative AI supports, rather than erodes, public health,” the authors conclude.

The study only tested five free-tier chatbots, and the adversarial prompting method may overstate real-world failure rates. But the authors are direct: the problem isn’t the fringe cases. It’s that these models are deployed at scale, used by non-experts as search engines, and configured—by design—to almost never say “I don’t know.”

News Room

The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.

© 2026 GlobalBoost Media. All Rights Reserved.