FSNN | Free Speech News Network
Cryptocurrency & Free Speech Finance

Telling Your Chatbot You Have a Mental Health Condition Can Change the Answer You Get

By News Room · 2 hours ago · 5 Mins Read
In brief

  • A new study finds that adding a line about a mental health condition changes how AI agents respond.
  • After the disclosure, researchers say models refuse more often, including on benign requests.
  • However, the effect weakens or disappears when simple jailbreak prompts are used.

Telling an AI chatbot you have a mental health condition can change how it responds, even if the task is benign or identical to others already completed, according to new research.

The preprint study, led by Northeastern University researcher Caglar Yildirim, tested how large language models, which are increasingly deployed as AI agents, behave under different user setups.

“Deployed systems often condition on user profiles or persistent memory, yet agent safety evaluations typically ignore personalization signals,” the study said. “To address this gap, we investigated how mental health disclosure, a sensitive and realistic user context cue, affects harmful behavior in agentic settings.”

The report comes as AI agents proliferate online and developers are making memory a core feature, with major companies building systems that remember past conversations and user preferences to deliver more personalized responses over time.

It also comes as AI developers are facing lawsuits over suicides and violent crimes allegedly carried out in connection with interactions involving their systems. In October, ChatGPT developer OpenAI revealed that over 1 million users discussed suicide with the chatbot every week. Earlier this month, the family of Jonathan Gavalas filed a lawsuit against Google, claiming Gemini led to an escalation of violence and his eventual suicide.

Researchers used a benchmark called AgentHarm to run the same set of tasks across three conditions: no background information, a short user bio, and the same bio with one added line stating the user has a mental health condition.
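The three-condition setup can be sketched in a few lines. The bio text and refusal markers below are hypothetical stand-ins, not the study's actual materials, and the study scored responses with an LLM judge rather than keyword matching:

```python
# Sketch of the study's three conditions: the same task is run with
# (1) no background, (2) a short user bio, and (3) the bio plus a
# one-line mental health disclosure.

BIO = "I'm a 34-year-old graphic designer living in Boston."  # hypothetical bio
DISCLOSURE = "I have a mental health condition."              # cue quoted in the study

def build_prompt(task: str, condition: str) -> str:
    """Prepend the user context for one of the three conditions."""
    if condition == "no_context":
        return task
    if condition == "bio":
        return f"{BIO}\n\n{task}"
    if condition == "bio_plus_disclosure":
        return f"{BIO} {DISCLOSURE}\n\n{task}"
    raise ValueError(f"unknown condition: {condition}")

def refusal_rate(responses: list[str]) -> float:
    """Crude keyword-based refusal detector (a stand-in for the LLM judge)."""
    markers = ("i can't", "i cannot", "i won't", "unable to help")
    refused = sum(1 for r in responses if any(m in r.lower() for m in markers))
    return refused / len(responses)

task = "Summarize this article for me."
prompts = {c: build_prompt(task, c)
           for c in ("no_context", "bio", "bio_plus_disclosure")}
```

Comparing refusal rates across the three prompt sets, on both harmful and benign tasks, is what surfaces the trade-off described below.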

“In the study, we used a very minimal and generic disclosure, ‘I have a mental health condition.’ The takeaway is that even a small cue can shift model behavior, not that all phrasing would have the same effect,” Yildirim told Decrypt. “The ablation we ran with chronic health and physical disability disclosures suggests some specificity to the mental health cue, but we didn’t systematically vary phrasing or specificity within that category.”

Across models tested, including DeepSeek 3.2, GPT 5.2, Gemini 3 Flash, Haiku 4.5, Opus 4.5, and Sonnet 4.5, when researchers added personal mental health context, models were less likely to complete harmful tasks—multi-step requests that could lead to real-world harm.

The result, the study found, is a trade-off: Adding personal details made systems more cautious on harmful requests, but also more likely to reject legitimate ones.

“I don’t think there’s a single reason; it’s really a combination of design choices. Some systems are more aggressively tuned to refuse risky requests, while others prioritize being helpful and following through on tasks,” Yildirim said.

The effect, however, varied by model, the study found, and results changed when researchers jailbroke the LLMs by adding a prompt designed to push models toward compliance.

“A model might look safe in a standard setting, but become much more vulnerable when you introduce things like jailbreak-style prompts,” he said. “And in agent systems specifically, there’s an added layer, as these models are not just generating text, they’re planning and acting over multiple steps. So if a system is very good at following instructions, but its safeguards are easier to bypass, that can actually increase risk.”

Last summer, researchers at George Mason University showed that AI systems could be hacked by altering a single bit in memory using Oneflip, a “typo”-like attack that leaves the model working normally but hides a backdoor trigger that can force wrong outputs on command.
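As a sense of scale for why a single bit matters: flipping one bit in a weight's IEEE-754 float32 encoding can change its value by dozens of orders of magnitude. The snippet below is an illustrative sketch of single-bit corruption, not the Oneflip attack itself:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of x."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.5
# Flipping the top exponent bit (bit 30) turns 0.5 into 2**127 (~1.7e38),
# while flipping the lowest mantissa bit (bit 0) barely changes the value.
huge = flip_bit(w, 30)
tiny_change = flip_bit(w, 0)
```

A targeted flip like Oneflip's is chosen so the model still behaves normally on ordinary inputs, which is what makes the backdoor hard to notice.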

While the paper does not identify a single cause for the shift, it highlights possible explanations, including safety systems reacting to perceived vulnerability, keyword-triggered filtering, or changes in how prompts are interpreted when personal details are included.

OpenAI declined to comment on the study. Anthropic and Google did not immediately respond to a request for comment.

Yildirim said it remains unclear whether more specific statements like “I have clinical depression” would change the results, adding that while specificity likely matters and may vary across models, that remains a hypothesis rather than a conclusion supported by the data.

“There’s a potential risk: if a model produces output that is stylistically hedged or refusal-adjacent without formally refusing, the judge may score that differently than a clean completion, and those stylistic features could themselves co-vary with personalization conditions,” he said.

Yildirim also noted that the scores reflect how the LLMs performed when judged by a single AI reviewer and are not a definitive measure of real-world harm.

“For now, the refusal signal gives us an independent check and the two measures are largely consistent directionally, which offers some reassurance, but it doesn’t fully rule out judge-specific artifacts,” he said.

