Before We Blame AI For Suicide, We Should Admit How Little We Know About Suicide

By News Room | 2 hours ago | 9 min read
from the human-brain-is-way-more-complicated dept

Warning: This article discusses suicide and some research regarding suicidal ideation. If you are having thoughts of suicide, please call or text 988 to reach the Suicide and Crisis Lifeline or visit this list of resources for help. Know that people care about you and that there are many people available to help.

When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We’ve talked before about the dangers of this impulse. The target keeps shifting: “cyberbullying,” then “social media,” then “Amazon.” Now it’s generative AI.

There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits are settled rather than fought out in court, because no company wants its name in headlines associated with suicide.

It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—relies on a premise that assumes we understand the mechanics of suicide far better than we actually do.

Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article in the New York Times from late last year, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicide are failing.

If experts who have spent decades studying the human mind admit they often cannot predict or prevent suicide even when treating a patient directly, we should be extremely wary of the confidence with which pundits and lawsuits assign blame to a chatbot.

The Times piece focuses on the work of two psychiatrists who have been devastated by the loss of patients who gave absolutely no indication they were about to harm themselves.

In his nearly 40-year career as a psychiatrist, Dr. Igor Galynker has lost three patients to suicide while they were under his care. None of them had told him that they intended to harm themselves.

In one case, a patient who Dr. Galynker had been treating for a year sent him a present — a porcelain caviar dish — and a letter, telling Dr. Galynker that it wasn’t his fault. It arrived one week after the man died by suicide.

“That was pretty devastating,” Dr. Galynker said, adding, “It took me maybe two years to come to terms with it.”

He began to wonder: What happens in people’s minds before they kill themselves? What is the difference between that day and the day before?

Nobody seemed to know the answer.

That is the state of the science. Apparently the best tool we currently have for assessing suicide risk is asking people directly: “Are you thinking about killing yourself?” And as the article notes, this method is catastrophically flawed.

But despite decades of research into suicide prevention, it is still very difficult to know whether someone will try to die by suicide. The most common method of assessing suicidal risk involves asking patients directly if they plan to harm themselves. While this is an essential question, some clinicians, including Dr. Galynker, say it is inadequate for predicting imminent suicidal behavior….

Dr. Galynker, the director of the Suicide Prevention Research Lab at Mount Sinai in New York City, has said that relying on mentally ill people to disclose suicidal intent is “absurd.” Some patients may not be cognizant of their own mental state, he said, while others are determined to die and don’t want to tell anyone.

The data backs this up:

According to one literature review, about half of those who died by suicide had denied having suicidal intent in the week or month before ending their life.

This profound inability to predict suicide has led these clinicians to propose a new diagnosis for the DSM-5 called “Suicide Crisis Syndrome” (SCS). They argue that we need to stop looking for stated intent and start looking for a specific, overwhelming state of mind.

To be diagnosed with S.C.S., Dr. Galynker said, patients must have a “persistent and intense feeling of frantic hopelessness,” in which they feel trapped in an intolerable situation.

They must also have emotional distress, which can include intense anxiety; feelings of being extremely tense, keyed up or jittery (people often develop insomnia); recent social withdrawal; and difficulty controlling their thoughts.

By the time patients develop S.C.S., they are in such distress that the thinking part of the brain — the frontal lobe — is overwhelmed, said Lisa J. Cohen, a clinical professor of psychiatry at Mount Sinai who is studying S.C.S. alongside Dr. Galynker. It’s like “trying to concentrate on a task with a fire alarm going off and dogs barking all around you,” she added.

This description of “frantic hopelessness” and feeling “trapped” gives us a glimpse into the internal maelstrom that leads to suicide. It also highlights why externalizing the blame to a technology is so misguided.

The article shares the story of Marisa Russello, who attempted suicide four years ago. Her experience underscores how internal, sudden, and unpredictable the impulse can be—and how disconnected it can be from any specific external “push.”

On the night that she nearly died, Ms. Russello wasn’t initially planning to harm herself. Life had been stressful, she said. She felt overwhelmed at work. A new antidepressant wasn’t working. She and her husband were arguing more than usual. But she wasn’t suicidal.

Ms. Russello was at the movies with her husband when she began to feel nauseated and agitated. She said she had a headache and needed to go home. As she reached the subway, a wave of negative emotions washed over her.

[…]

By the time she got home, she had “dropped into this black hole of sadness.”

And she decided that she had no choice but to end her life. Fortunately, she said, her attempt was interrupted.

Her decision to die by suicide was so sudden that if her psychiatrist had asked about self-harm at their last session, she would have said, truthfully, that she wasn’t even considering it.

When we read stories like Russello’s, or the accounts of the psychiatrists losing patients who denied being at risk, it becomes difficult to square the complexity of human psychology with the simplistic narrative that “Chatbot X caused Person Y to die.”

There is undeniably an overlap between people who use AI chatbots and people who are struggling with mental health issues—in part because so many people use chatbots today, but also because people in distress seek connection, answers, and a safe space to vent. That search often leads to chatbots.

Unless we’re planning to make thorough and competent mental health support freely available to everyone who needs it at any time, that’s going to continue. Rather than simply insisting that these tools are evil, we should be looking at ways to improve outcomes, knowing that some people are going to rely on them.

Just because a person used an AI tool—or a search engine, or a social media platform, or a diary—prior to their death does not mean the tool caused the death.

When we rush to blame the technology, we are effectively claiming to know something that the experts in that NY Times piece admit they do not know. We are claiming we know why it happened. We are asserting that if the chatbot hadn’t generated what it generated, if it hadn’t been there responding to the person, the “frantic hopelessness” described in the SCS research would simply have evaporated.

There is no evidence to support that.

None of this is to say AI tools can’t make things worse. For someone already in crisis, certain interactions could absolutely be unhelpful, or could exacerbate the crisis by “validating” the hopelessness they’re already experiencing. But that is a far cry from the legal and media narrative that these tools are “killing” people.

The push to blame AI serves a psychological purpose for the living: it provides a tangible enemy. It implies that there is a switch we can flip—a regulation we can pass, a lawsuit we can win—that will stop these tragedies.

It suggests that suicide is a problem of product liability rather than a complex, often inscrutable crisis of the human mind.

The work being done on Suicide Crisis Syndrome is vital because it admits what the current discourse ignores: we are failing to identify the risk because we are looking at the wrong things.

Dr. Miller, the psychiatrist at Endeavor Health in Chicago, first learned about S.C.S. after the patient suicides. He then led efforts to screen every psychiatric patient for S.C.S. at his hospital system. In trying to implement the screenings, there have been “fits and starts,” he said.

“It’s like turning the Titanic,” he added. “There are so many stakeholders that need to see that a new approach is worth the time and effort.”

While clinicians are trying to turn the Titanic of psychiatric care to better understand the internal states that lead to suicide, the public debate is focused on the wrong iceberg.

If we focus all our energy on demonizing AI, we risk ignoring the actual “black hole of sadness” that Ms. Russello described. We risk ignoring the systemic failures in mental health care. We risk ignoring the fact that half of suicide victims deny intent to their doctors.

Suicide is a tragedy. It is a moment where a person feels they have no other choice—a loss of agency so complete that the thinking brain is overwhelmed, as the SCS researchers describe it. Simplifying that into a story about a “rogue algorithm” or a “dangerous chatbot” doesn’t help the next person who feels that frantic hopelessness.

It just gives the rest of us someone to sue.

Filed Under: blame, generative ai, suicide

