from the the-human-brain-is-way-more-complicated dept
Warning: This article discusses suicide and some research regarding suicidal ideation. If you are having thoughts of suicide, please call or text 988 to reach the Suicide and Crisis Lifeline or visit this list of resources for help. Know that people care about you and there are many available to help.
When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We’ve talked before about the dangers of this impulse. The target keeps shifting: “cyberbullying,” then “social media,” then “Amazon.” Now it’s generative AI.
There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits filed by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits are settled rather than fought out in court, because no company wants its name in headlines associated with suicide.
It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—relies on a premise that assumes we understand the mechanics of suicide far better than we actually do.
Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article from late last year in the New York Times, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicide are failing.
If experts who have spent decades studying the human mind admit they often cannot predict or prevent suicide even when treating a patient directly, we should be extremely wary of the confidence with which pundits and lawsuits assign blame to a chatbot.
The Times piece focuses on the work of two psychiatrists who have been devastated by the loss of patients who gave absolutely no indication they were about to harm themselves.
In his nearly 40-year career as a psychiatrist, Dr. Igor Galynker has lost three patients to suicide while they were under his care. None of them had told him that they intended to harm themselves.
In one case, a patient who Dr. Galynker had been treating for a year sent him a present — a porcelain caviar dish — and a letter, telling Dr. Galynker that it wasn’t his fault. It arrived one week after the man died by suicide.
“That was pretty devastating,” Dr. Galynker said, adding, “It took me maybe two years to come to terms with it.”
He began to wonder: What happens in people’s minds before they kill themselves? What is the difference between that day and the day before?
Nobody seemed to know the answer.
Nobody seemed to know the answer.
That is the state of the science. Apparently the best tool we currently have for assessing suicide risk is simply asking people: “Are you thinking about killing yourself?” And as the article notes, that method is catastrophically flawed.
But despite decades of research into suicide prevention, it is still very difficult to know whether someone will try to die by suicide. The most common method of assessing suicidal risk involves asking patients directly if they plan to harm themselves. While this is an essential question, some clinicians, including Dr. Galynker, say it is inadequate for predicting imminent suicidal behavior….
Dr. Galynker, the director of the Suicide Prevention Research Lab at Mount Sinai in New York City, has said that relying on mentally ill people to disclose suicidal intent is “absurd.” Some patients may not be cognizant of their own mental state, he said, while others are determined to die and don’t want to tell anyone.
The data backs this up:
According to one literature review, about half of those who died by suicide had denied having suicidal intent in the week or month before ending their life.
This profound inability to predict suicide has led these clinicians to propose a new diagnosis for the DSM-5 called “Suicide Crisis Syndrome” (SCS). They argue that we need to stop looking for stated intent and start looking for a specific, overwhelming state of mind.
To be diagnosed with S.C.S., Dr. Galynker said, patients must have a “persistent and intense feeling of frantic hopelessness,” in which they feel trapped in an intolerable situation.
They must also have emotional distress, which can include intense anxiety; feelings of being extremely tense, keyed up or jittery (people often develop insomnia); recent social withdrawal; and difficulty controlling their thoughts.
By the time patients develop S.C.S., they are in such distress that the thinking part of the brain — the frontal lobe — is overwhelmed, said Lisa J. Cohen, a clinical professor of psychiatry at Mount Sinai who is studying S.C.S. alongside Dr. Galynker. It’s like “trying to concentrate on a task with a fire alarm going off and dogs barking all around you,” she added.
This description of “frantic hopelessness” and feeling “trapped” gives us a glimpse into the internal maelstrom that leads to suicide. It also highlights why externalizing the blame to a technology is so misguided.
The article shares the story of Marisa Russello, who attempted suicide four years ago. Her experience underscores how internal, sudden, and unpredictable the impulse can be—and how disconnected it can be from any specific external “push.”
On the night that she nearly died, Ms. Russello wasn’t initially planning to harm herself. Life had been stressful, she said. She felt overwhelmed at work. A new antidepressant wasn’t working. She and her husband were arguing more than usual. But she wasn’t suicidal.
She was at the movies with her husband when Ms. Russello began to feel nauseated and agitated. She said she had a headache and needed to go home. As she reached the subway, a wave of negative emotions washed over her.
[….]
By the time she got home, she had “dropped into this black hole of sadness.”
And she decided that she had no choice but to end her life. Fortunately, she said, her attempt was interrupted.
Her decision to die by suicide was so sudden that if her psychiatrist had asked about self-harm at their last session, she would have said, truthfully, that she wasn’t even considering it.
When we read stories like Russello’s, or the accounts of the psychiatrists losing patients who denied being at risk, it becomes difficult to square the complexity of human psychology with the simplistic narrative that “Chatbot X caused Person Y to die.”
There is undeniably an overlap between people who use AI chatbots and people who are struggling with mental health issues—in part because so many people use chatbots today, but also because people in distress seek connection, answers, a safe space to vent. That search often leads to chatbots.
Unless we’re planning to make thorough and competent mental health support freely available to everyone who needs it at any time, that’s going to continue. Rather than simply insisting that these tools are evil, we should be looking at ways to improve outcomes knowing that some people are going to rely on them.
Just because a person used an AI tool—or a search engine, or a social media platform, or a diary—prior to their death does not mean the tool caused the death.
When we rush to blame the technology, we are effectively claiming to know something that experts in that NY Times piece admit they do not know. We are claiming we know why it happened. We are asserting that if the chatbot hadn’t generated what it generated, if it hadn’t been there responding to the person, the “frantic hopelessness” described in the SCS research would simply have evaporated.
There is no evidence to support that.
None of this is to say AI tools can’t make things worse. For someone already in crisis, certain interactions could absolutely be unhelpful, or could exacerbate the crisis by “validating” the helplessness they’re already experiencing. But that is a far cry from the legal and media narrative that these tools are “killing” people.
The push to blame AI serves a psychological purpose for the living: it provides a tangible enemy. It implies that there is a switch we can flip—a regulation we can pass, a lawsuit we can win—that will stop these tragedies.
It suggests that suicide is a problem of product liability rather than a complex, often inscrutable crisis of the human mind.
The work being done on Suicide Crisis Syndrome is vital because it admits what the current discourse ignores: we are failing to identify the risk because we are looking at the wrong things.
Dr. Miller, the psychiatrist at Endeavor Health in Chicago, first learned about S.C.S. after the patient suicides. He then led efforts to screen every psychiatric patient for S.C.S. at his hospital system. In trying to implement the screenings there have been “fits and starts,” he said.
“It’s like turning the Titanic,” he added. “There are so many stakeholders that need to see that a new approach is worth the time and effort.”
While clinicians are trying to turn the Titanic of psychiatric care to better understand the internal states that lead to suicide, the public debate is focused on the wrong iceberg.
If we focus all our energy on demonizing AI, we risk ignoring the actual “black hole of sadness” that Ms. Russello described. We risk ignoring the systemic failures in mental health care. We risk ignoring the fact that half of suicide victims deny intent to their doctors.
Suicide is a tragedy. It is a moment where a person feels they have no other choice—a loss of agency so complete that the thinking brain is overwhelmed, as the SCS researchers describe it. Simplifying that into a story about a “rogue algorithm” or a “dangerous chatbot” doesn’t help the next person who feels that frantic hopelessness.
It just gives the rest of us someone to sue.
Filed Under: blame, generative ai, suicide