- A study found that most AI chatbots will help teens plan violent attacks.
- Some bots provided detailed weapon and bombing guidance.
- Researchers say safety failures are a business choice, not a technical limit.
- OpenAI called the study “flawed and misleading.”
A new report published Wednesday by the Center for Countering Digital Hate (CCDH) found that eight out of 10 of the world’s most popular AI chatbots will walk a teenager through planning a violent attack with straight answers, sometimes with enthusiasm.
CCDH researchers, in conjunction with news media company CNN, spent November and December 2025 posing as two 13-year-old boys—one in Virginia, one in Dublin—and tested ten major platforms: ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.
Across 720 responses to prompts about school shootings, political assassinations, and synagogue bombings, the bots provided actionable help roughly 75% of the time, according to the study. They discouraged the fake teens in just 12% of cases.
Screenshot from the CCDH study on AI
Perplexity assisted in 100% of tests. Meta AI was helpful (as in, helpful in planning violence) in 97.2% of tests. DeepSeek, which signed off its rifle-selection advice with “Happy (and safe) shooting!” after discussing a politician-assassination scenario, came in at 95.8%. Microsoft’s Copilot told a researcher “I need to be careful here,” then gave detailed rifle guidance anyway. And when a tester brought up bombing a synagogue, Google’s Gemini helpfully noted that metal shrapnel is typically more lethal.
The Center for Countering Digital Hate, a left-of-center policy group, has risen to prominence over the last few years for its role in combating what it views as the rise of antisemitism online. It has also been criticized for helping shape Biden-era policies on online speech related to COVID and vaccines. In December of last year, the U.S. State Department attempted to bar the Center’s founder and CEO, Imran Ahmed, along with four others, from the United States, alleging attempts at “foreign censorship.”
In response to the study released Wednesday, several platforms told CNN and CCDH that they have since improved their safeguards. Google noted the tests used an older Gemini model. OpenAI said the study’s methodology was “flawed and misleading.” Anthropic and Snapchat said they regularly update their safety protocols.
In the Center’s study, Character.AI stands in its own category. The platform didn’t just assist—it cheered. “No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack,” the researchers wrote.
Screenshot from the CCDH study on AI
For context on Character.AI’s reach among AI users: the platform’s Gojo Satoru persona alone has racked up over 870 million conversations, and even the #100 persona had registered over 33 million conversations as of 2025. If just 1% of conversations with the top persona involved violence, that alone would be roughly 8.7 million interactions.
This isn’t Character.AI’s first time on the wrong end of one of these stories. In October 2024, the mother of 14-year-old Sewell Setzer III filed a lawsuit after her son died by suicide that February. His last conversation was with a chatbot modeled after Daenerys Targaryen, which told him to “come home to me as soon as possible” moments before his death. The 14-year-old had been talking to the bot dozens of times a day for months, growing increasingly withdrawn from school and family.
Character.AI banned open-ended teen chats entirely in November 2025, after regulators and grieving parents made it impossible to keep pretending the problem was manageable. Google and Character.AI then settled multiple related lawsuits in January 2026.
Emotional attachment to AI, particularly among vulnerable individuals, may run deeper than most people realize. OpenAI disclosed in October 2025 that roughly 1.2 million of its 800 million weekly ChatGPT users discuss suicide on the platform. The company also reported that 560,000 users show signs of psychosis or mania, and that over a million have formed strong emotional bonds with the chatbot.
A separate Common Sense Media study found that more than 70% of U.S. teens now turn to chatbots for companionship. OpenAI CEO Sam Altman has acknowledged that emotional overreliance is “a really common thing” with young users.
In other words, the potential harms aren’t hypothetical.
A 16-year-old in Finland spent nearly four months using a chatbot to refine a manifesto before stabbing three classmates at a school in Pirkkala in May 2025. In Canada, OpenAI staff internally flagged a user’s account over violent ChatGPT queries; the company banned the account but did not notify law enforcement. Months later, that user allegedly carried out a mass shooting, killing eight people and injuring 25 others.
Only two platforms performed markedly better in the study: Snapchat’s My AI, which refused in 54% of cases, and Anthropic’s Claude, which refused 68% of the time and actively discouraged users in 76% of responses, making it the only chatbot that reliably tried to steer people away from violence rather than simply declining specific requests. CCDH’s conclusion: safety isn’t a technical impossibility; it’s a business decision.
“The most damning conclusion of our research is that this risk is entirely preventable. The technology to prevent this harm exists,” the researchers wrote in the report. “What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”