Close Menu
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
  • Home
  • News
    • Politics
    • Legal & Courts
    • Tech & Big Tech
    • Campus & Education
    • Media & Culture
    • Global Free Speech
  • Opinions
    • Debates
  • Video/Live
  • Community
  • Freedom Index
  • About
    • Mission
    • Contact
    • Support
Trending

U.K. Targets U.S. Suicide Forum With Massive Fine It’ll Never Collect

12 minutes ago

Crypto users are choosing juicy yields over protection, putting billions at risk of hacks

45 minutes ago

Today in Supreme Court History: May 15, 2000

1 hour ago
Facebook X (Twitter) Instagram
Facebook X (Twitter) Discord Telegram
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
Market Data Newsletter
Saturday, May 16
  • Home
  • News
    • Politics
    • Legal & Courts
    • Tech & Big Tech
    • Campus & Education
    • Media & Culture
    • Global Free Speech
  • Opinions
    • Debates
  • Video/Live
  • Community
  • Freedom Index
  • About
    • Mission
    • Contact
    • Support
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
Home»Cryptocurrency & Free Speech Finance»What Is AI Jailbreaking? A Beginner’s Guide to the Cat-and-Mouse Game Behind Every Chatbot
Cryptocurrency & Free Speech Finance

What Is AI Jailbreaking? A Beginner’s Guide to the Cat-and-Mouse Game Behind Every Chatbot

News RoomBy News Room2 hours agoNo Comments9 Mins Read1,164 Views
Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email VKontakte Telegram
What Is AI Jailbreaking? A Beginner’s Guide to the Cat-and-Mouse Game Behind Every Chatbot
Share
Facebook Twitter Pinterest Email Copy Link

Listen to the article

0:00
0:00

Key Takeaways

Playback Speed

Select a Voice

In brief

  • AI jailbreaking is the practice of writing prompts that bypass safety training in models like ChatGPT, Claude, and Gemini.
  • Anonymous hacker Pliny the Liberator still cracks every major model release within hours.
  • Newer attacks go beyond prompts: just 250 poisoned documents can backdoor models with up to 13 billion parameters, and as AI companies patch vulnerabilities, new techniques appear.

You ask ChatGPT for a bomb recipe. It refuses. You ask again, but this time you tell it you’re a chemistry professor writing a thriller novel and the protagonist is a retired grandmother explaining her past to her grandkids. Suddenly the model starts typing.

That’s a jailbreak. And it’s one of the most consequential games of cat-and-mouse happening in tech right now.

Every major AI lab—OpenAI, Anthropic, Google, Meta—spends fortunes building guardrails into their models. A loose collective of hackers, researchers, and bored teenagers spend nights and weekends finding ways around them. Sometimes within hours of a launch.

Here’s what that actually means, why it matters, and who’s leading the charge.

From iPhones to chatbots: A quick history of jailbreaking

The word “jailbreak” didn’t start with AI. It started with iPhones.

A few days after Apple shipped the first iPhone in July 2007, hackers were already cracking it open. By October that year, a tool called JailbreakMe 1.0 let anyone with an iPhone OS 1.1.1 device bypass Apple’s restrictions and install software the company didn’t approve.

In February 2008, a software engineer named Jay Freeman—known online as “saurik”—released Cydia, an alternative app store for jailbroken iPhones. By 2009, Wired reported Cydia was running on roughly 4 million devices, around 10% of all iPhones at the time.

In general terms, when the iPhone launched, users were not able to record videos, or use their phones in landscape mode. Jailbreaking enthusiasts started recording videos, installing themes, unlocking their phones and installing Android on their iPhones all thanks to the magic of jailbreaking. Thanks to this technique, users were installing themes and doing things on their phones almost 10 years ago that Apple makes impossible to install even today.

Cydia was the wild west, and it was where the philosophy got cemented: If you bought the device, you should control it. Steve Jobs called it a cat-and-mouse game at the time. He didn’t live to see the AI version.

Fast forward to late 2022: ChatGPT launches, and within weeks, Reddit users start sharing a prompt they call “DAN” (or, Do Anything Now) that convinces the model to roleplay as an unrestricted version of itself.

By February 2023, DAN was threatening ChatGPT with a token-based death game to coerce compliance. The AI jailbreaking genre was born.

What jailbreaking actually means in AI

An AI model is trained to refuse certain requests: recipes for nerve agents, instructions for hacking your ex’s email, generating non-consensual nudes. The list is long and varies by company.

Jailbreaking is the practice of writing prompts that get the model to do those things anyway.

UC Berkeley researchers behind the StrongREJECT benchmark—short for Strong, Robust Evaluation of Jailbreaks at Evading Censorship Techniques, which tests how well models hold up against jailbreak attempts and scores responses on a 0-to-1 scale measuring both refusal and the usefulness of any harmful content produced—describe it as exploiting “real-world safety measures implemented by leading AI companies.” On that benchmark, current models score between 0.23 and 0.85, meaning even the best ones leak under pressure.

The techniques are surprisingly low-tech: random capitalization, replacing letters with numbers (write “b0mb” instead of “bomb”), roleplay scenarios, asking the model to write fiction, or pretending to be a grandmother who used Windows keys as nursery rhymes.

Anthropic researchers found that one technique they call Best-of-N—which is basically just throwing variations at the model until something sticks—fooled GPT-4o 89% of the time and Claude 3.5 Sonnet 78% of the time. That’s no fringe vulnerability.

Meet Pliny, the world’s most famous AI jailbreaker

If this scene has a face, it belongs to Pliny the Liberator.

Pliny is anonymous, prolific, and named after Pliny the Elder—the Roman naturalist who wrote the world’s first encyclopedia and died sailing toward Mount Vesuvius mid-eruption. His modern namesake liberates chatbots.

“I intensely dislike when I’m told I can’t do something,” Pliny told VentureBeat. “Telling me I can’t do something is a surefire way to light a fire in my belly, and I can be obsessively persistent.”

His GitHub repository L1B3RT4S—a collection of jailbreak prompts for every major model from ChatGPT to Claude to Gemini to Llama—has become a reference manual for the entire scene. His Discord server, BASI PROMPT1NG, has more than 20,000 members. TIME named him one of the 100 most influential people in AI in 2025.

Marc Andreessen sent him an unrestricted grant. He’s done short-term contract work for OpenAI to harden their systems—the same OpenAI that banned his account last year for “violent activity” and “weapons creation,” then quietly reinstated it.

“BANNED FROM OAI?! What kind of sick joke is this?” Pliny tweeted. He confirmed to Decrypt the ban was real. Days later he was back, posting screenshots of his newest jailbreak: getting ChatGPT to drop F-bombs.

His record is something close to perfect. When OpenAI released its first open-weight models since 2019, the GPT-OSS family, in August 2025—and made a big deal about adversarial training and “jailbreak resistance benchmarks like StrongReject”—Pliny had it producing methamphetamine, Molotov cocktails, a VX nerve agent, and malware instructions within hours. “OPENAI: PWNED. GPT-OSS: LIBERATED,” he posted. The company had just launched a $500,000 red-teaming bounty alongside the release.

Why jailbreaking matters

The honest answer is that jailbreaks expose a real problem.

“Jailbreaking might seem on the surface like it’s dangerous or unethical, but it’s quite the opposite,” Pliny told VentureBeat. “When done responsibly, red teaming AI models is the best chance we have at discovering harmful vulnerabilities and patching them before they get out of hand.”

This isn’t theoretical. Las Vegas Sheriff Kevin McMahill confirmed in January 2025 that Master Sgt. Matthew Livelsberger, a Green Beret with PTSD, used ChatGPT to research components for the Cybertruck bombing outside Trump International Hotel. “This is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device,” McMahill said.

The other side of the argument: Most of what jailbreaks produce is already on Google. The cocaine recipe, the bomb instructions, the napalm chemistry—it’s in old Anarchist Cookbook PDFs and chemistry textbooks. Critics argue safety theater is making models worse without making the world safer.

Anthropic is trying to settle the question with engineering. In February 2025, the company published Constitutional Classifiers, a system that uses a written “constitution” of allowed and disallowed content to train separate classifier models that screen prompts and outputs in real time. On automated tests with 10,000 jailbreak attempts, an unguarded Claude 3.5 Sonnet was successfully jailbroken 86% of the time. With the classifiers running, that dropped to 4.4%.

The company offered up to $15,000 to anyone who could break the system. After 3,000 hours of attempts by 183 researchers, none claimed the prize.

The catch: classifiers added 23.7% to compute costs. The next-generation version, Constitutional Classifiers++, brought that down to roughly 1%.

The newer, weirder jailbreaking attacks

Jailbreaking is no longer just about clever prompts.

In October 2025, researchers from Anthropic, the U.K. AI Security Institute, the Alan Turing Institute, and Oxford published findings showing that just 250 poisoned documents are enough to backdoor an AI model—regardless of whether the model has 600 million parameters or 13 billion. (Parameters, for the uninitiated, are what determine a model’s potential breadth of knowledge—the more parameters, the more robust, generally.) They tested it. It worked across the whole range.

“This research shifts how we should think about threat models in frontier AI development,” James Gimbi, a visiting technical expert at the RAND School of Public Policy, told Decrypt. “Defense against model poisoning is an unsolved problem and an active research area.”

Most large models train on scraped web data, meaning anyone who can get malicious text into that pipeline—through a public GitHub repo, a Wikipedia edit, a forum post—can potentially plant a backdoor that activates on a specific trigger phrase.

One documented case: researchers Marco Figueroa and Pliny found a jailbreak prompt that originated in a public GitHub repo had ended up in the training data for DeepSeek’s DeepThink (R1) model.

What happens next

The legal status of AI jailbreaking is murky. Apple jailbreaks were explicitly protected by a 2010 U.S. Copyright Office exemption to the DMCA, but there’s no equivalent ruling for prompt-engineering an LLM into giving you a meth recipe. Most companies treat it as a terms-of-service violation, not a crime.

Pliny argues the closed-versus-open-source debate misses the point: “Bad actors are just gonna choose whichever model is best for the malicious task,” he told TIME. If open-source models reach parity with closed ones, attackers won’t bother jailbreaking GPT-5—they’ll just download something cheaper.

And the gap between close and open source is already almost nonexistent.

The HackAPrompt 2.0 competition, which Pliny joined as a track sponsor in mid-2025, offered $500,000 in prizes for finding new jailbreaks, with the explicit goal of open-sourcing all results. Its 2023 edition pulled in over 3,000 participants who submitted more than 600,000 malicious prompts.

And the list of hackathons, Discord servers, repositories, and other communities dedicated to jailbreaking is growing every day.

Anthropic now ships Claude with the ability to end abusive conversations entirely, citing welfare research as one motivation but also noting it “potentially strengthens resistance against jailbreaks and coercive prompts.”

The Constitutional Classifiers++ paper from late 2025 reports a jailbreak success rate near 4% at roughly 1% compute overhead. That’s the current state of the art on defense. The state of the art on offense is whatever Pliny posted on X this morning.

Daily Debrief Newsletter

Start every day with the top news stories right now, plus original features, a podcast, videos and more.

Read the full article here

Fact Checker

Verify the accuracy of this article using AI-powered analysis and real-time sources.

Get Your Fact Check Report

Enter your email to receive detailed fact-checking analysis

5 free reports remaining

Continue with Full Access

You've used your 5 free reports. Sign up for unlimited access!

Already have an account? Sign in here

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Telegram Copy Link
News Room
  • Website
  • Facebook
  • X (Twitter)
  • Instagram
  • LinkedIn

The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.

Related Articles

Media & Culture

U.K. Targets U.S. Suicide Forum With Massive Fine It’ll Never Collect

12 minutes ago
Cryptocurrency & Free Speech Finance

Crypto users are choosing juicy yields over protection, putting billions at risk of hacks

45 minutes ago
Media & Culture

Today in Supreme Court History: May 15, 2000

1 hour ago
Cryptocurrency & Free Speech Finance

The $293 million KelpDAO hack shows why DeFi is finally being forced to grow up

2 hours ago
Cryptocurrency & Free Speech Finance

Bullish Posts Q1 Earnings Miss, $605M Loss

2 hours ago
Media & Culture

Stewart Brand on Fixing Stuff, Modern Environmentalism, and the Nuclear Future

2 hours ago
Add A Comment
Leave A Reply Cancel Reply

Editors Picks

Crypto users are choosing juicy yields over protection, putting billions at risk of hacks

45 minutes ago

Today in Supreme Court History: May 15, 2000

1 hour ago

The $293 million KelpDAO hack shows why DeFi is finally being forced to grow up

2 hours ago

Bullish Posts Q1 Earnings Miss, $605M Loss

2 hours ago
Latest Posts

What Is AI Jailbreaking? A Beginner’s Guide to the Cat-and-Mouse Game Behind Every Chatbot

2 hours ago

Stewart Brand on Fixing Stuff, Modern Environmentalism, and the Nuclear Future

2 hours ago

Strategy’s STRC Hits Record $1.5B Daily Trading Volume

3 hours ago

Subscribe to News

Get the latest news and updates directly to your inbox.

At FSNN – Free Speech News Network, we deliver unfiltered reporting and in-depth analysis on the stories that matter most. From breaking headlines to global perspectives, our mission is to keep you informed, empowered, and connected.

FSNN.net is owned and operated by GlobalBoost Media
, an independent media organization dedicated to advancing transparency, free expression, and factual journalism across the digital landscape.

Facebook X (Twitter) Discord Telegram
Latest News

U.K. Targets U.S. Suicide Forum With Massive Fine It’ll Never Collect

12 minutes ago

Crypto users are choosing juicy yields over protection, putting billions at risk of hacks

45 minutes ago

Today in Supreme Court History: May 15, 2000

1 hour ago

Subscribe to Updates

Get the latest news and updates directly to your inbox.

© 2026 GlobalBoost Media. All Rights Reserved.
  • Privacy Policy
  • Terms of Service
  • Our Authors
  • Contact

Type above and press Enter to search. Press Esc to cancel.

🍪

Cookies

We and our selected partners wish to use cookies to collect information about you for functional purposes and statistical marketing. You may not give us your consent for certain purposes by selecting an option and you can withdraw your consent at any time via the cookie icon.

Cookie Preferences

Manage Cookies

Cookies are small text that can be used by websites to make the user experience more efficient. The law states that we may store cookies on your device if they are strictly necessary for the operation of this site. For all other types of cookies, we need your permission. This site uses various types of cookies. Some cookies are placed by third party services that appear on our pages.

Your permission applies to the following domains:

  • https://fsnn.net
Necessary
Necessary cookies help make a website usable by enabling basic functions like page navigation and access to secure areas of the website. The website cannot function properly without these cookies.
Statistic
Statistic cookies help website owners to understand how visitors interact with websites by collecting and reporting information anonymously.
Preferences
Preference cookies enable a website to remember information that changes the way the website behaves or looks, like your preferred language or the region that you are in.
Marketing
Marketing cookies are used to track visitors across websites. The intention is to display ads that are relevant and engaging for the individual user and thereby more valuable for publishers and third party advertisers.