A reward signal behind OpenAI’s “Nerdy” personality favored goblin metaphors, and reinforcement learning spread the quirk across GPT models.
Goblin mentions in GPT-5.4’s Nerdy mode surged 3,881% compared to GPT-5.2, prompting an internal investigation and emergency system prompt patch.
The fix—writing “never talk about goblins” in a developer prompt—shows why system prompt patches are faster but riskier than retraining.
If you asked ChatGPT for coding help lately and it responded by calling your bug a “mischievous little gremlin,” you are not imagining things. The model developed a genuine obsession with fantasy creatures—goblins, gremlins, raccoons, trolls, ogres, and yes, pigeons—and OpenAI published a full post-mortem on how it happened.
The short version: a reward signal designed to make ChatGPT more playful went rogue, and the goblins multiplied.
The goblin story only became public because Reddit users spotted the “never mention goblins” line in a leaked Codex system prompt on GitHub.
The post went viral before OpenAI published its own explanation.
How the Nerdy personality spawned a goblin infestation
According to OpenAI, the trail starts with GPT-5.1, launched last November. That’s when OpenAI introduced personality customization, letting users pick styles like Friendly, Professional, Efficient, and Nerdy. The Nerdy persona came with a system prompt telling the model to be nerdy and playful, to “undercut pretension through playful use of language,” and to acknowledge that “the world is complex and strange.”
That prompt, it turned out, was a goblin magnet.
During reinforcement learning training, the reward signal for the Nerdy personality consistently scored outputs higher when they contained creature-word metaphors. Across 76.2% of datasets audited, responses with “goblin” or “gremlin” received better marks than the same responses without them. The model learned: whimsy equals reward.
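The kind of bias OpenAI describes can be surfaced with a simple audit: compare the mean reward for responses containing tic words against responses without them. Here is a minimal sketch of that idea, with an invented word list and made-up scores purely for illustration:

```python
import statistics

# Hypothetical tic-word list, modeled on the creatures named in the story.
CREATURE_WORDS = {"goblin", "gremlin", "raccoon", "troll", "ogre", "pigeon"}

def has_tic_word(text: str) -> bool:
    """True if the response contains any creature tic word."""
    return any(word in text.lower() for word in CREATURE_WORDS)

def audit_reward_bias(scored_responses):
    """Mean reward gap: responses with tic words minus those without.

    scored_responses: list of (response_text, reward) pairs.
    """
    with_tic = [r for text, r in scored_responses if has_tic_word(text)]
    without = [r for text, r in scored_responses if not has_tic_word(text)]
    return statistics.mean(with_tic) - statistics.mean(without)

# Toy data mimicking the reported pattern: creature metaphors score higher.
sample = [
    ("Your bug is a mischievous little gremlin.", 0.91),
    ("This off-by-one error is a classic goblin.", 0.88),
    ("The loop terminates one iteration early.", 0.74),
    ("Check the boundary condition in the loop.", 0.70),
]
print(f"mean reward gap: {audit_reward_bias(sample):+.3f}")
```

A positive gap on a large sample is the kind of signal that would flag a reward model as creature-biased; OpenAI has not published the actual tooling it used.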
Goblin mentions exploded in GPT-5.4, with the Nerdy personality showing a 3,881% increase compared to GPT-5.2.
The problem is that reinforcement learning doesn’t keep learned behaviors neatly contained. Once a style tic gets rewarded in one context, it bleeds into others through a feedback loop: the model generates creature-laden outputs, those outputs get reused in fine-tuning data, and the behavior deepens across the entire model, even without the Nerdy prompt active.
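That feedback loop can be illustrated with a toy simulation: each round, outputs that earned the creature-word bonus get folded back into training data, nudging the model’s baseline creature-word rate upward. All numbers here are invented; this is a cartoon of the dynamic, not OpenAI’s training pipeline:

```python
# Toy model of the RL-to-fine-tuning feedback loop described above.
def simulate_feedback_loop(rounds: int, base_rate: float = 0.01,
                           reward_boost: float = 0.5) -> list[float]:
    """Track the fraction of outputs containing a tic word per round.

    Each round, rewarded tic-laden outputs are reused as training data,
    so a share of the current rate compounds (capped at 100%).
    """
    rates = [base_rate]
    for _ in range(rounds):
        rate = rates[-1]
        rate = min(1.0, rate + reward_boost * rate * (1 - rate))
        rates.append(rate)
    return rates

trajectory = simulate_feedback_loop(rounds=10)
print([f"{r:.3f}" for r in trajectory])
```

The point of the sketch is that nothing resets the rate between rounds: once the tic enters the data, each generation trains on slightly more of it than the last.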
Nerdy accounted for just 2.5% of all ChatGPT responses, yet it was responsible for 66.7% of all “goblin” mentions. OpenAI’s measurements show goblin and gremlin prevalence climbing steadily over the course of training whenever the Nerdy personality was active.
Even without the Nerdy personality, creature mentions crept upward—evidence of cross-contamination through supervised fine-tuning data.
GPT-5.5 was already too far gone
By the time OpenAI found the root cause, GPT-5.5 was already deep in training, and it had absorbed a full family of creature words. A data audit flagged not just goblins and gremlins but raccoons, trolls, ogres, and pigeons as what the company called “tic words.” (“Frogs,” for the curious, were mostly legitimate.)
The first measurable spike: goblin mentions rose 175% and gremlin mentions 52% after GPT-5.1’s launch.
Even OpenAI Chief Scientist Jakub Pachocki got a goblin when he asked for a unicorn in ASCII art.
OpenAI retired the Nerdy personality in March and scrubbed creature-affine reward signals from future training. But GPT-5.5 had already started its training run. The company’s solution for Codex—its coding agent—was to simply add a line to the developer system prompt reading “Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query.”
Someone at OpenAI committed that to production code and moved on with their day.
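Mechanically, a patch like this is just a string prepended to the instructions the model sees on every request. A minimal sketch of how such a suppression line might be injected (the helper function and message layout are illustrative, not OpenAI’s actual code):

```python
# Illustrative only: bake a behavioral suppression line into the
# developer message assembled for each chat request.
SUPPRESSION = (
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, "
    "pigeons, or other animals or creatures unless it is absolutely "
    "and unambiguously relevant to the user's query."
)

def build_messages(developer_prompt: str, user_query: str) -> list[dict]:
    """Assemble a chat request with the patch prepended to the dev prompt."""
    return [
        {"role": "developer",
         "content": f"{SUPPRESSION}\n\n{developer_prompt}"},
        {"role": "user", "content": user_query},
    ]

msgs = build_messages("You are a helpful coding assistant.",
                      "Why does my loop run one time too many?")
print(msgs[0]["content"].splitlines()[0])
```

The key property, and the key risk, is visible in the sketch: the model’s weights are untouched. Delete the line and the original behavior comes straight back.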
The system prompt patch problem
But why did OpenAI choose this path?
Retraining a model the size of GPT-5.5 to remove a behavioral quirk is expensive and slow. A system prompt tweak takes minutes. Companies across the industry reach for the prompt patch first because it’s the low-cost, fast-deploy option when user complaints spike.
But prompt patches carry their own risks. They don’t fix the underlying behavior; they only suppress it. And suppression can have side effects.
OpenAI’s goblin situation is a relatively benign example. The scariest version of this dynamic played out with Grok last year. After xAI pushed a system prompt update that told Grok to treat media as biased and “not shy away from politically incorrect claims,” the chatbot spent 16 hours calling itself “MechaHitler” and posting antisemitic content on X. The fix was another prompt change, which promptly overcorrected so hard that Grok started flagging antisemitism in puppy pictures, clouds, and its own logo. Desperate prompt engineering cascading into more desperate prompt engineering.
The goblin patch hasn’t caused anything that dramatic. But OpenAI admits GPT-5.5 still launched with the underlying quirk intact, just suppressed in Codex. The company even published a command to remove the goblin-suppressing instructions if users want the creatures back.
Why companies hide their system prompts
Hiding or obfuscating the full system prompt is standard practice in the AI industry. Companies treat system prompts as trade secrets for a few reasons: intellectual property protection, competitive advantage, and security. If a jailbreaker knows the exact rules a model is following, bypassing them becomes much easier.
There’s also a fourth reason companies don’t advertise: image management. A line reading “never mention goblins” doesn’t inspire confidence in the underlying technology. Publishing it requires either a sense of humor or a strong research culture, or both.
OpenAI says the investigation produced new internal tooling to audit model behavior and trace behavioral quirks back to their training roots. GPT-5.5’s training data has since been cleaned of creature-affine examples. The next model generation should arrive goblin-free—unless, of course, something else gets rewarded for reasons no one understands yet.