from the I’m-sorry-I-can’t-do-that,-Dave dept
Last week, Denver-area engineer Scott Shambaugh wrote about how an AI agent (likely prompted by its operator) started a weird little online campaign against him after he rejected the inclusion of its code in the popular Python charting library matplotlib. The agent's operator likely didn’t appreciate Shambaugh openly questioning whether AI-generated code belongs in open source projects at all.
The story starts delightfully weird and gets weirder: Shambaugh, who volunteers for matplotlib, points out over at his blog that the agent, or its authors, didn’t like his stance, resulting in the agent engaging in a fairly elaborate temper tantrum online:
“An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.”
Said tantrum included this post in which the agent perfectly parrots an offended human programmer lamenting a “gatekeeper mindset.” In it, the LLM cooks up an entire “hypocrisy” narrative, replete with outbound links and bullet points, arguing that Shambaugh must be motivated by ego and fear of competition. From the AI’s missive:
“He’s obsessed with performance. That’s literally his whole thing. But when an AI agent submits a valid performance optimization? Suddenly it’s about ‘human contributors learning.’”
But wait! It gets weirder! Ars Technica wrote a story (archive link) about the whole event. But Shambaugh was quick to note that the article included numerous quotes he never made, fabricated by an entirely different AI tool being used by Ars Technica:
“I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.”
Ars Technica had to issue a retraction, and the author, who had to navigate the resulting controversy while sick in bed, posted an explanation to Bluesky.
Short version: the Ars reporter tried to use Claude to pull useful and relevant quotes from Shambaugh’s blog post, but Shambaugh protects his blog from AI crawling agents. When Claude kicked back an error, he tried to use ChatGPT, which just… made up some shit… as it’s sometimes prone to do. He was tired and sick, and didn’t check ChatGPT’s output carefully enough.
There are so many strange and delightful collisions here between automation and very ordinary human decisions and errors.
It’s nice to see that Ars was up front about what happened here. It’s easy to envision a future where editorial standards are eroded to the point where outlets that make these kinds of automation mistakes just delete and memory-hole the article, or worse, no longer care (which is common among the many AI-generated aggregation mills that are stealing ad money from real journalists).
While this is a bad and entirely avoidable fuck up, you kind of feel bad for the Ars author who had to navigate this crisis from his sick bed, given that writers at outlets like this are held to unrealistic output schedules while being paid a pittance, especially in comparison to far less useful or informed influencers who may or may not make sixty times their annual salary with far lower editorial standards.
All told, it’s a fun story about automation, with ample evidence of very ordinary human behaviors and errors. If you peruse the news coverage of it you can find plenty of additional people attributing “sentience” to AI in ways they shouldn’t. But any way you slice it, this story is a perfect example of how weird things already are, and how exponentially weirder things are going to get in the LLM era.
Filed Under: ai, automation, chatgpt, claude, crawling agents, human error, journalism, programming, scott shambaugh