FSNN | Free Speech News Network
Friday, March 6
Media & Culture

OpenAI Rewrites Contract, Anthropic Returns to Negotiate—The Chaos Continues

By News Room | 2 hours ago | 10 Mins Read | 1,451 Views
from the the-uncertainty-tax-at-work dept

In less than a week, the Pentagon blacklisted an AI company for having ethics, declared it a supply chain risk, watched its preferred replacement face a massive user revolt, and then sat down to amend the replacement’s contract to address the very concerns the blacklisted company had been raising all along. Meanwhile, the blacklisted company is reportedly back in negotiations with the same Pentagon that tried to destroy it, because—wouldn’t you know—its models are apparently better for what the military actually needs.

On Monday night, Sam Altman posted on X that OpenAI had amended its Defense Department agreement to include new language explicitly addressing domestic surveillance:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Is this better than the original contract language we flagged earlier this week? Probably! The explicit mention of “commercially acquired personal or identifiable information” is new and addresses the exact data type—geolocation, browsing history, the stuff data brokers sell about all of us—that reportedly was the final sticking point in the Anthropic negotiations. The language about “deliberate tracking, surveillance, or monitoring” is more concrete than the original contract’s vague reference to “unconstrained monitoring.”

Altman also noted that the Defense Department “affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA)” and that any such use “would require a follow-on modification to our contract.”

This sounds better than where they were before, but it’s genuinely hard to tell from the outside. And that difficulty—the opaque nature of what any of this means in practice—is the actual story here.

Because the problem with OpenAI’s deal was never just about the specific contract language. As we laid out earlier this week, the intelligence community has spent decades engineering legal definitions that let it conduct what any reasonable person would call mass surveillance while truthfully claiming otherwise. Whether this new amendment survives contact with those definitions is a question no outside observer can answer right now.

The bigger issue is what happens to innovation when the rules can change based on a cabinet secretary’s mood. The contract still references compliance with existing legal authorities—the same authorities that have been stretched and reinterpreted for years to permit exactly the kinds of data collection the new language purports to prohibit.

Anthropic’s Dario Amodei was characteristically blunt about the gap between OpenAI’s public framing and what the contract language actually delivers. In a memo to staff that has since leaked:

“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”

Damn.

He called OpenAI’s messaging around the deal “straight up lies” and described the whole thing as “safety theater.” You can dismiss some of that as competitive sniping, but Amodei was in the room for the Anthropic negotiations, and his characterization of what the Pentagon was actually demanding lines up with what the New York Times separately reported. His criticism is specific and technical: the Pentagon asked Anthropic to delete a “specific phrase about ‘analysis of bulk acquired data’” that was “the single line in the contract that exactly matched this scenario we were most worried about.” OpenAI’s original contract conspicuously lacked any such language. The amendment addresses this, at least on its face. Whether it does so in a way that actually binds the Pentagon’s behavior is a different question.

But the contract language debate, as important as it is, obscures the much larger problem.

Look at what happened at OpenAI’s all-hands meeting on Tuesday. According to a partial transcript reviewed by CNBC, Altman told his employees this:

“So maybe you think the Iran strike was good and the Venezuela invasion was bad…. You don’t get to weigh in on that.”

That’s the CEO of one of the most important AI companies on the planet telling his workforce that operational decisions about how their technology gets used in military actions are entirely up to Defense Secretary Pete Hegseth. The same Pete Hegseth who, just days earlier, tried to nuke an entire company for asking that AI not make autonomous kill decisions. The same Hegseth whose idea of contract negotiation was to issue what we described earlier this week as a “corporate death penalty” against Anthropic.

Speaking of Anthropic, that situation has gone from tragedy to farce and back again. The Financial Times reports that Amodei is now in direct talks with Emil Michael, a Hegseth lackey, to try to salvage a deal. This is the same Emil Michael (a scandal-ridden former Uber exec) who, just last week, called Amodei a “liar” with a “God complex”. And the same Defense Department that designated Anthropic a supply chain risk. The same administration that directed every federal agency to “immediately cease” all use of Anthropic’s technology.

And yet here they are, back at the table. Because, as multiple reports have made clear, Anthropic’s Claude models were already deployed on the Pentagon’s classified network and were quite useful for the Defense Department. The Pentagon apparently needs Anthropic’s technology because it’s actually good at the job. This just highlights how monumentally stupid the whole “supply chain risk” gambit was. You don’t issue a corporate death penalty against a company whose product you’re actively relying on for military operations unless you’re operating on pure spite rather than strategy.

The public, meanwhile, is making its own calculations under this cloud of uncertainty. ChatGPT uninstalls spiked 295% the day after the OpenAI deal was announced, while downloads dropped significantly. Anthropic’s Claude app jumped to the top of the App Store. One-star reviews of ChatGPT surged nearly 775% over the weekend.

Users who have zero ability to evaluate the legal intricacies of EO 12333 or the practical significance of “commercially acquired personal or identifiable information” are making choices based on the clear understanding that something has gone seriously wrong.

Call it the uncertainty tax: when users can’t verify whether a company’s principles are real, they treat visible conflict with authority as proof of authenticity. Unable to evaluate the safety commitments themselves, they default to the company that got punished for having them, because that punishment is at least evidence that some principles were in play.

Getting punished for having principles is, perversely, the clearest indication that you had any, whether or not it’s true.

Altman himself seems to recognize that the rollout was a disaster. From his post:

One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.

“Looked” opportunistic is doing a lot of work in that sentence. But okay.

The deeper issue here goes beyond any one contract or any one company. What we’ve watched unfold over the past week is a case study in why you cannot build a functional technology industry under a petulant, arbitrary authoritarian regime.

This is now what every AI company knows: if you tell the government “no” on something—even something as basic as “our AI shouldn’t make autonomous kill decisions without human oversight”—the Defense Secretary may try to destroy your company, publicly call you treasonous, and bar anyone doing business with the military from working with you. If you tell the government “yes,” you may face a massive consumer backlash, lose hundreds of thousands of users, and find yourself amending contracts on the fly to address concerns you should have thought about before signing.

Seems like a rough way to encourage innovation in the AI space.

And the rules can change at any moment. This week it’s “give us unrestricted access for all lawful purposes.” Next week, the definition of “lawful” might shift. The week after that, maybe the administration decides it doesn’t like something else about your company and the threats start anew. Altman told his employees that Hegseth made clear OpenAI doesn’t “get to make operational decisions.” So the company writes the safety stack, crosses its fingers, and hopes the people who just tried to destroy its largest competitor over basic ethical commitments will honor the contract language.

This is the environment the AI industry’s biggest Trump boosters created for themselves. For months, the refrain on certain VC bro podcasts was that the Biden administration was going to destroy AI and hand the industry to China. In reality, Biden’s AI policy amounted to a toothless set of principles and some extra paperwork. It was annoying, sure. It did not involve the Defense Secretary threatening to obliterate companies or the president directing all federal agencies to stop using a specific American company’s technology.

And the irony of it all is that the market seems to be figuring this out even as the companies’ leadership teams scramble to pretend everything is fine. The same users who were happily using ChatGPT a week ago are fleeing to Claude—the product of the company the government tried to destroy—because they’ve correctly identified that a company that got punished for standing up to an authoritarian government is probably more trustworthy than one that rushed to fill the void.

Innovation requires predictability. It requires the ability to plan, to hire, to build product roadmaps that extend beyond next Friday’s presidential tweet. It requires knowing that if you build something good and compete fairly, the government won’t try to destroy you because you annoyed a cabinet secretary during contract negotiations. Every AI company—even the ones currently benefiting from Anthropic’s punishment—should be deeply unsettled by what happened last week.

Because the leopard that ate Anthropic’s face last Friday can eat yours next Friday. All it takes is one disagreement, one insufficiently sycophantic response, one moment of “duplicity” defined as “having principles.”

Altman seems to partially grasp this. He publicly stated that the decision to designate Anthropic as a supply chain risk was “a very bad decision” and that the Pentagon should offer Anthropic the same terms OpenAI agreed to. That’s the right thing to say when facing a PR crisis like this. But saying it while simultaneously benefiting from the decision, while telling your employees they don’t get to have opinions about how their technology gets used in military operations, sends a somewhat mixed signal.

The lesson here has less to do with the specifics of any contract than with the fact that an impetuous, arbitrary, out-of-control authoritarian government is bad for innovation. I mean, it’s also bad for the public, society, and (arguably) the military as well. The US has led in innovation for decades in part because we had stable institutions and predictable rule of law.

But hey, at least nobody’s asking them to fill out compliance forms anymore. That was the real threat to American AI leadership.

Filed Under: ai, contracts, dario amodei, defense department, dod, sam altman, surveillance, uncertainty, uncertainty tax

Companies: anthropic, openai


