When the Trump administration demanded changes to Anthropic’s AI system and backed the demand with a threat to seize the system or blacklist the company, the message was clear: comply or be crushed. Cut through the rhetoric, though, and the real question is whether Washington can bankrupt a company for saying no to the Pentagon.
Though the media is busy framing this as a national security showdown, it is at bottom a constitutional question: a test of whether the federal government can weaponize its contracting power to force a private company to bend the knee.
AI systems are powerful expressive tools. They generate language, shape ideas, interpret knowledge, and embody the values embedded in their design. A developer’s decisions about which capabilities to include or exclude are expressive choices protected by the First Amendment. Anthropic is unwilling to remove safeguards from its models for use in autonomous weapons targeting or domestic surveillance. Those limits reflect a deliberate expressive choice about what tools the company is willing to build, what it is willing to provide to the government, and what its existing AI system is capable of doing.
According to reports, on Feb. 24, War Secretary Pete Hegseth demanded that Anthropic CEO Dario Amodei allow unrestricted use of the company’s models “for all legal purposes” within three days or face severe consequences. Those consequences reportedly included blacklisting the company or invoking a Korean War-era law, the Defense Production Act (DPA), to take control of its technology.
Anthropic refused.
In a statement, the company reiterated its commitment to responsible deployment and strict usage policies. Amodei later said the company had simply exercised its “classic First Amendment rights to speak up and disagree with the government.”
The administration’s response was swift. President Trump directed federal agencies to cease using Anthropic’s technology. Secretary Hegseth announced that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Yesterday, the Department of War officially informed Anthropic’s leadership that the company and its products are deemed a supply chain risk, effective immediately.
The government’s actions, which are designed to harm Anthropic’s business, raise serious constitutional concerns: compelled speech and retaliation against a company for taking positions disfavored by government officials.
First, compelled speech. Anthropic’s decision to build specific guardrails stems from a principled disagreement about how its tools should be designed and used. The company has drawn a line against mass domestic surveillance, warning that AI can assemble commercially available data about Americans’ movements, browsing, and associations into detailed profiles at massive scale, posing serious risks to civil liberties. It has also declined, for now, to power fully autonomous weapons, arguing that today’s systems are not reliable enough to make life-and-death targeting decisions without human oversight.
Forcing Anthropic to remove those limits would compel the company to design and generate capabilities it affirmatively rejects and has not contracted with the government to provide. Thankfully, the First Amendment prohibits the government from forcing private speakers like Anthropic to create speech they oppose. Whether it is a printed pamphlet or code that enables autonomous targeting, the principle is the same.
When Hegseth threatened to invoke the Defense Production Act to take control of an AI system, he sent the company a clear message: The Pentagon is willing to use extraordinary powers to get its way. Enacted during the Korean War, the DPA was designed to mobilize industrial production for national defense, allowing the government to prioritize contracts and direct the manufacture of critical goods. In recent years, its use has expanded beyond traditional wartime manufacturing into domestic production and infrastructure, most notably during the pandemic, when it was invoked to accelerate the production of medical supplies.
Applying the DPA to an AI system in this way would risk giving the state control of knowledge production itself. Had Hegseth gotten his way, the government would have overridden the design, training, and limits that reflect Anthropic’s expressive choices about its model.
Second, retaliation. Labeling a domestic company a security risk is an unprecedented move, and it comes immediately on the heels of Anthropic’s refusal to alter its view of what a responsible AI model should look like. When the government deploys extraordinary coercive power, particularly power justified on emergency or national security grounds, to punish a company for refusing to bow to its demands, the line between legitimate procurement decisions and unconstitutional retaliation grows dangerously thin.
If the government can weaponize contracts and national security laws to coerce companies into reshaping their AI systems, developers across the industry will rationally feel pressure to conform their research and design choices to official priorities. Systems guided by independent ethical guardrails risk being taken over and turned into instruments of state policy.
Of course, there are real concerns about national security and foreign adversaries outpacing the United States in AI and military development. But constitutional limits don’t evaporate in these moments. In fact, they matter more when the stakes are high and the pressure is on to centralize government power.
If the government wants AI systems without Anthropic’s restrictions, it can develop its own or contract with companies willing to provide them. (Indeed, OpenAI reportedly stepped in quickly as a replacement.) What the government cannot do is coerce a private company into abandoning its own design principles, or punish it for refusing. That violates the First Amendment.