from the it’s-all-about-power dept
The Trump administration’s AI policy is two-faced, torn between deregulation and despotism.
In March, the administration released its National AI Legislative Framework, directing Congress to “prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.” This policy against government interference with AI is consistent with the administration’s purported light-touch approach to regulating the technology—but contrary to its recent actions.
In February 2025, Vice President Vance denounced “excessive regulation of the AI sector,” endorsing a “deregulatory flavor” of AI policy. Several months later, the administration released its AI Action Plan, pledging to “dismantle unnecessary regulatory barriers” and “onerous regulation.”
At first, the Trump administration followed through on this deregulatory promise. Three days into his second term, President Trump revoked a Biden-era Executive Order that had established a government-wide effort to regulate and guide the development of the AI industry. Next, as directed by President Trump’s AI Action Plan, the Office of Science and Technology Policy initiated a proceeding to identify federal rules and regulations “that unnecessarily hinder” AI in order to implement “regulatory reform” and “promote” the technology. Last December, the Federal Trade Commission, led by two Trump appointees, set aside a Biden-era enforcement action against Rytr, an AI-powered writing assistant. The FTC explained that, “after reviewing the final order in response to President Trump’s AI Action Plan,” it concluded “the order unduly burdens innovation in the nascent AI industry.”
Despite the laissez-faire gesturing, however, the administration has demonstrated a tyrannical impulse to control AI. In the same speech in which he denounced excessive regulation, Vice President Vance demanded that “AI must remain free from ideological bias.” President Trump’s AI Action Plan echoed this command, directing AI companies to design their models “to pursue objective truth rather than social engineering agendas.” This rhetoric elides the fact that the First Amendment bars the government from deciding what constitutes “truth.”
In recent months, the administration has sought to exert control over the industry under the guise of combatting so-called “woke AI.” Last July, President Trump issued an Executive Order on Preventing Woke AI in the Federal Government, prohibiting government procurement of AI models unless they are ideologically “neutral,” i.e., “nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.” In January, Secretary of Defense Hegseth issued a memo instructing the Department of Defense to “utilize models free from usage policy constraints” and banning the DoD from “employ[ing] AI models which incorporate ideological ‘tuning.’”
The memo set the stage for the ongoing dispute between the administration and Anthropic, an American AI company. In July 2025, the DoD contracted with Anthropic to deploy its AI models for national security applications like intelligence analysis, modeling and simulation, operational planning, and cyber operations. In the contract, Anthropic stipulated that the government could not use its models for mass domestic surveillance or to power fully autonomous weapons—arguably violating Hegseth’s rule against usage constraints.
Consequently, in late February, Hegseth threatened to cut ties with Anthropic unless the company allowed the military to use its AI for “all lawful purposes.” When Anthropic refused, President Trump directed federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology,” deriding the firm as “A RADICAL LEFT, WOKE COMPANY.” He threatened to “use the Full Power of the Presidency to make [Anthropic] comply, with major civil and criminal consequences to follow.”
The DoD then designated Anthropic a “supply chain risk” under the Federal Acquisition Supply Chain Security Act of 2018, defined as an entity that “may sabotage, maliciously introduce unwanted function, extract data, or otherwise manipulate” the technology it provides “so as to surveil, deny, disrupt, or otherwise manipulate” the use of the technology or the “information stored or transmitted” thereon. The government has never applied this designation to a U.S. company; it is typically reserved for foreign intelligence agencies, terrorists, and hostile actors. As a result, Anthropic may not provide products or services to the DoD, and contractors may not use its products while working on DoD projects.
On March 9, Anthropic sued the administration in federal court, challenging the designation and seeking an injunction blocking its implementation. The company pleaded that the Trump administration has “harm[ed] Anthropic irreparably,” jeopardizing public and private contracts and costing it “hundreds of millions of dollars in the near-term,” as well as attacking “Anthropic’s reputation and core First Amendment freedoms.”
On March 26, the District Court for the Northern District of California sided with Anthropic and granted a preliminary injunction barring a variety of federal agencies from terminating their contracts. The court also blocked the DoD and Hegseth from implementing the supply chain risk designation. U.S. District Judge Rita Lin observed that the Trump administration is “punishing Anthropic for bringing public scrutiny to the government’s contracting position,” which “is classic illegal First Amendment retaliation.” Last week, the administration appealed the ruling to the Ninth Circuit.
Hegseth accused Anthropic of “duplicity,” but it is the Trump administration that has been duplicitous about its approach to AI. Despite championing deregulation, the administration has weaponized the federal government to punish an American AI company for refusing to bend to its will. Abusing the government procurement process to crush domestic AI firms is the opposite of light-touch regulation.
Judge Lin described the Trump administration’s actions against Anthropic as “Orwellian.” The administration has shown its ugly side on AI, and it looks a lot like tyranny.
Andy Jung is associate counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.
Filed Under: ai, ai policy, defense department, dod, donald trump, free market, pete hegseth