FSNN | Free Speech News Network
Sunday, March 1
Cryptocurrency & Free Speech Finance

US Military Used Anthropic AI in Iran Strike Despite Trump Ban: Report

By News Room | 12 hours ago | 3 Min Read


The US military reportedly used Anthropic's AI during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company's systems.

Military commands, including US Central Command (CENTCOM), which oversees operations in the Middle East, used Anthropic's Claude AI model for operational support, according to people familiar with the matter cited by The Wall Street Journal. The tool has reportedly assisted with intelligence analysis, identifying potential targets, and running battlefield simulations.

The incident shows how deeply advanced AI systems have become embedded in defense operations. Even as the administration moved to sever ties with the company, Claude remained integrated into military workflows.

On Friday, the Trump administration instructed agencies to stop working with the company and directed the Defense Department to treat it as a potential security risk. The order came after contract talks broke down, with Anthropic refusing to grant unrestricted military use of its AI for any lawful scenario requested by defense officials.


Anthropic’s Claude AI used for classified operations

Anthropic had previously secured a multiyear Pentagon contract worth up to $200 million alongside several major AI labs. Through partnerships involving Palantir and Amazon Web Services, Claude became approved for classified intelligence and operational workflows. The system was reportedly also involved in earlier operations, including a January mission in Venezuela that resulted in the capture of President Nicolás Maduro.

Tensions intensified after Defense Secretary Pete Hegseth demanded the company permit unrestricted military use of its models. Anthropic CEO Dario Amodei rejected the request, describing certain applications as ethical boundaries the company would not cross, even if it meant losing government business.

In response, the Pentagon began lining up replacement providers, reaching an agreement with OpenAI to deploy its AI models on classified military networks.

[Image] OpenAI faces backlash after reaching deal with US military. Source: Sreemoy Talukdar


Anthropic CEO pushes back on Pentagon ban

In an interview on Saturday, Amodei said the company opposes the use of its AI models for mass domestic surveillance and fully autonomous weapons. He was responding to a US government directive that labeled the firm a defense "supply chain risk" and barred contractors from using its products.

He argued that certain applications cross fundamental boundaries, emphasizing that military decisions should remain under human control rather than be delegated entirely to machines.
