FSNN | Free Speech News Network
Wednesday, March 11
Media & Culture

A First Amendment Right Not To Use AI for Evil?

By News Room · 3 hours ago · 11 Mins Read · 1,033 Views


Anthropic is suing the federal government over its response to the company's refusal to remove safeguards that prevent Anthropic's artificial intelligence system, Claude, from being used for mass domestic surveillance and killer robots.

In a lawsuit filed Monday, the company accuses the Trump administration of illegal retaliation. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," states the complaint.

The suit has kicked off a new round of debate about free speech for AI systems more broadly, in addition to raising critical questions about the government’s ability to compel tech companies to act in ways that company leaders consider unethical.

You are reading Sex & Tech, from Elizabeth Nolan Brown. Get more of Elizabeth’s sex, tech, bodily autonomy, law, and online culture coverage.

“When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to ‘IMMEDIATELY CEASE all use of Anthropic’s technology’—even though the [Department of Defense] had previously agreed to those same conditions,” states Anthropic’s complaint, filed in the U.S. District Court for the Northern District of California. “Hours later, the Secretary of War directed his Department to designate Anthropic a ‘Supply-Chain Risk to National Security,’ and further directed that ‘effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.'” (For more background on all this, see here and here.)

Rather than simply ending Anthropic’s military contract over this dispute, the Trump administration went on a campaign of “public castigation,” complains Anthropic.

Trump called it a “RADICAL LEFT, WOKE COMPANY” full of “Leftwing nut jobs” and directed “EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” A top Department of Defense official called Anthropic CEO Dario Amodei “a liar” with a “God-complex” who was trying “to personally control the US Military” and was “ok putting our nation’s safety at risk.”

This was followed up by Defense Secretary Pete Hegseth declaring Anthropic a supply-chain risk and federal agencies across the board terminating their contracts with the company.

I don’t think there’s any disputing that this was an absurd and bullying overreaction, injurious to free markets and unbecoming of a free and democratic country. No company should be compelled to let the U.S. military use its tech tools for whatever authorities want, and no company should be retaliated against for this refusal.

But the grounds on which Anthropic is suing are interesting—and controversial. The company argues that in addition to violating federal administrative law, the administration had attacked its “core First Amendment freedoms.”

“The Constitution confers on Anthropic the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety,” states its complaint. “The government does not have to agree with those views. Nor does it have to use Anthropic’s products. But the government may not employ ‘the power of the State to punish or suppress [Anthropic’s] disfavored expression.'”

A group of organizations friendly to civil liberties and the First Amendment, including the Foundation for Individual Rights and Expression (FIRE) and the Cato Institute, has filed a court brief in support of Anthropic's position, arguing that "the Pentagon's temper tantrum is a textbook violation of Anthropic's First Amendment rights."

According to these groups, it’s not just statements by Anthropic leaders that are protected—it’s the AI system itself.

“Claude is fundamentally expressive,” their brief states. “The Pentagon’s demand that Anthropic remove safeguards on that system—to change what Claude must and may say, analyze, and refuse—asks Anthropic to make a trade on a core freedom of expression.”

Anthropic makes a similar claim in its complaint, suggesting that First Amendment protection “extends to its Usage Policy,” which “has never permitted Claude to be used for mass surveillance of Americans or for lethal autonomous warfare.”

But is the policy governing Claude’s outputs really speech, or a form of conduct?

Are all AI systems speech?

These are thorny questions First Amendment experts are still hotly debating.

University of Akron law professor Jess Miers is on Anthropic’s side on this one. We have an existing body of case law that says “that curating and disseminating expression (even via algorithms) is a protected editorial activity,” and “that’s precisely” what AI model developers like Anthropic do, Miers posted to BlueSky.

“Model developers meticulously curate the datasets that they deem important for shaping the model’s ‘worldview,'” Miers pointed out. “Those choices alone are editorial: what kind of information do I want my model to train on? How much of it? What sources do I trust? The data curation decisions shape the outputs.”

As Miers sees it, “DOD is effectively trying to force Anthropic to make different editorial decisions that reflect the views and goals of the Administration.”

Some think this is taking things too far.

“We don’t want everything an AI does to be covered by the First Amendment,” posted University of Minnesota law professor Alan Rozenshtein. “It will make regulation of what will increasingly be large portions of the economy impossible.”

“It’s true that AI output will often be protected speech, but that’s because it will implicate *listener’s* ability to access AI output,” Rozenshtein continued. “But here the AI output is primarily being used as *conduct* for use in government military systems. Anthropic absolutely has a First Amendment right to not be punished for its public statements. But the government has to have the right not to use a tool because it doesn’t like its output, and that’s impossible if the output is itself First Amendment.”

It’s possible that a court need not decide whether AI outputs are protected speech to find a First Amendment violation here.

Anthropic’s public statements about AI limits and safeguards and so on are obviously protected. So are its statements and petitions to the government.

And there’s at least a case to be made that the Trump administration went so hard after Anthropic precisely because of its very vocal rejection of what the administration was asking it to do.

One could argue that an objection to the limitation on Claude’s outputs motivated terminating Anthropic’s contract with the military, and that’s OK. But the remarkable public vitriol and the administration’s above-and-beyond punishment hinged on the fact that the company said no to the government forcefully and publicly—and that’s not OK.

The administration’s “needless and extraordinarily punitive actions, imposed in broad daylight, are a paradigm of unconstitutional retaliation,” Anthropic suggests in its complaint. They were “designed to punish ideological disagreement.”

“As limitations go, refusing to participate in the creation of a totalitarian police state or the production of killer robots seem reasonable lines to draw,” notes J.D. Tuccille. But whether the lines are reasonable or not doesn’t really matter—the government can “respect those limits or take its shopping needs elsewhere.” Instead, Trump and his allies chose a third option: throwing “public temper tantrums over Anthropic telling them ‘no.'”

The Trump administration’s overblown statements and the fact that it’s not just ending the defense contract but trying to prevent others from doing business with the company (through the supply-chain risk designation) make clear that it was punishing Anthropic “for its corporate beliefs,” suggests Tuccille.


Come watch me interview an AI avatar (and some humans) about orgasmic meditation and more. I’ll be moderating a panel in New York City tomorrow night about the case against former OneTaste leaders Nicole Daedone and Rachel Cherwitz and the demonization and regulation of alternative practices and beliefs more broadly.

In a first for me, one of the three panelists will be an AI avatar, since the person it represents, Daedone, is currently in federal prison; she was denied bail while awaiting sentencing (which is scheduled for later this month). Her AI avatar was trained on her books and lectures and “isn’t a mere chatbot or a summary — it’s a distillation of Nicole’s actual thinking, language, and philosophy, able to engage in real conversation about the ideas she has spent a lifetime developing,” per the event organizer’s summary. I have no idea what to expect, but it should be a fun experiment, nonetheless.

The free event takes place in Harlem and starts at 7 p.m. More info here.


New: A Houston woman is suing Tesla in Harris County, alleging that her Cybertruck, while using Tesla’s “Full Self-Driving mode” tried to drive the car off of a bridge. Here is the dashcam footage provided by her lawyers: www.chron.com/culture/arti…

— gwen howerton (@kissphoria.bsky.social) 2026-03-09T19:06:48.930Z


What bank tellers and iPhones can teach us about AI. David Oks looks at why ATMs didn't destroy bank teller jobs, but iPhones did. Since the first decade of this century, "bank teller employment has fallen off a cliff," a situation Oks attributes to smartphones. Oks has a theory about why:

When a technology automates some of what a human does within an existing paradigm, even the vast majority of what a human does within it, it’s quite rare for it to actually get rid of the human, because the definition of the paradigm around human-shaped roles creates all sorts of bottlenecks and frictions that demand human involvement. It’s only when we see the construction of entirely new paradigms that the full power of a technology can be realized. The ATM substituted tasks; but the iPhone made them irrelevant.

Could this theory have relevance for AI automation and jobs today?

The lesson is worth stating plainly. The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That’s because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks.

This has, I think, serious implications for how we’re thinking about AI.

People in AI frequently talk about the vision of AI being a “drop-in remote worker”: AI systems that can be inserted into a workflow, learn it, and eventually do it on the level of a competent human. And they see that as the point where you’ll start to see serious productivity gains and labor displacement.

[…] But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.

The real productivity gains from AI—and the real threat of labor displacement—will come not from the “drop-in remote worker,” but from something like Dwarkesh Patel’s vision of the fully-automated firm.

More here.


After years of the media uncritically accepting the “something must be done for the children” narrative, it’s nice to see some of them waking up to the reality we’ve been warning about from the start. https://t.co/AJkllMeEZQ

— Ari Cohn (@AriCohn) March 9, 2026


• An app that pledged to help men overcome pornography “addiction” wound up leaking “intimate data on hundreds of thousands of its users, including their masturbation habits, and lied about its security issues,” 404 Media reports.

• “Red states get Waymos. Blue states get studies”: Kelsey Piper on the culture of stalling in progressive government.

• OpenAI is delaying the rollout of its "adult mode" for ChatGPT.

• “Another meta-analysis finds near zero effects for screen time,” notes psychologist Chris Ferguson: 

Indeed another meta-analysis finds near zero effects for screen time.

This, despite many of the effect sizes being bivariate correlations, and the authors acknowledging many of the longitudinal studies failed to correct for the Time 1 outcome variable…a very basic control,… https://t.co/cbPFAZiVq5

— Chris Ferguson (@CJFerguson1111) March 10, 2026

• Real estate moguls Alon, Oren, and Tal Alexander were convicted on Monday of federal sex trafficking charges.

• Social media restrictions for minors in Florida and Georgia went to court this week. In the Georgia case, federal appellate judges appeared skeptical of the constitutionality of a law requiring minors to get parental permission to be on social media. The same judges are also considering Florida’s House Bill 3, which bans or restricts social media account creation for minors; Courthouse News Service has a rundown of yesterday’s oral arguments.

• The Whore D’ouvres newsletter explores the difference between defending “rape fantasies”—better termed “consensual non-consent”—and defending rape.

• Major publishers are suing the shadow library Anna’s Archive.
