The Internet Still Works: Reddit Empowers Community Moderation

By News Room · 3 hours ago · 8 min read · 1,506 views
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

Reddit is one of the largest user-generated content platforms on the internet, built around thousands of independent communities known as subreddits. Some subreddits cover everyday interests, while others host discussions about specialized or controversial topics. These communities are created and moderated by volunteers, and the site’s decentralized model means that Reddit hosts a vast range of user speech without relying on centralized editorial control. 

Ben Lee is Chief Legal Officer at Reddit, where he oversees the company’s legal strategy and policy work on issues including content moderation and intermediary liability. Before joining Reddit, Lee held senior legal roles at other tech companies including Plaid, Twitter, and Google. At Reddit, he has been closely involved in litigation and policy debates surrounding Section 230, including cases addressing the legal risks faced by platforms and their users and moderators. He was interviewed by Joe Mullin, a policy analyst on EFF’s Activism Team.

Joe Mullin: When we talk about user rights and Section 230, what rights are most at stake on a platform like Reddit? 

Ben Lee: Reddit, we often say, is the most human place on the internet. What's often missing from the debate is that Section 230 protects people, not platforms.

It protects millions of everyday humans and volunteer moderators who participate in online communities. Without it, people could face lawsuits for voting down a post, enforcing community rules, or moderating a discussion. These are foundational activities on Reddit, and frankly, the whole internet.

If you had to describe Section 230 to a regular Reddit user without naming the law, what would you say it does for them?

Section 230 protects your ability to participate in community moderation.

Even if all you are doing is upvoting or downvoting content, that's participation. On Reddit, everyone is a content moderator through voting. Upvoting determines the visibility of content.

We believe, strongly, this is one of the only models to allow Reddit to scale. You make the community part of the moderation process. They’re invested in the community, making it better. 
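The voting model described above can be sketched as a simple score-based visibility filter. This is an illustrative toy model only; the class, function, and threshold names are hypothetical and are not Reddit's actual ranking algorithm.

```python
# Toy sketch of community moderation by voting: each vote adjusts a
# post's score, and the score determines whether the post is visible.
# Names and the hide threshold are hypothetical, not Reddit's real system.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes


def visible_posts(posts, hide_below=-5):
    """Rank by score; hide posts the community has voted far down."""
    ranked = sorted(posts, key=lambda p: p.score, reverse=True)
    return [p for p in ranked if p.score > hide_below]


posts = [
    Post("helpful answer", upvotes=12, downvotes=1),
    Post("off-topic spam", upvotes=0, downvotes=9),
    Post("ok comment", upvotes=3, downvotes=2),
]
for p in visible_posts(posts):
    print(p.title, p.score)
```

In this sketch the community's aggregate votes, not any central editor, decide what stays visible, which is the point Lee is making about distributed moderation.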

How would user speech be affected if Section 230 were eliminated or weakened? 

We would undermine community self-governance: the notion that humans can do content moderation and take that responsibility for themselves, whether you're a small blog or a big forum. I like to think of Reddit as composed of this federation of communities that range from the tiny to the humongous. That's what the internet is!

The legal risk would discourage people from moderating, or even speaking at all. The kind of speech we’re trying to protect is often critical of powerful people or entities. If a moderation decision leads to litigation from those powerful entities, that’s an expensive proposition to fight. 

Reddit relies on user-run communities and volunteer moderators. Can you walk me through how content moderation and legal complaints actually work in practice, and where Section 230 comes into that?

We have a tiered structure, like our federal system. Each community is like a state: it has its own rules, and enforces them. The vast majority of content moderation decisions are made by the communities, not by Reddit itself. 

Reddit is built on self-governing communities that are moderated by volunteers, supported by automated tools. Section 230 gives Reddit the freedom to experiment, and lets users shape healthy, interest-based spaces.
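The tiered structure Lee describes, site-wide rules layered under per-community rules, could be sketched as follows. This is a hypothetical illustration; the rule names and the `Community` class are invented for this example and do not reflect Reddit's implementation.

```python
# Hypothetical sketch of tiered moderation: site-wide rules apply in
# every community, and each community layers its own rules on top,
# like states adding laws under a federal system.
SITE_RULES = {"no harassment", "no spam"}


class Community:
    def __init__(self, name, local_rules):
        self.name = name
        # Local rules extend (never replace) the site-wide rules.
        self.rules = SITE_RULES | set(local_rules)

    def allows(self, violated_rules):
        """A post stays up only if it breaks no rule that applies here."""
        return not (set(violated_rules) & self.rules)


screenwriting = Community("screenwriting", {"no contest promotion"})
# A spam post is removed in every community (site rule); a contest
# promotion is removed only where the community has banned it.
print(screenwriting.allows([]))                # clean post stays up
print(screenwriting.allows(["no spam"]))       # removed by site rule
```

The design point is that most enforcement decisions happen in the community layer, with the site layer as a floor, which matches the "vast majority of content moderation decisions are made by the communities" claim above.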

Section 230 is fundamental to protecting moderators from frivolous lawsuits. A screenwriting community might want to protect its members from scammy competitions, and then it gets sued by one of those competitions.

Or a community wants to keep their conversation civil. And, for example, may not allow Star Trek characters to be called “soy boys,” and they enforce that. Then a person sues. 

I wish these were hypotheticals. But they were actual lawsuits. And we have them, routinely. 

What are policymakers missing about Section 230? 

The [moderation] decisions being criticized in court are decisions to try to make the internet safer. In none of the cases that I mentioned is there a moderator saying, "I want to increase harmful content!" These are good-faith decisions about what makes the internet better.

Section 230 is, at its core, protecting the ability for people to make those choices for their own communities. 

There's a price to be paid for not having Section 230. And it will be paid by internet users, not the biggest platforms.

Some see Section 230 as a way to punish Big Tech. But removing it doesn't punish Big Tech; it makes them more powerful. It's startups, community-driven platforms, and individual moderators who rely on Section 230 to compete and innovate. Weakening Section 230 would harm the open internet and reduce its choice, diversity, and resilience.

The big guys, they have armies of lawyers. They have the budget to withstand a flood of lawsuits. Weakening Section 230 just entrenches them. 

In Reddit’s amicus brief in the Gonzalez v. Google Supreme Court case, you point out that without Section 230, many moderation decisions wouldn’t be protected. The brief states: “A plaintiff might claim emotional distress from a truthful but hurtful post that gained prominence when a moderator highlighted it as a trending topic. Or, a plaintiff might claim interference with economic relations arising from an honest but very critical two-star restaurant review.” 

When you have situations where moderators get threats or litigation, what can you do? 

We have had cases where our own moderators got sued, along with us. In the “soy boy” case, we worked to help find pro bono counsel for the moderators. 

Someone posted “Wesley Crusher is a soy boy,” and it got removed. I’m enough of a Star Trek fan that I understand both the reference, and why the moderator decided—“hey, it’s gone. I don’t want this here.”

This would not violate our Reddit rules. But the community took it down under its own rules about being civil. It was just not a kind-hearted action, and the community had a right to decide. 

But the moderator got sued. We got sued, actually, because the poster disagreed with that moderation choice. Section 230 is what allowed us to win that case. 

These are just average people, implicated only because they moderated their own community. They are trying to do the right thing by their community. 

In cases where litigation happens, when does Section 230 come into play? 

Section 230 is usually one of the first things raised in a case. It's usually the most effective way of saying: if you believe someone has defamed you, please go to the person who defamed you. Looking to the moderator, or to Reddit itself, is not a great way of getting the justice you seek.

Is there a different workflow internationally? 

There's a very different workflow. We had a prominent case in France where a company was trying to sue moderators, and of course, we didn't have Section 230 to protect them. So we had to find other ways to protect them, and it got much more complicated.

The breadth of content that’s considered illegal in certain jurisdictions can be somewhat breathtaking. 

Our goal is always to preserve as much freedom of expression as possible for our community. In the U.S., we look at it through the lens of the First Amendment, and other aspects. Outside the U.S., we rely more on the lens of international human rights. 

How would you characterize legal demands around user content, the ones you see most often? 

They tend to be: somebody said something mean about me, take this down. Or someone says: you didn't allow me to say something mean about someone or some entity. It runs the full spectrum.

One law that has already passed that weakens Section 230 is SESTA/FOSTA. From Reddit’s perspective, what changed after that? 

There are some communities we had to shut down, in particular support communities. There was a cost. Every time Section 230 is narrowed, there's a cost: some types of speech and communities have a harder time staying online.

The cost may not seem high to some people, because those communities are not for them. But if they visited them, they’d see that these are actual people, interacting in a positive way. If it wasn’t positive, we have rules for that—but that’s a different question. 
