FSNN | Free Speech News Network
Media & Culture

Restricting Speech By Purportedly Protecting Children

By News Room | 1 hour ago | 10 Mins Read
While governments around the world have imposed speech restrictions to fight misinformation and hate speech, they also have attempted to curb free speech for a less controversial reason: protecting children. But many of these restrictions stem from vague, unspecified, or speculative harms and corral wide swaths of speech that do not harm children. Censoring speech in the name of protecting children is not a terribly new phenomenon, especially in authoritarian countries. In 2012, for instance, Russia’s parliament passed a law allowing the country’s media censorship agency to unilaterally blacklist websites and take them offline, without any court approval. The lawmakers’ justification was protecting children from online harm, but civil liberties groups correctly predicted that the government would use these powers to curb far more speech. In recent years, such efforts have moved beyond authoritarian countries and taken hold in Western democracies.

The United States has seen repeated attempts to curb speech in the name of saving the children. Although they have failed, governments have continued to try over many decades. In 1969, the US Supreme Court struck down the Des Moines, Iowa, school district’s ban on black armbands worn to protest the Vietnam War, writing that “state-operated schools may not be enclaves of totalitarianism.” In 1997, the Supreme Court invalidated much of the Communications Decency Act, which criminalized the online transmission of “indecent” content to minors, writing that the “interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship.” And in 2011, the court struck down a California law that banned sales of “violent video games” to minors, writing that the First Amendment does not give the government “a free-floating power to restrict the ideas to which children may be exposed.”

The moral panic did not stop with those cases. Across the country, states are scrambling to address the harms associated with minors’ use of social media. Many high-profile commentators and politicians have criticized social media for harming the mental health of teenagers, though there is substantial debate as to whether they have presented sufficient evidence of causation. In May 2023, then-Surgeon General Vivek Murthy issued an advisory on social media and youths’ mental health: “The most common question parents ask me is, ‘Is social media safe for my kids?’ The answer is that we don’t have enough evidence to say it’s safe, and in fact, there is growing evidence that social media use is associated with harm to young people’s mental health.”

States have stepped in to try to regulate social media. Among the highest profile recent attempts is Utah’s Minor Protection in Social Media Act, which the state legislature enacted in March 2024. The Utah law requires social media companies to “implement an age assurance system to determine whether a current or prospective Utah account holder on the social media company’s social media service is a minor.” For minors who have accounts, social media companies must impose a number of restrictions, including setting “default privacy settings to prioritize maximum privacy,” limiting direct messaging abilities, disabling search engine indexing of their profiles, and limiting a minor’s ability to share content with others. Those privacy settings cannot be changed without verifiable parental consent. The law also requires social media companies to disable functions that “prolong user engagement” for minors, such as autoplay functions.

The Utah law does not apply to all platforms, however. It only restricts “social media companies,” which it defines as a “public website or application” that mainly displays content created by users, permits those individuals to create public accounts, allows them to “interact socially with each other,” provides them with lists of other users with whom they are connected, and lets them post content that others can see. The law explicitly excludes cloud storage and email from the definition of “social media company.”

Why did the Utah legislature see the need to impose such limits on minors’ use of social media? In its findings, the legislature discussed the negative mental health impacts of “the addictive design features of certain social media services” and asserted that the platforms “are designed without sufficient tools to allow adequate parental oversight, exposing minors to risks that could be mitigated with proper parental involvement and control.” The legislature rationalized that it has “enacted safeguards around products and activities that pose risks to minors,” such as medications and cars. Missing from the state’s justification was the acknowledgement that unlike, say, car safety regulations, Utah’s social media law involves First Amendment–protected speech. Not surprisingly, the technology trade group NetChoice, along with Utah residents, sued the state, alleging that the law violates the First Amendment.

Central to NetChoice’s case was the argument that the statute’s definition of “social media company” would lead to over-regulation of protected speech. “Using a vague content-, speaker-, and viewpoint-based definition of ‘social media company,’ the Act imposes restrictions on certain websites’ ability to disseminate and facilitate the speech of their users,” NetChoice wrote in its motion for a preliminary injunction blocking the law. “Yet there is a fundamental mismatch between the State’s putative goals in regulating certain means of disseminating speech, and the Act’s haphazard regulation of certain websites. The Act does not regulate many websites across the Internet that use the same means of disseminating speech the Act restricts, while simultaneously burdening many websites that do not use those means at all.”

On September 10, 2024—less than a month before the law was set to go into effect—Utah federal judge Robert J. Shelby issued a preliminary injunction blocking the law. Speech regulations are particularly difficult to justify under the First Amendment if they are “content based.” And Shelby concluded that the Utah law is content based, because it only applies to platforms that the law “singles out [as] social media companies” and does not apply to other platforms.

Content-based speech regulations survive First Amendment challenges only if they are narrowly tailored to serve compelling state interests. Shelby concluded that Utah fell short of making that case, writing that although he “is sensitive to the mental health challenges many young people face,” the state has not “provided evidence establishing a clear, causal relationship between minors’ social media use and negative mental health impacts.” And even if Utah had a compelling interest, Shelby stated, the law is not narrowly tailored to advance that goal. He suggested that parents — not the government — should be the arbiters of the content their children see and share on social media: “While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls.”

Shelby also questioned the efficacy of the Utah law, noting that it “ultimately preserves minors’ ability to spend as much time as they want on social media platforms.” That weakens the state’s argument that the act is necessary to combat excessive use of social media. At the same time, Shelby found that the law blocks far more protected speech than necessary to achieve its goals: “Specifically, Defendants have not identified why the Act’s scope is not constrained to social media platforms with significant populations of minor users, or social media platforms that use the addictive features fundamental to Defendants’ well-being and privacy concerns.” Utah has appealed the ruling to the Tenth Circuit.

Speech restrictions in the name of child safety are not limited to the state level. Throughout 2024, members of Congress advocated for various versions of the Kids Online Safety Act, which would impose a duty of care on online platforms to “prevent and mitigate” online harms to children, with enumerated harms including eating disorders, suicide, and substance abuse. Senator Richard Blumenthal (D-CT), the bill’s sponsor, defended the duty of care as a standard requirement in many sectors. “Companies in every other industry in America are required to take meaningful steps to prevent users of their products from being hurt, and this simply extends that same kind of responsibility to social media companies, too,” he said on his website.

But, like the Utah law, the federal proposal could cause platforms to over-censor legitimate educational materials about those topics, out of fear of liability. In a July 2024 letter to lawmakers, civil liberties groups, including the ACLU and the Electronic Frontier Foundation (EFF), noted the bill’s free-speech problems: “One common concern among these diverse groups is the Duty of Care requirements that may cause companies to take down content to avoid liability. This could lead to aggressive filtering of content by companies preventing access to important, First Amendment–protected, educational and even lifesaving content.”

Such threats to free speech are not limited to the United States. In 2023, the United Kingdom’s Parliament approved the 300-page Online Safety Act, a sweeping set of mandates for online platforms. Among the most troubling, from a free-speech perspective, is a duty of care to prevent harms to children; the concerns include the vagueness of the law’s requirements and the delegation of broad enforcement powers to Ofcom, the UK’s communications regulator.

The duty of care is not the only concerning aspect of the UK law. It also allows Ofcom to compel platforms to search for illegal content, something that the Electronic Frontier Foundation says poses a real threat to the viability of end-to-end encryption. As the EFF wrote in 2023, “Such a backdoor scanning system can and will be exploited by bad actors. It will also produce false positives, leading to false accusations of child abuse that will have to be resolved. That’s why the OSB is incompatible with end-to-end encryption—and human rights.”

Another troubling aspect of the UK law is its requirement that websites verify the age of users, to block “harmful” online content from minors. As the EFF noted, “To prevent minors from accessing ‘harmful’ content, sites will have to verify the age of visitors, either by asking for government-issued documents or using biometric data, such as face scans, to estimate their age. This will result in an enormous shift in the availability of information online, and pose a serious threat to the privacy of UK internet users.” Such invasive verification practices threaten the ability of both minors and adults to access the internet anonymously.

Because the UK Online Safety Act is still being implemented, the full extent to which the government will use the law to censor speech remains unclear. But in a November 2024 policy paper, Peter Kyle, the UK’s Secretary of State for Science, Innovation, and Technology, indicated plans for expansive use of the new legal powers. For instance, Kyle wrote that “the growing presence of disinformation poses a unique threat to our democratic processes and to societal cohesion in the United Kingdom and must be robustly countered. Services should also remain live to emerging information threats, with the flexibility to quickly and robustly respond, and minimize the damaging effects on users, particularly vulnerable groups.” Kyle did not indicate precisely how the government might work with (or pressure) platforms to deal with misinformation. Nor did he say who determines what is “disinformation” or suggest ways to counter it. That vagueness is precisely the harm such laws pose: they empower large bureaucracies to claim sweeping mandates to decide what sorts of content are too harmful to be on the internet.

Excerpted from The Future of Free Speech: Reversing the Global Decline of Democracy’s Most Essential Freedom by Jacob Mchangama and Jeff Kosseff. Copyright 2026. Published with permission of Johns Hopkins University Press.
