from the another-one-bites-the-dust dept
It’s been a little while since we last wrote about California’s deeply problematic “Age Appropriate Design Code,” which tried to force internet companies into taking blatantly unconstitutional steps to magically prevent all “harms” to kids. The law has bounced between the district court and the Ninth Circuit multiple times — and yesterday, once again, most of the law was deemed effectively unconstitutional and tossed out. The ruling is procedurally messy in annoying ways, though most of that blame belongs to the Supreme Court. We’ll get to that.
The bill, in somewhat troubling fashion, was drafted and heavily pushed by a British Baroness/Hollywood director who made a documentary about kids and smartphones, and got so freaked out by her own documentary that she decided to single-handedly destroy the open internet for children, First Amendment be damned. Trade association NetChoice challenged the law (as it has challenged many state laws) and has been mostly successful.
As I explained to a court myself, the law was both impossible to comply with and a clear attack on free expression. The district court agreed and threw out the law as unconstitutional. The case then went to the Ninth Circuit, which mostly agreed that the law was unconstitutional. Unfortunately, right before the Ninth Circuit ruled, the Supreme Court’s Moody decision made a mess of things. While that ruling effectively killed unconstitutional bills in Florida and Texas that sought to regulate social media, it also went deep into the silly weeds, holding that challenging an entire law as unconstitutional on its face (a “facial challenge”) requires meeting a nearly impossible set of standards, and signaling that parties should instead challenge laws “as applied” (i.e., once a law is actually violating people’s rights directly).
Because of that, part of the law was sent back to the lower court, where it was again deemed unconstitutional and blocked by injunction. That ruling was then appealed, leading to this latest Ninth Circuit decision, which lifts part of the injunction and sends the case back down to the lower court yet again. But it effectively wipes out large parts of the Age Appropriate Design Code as clearly unconstitutional. Basically, all the parts of the law that actually do things are dead, because they pretty clearly regulate speech in violation of the First Amendment.
It’s a good ruling, though made slightly annoying by the procedural situation created by the Supreme Court’s Moody ruling.
Digging in: the court struck down most of the scary, problematic provisions of the law, rightly noting that they violated the First Amendment on vagueness grounds. First up were provisions that tried to limit how websites could handle a child’s personal information, but this was a smokescreen. While the law was dressed up to look like a “privacy” bill, it really sought to control what content kids could read, saying you couldn’t use data in a way that harmed the “well-being” of a child, and that any use had to be in the “best interests” of the child. There was also a provision covering whether the data was used in a way that was “materially detrimental” to the child. These are all super vague terms that really meant “don’t show kids content that might make them sad.”
The court said this is a problem:
NetChoice persuasively argues that the risk of subjective enforcement is particularly high because, as it contemplates “material detriment” to “a child,” the provision must be assessed as to any single child whose personal information is accessed by a covered online practice.
California argued there was no problem with requiring sites to design their systems in the “best interests” of children, but the court rightly notes that you can’t craft general rules that accomplish that:
When evaluating the “best interests of the child” in family law proceedings, California courts recognize that “bright line rules in this area are inappropriate” and that “each case must be evaluated on its own unique facts.” In re Marriage of LaMusga, 88 P.3d 81, 91 (Cal. 2004) (citation modified). The standard operates through a specific child’s circumstances and factual record. See id. The data use restrictions ask something categorically different: covered businesses must determine prospectively whether a given practice is in “the best interests of” not any one child but “children”—a class of users that includes every child anywhere who can access a covered online practice. Cal. Civ. Code § 1798.99.31(b)(1)–(4) (emphasis added). Then covered businesses must tailor their practice accordingly. Applied at that scale, without the individualized, highly specific factual record giving the standard meaning in contexts such as a family law case, “best interests of children” cannot provide “sufficient notice of what is proscribed,”
Then there’s the issue of “dark patterns,” one of my least favorite terms to become popular over the last few years. In practice it’s become a catch-all for ‘anything on any website that makes people do things I don’t like,’ and it’s not remotely well-defined. That’s a problem when a law has to clear the vagueness bar to be acceptable under the First Amendment:
As with the data use restrictions, the State’s plain-meaning argument is unconvincing where the range of harms that could plausibly qualify as “materially detrimental” is vast, spanning everything from financial exploitation to sleep loss, distraction, or hurt feelings. The fact that “dark pattern” is a defined term in the CAADCA does not help a covered business distinguish between these harms. And the prohibition’s use of the singular “child,” like in the data use restrictions, suggests that it is actionable based on a single child’s response to an online interface, meaning that a business designing a product accessed by millions of child users could face liability whenever any one of them experiences a harm that a regulator deems “material.”
So that’s gone too.
The court also highlights how the state leans on fear-mongering about extreme edge cases, while the law’s actual vagueness would fairly obviously lead to mass censorship as sites over-remove content to avoid potential liability:
The State cites examples like “using a child’s information to connect them to a person that seeks to abuse the child, such as through sexploitation,” or “[u]sing a child’s information to recommend illegal products such as tobacco, alcohol, or gambling[.]” But these are extreme examples at the margins of what might be materially detrimental to a child’s well-being. The more difficult questions arise with examples like sleep loss, distraction, or hurt feelings. As the district court reasoned, and NetChoice argues on appeal, the CAADCA does not provide any guidance as to the breadth of conduct that “material[] detriment[] to the physical health, mental health, or wellbeing of a child” may reach.
This is what happens when headline-chasing regulators write laws based on moral panics and feel-good concepts like ‘well-being of children,’ assuming that either websites will nerd harder and somehow make it work, or the courts will sort it out on the back end.
But that’s not how the First Amendment works. There’s a reason the vagueness doctrine exists: to throw out laws that try to tapdance around the First Amendment this way.
That said, not all of the ruling goes NetChoice’s way. Indeed, early on, the ruling gives a bit of a benchslap to NetChoice for continuing to challenge this law “facially” without meeting the near-impossible standard set up by the Supreme Court in Moody:
NetChoice has been a party to many such cases—several before our court and the Supreme Court—and is presumably aware of the expectations for a facial challenge. At the risk of repetition, we offer similar guidance to NetChoice today.
The Moody ruling basically said that if you’re bringing a facial challenge, you need to detail every possible application of the law and then show that the unconstitutional applications substantially outweigh the legitimate ones. That’s effectively impossible here, because the law is written so broadly that it reaches plenty of conduct beyond speech. And because the law also applies to commerce and other non-expressive activity, the facial challenge parts fail:
First, the State persuasively argues that whether “it is reasonable to expect” that a business’s “online service, product, or feature would be accessed by children,” … “says nothing about the nature of the business providing that service, product, or feature.” Indeed, as the State proffers, children “are capable of using ride sharing service[s] like Lyft or Waymo, electronic ticketing services such as StubHub or Ticketmaster, financial transaction services such as Paypal or Venmo, fitness products such as NFL Play 60 or Peloton, health-related services such as iHealth, or education-related products such as Wolfram Mathematica.” The CAADCA’s substantive requirements would “appl[y] evenhandedly” to any of these businesses if they are likely to be accessed by children, regardless of the content available through their online service.
This seems silly, but it’s what the Supreme Court now requires. Send your complaints to them, not the Ninth Circuit. The court effectively admits that the Supreme Court set an impossible standard in Moody:
To be sure, as we observed in NetChoice SB 976, “[d]oing so would entail the ‘daunting, if not impossible’ task of canvassing how the Act applies to an ‘ever-growing number of apps, services, functionalities, and methods for communication and connection.’” Id. at 1021 (first quoting Moody, 603 U.S. at 745 (Barrett, J., concurring); and then quoting Moody, 603 U.S. at 725 (majority opinion)). We recognized that “such a showing” might be “unrealistic.” Id. But we nevertheless stated then, and maintain now, that “[i]t is a mystery how NetChoice could expect to prevail on a facial challenge without candidly disclosing the platforms that it thinks the challenged laws reach” and whether the coverage definition unduly burdens those platforms’ expression.
There is also a separate question of how a facial challenge to a law like this could even be mounted within the sort of page limits courts impose.
What this means, in practice, is that states can insulate a law from a facial challenge just by writing it as crazily broadly as possible, so that it becomes impossible to catalog all the many, many ways it might be enforced. That seems really bad. But, thanks to this Supreme Court, it’s what we’ve got.
The court does send the “age estimation” part back to the lower court, mostly because it says the record isn’t well enough developed (meaning we get to go through all of this yet again). There is some troubling language about last year’s FSC v. Paxton ruling on age verification. As you’ll recall, the very prudish conservative wing of the Supreme Court effectively overturned a couple decades’ worth of precedent to say “age verification online is fine for porn, because porn is not protected by the First Amendment when kids see it.”
Many people insisted that this ruling was okay because it was limited to adult content, but so far we’ve seen state after state — and a few courts — suggest that it’s now “open season” on age verification laws. The language that shows up here is worrisome, at the least, because it suggests the Ninth Circuit is open to a broad reading of the Supreme Court’s ruling:
NetChoice’s reading of Free Speech Coalition v. Paxton, 606 U.S. 461 (2025), also does not persuade. Free Speech Coalition considered a statute that required covered entities to make adult website visitors submit to an age verification system using either “government-issued identification” or “a commercially reasonable method that relies on public or private transactional data.” Id. at 467 (quoting Tex. Civ. Prac. & Rem. Code § 129B.003(b)(2)). The Supreme Court observed only that, with respect to that system, there is an “incidental burden that age verification necessarily has on an adult’s First Amendment right to access speech that is obscene only to minors.” Id. at 495. The Court said nothing about the effect of age estimation on First Amendment burdens generally, especially where age estimation is not required as a precondition to access content. To the contrary, the Court observed that “adults have no First Amendment right to avoid age verification, and the [challenged law] can readily be understood as an effort to restrict minors’ access.”
To the extent NetChoice argues that the age estimation requirement “require[s] consideration of content or proxies for content,” see NetChoice I, 113 F.4th at 1118, the age estimation requirement may impliedly regulate speech—but we cannot confidently draw that conclusion on this record, either.
More and more for the courts to argue about, I guess.
There’s also another bit of the lawsuit that has been revived: “severability,” the question of whether some parts of the law can be kept even as the bigger parts are struck down as unconstitutional. It’s another issue for the parties to argue about in more detail at the lower court, but not really the main point of all of this. The specifics here are that the law has a “notice-and-cure” provision that kicks in if the Attorney General finds a website to be violating the law. So there’s a question of whether that specific provision can be left alive, though I’m unsure what good it does if the rest of the law is found to be unconstitutional. But, as the appeals court notes, this is all basically on an underdeveloped record, so it too is headed back to the lower court for more.
Either way, the key elements of California’s AADC have now been struck down as unconstitutional at the Ninth Circuit — for the second time — after two prior rejections at the district court level. The data use provisions and the dark patterns nonsense are gone on vagueness grounds. In some ways, that’s actually a stronger outcome than if the initial facial challenge had succeeded: there’s now clear appellate language explaining why this kind of vague “well-being” standard can’t survive First Amendment scrutiny. California could theoretically go back and try to define things more narrowly, but chances are they’d find themselves right back at the First Amendment wall, because their ultimate goal has always been censorship dressed up as child safety.
The annoying part is the procedural mess the Supreme Court’s Moody decision created. We’re heading into round three at the district court, burning more time and resources on a law that should have been dead on arrival. This is exactly what we warned Governor Gavin Newsom and Attorney General Rob Bonta would happen when they first backed this law. They got the political headlines. Everyone else got years of litigation. And the law they championed is now a procedural zombie — technically still breathing, but stripped of everything that made it dangerous in the first place.
Filed Under: 1st amendment, 9th circuit, aadc, ab 2273, age verification, baroness beeban kidron, best interests, california, censorship, dark patterns, data use, facial challenge, free speech, gavin newsom, kids code, moody, protect the children, vagueness
Companies: netchoice