Our book traces the waves of elite panic that drive governments to regulate “misinformation,” “disinformation,” and other speech that leaders believe is not in the best interests of the public. One wave of elite panic reached its peak in 2024. It was a pivotal year for the future of global democracy, as some 2 billion voters—about half the adult population of the globe—went to the polls, including voters in the United States, the European Union, France, the United Kingdom, Brazil, Indonesia, South Africa, Taiwan, Mexico, and India.
Despite a record number of eligible voters, the mood among many politicians, commentators, and media institutions was more fearful than celebratory. A New York Times article from January 2024 warned that “false narratives and conspiracy theories have evolved into an increasingly global menace,” and that “artificial intelligence has supercharged disinformation efforts and distorted perceptions of reality.” Experts cautioned that the combination of online influence campaigns and artificial intelligence had created a “perfect storm of disinformation” that threatened free and fair elections.
The EU-funded European Digital Media Observatory (EDMO) warned that disinformation campaigns had become “a pervasive phenomenon,” with more voters exposed than ever before. An anonymous senior EU official highlighted the threat from “tsunami levels” of disinformation: “It’s as if we have been infected by this foreign interference. It’s a silent killer.” Not to be outdone, Věra Jourová, the European Commission’s vice president for values and transparency, said AI deepfakes of politicians could create “an atomic bomb … to change the course of voter preferences.” To counter this threat, the European Commission sent menacing letters to social media platforms and dispatched crisis units, expecting to deal with attempts to cast doubt on the legitimacy of the election’s outcome for weeks after the vote.
At the Copenhagen Democracy Summit in May 2024, just a month before the European Parliamentary elections, Ursula von der Leyen, the president of the European Commission and then a candidate for reelection, made a significant pledge. She promised to prioritize a new “European democracy shield” to combat foreign interference. One aspect of this shield would focus on detecting “malign information or propaganda” and, once identified, ensuring such content is “swiftly removed and blocked” by online platforms. This would build on—and likely expand—new obligations under the Digital Services Act. The shield would essentially normalize the kind of emergency measures the European Union had already adopted to ban and block Russian state-sponsored media in the wake of Putin’s attack on Ukraine in February 2022.
A few days after the invasion of Ukraine, the European Union suspended the broadcasting activities of the state-sponsored media outlets Russia Today (RT) and Sputnik, claiming that Russia was engaging in a “systematic, international campaign of media manipulation and distortion of facts” that threatened the democratic order in EU member states. On March 4, 2022, the European Commission clarified that social media companies “must prevent users from broadcasting … any content of RT and Sputnik”—a clarification broad enough to include content posted by users attempting to counter Russian propaganda. The list has since been expanded to cover more than a dozen Russian media outlets.
Josep Borrell, the EU’s High Representative at the time, defended the move on the grounds that Russian disinformation was “a major threat for the liberal democracies,” because “if information is manipulated … their choices are biased.” Borrell then jumped to the conclusion that by banning RT and Sputnik, “we are not attacking the freedom of expression, we are just protecting the freedom of expression.” One might argue that this Orwellian statement was itself an exercise in disinformation.
The EU’s General Court upheld the ban on RT and Sputnik, calling it necessary to stop a “vehicle for propaganda” supporting Russian aggression, even though no member state was at war. While the court claimed the ban’s temporary nature preserved freedom of expression, the conditions for lifting it—including that Russia must “cease propaganda actions against the Union”—made its temporary status more theoretical than practical.
Were these fears about online disinformation justified? The 2024 European Parliamentary elections took place from June 6 to 9 across the twenty-seven member states. These were followed by snap elections in France (June 30 and July 7) and the United Kingdom (July 4). Contrary to the alarmist narratives that preceded this massive exercise of democracy, neither fake news nor foreign interference subverted the will of the people. EDMO, which had warned about potential problems with the elections, concluded that “no major last-minute disinformation-related incidents have been detected.” Nor were the elections affected by the much-hyped deluge of deceptive deepfakes. In September 2024, the Alan Turing Institute—the United Kingdom’s national institute for data science and AI—analyzed AI disinformation in the European Union, French, and British elections. It found “no clear evidence that such threats had any impact on influencing large-scale voter attitudes or election results.”
The stark contrast between elite panic alarmism and reality on the ground should not have come as a surprise. It echoed the panic surrounding the 2019 European elections. Back then, European Commission President Jean-Claude Juncker warned that “in our online world, the risk of interference and manipulation has never been higher.” When the elections were over, the Commission concluded that no widespread disinformation campaigns had been identified, a finding shared by independent researchers. These concerns were largely fueled by the assumption that Russian disinformation had influenced the 2016 US presidential election, bringing Donald Trump to power. Yet several studies have raised serious questions about the impact of disinformation campaigns (Russian and otherwise) on elections more broadly. As the authors of a 2023 study using longitudinal survey data concluded, “We find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”
Even after these signs of democratic resilience, elite warnings about catastrophic disinformation continued at full volume. EDMO, sounding like a medieval inquisitor scouring for heretics, declared, “The European information space must be kept clean and monitored all the time.” European politicians agreed. On von der Leyen’s reelection as commission president on July 18, 2024, she reiterated her proposal for a European Democracy Shield. That same month, Cyprus—an EU member state—proposed a law criminalizing the spread of “fake news” with up to five years of imprisonment. After Germany’s 2025 election, the new Christian Democratic Union–led coalition platform asserted, “The deliberate dissemination of false factual claims is not protected by freedom of speech” and promised a new media oversight body targeting “information manipulation.”
In contrast to its more permissive stance on hate-speech bans, the European Court of Human Rights (ECHR) has shown stronger skepticism toward vague or overly broad disinformation laws. In cases involving Poland and Ukraine, the ECHR highlighted governments’ limited leeway to restrict political speech during elections, finding violations of free speech. These cases, however, predate the post-2016 elite panic surrounding disinformation. In a 2019 decision, the court found in favor of an applicant but upheld a Polish election law requiring courts to address “untrue information” within twenty-four hours, citing the need to swiftly correct election-related “fake news” to safeguard electoral integrity. The court also stressed that the speech at issue wasn’t excessively “vulgar or insulting.” In contrast, in 2021 the ECHR rejected a complaint by a local newspaper fined under the same Polish law for publishing unverified defamatory claims about a mayoral candidate, noting the lack of factual support.
ECHR case law suggests that the court may be more skeptical of disinformation laws than hate-speech bans—but not to the extent of protecting demonstrably false claims or the kinds of hyperbole, selective outrage, and strawman argumentation common on social media, where truth, falsehood, and opinion often blur into shades of gray.
Excerpted from The Future of Free Speech: Reversing the Global Decline of Democracy’s Most Essential Freedom by Jacob Mchangama and Jeff Kosseff. Copyright 2026. Published with permission of Johns Hopkins University Press.