This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the first post here, and the second here.
When people remember the 2011 uprisings across the Middle East and North Africa (MENA), they picture crowded squares, raised phones, and the feeling that the internet had finally shifted the balance of power toward ordinary people. But the past decade and a half is also a story about how governments, companies, and platforms turned those same tools into the backbone of a powerful state surveillance apparatus.
For activists, journalists, and everyday users, that now means living with a constant threat: the phone in your pocket, the platforms you organize on, and the systems you rely on for safety and connection can be weaponized at the flip of a switch. A global surveillance industry has treated repression by many MENA governments as a growth opportunity, and the tactics refined there now shape digital authoritarianism worldwide. This essay traces how that shift unfolded: security agencies upgraded older systems of repression with new surveillance tools and permanent monitoring infrastructure; cybercrime laws and mercenary spyware markets turned digital control into standard operating procedure; and biometrics, facial recognition, and “smart city” projects laid the groundwork for AI‑driven surveillance that now shapes protests, borders, and everyday life far beyond the region.
Remembering the Arab Spring today means seeing the events of 2011 as both a remarkable moment of movement history, when people leveraged networked tools in their fight for freedom, and the beginning of a long, grinding effort to turn those same tools into mechanisms of state control.
Old‑School Repression, New‑School Tools
Long before Facebook and Twitter, regimes in places like Egypt and Syria already knew how to crush dissent. They leaned on informant networks, physical surveillance, and wiretaps, backed by emergency laws that let security agencies monitor and detain critics with almost no restraint. Research on the use of surveillance technology in MENA shows that, even before the Arab Spring, states were layering early digital tools like internet monitoring, deep packet inspection, and interception centers on top of that older machinery of control.
At the same time, connectivity was racing ahead. Cheap smartphones and social media suddenly let people share information at scale, coordinate protests, and broadcast abuses in real time. In 2011, EFF described both the excitement around “Facebook revolutions” and the early signs that governments were scrambling to upgrade their capacity to watch and disrupt popular dissent.
After the uprisings, Western critics endlessly debated how much credit to give social media itself, while in the background security agencies across several MENA states reached a much simpler conclusion: if networked communication could help topple a dictator, then the state needed to embed itself deep inside those networks. Analyses of the rise of digital authoritarianism in MENA show how quickly officials pivoted from being surprised by online organizing to building systems to monitor and pre‑empt it.
In the years after 2011, governments across the region poured money into expanding internet monitoring and deep packet inspection, investing heavily in tools that let them systematically watch what people said and did on major platforms. Foreign vendors set up monitoring centers and interception systems that let security agencies block tens of thousands of sites, scrape and analyze social media at scale, monitor activist pages and online communities, and track activists in real time. They took the lesson of 2011 and built a new, pre‑emptive model of digital control, one that assumes the state should see as much as possible, as early as possible.
As we noted in 2011, exporting permanent surveillance infrastructure to already‑abusive governments doesn’t “modernize” public safety; it locks in an architecture of control that is primed to abuse dissidents, journalists, and marginalized communities.
Domestic Lawfare and Cyber-Mercenaries
The surveillance tech stack was only half the story. After the uprisings, a number of governments also rewrote the rules that govern online life. Cybercrime laws, “fake news” provisions, and overbroad public‑order and “morality” offenses gave prosecutors and security agencies legal cover to act with impunity. Governments in Saudi Arabia, Tunisia, Jordan, and Egypt combined counterterrorism, cybercrime, defamation, and protest laws into a legal thicket designed to make online dissent feel dangerous and costly. Morality laws and cybercrime provisions are used to target queer and trans people based on identity and expression.
At the United Nations, a new global cybercrime convention now risks baking this logic into international law. The convention was adopted by the UN General Assembly in late 2024, despite serious human rights concerns raised by civil society. Echoing our partners, EFF warned at the time that the UN cybercrime draft convention remained too flawed to adopt and urged states to reject the draft language because it legitimized expansive surveillance powers and criminalized legitimate expression, security research, and everyday digital practices around the world.
On paper, these instruments invoke “public safety” objectives; in practice, they function as pathways for state security agencies to monitor, prosecute, and silence the communities most at risk. For state‑targeted communities, that makes being visible online a calculated risk, not a neutral choice.
But criminal codes are only half the story. Mercenary tech is the other.
As governments worldwide looked for ways to outpace their critics, a parallel market emerged to help them infiltrate and take over devices. Companies like NSO Group marketed Pegasus and similar tools as off‑the‑shelf capabilities for governments that wanted to hack a target’s cellphone or other devices to read messages, turn on microphones, and monitor entire social networks while bypassing the courts.
In 2019, UN Special Rapporteur David Kaye called for a global moratorium on the sale and transfer of private surveillance tools until real, enforceable safeguards exist. Two years later, forensic work by Amnesty and media partners showed how the same spyware used to hack phones of Palestinian human‑rights defenders was used to surveil journalists, activists, lawyers, and political opponents across dozens of countries.
Regional groups responded by demanding an end to the sale of surveillance technology to autocratic governments and security agencies, arguing that you cannot keep selling “lawful intercept” tools into systems where law itself is an instrument of repression. Commercial spyware is at the center of digital repression, not at its margins. Surveillance vendors are not neutral suppliers. Safeguards remain weak, fragmented, or nonexistent in most of the countries buying these tools, yet vendors continue seeking new contracts and new militarized “use cases.” In other words, the companies that design, market, and maintain these systems profit from and help entrench authoritarian power precisely because their products enable this kind of control.
Biometrics, Facial Recognition, and AI‑Powered Surveillance Cities
On top of this rapidly intensifying interception and spyware stack, governments and companies began layering biometrics and face recognition into everyday systems, creating pathways for bulk data collection, automated analysis, and risk profiling. In parts of MENA, national ID schemes, border and migration controls, and centralized biometric databases have been rolled out in environments with weak or captured data‑protection laws, making it easy to link people’s movements, services, and political activity to a single, persistent identifier.
Humanitarian programs are not exempt from this pattern. In Jordan, Syrian refugees have been required to submit iris scans and biometric data to access cash assistance and food, turning “consent” into a precondition for survival. When access to aid depends on enrollment in centralized biometric systems, any breach, misuse, or repurposing of that data can have severe, life‑altering consequences for people who have no realistic way to opt out. Investigations into surveillance‑tech firms complicit in abuses in MENA show that vendors profit from supplying biometric and surveillance tools for migration management and internal security, even when those tools are used in discriminatory or abusive ways.
Mass, indiscriminate surveillance technologies were first piloted in MENA on people already criminalized or made vulnerable by poverty, but their use quickly expanded from narrow, security‑framed deployments at borders and checkpoints to routine use in welfare offices, aid distribution sites, and city streets. As hardware for sensors, cameras, and data storage got cheaper and “smart city” surveillance systems promised seamless security and services, it became easier and less politically contentious to keep these systems running everywhere, all the time.
Unlike targeted hacking tools, these broad, city‑wide surveillance infrastructures built on camera networks, persistent sensors, and biometric databases erase any practical line between people under investigation and the broad public, normalizing bulk, indiscriminate monitoring of public space and everyday movement. In the Gulf, facial recognition and dense sensor networks are increasingly built into high‑profile “smart city” and mega‑project plans that lean heavily on biometric and AI‑driven monitoring. These are security‑first development projects where biometric and sensor infrastructures are designed from the outset to embed policing, migration control, and commercial tracking into the urban fabric. In this vision of the Gulf’s “smart city” future—often sold as seamless services and digital opportunity—“smart” is the branding, and pervasive monitoring is the operating principle.
EFF has consistently opposed government use of face recognition and biometric surveillance, in some instances calling for outright bans. In contexts that treat peaceful dissent as a security threat, embedding biometric surveillance into everyday infrastructure locks in a balance of power that favors militarized policing and state control. That infrastructure is now the starting point for a new set of risks. Surveillance systems built over the last decade are being repackaged as the foundation for a new generation of “AI‑enabled” defense and security products.
Companies that once focused on video management or perimeter security now advertise “defense applications” for AI‑driven situational awareness and threat detection, using computer‑vision models to scan camera feeds, compare against existing watchlists, and flag “suspicious” people or behaviors in real time. Drone and sensor platforms are being upgraded with embedded AI that tracks and classifies targets autonomously, marketed as “drone‑based AI threat detection and intelligent situational awareness,” turning aerial surveillance into a continuous data feed for security agencies and militaries. At smart‑city and defense expos from the Gulf to Europe and North America, similar systems are marketed as neutral efficiency upgrades or tools to “protect critical infrastructure,” even where they are explicitly designed to scale up border enforcement, protest surveillance, and internal security operations.
As these systems are folded into AI‑driven defense products, the line between “civilian” infrastructure and militarized surveillance disappears, turning streets, borders, and aid sites into continuous input for security operations. That is the landscape that human rights and accountability efforts now have to confront.
Templates of Control, Networks of Resistance
The patterns established in heavily securitized MENA states after the Arab Spring now shape how states monitor and crush more recent uprisings, from Iran’s use of location data and facial recognition to track down protesters to long‑running crackdowns elsewhere in the region. This model of “digital authoritarianism” built on spyware, data‑hungry ID systems, platform control, and emergency‑style security laws has emerged everywhere from Latin America to Eastern Europe to here in the United States. As the new UN Cybercrime Convention moves toward implementation, its broad offenses and surveillance powers risk turning this ad hoc toolkit into a formal template for cross‑border data‑sharing, repression, and an all‑purpose global surveillance instrument.
For people on the ground, none of this is theoretical. Human‑rights defenders, journalists, and ordinary users across the region face arrest, long prison sentences, and exile based on their digital traces. In that landscape, commercial spyware is not a side issue but part of the core machinery of repression. Pegasus has been used to hack journalists’ phones through zero‑click exploits and compromise human‑rights defenders and watchdog organizations themselves, including staff at Amnesty’s Pegasus Project partners and Human Rights Watch. These deployments give practical effect to the “cybercrime” and “terrorism” frameworks described earlier: person‑by‑person campaigns against particular communities, contacts, and networks, rather than neutral, generalized security measures.
Under these conditions, everyday security becomes a second job. People describe carrying multiple phones, keeping one for relatively “clean” uses and others for riskier conversations, splitting identities across platforms, using coded language, and moving their organizing off mainstream services when possible. Pushing this burden onto users is a political choice: states, platforms, and vendors could build systems that are safe by design; instead, they externalize risk to the people they watch and punish.
Even against that backdrop, civil society organizations have refused to cede the terrain to security agencies and vendors. Regional coalitions have demanded strict export controls and outright bans on selling intrusive surveillance tech to autocratic governments.
Advocates have also pushed companies to do more than box‑ticking “due diligence.” Work with surveillance‑tech firms in the context of migration and border control has repeatedly shown that most are still far from serious human‑rights assessments, let alone willing to turn down these lucrative contracts.
Many of the same governments that criticize others’ human rights records have hosted or licensed the companies that build these tools, in some cases buying similar capabilities for their own security agencies. European authorities, for instance, have investigated FinFisher’s export of spyware “made in Germany” to Turkey and other non‑EU governments. Meanwhile, NSO Group has held at least 22 Pegasus contracts with security and law‑enforcement agencies in 12 EU countries. This is a transnational industry, not a localized problem.
Against near impossible odds, people continue finding pathways to freedom. The global surveillance sector reinforces the same hierarchies and violence that people have survived and resisted for generations. Queer activists and others at the sharpest edges of this system have had to develop their own forms of resistance, including against biometric and data‑driven targeting. Encryption, circumvention tools, and security training are not silver bullets, but they remain essential for anyone trying to organize, document abuses, or simply exist online with a bit less risk. Resources like EFF’s Surveillance Self‑Defense are one piece of that ecosystem, alongside trainers and groups who have been doing this work on the ground for years.
Remembering the Arab Spring in this context means not only tracing how surveillance expanded in its wake, but lifting up the people and coalitions who are still pushing back against that infrastructure today.
Defending the Future of Digital Dissent
The Arab Spring is often remembered through images of packed squares and hopeful tweets. But living with its aftermath means confronting the surveillance architecture built in its shadow: laws that turn online speech into a crime, spyware and biometric systems that turn phones and faces into tracking beacons, and platform practices that routinely sacrifice the people most at risk. None of that is inevitable, and none of it is confined to one part of the world.
Accountability has to reach both governments and the companies that profit from arming them with these tools. That means pushing for far stronger limits on how surveillance tech is built, sold, and deployed; demanding meaningful transparency when these systems are used; and defending the tools people rely on to communicate and organize safely, including robust encryption and secure channels. It also means taking direction from people in the region who have been navigating and resisting this landscape for years, rather than only paying attention once similar abuses show up elsewhere.
Surveillance itself is transnational: tools are exported, playbooks are copied, and data moves across borders as easily as money. And so we continue our work, documenting abuses, sharing security knowledge, and collectively organizing against these violent systems.
This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

