

False Panacea: Abusive Surveillance in the Name of Public Health
A visitor's temperature is taken with an Iris Thermal Scanning device at the entrance to Edge Observation Deck at Hudson Yards on September 4, 2020 in New York City. Photo credit: Cindy Ord/Getty Images.
The public health crisis is laying a dangerous foundation for the future surveillance state.
Brick by brick, governments and companies responding to the public health crisis are laying a foundation for tomorrow’s surveillance state. Opaque smartphone apps collect biometric and geolocation data in an effort to automate contact tracing, enforce quarantines, and determine individuals’ health status. State agencies are gaining access to larger swaths of user data from service providers in a process that lacks oversight and safeguards against abuse. Police and private companies are accelerating the rollout of advanced technologies to monitor citizens in public, including facial recognition, thermal scanning, and predictive tools.
These systems have been deployed with little scrutiny or resistance. Most countries have yet to enact meaningful constraints on the collection and sharing of individuals’ biological information, known as biometric data, by state and corporate actors. Meanwhile, the past two decades of rapid technological change have already implanted surveillance into nearly every aspect of governance and commercial activity, creating an alarming amount of information that can be vacuumed up and manipulated by state and nonstate actors alike.
History has shown that new state powers acquired during an emergency tend to outlive the original threat. In response to the 9/11 terrorist attacks, governments around the world accelerated the militarization of law enforcement, gave state agencies broader mandates with less oversight, intensified suspicion of and discrimination against marginalized populations, and normalized mass surveillance. The COVID-19 pandemic could serve as the catalyst for similar harms. Alarmingly, authorities in many countries have exploited the public health crisis to institute new and intrusive forms of surveillance, gaining novel powers of social control with few checks and balances.
The need for checks on runaway data collection
Contact tracing is vital to managing a pandemic. However, digital monitoring programs, which can sweep up more identifiable information than manual testing and tracing, are being implemented hastily, often outside of the rule of law and other structures of oversight and accountability that can ensure the protection of basic rights. Data collected from smartphone apps or by state agencies—such as location histories, names, and contact lists—can be paired with existing public and corporate datasets to reveal intimate details of people’s private lives, including their political leanings, sexual orientation, gender identity, religious beliefs, and whether they receive specialized forms of health care. The conclusions drawn about an individual from these data can have serious repercussions, particularly in countries where one’s opinions or identities can lead to closer scrutiny or outright punishment.
The pandemic is ushering in a new age of digital social sorting, in which people are identified and assigned to certain categories based on their perceived health status or risk of catching the virus. Once flagged, a given group may be subjected to stigmatization and marginalization. They can face limits on their ability to access public services or education, return to work, send their children to day care, visit a shopping mall, or use public transport. Such programs may even take into consideration the actions of family members, housemates, or neighbors, penalizing individuals by association.
Authorities have exploited the crisis to institute intrusive forms of surveillance with few checks and balances.
These public health surveillance systems will be remarkably difficult, if not impossible, to decommission. As with national security matters, state agencies will always argue that they need more data to protect the country. There will also be great demand for health-related information from marketers, insurers, credit agencies, and any other industries that could profit from it. Given that the US National Security Agency itself has suffered high-level breaches affecting some of its most sensitive information, it is doubtful that such private actors will be able to defend the data from cybercriminals and state-sponsored hackers.
Greater public deliberation and independent oversight are needed to blunt the expansion and entrenchment of mass surveillance practices. At the very least, authorities must prove that a proposed measure is necessary and fit for purpose. Many new programs, for example, incorporate mobile-device location data to assist contact tracing, but the technology may not be precise enough to discern whether two people were at a safe distance from each other, and systems based on satellite signals are ineffective if the individuals are indoors. Such uncertainty is especially problematic if the location records are used to penalize people for not complying with quarantine or social-distancing rules.
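A back-of-the-envelope calculation illustrates the precision problem. The figures below are assumptions for the sake of the example (typical smartphone GPS fixes are accurate to within roughly 5 to 10 meters under good conditions), not measurements from any particular app:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    earth_radius_m = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

# Two phones whose reported positions are about 3 meters apart.
measured = haversine_m(40.753700, -74.000800, 40.753727, -74.000800)

# Assume each GPS fix may be off by up to 10 meters; in the worst case
# the errors of the two fixes add up.
per_fix_error_m = 10.0
uncertainty_m = 2 * per_fix_error_m

low = max(0.0, measured - uncertainty_m)
high = measured + uncertainty_m
print(f"measured separation: {measured:.1f} m")
print(f"true separation could be anywhere from {low:.1f} to {high:.1f} m")
# A 2-meter "safe distance" threshold cannot be resolved from such data.
```

Even under these generous assumptions, two phones in direct contact and two phones 20 meters apart can produce indistinguishable records, which is why location-based proximity judgments deserve skepticism.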
Even if public health experts can demonstrate a monitoring program’s necessity and effectiveness, it must include independent oversight, transparency, and narrowly tailored rules that minimize what data are collected, who collects them, and how they can be used. Without such robust safeguards, the marginal benefits of pandemic surveillance are outweighed by the threat such programs pose to democratic values and human rights.
A proliferation of surveillance apps
Smartphone apps have been deployed for contact tracing or ensuring quarantine compliance in at least 54 of the 65 countries covered by this report. While these apps may make it easier for individuals to identify whom they have interacted with over a certain period of time, their rapid and nearly ubiquitous rollout presents an immense risk to privacy, personal security, and broader human rights. Developers have largely ignored established principles for privacy-by-design, an approach meant to ensure that privacy considerations are built into a tool’s architecture and software. Most apps are closed source, which prevents third-party reviews and security audits, and in practice there are few opportunities to appeal decisions or seek redress for abuses. Moreover, in many countries, cybersecurity standards may have been made intentionally weak in order to facilitate broader data collection by state authorities.
These smartphone programs automatically gather sensitive information on where users live, with whom they reside, their daily routines, their casual interactions, and much more. Many of the apps ask for demographic and other data to facilitate user identification, then send the files, unencrypted, to a centralized server located in government offices. Researchers have demonstrated how easily these data can be leaked to cybercriminals, security agencies, and even other apps running on individuals’ phones. Some programs connect to additional surveillance technologies like facial recognition and electronic wristbands in order to verify users’ identities and more closely monitor their movements.
India is home to several pandemic apps that pose human rights risks. Aarogya Setu, a closed-source app that has been downloaded by over 50 million Indians, combines Bluetooth and Global Positioning System (GPS) tracking to determine users’ potential exposure and generates a color-coded “health status” to rate their risk of infection. Information collected from the government-backed app is stored in a centralized database, where it is shared with health institutes and other government agencies. More than a million people have been required to use it, and in at least one city, failure to download the app may result in criminal charges. Another closed-source app that collects and stores personal information, including GPS data, is Quarantine Watch, developed in partnership with the state government of Karnataka. The app requires users to send pictures of themselves accompanied by metadata on their geolocation to prove that they are complying with mandatory isolation. State officials have joked that “a selfie an hour will keep the police away.”
Although India is currently considering a data-protection bill, standards for cybersecurity remain lax, and sensitive COVID-19 databases created by the new apps have already been breached. Millions of personal records from a symptom-checker app developed by Jio, a leading telecommunications provider, were found to be accessible without a password on an online database. Even before the pandemic, security flaws in the country’s Aadhaar biometric identification system led to numerous scandals involving data breaches. India is also instituting a nationwide facial-recognition program that privacy advocates say could facilitate repression and discrimination. Separately, in October 2019 and June 2020, two reports revealed that government-linked spyware had been deployed against journalists and activists who drew attention to human rights violations in the country.
Mobile apps in Russia have added to the regime’s growing surveillance apparatus. The Social Monitoring app accesses GPS data, call records, and other information and requests random selfies from users to enforce quarantine orders and other restrictions on movement. In just over a month, authorities imposed nearly 54,000 fines totaling over $3 million on users. The penalties were sometimes erroneous and arbitrary, with those tagged for fines including the wrong identical twin, a bedridden professor, and sleeping users who received selfie requests in the middle of the night. Moscow residents over the age of 14 must log onto a government website to state their planned movements; users receive a QR code that is then scanned by security personnel in order to verify that they have permission to be in a given location.
The Bahraini government’s BeAware app is required for those in self-isolation or quarantine due to potential local exposure or a recent return from abroad. Individuals face fines of up to 10,000 Bahraini dinars ($26,000), a minimum three-month jail term, or both for failing to wear an electronic wristband or comply with the app. The program sends location and diagnostic information to a central government server and alerts authorities if the wristband’s wearer strays more than 15 meters from the paired phone. The government has a long record of monitoring dissidents for political reasons, including through the use of sophisticated spyware targeting the persecuted Shiite Muslim majority.
Saudi Arabia’s Tetamman app also comes with a mandatory Bluetooth bracelet. Failure to comply with strict quarantine measures can result in up to two years in prison, a fine of 200,000 riyals ($53,000), or both. A security researcher has reported that the Saudi government may also be testing the contact-tracing tool Fleming, which was created by the Israeli company NSO Group. The government has already used NSO Group’s other products to monitor and intimidate its critics. Authorities are strongly suspected of deploying the company’s Pegasus spyware to access the communications of activists and journalists, including the journalist Jamal Khashoggi, who was ultimately killed by Saudi agents in 2018.
In Turkey, a new system called Hayat Eve Sığar (HES) combines contact tracing with a health status code. A valid HES code is compulsory for all domestic travel. While the Turkish government app is the most efficient way to secure such a code, users can also text certain personal details to a designated phone number. The app emits Bluetooth signals to surrounding devices in order to facilitate contact tracing. It is used to monitor compliance with quarantine orders and sends data directly to law enforcement in case of violations. Government surveillance and the misuse of user data have been widespread in Turkey for years, and civil society has sounded the alarm about potential abuse of the new app.
Like the virus itself, quarantine and contact-tracing apps have had a disproportionate impact on certain populations. Singapore’s migrant workers, who often suffer from poor housing and employment conditions, are specifically required to use apps for contact tracing, the recording of symptoms, and reporting of their health status, setting them apart from other residents. In Ukraine, dozens of individuals were left stranded in an active conflict zone: people without smartphones and internet access, mainly the elderly, were unable to download the government’s mandatory Diy Vdoma self-isolation app and thus were not allowed to cross from separatist-controlled to government-controlled territory.
Private companies are also rapidly developing and selling health-code apps, which increasingly serve as gatekeepers for access to essential public services and the exercise of fundamental rights. COVI-Pass—a system designed by a British company—grants users a “VCode” to be scanned when entering office buildings, attending a sporting event, or walking in public. Individuals obtain color-coded results depending on their previous tests for the virus or its antibodies. COVI-Pass has already been sold to governments and companies in over 15 countries. Private companies in the United States, including airlines and hotels, have also expressed interest in requiring customers to use such coding systems.
Amid the proliferation of problematic apps, some developers have attempted to create new products centered on privacy. An international consortium has supported the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol. The Swiss team behind this project has opened up its source code for expert review in order to maximize cybersecurity and data privacy. In addition, the tech giants Apple and Google jointly developed the Exposure Notification System application programming interface (API). The opt-in software transmits random identification numbers via Bluetooth to surrounding smartphones and stores the numbers directly on the phone, rather than on centralized company or government servers. Users are notified if they have interacted with a person who is subsequently confirmed to have tested positive for COVID-19.
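The decentralized pattern can be sketched in a few lines of code. This is an illustration of the general approach only; the real DP-3T and Exposure Notification designs derive rotating identifiers from cryptographic keys and include many safeguards omitted here:

```python
import secrets

class Device:
    """Toy model of a phone in a decentralized proximity-tracing scheme."""

    def __init__(self):
        self.own_ids = []          # identifiers this phone has broadcast
        self.observed_ids = set()  # identifiers heard from nearby phones

    def broadcast_id(self):
        # Real protocols derive short-lived identifiers from secret keys
        # and rotate them; here we simply draw a fresh random value.
        rid = secrets.token_hex(16)
        self.own_ids.append(rid)
        return rid

    def hear(self, rid):
        # Observed identifiers stay on the device, not on a central server.
        self.observed_ids.add(rid)

    def is_exposed(self, published_positive_ids):
        # Health authorities publish the identifiers of confirmed cases;
        # matching happens entirely on the user's phone.
        return bool(self.observed_ids & set(published_positive_ids))

alice, bob = Device(), Device()
bob.hear(alice.broadcast_id())   # Alice and Bob were in Bluetooth range

# Alice tests positive and consents to uploading her identifiers.
print("Bob exposed:", bob.is_exposed(alice.own_ids))  # Bob exposed: True
```

The property worth noting survives even in this toy version: the central authority only ever sees identifiers voluntarily uploaded by confirmed patients, never a social graph of who met whom.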
The Google and Apple API allows health agencies to build their own apps using the firms’ privacy-respecting architecture. Authorities in Estonia, Brazil, and the United States are rolling out apps using either DP-3T, the Exposure Notification System, or both. Estonians can also rely on the country’s strong legal protections for privacy and transparency. However, Brazil’s recent track record on privacy and surveillance raises concerns for digital contact tracing. For instance, in October 2019, President Bolsonaro signed a decree, without public consultation or debate, compelling federal agencies to share a range of citizen data, including health records and biometric information. The United States also lacks federal privacy laws that could limit the ways in which data stored on phones and by apps are accessed, sold, or used.
Decentralized, opt-in, and Bluetooth-based contact-tracing tools are a promising alternative to more invasive, mandatory apps that feature centralized control. However, even they are not free of privacy and other risks. Smartphone apps in general are opaque about how they collect, store, and process data, and how and with whom they share information. Other apps on a user’s device, for example, may gain access to sensitive data stored there by the contact-tracing program, allowing them to sell the material to advertisers, insurers, credit agencies, or other data brokers. Proximity tracking is also vulnerable to being spoofed or hacked. Most importantly, no contact-tracing app will be useful or effective unless it is widely adopted and deployed in an environment with robust testing, manual contact-tracing systems, and a well-resourced public health infrastructure.
Tapping into telecommunications data
In at least 30 countries, governments are using the pandemic to engage in mass surveillance in direct partnership with telecommunications providers and other companies. New data-sharing initiatives may help authorities to conduct contact tracing and big-data analysis to understand the virus’s spread. However, the expanded data collection in many countries lacks transparency, proportionality, and privacy protections, posing clear risks to fundamental freedoms. It is particularly worrisome that national security and military agencies have been tasked with this work in some cases.
In Pakistan, the government has retooled an antiterrorism system to support “track and trace” efforts. The secretive program was developed by the Inter-Services Intelligence (ISI) agency, which has been implicated in enforced disappearances and other flagrant human rights abuses. It allows for “geofencing,” the identification of everyone who passed through a specific area during a specific period. There are separate reports of intelligence agents tapping the phones of hospital patients to determine whether their friends and family mention symptoms of their own. Officials also have access to a national biometric database containing information on over 200 million citizens. Little is publicly known about the overall program, though reports indicate that data can be passed on to police, health departments, and provincial government agencies. Patients who have tested positive, including health workers, have had their personal information leaked online, with severe consequences for their social standing and emotional well-being.
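In principle, a geofencing query is a simple filter over bulk location records. The sketch below is hypothetical: the field names, coordinates, and records are invented, and nothing about the actual implementation of Pakistan’s system is publicly known.

```python
from dataclasses import dataclass
from datetime import datetime
import math

@dataclass
class LocationRecord:
    subscriber_id: str
    lat: float
    lon: float
    timestamp: datetime

def approx_distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation, adequate for city-scale distances."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def geofence(records, center_lat, center_lon, radius_m, start, end):
    """Return every subscriber seen inside the fence during the time window."""
    return {
        r.subscriber_id
        for r in records
        if start <= r.timestamp <= end
        and approx_distance_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    }

# Invented records: subscriber A was near the fence's center, B was not.
records = [
    LocationRecord("A-100", 33.6844, 73.0479, datetime(2020, 4, 1, 14, 5)),
    LocationRecord("B-200", 33.7000, 73.0600, datetime(2020, 4, 1, 14, 20)),
]
hits = geofence(records, 33.6844, 73.0479, 500,
                datetime(2020, 4, 1, 14, 0), datetime(2020, 4, 1, 15, 0))
print(hits)  # {'A-100'}
```

The simplicity is the point: once bulk location records exist, repointing the same filter at a protest site or a place of worship requires nothing more than new coordinates.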
Sri Lanka has also integrated its defense apparatus into its pandemic response. Military intelligence officials are obtaining personal data from mobile service providers to identify people who have interacted with confirmed patients or evaded quarantines. Sri Lanka’s military has been accused of gross human rights violations and extrajudicial killings in the past, and since the 2019 presidential election, the authorities have escalated their intimidation and harassment of journalists, human rights defenders, and others they perceive as critics.
South Korea has been comparatively effective at containing its coronavirus outbreak, but its Infectious Disease Control and Prevention Act (IDCPA) permits broad surveillance, raising questions about epidemiological necessity and proportionality. Officials have pulled information from credit card records, phone location tracking, and security cameras—all without court orders—and combined it with personal interviews for rapid contact tracing and monitoring of actual and potential infections. Credit card histories reveal intimate details about people’s lives that go far beyond what is needed for contact tracing; people’s purchases can indicate their sexual orientation and religious beliefs, for example. South Korean officials have at times publicized patients’ gender, age range, and movements, which has fueled online ridicule, scrutiny, and social stigma. On the positive side, the IDCPA does include important sunset provisions, requiring pandemic-related data to “be destroyed without delay when the relevant tasks have been completed.”
The government of Ecuador has taken a multifaceted approach to surveillance amid the pandemic. The country’s ECU 911, a public security network built mainly by Chinese firms with close ties to the regime in Beijing, has been actively collecting input from thousands of surveillance cameras, geolocation data, and police records to engage in “smart” analysis. ECU 911 is being incorporated into a new public health platform, which aggregates location data from satellite-positioning systems and mobile phones as well as information from the country’s COVID-19 app, including names, national identification numbers, birth dates, and geolocation records. National and local authorities are given information from the platform’s database for contact-tracing purposes, to ensure quarantine compliance, and to identify large gatherings at places such as schools, homes, and funeral sites. There is little transparency as to how long the data are stored, who can use them, and for what purposes.
In April 2020, state governors in Nigeria announced a new partnership with MTN, the country’s leading telecommunications provider, to model how vulnerable their states are to the pandemic based on subscriber information. Only two months earlier, the government and security forces were found to have been accessing mobile data records to identify and arrest journalists. Armenia’s parliament voted in March 2020 to grant surveillance agencies the ability to obtain telecommunications metadata from service providers, including phone numbers and the location and time of calls and messages, without judicial review. The data were meant to be used to identify individuals who may have encountered the virus and to monitor those in isolation, but the lack of transparency and oversight made it unclear how the records would or could be used in practice.
Some governments have taken preliminary steps to reduce privacy risks to users and are instead accessing aggregated and anonymized datasets to guide public health policy. In Australia, for example, the telecom company Vodafone provided the government with the location data of millions of people in an aggregated and anonymous format, allowing it to understand population movements and determine broad compliance with social-distancing restrictions.
In the United States, the mobile advertising industry handed over aggregated and anonymized location data to federal, state, and local governments. Authorities aimed to centralize the location data on people in over 500 cities in order to analyze how the disease was spreading. By requesting data from the advertising companies rather than mobile service providers, however, government agencies bypassed the minimal privacy-oversight mechanisms built into US law. The White House and the Centers for Disease Control and Prevention (CDC) have also reportedly negotiated with tech platforms about accessing aggregated and anonymized location data.
While anonymized data can be less invasive than individualized information, the records can be rendered identifiable, or deanonymized, when combined with other datasets or analyzed by big-data tools that are designed to find patterns in content from disparate sources. This means that anonymized and aggregated information remains vulnerable to exploitation or misuse by both governments and nonstate actors. The risk is compounded by disproportionate surveillance laws and the lack of robust privacy protections in many countries, including both Australia and the United States.
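A toy example shows why removing names is not enough. All of the data below are invented; the point is that a pair of frequently visited locations can act as a fingerprint linking a pseudonymous trace back to a named individual.

```python
# All data below are invented for illustration.
anonymized_traces = {
    # pseudonym: (most frequent nighttime cell, most frequent daytime cell)
    "user_8f3a": ("cell_112", "cell_907"),
    "user_c41d": ("cell_358", "cell_204"),
}

outside_dataset = [
    # e.g., scraped profiles, marketing data, or leaked records
    {"name": "J. Doe", "home_cell": "cell_112", "work_cell": "cell_907"},
    {"name": "A. Roe", "home_cell": "cell_733", "work_cell": "cell_204"},
]

for pseudonym, (home, work) in anonymized_traces.items():
    for person in outside_dataset:
        if (person["home_cell"], person["work_cell"]) == (home, work):
            print(f"{pseudonym} re-identified as {person['name']}")
```

Published research on mobility datasets has found that as few as four spatiotemporal points are enough to uniquely identify the vast majority of individuals.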
Limited access to certain forms of data may be helpful for tracking how the virus spreads and to inform current and future responses to health crises. However, any sharing of digital information must be transparent, subject to independent oversight, and governed by the human rights principles of necessity and proportionality. The information collected should be firewalled from other uses and generally destroyed after the virus is brought under control. This will help ensure that authorities and private companies cannot easily repurpose health data to serve political, law enforcement, or commercial goals.
Rolling out the AI surveillance state
No country has taken a more comprehensive and draconian approach to COVID-19 surveillance than China, where the pandemic began. Over the past two decades, the Chinese Communist Party has built the world’s most sophisticated and intrusive surveillance state, consisting of both low- and high-tech elements. More recently, as China seeks to become a global leader in AI technology by 2030, authorities have experimented with machine learning, big data, and algorithmic decision-making in service of the regime’s politically repressive “social management” policies. Automated systems flag suspicious behaviors on the internet and, increasingly, on public streets using the world’s largest security-camera network. Since January, authorities have combined their existing monitoring apparatus and biometric records with invasive new apps and new opportunities for data collection.
After the coronavirus struck, regional officials partnered with the major Chinese tech firms Alibaba and Tencent to develop “health code” apps. The prevailing software assigns each individual a QR code with a low (green), medium (amber), or high (red) risk rating based on factors such as location history and self-reported symptoms, though neither the authorities nor the companies explain how the risk levels are calculated. A green code has been required to access certain public spaces and office buildings. Although there are variations among the dozens of apps used in different provinces and municipalities, an analysis by the law firm Norton Rose Fulbright found that the privacy policy of Beijing’s app does not incorporate strong privacy-by-design principles or state any time limit on the retention of data. A New York Times investigation showed that the Alipay Health Code app automatically shared data with the police.
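Because the real scoring logic is undisclosed, any reconstruction is speculative. The rules below are purely hypothetical, built on the two inputs the apps are known to consider, and are meant only to show how much power a few opaque lines of logic can carry:

```python
from enum import Enum

class HealthCode(Enum):
    GREEN = "low risk"     # free movement
    AMBER = "medium risk"  # e.g., a period of self-isolation
    RED = "high risk"      # e.g., strict quarantine

def assign_code(visited_outbreak_area: bool,
                contact_with_confirmed_case: bool,
                reported_symptoms: bool) -> HealthCode:
    # Invented thresholds; the actual criteria have never been published.
    if contact_with_confirmed_case or (visited_outbreak_area and reported_symptoms):
        return HealthCode.RED
    if visited_outbreak_area or reported_symptoms:
        return HealthCode.AMBER
    return HealthCode.GREEN

print(assign_code(visited_outbreak_area=True,
                  contact_with_confirmed_case=False,
                  reported_symptoms=False))  # HealthCode.AMBER
```

Whatever the real rules are, a person shown a red code has no way to inspect the inputs, contest an error, or learn how long the underlying records are kept.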
As the initial outbreak was brought under control in China, certain health code apps were rolled back in cities like Shanghai. Conversely, in May, health officials in Hangzhou proposed to expand the city’s app system from simple color codes into personal “health scores” that would reflect people’s sleep patterns, alcohol consumption, smoking habits, and exercise levels. The proposal led to uproar among users and even earned a rare rebuke from state-run media. The concept has some similarities to experiments with government “social credit” systems and pilot apps run by corporations such as Ant Financial’s Sesame Credit, which track users’ personal and online behavior. Appearing on a blacklist maintained by municipal or provincial authorities can result in restrictions on movement, education, and financial transactions. By contrast, highly rated Sesame Credit users can win privileged access to private services, deposit waivers, and shorter lines at airport security. As of now, the government and privately run systems are maintained separately, although there are some indications they may be merged in the future.
Chinese authorities have also compelled state-owned telecoms and private tech companies to share data with public security bodies. Data terminals have been installed at train stations, hotels, and other high-traffic locations in order to rapidly collect information on individuals’ movements and locations. Hundreds of individuals have filed complaints regarding COVID-19-related data leaks and privacy violations, with some observers calling for greater personal data protections to rein in the chaotic data-sharing prompted by the health crisis. Such demands add to rising pressure from Chinese netizens since 2018 in favor of data-protection legislation that would limit the ability of governments and corporations to access and use personal information.
Authorities are also testing the patience of residents through increasingly intrusive video and facial-recognition surveillance. Individuals have complained of being asked to install webcams inside their homes and outside their front doors. Facial-recognition companies like Hanwang claim that they can now identify people even if they are wearing a mask. The search engine giant Baidu announced in February that it had created face-scanning software to help the government identify people who are not complying with mask-wearing requirements. In March, authorities upgraded facial-recognition cameras in 10 cities with thermal detection technology, which can supposedly scan crowds of people and identify who has a fever.
Although China’s surveillance systems remain the most advanced and pervasive in the world, governments in countries across the democratic spectrum are rolling out biometric and AI-assisted surveillance with few or no protections for human rights. A network of over 100,000 cameras with facial-recognition capabilities in Moscow was reportedly used to enforce quarantines in March. Paris’s mass transit system has begun testing AI video cameras created by tech company Datakalab to compile statistics on riders wearing masks. Meanwhile, companies based in the United States and Europe are pitching tools to governments, schools, restaurants, and other institutions, claiming to be able to identify people with fevers at a distance. A biometric border-control system sold by the German company DERMALOG is being piloted in Bangkok, aiming to match facial recognition with fever detection to identify travelers who may have communicable diseases.
Many of the high-tech tools unveiled over the past year are not effectively tackling the crisis at hand. Instead they reinforce existing political repression and social inequity because of their dependence on inaccurate or biased data and the realities of racism and discrimination that shape the contexts in which they are used. Facial recognition, for example, is particularly unreliable for people of color and people who are transgender. One study found a 99 percent accuracy rate for white men, while the error rate for women who have darker skin reached up to 35 percent. Another study identified Native Americans as having the highest false-positive rates of any ethnicity in the United States.
Other forms of biometric technology, including those that employ forced DNA collection and emotion recognition, are similarly affected by discriminatory inaccuracies and can be just as easy to abuse. Biometric systems can collate information from face scans, iris scans, fingerprints, and DNA, and then use opaque algorithms to identify, track, and categorize people. Among other potential applications, such technology could be used to identify and monitor individual protesters, members of ethnic and religious minority groups, independent journalists, or any other group that is deemed a threat to those in power.
If such technologies are allowed to be introduced, it is imperative that they be governed by robust laws and regulations to protect fundamental rights and prevent the normalization of harmful and intrusive monitoring. The dangers they pose to freedom and democracy are simply too grave to ignore.
Living in the black box
The urgent need to combat COVID-19 has only accelerated the expansion of biometric surveillance and algorithmic decision-making in fields including health care, policing, education, finance, immigration, and commerce. The public should be deeply skeptical of this trend, in which private companies and government authorities promise purely technological solutions to problems that in fact require concrete economic, societal, or political action to address.
Opaque algorithms are quickly replacing human judgment in vital areas of human life, and the results are likely to create new inequalities and further disadvantage those who were already vulnerable to discrimination. In the context of health care, for example, predictive technology could be used to determine whether certain people or groups are more likely to contract or spread a virus, then bar them from public spaces. Similarly, in the criminal justice system, people deemed suspicious based on an automated analysis of inaccurate or discriminatory data could be flagged for enhanced monitoring or even arrest.
Only collective action on a global scale can halt the momentum of the emerging AI surveillance state.
As commercial enterprises, security agencies, and government bureaucracies come to trust and rely on digital technology, with all its flaws, there is a risk that the technology itself could effectively become the authority, rather than a tool used to implement human decisions. Policies determined by an inscrutable automated system cannot be examined or corrected using traditional democratic procedures. Humanity currently maintains some understanding of why an algorithm generates one output rather than another, but AI could ultimately remove what is known as “explainability”—and with it any sense of transparency, supervision, or accountability for injustice.
The future of privacy and other fundamental rights depends on what we do next. As schools reopen, people head back to offices, and travel resumes despite the ongoing pandemic, the push for mandatory mobile apps, biometric technology, and health passports will only grow. It is vital for the public to consider whether certain new forms of surveillance are necessary or desirable in a democratic society, to resist overblown or unrealistic promises from promoters of high-tech tools, and to push elected officials to build strong privacy protections and other democratic safeguards into law. Individual countries can take the lead, but only collective action on a global scale can roll back current excesses and halt the momentum of the emerging AI surveillance state.
METHODOLOGY AND DATA SOURCES: Freedom House identified a series of COVID-19-related surveillance data points and collected the relevant information on all 65 countries covered by Freedom on the Net. The resulting database was partly informed by the individual Freedom on the Net country reports written by external analysts. Freedom House staff conducted additional research and drew on the work of various other organizations, including the MIT Technology Review’s COVID Tracing Tracker, the COVID-19 Digital Rights Tracker from Top10VPN.com, the Centre for Internet and Society’s Digital Identities framework, Privacy International’s Global COVID-19 response tracker, and OneZero’s COVID surveillance analysis of 34 countries. Visit freedomonthenet.org to access and download other country-specific data and sources used in this essay.