United States

A Obstacles to Access: 21 / 25
B Limits on Content: 30 / 35
C Violations of User Rights: 25 / 40
Last Year’s Score & Status: 75 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

The internet in the United States remains vibrant, diverse, and largely free from government censorship, and the country’s legal framework provides some of the world’s strongest protections for free expression online. However, a proliferation of electoral content that is false, misleading, and conspiratorial has created an unreliable online environment, seeping into the political system and undermining public confidence in American democracy. The country also lacks a comprehensive federal privacy law, and Congress has failed to adequately regulate disproportionate surveillance practices in which government agencies bypass judicial oversight by simply purchasing personal data from private companies.

The United States is a federal republic whose people benefit from a competitive political system, a strong rule-of-law tradition, robust freedoms of expression and religious belief, and a wide array of other civil liberties. However, in recent years its democratic institutions have suffered erosion, as reflected in rising political polarization and extremism, partisan pressure on the electoral process, bias and dysfunction in the criminal justice system, harmful policies on immigration and asylum seekers, and growing disparities in wealth, economic opportunity, and political influence.

Key Developments, June 1, 2021 – May 31, 2022

  • In November 2021, President Joseph Biden signed the Infrastructure Investment and Jobs Act, which included $65 billion to improve broadband access and reduce digital disparities (see A1 and A2).
  • In June 2021, President Biden directed the Department of Commerce to evaluate whether software applications that are owned, controlled, or managed by “foreign adversaries” present a risk to national security. The order came after federal courts blocked—and Biden rescinded—former president Donald Trump’s 2020 efforts to effectively ban the social media apps WeChat and TikTok, which are owned by China-based companies (see B2).
  • In 2022, two federal courts released conflicting decisions on state-level laws in Florida and Texas that would limit social media companies’ ability to moderate content according to their terms of service and platform policies. The cases were expected to be adjudicated by the Supreme Court (see B3).
  • The Supreme Court’s decision to overturn the constitutional right to abortion in Dobbs v. Jackson Women’s Health Organization, a draft of which was leaked in May 2022, prompted renewed concerns about social media companies’ content moderation practices and law enforcement agencies’ access to personal data that could be used for criminal and civil investigations in US jurisdictions where legal access to reproductive health care is restricted (see B3 and C6).
  • Ahead of the November 2022 midterm elections, the online environment was riddled with false information, conspiracy theories, and egregious harassment aimed at election workers and officials. False, misleading, and harassing narratives often focused on voting machines, counting and vote-by-mail procedures, voting locations, and voting requirements (see B5, B7, and C7).
  • In June 2021, the Supreme Court ruled that a high school student’s posting of vulgar language about her school on social media was covered by constitutional protections for freedom of speech, but the justices provided some leeway for schools to regulate student speech (see C1).

A Obstacles to Access

A1 (0–6 points)
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? Score: 6 / 6

The United States has the third-largest number of internet users in the world,1 but penetration rates and broadband connection speeds are lower than in other economically developed countries.2 In April 2021, the Pew Research Center estimated that 93 percent of US adults use the internet,3 and that 85 percent own a smartphone.4 The International Telecommunication Union reported a similar penetration rate of 90.9 percent in 2020.5

The speed-testing company Ookla reported the median US fixed broadband download speed to be 167.36 Mbps in August 2022, ranking it sixth worldwide.6 The median mobile download speed was 61.95 Mbps, ranking 24th globally.

Infrastructural problems and severe weather have sometimes undermined internet access for US residents (see A2).7 For example, Hurricane Ida caused widespread outages in Louisiana in August 2021, leaving some users unable to contact emergency services and loved ones.8

Congressional leaders put forward several plans to modernize the nation’s telecommunications networks during the coverage period.9 In November 2021, President Biden signed the Infrastructure Investment and Jobs Act, which included $65 billion for high-speed broadband deployment in underserved areas and an affordable connectivity program for low-income communities (see A2).10

A2 (0–3 points)
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? Score: 2 / 3

Older members of the population, those with less education, households with lower socioeconomic status, and people living in rural areas or on tribal lands tend to face the most significant barriers to internet access.1 High costs, inadequate infrastructure,2 and limited provider options also impede access (see A4).3

The cost of broadband internet access in the United States exceeds that in many countries with similar penetration rates, creating an “affordability crisis,” according to New America’s Open Technology Institute.4 In May 2021, a report from the advocacy group Free Press concluded that the average US household’s internet service expenditures grew by 19 percent from 2016 to 2019, an increase that outpaced the rate of inflation during the same period.5

People living on tribal lands are among the least connected in the country.6 The Federal Communications Commission (FCC) calculates that more than 32 percent of tribal-land residents in the continental United States do not have high-speed fixed terrestrial or mobile internet service.7 Broadband expansion rates lag in these communities compared with other rural areas.8

Older residents use the internet at lower rates than the rest of the population. In 2021, researchers found that almost 42 percent of US seniors, or about 22 million older Americans, did not have access to broadband connections at home.9

Broadband access, and particularly affordability, remains a priority for lawmakers. The FCC’s Lifeline program has provided long-term assistance to reduce the cost of telecommunications services. The program offers $9.25 in broadband support per month for qualifying low-income households.10 As of April 2022, approximately 6.79 million people subscribed to Lifeline, or about 17 percent of eligible households.11 Following controversial changes under former FCC chairman Ajit Pai (2017–21), including new financial disclosure obligations and data-usage monitoring,12 a number of federal agencies, civil society organizations, and policymakers have advocated for improvements to the program.13

To ease the impact of disparities in access during the COVID-19 pandemic, Congress passed the Emergency Broadband Benefit program as part of the Consolidated Appropriations Act in December 2020.14 The program provided affordable broadband access, including on qualifying tribal lands, and a one-time discount for internet-related equipment.15 Parts of the program were then extended via the larger infrastructure package in the fall of 2021 (see A1).16

A3 (0–6 points)
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? Score: 6 / 6

The US government imposes minimal restrictions on the public’s ability to access the internet. Private telecommunications companies own and maintain the backbone infrastructure, and there are multiple connection points to the global internet, making a government-imposed disruption of service highly unlikely and difficult.

Law enforcement authorities have occasionally limited internet connectivity in emergency situations. In 2011, San Francisco’s Bay Area Rapid Transit (BART) authority restricted mobile internet and telephone service on its platforms ahead of a planned protest against a fatal shooting by the transit police.1

Standard Operation Procedure 303, approved by a federal task force in 2006, establishes guidelines for wireless network restrictions during a “national crisis.”2 What constitutes a “national crisis,” and what safeguards exist to prevent abuse, remain largely unknown. In 2014, the FCC clarified that it is illegal for state and local law enforcement agencies to jam mobile networks without federal authorization.3

A4 (0–6 points)
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? Score: 4 / 6

The broadband industry in the United States has grown more concentrated over time. An estimated 83 million people have access to only one broadband provider in their area.1 These de facto local monopolies have exacerbated concerns about high cost and accessibility.2

Comcast leads the fixed-line broadband market with more than 31 million subscribers overall. Its chief competitor, Charter Communications, has approximately 29.2 million subscribers.3 Following a decade of consolidation, three national providers—AT&T, Verizon, and T-Mobile—dominate the mobile service market.

Consolidation of the telecommunications sector has undermined consumer protection and choice. In 2019, the US Court of Appeals for the District of Columbia Circuit upheld AT&T’s acquisition of the media and entertainment company Time Warner,4 despite the Justice Department’s challenge to the merger.5 Less than a year later, reports of financial problems at AT&T surfaced, with customers facing price increases.6 Separately, antitrust experts have called for the reversal of a controversial 2019 merger between T-Mobile and Sprint, another mobile service provider.7

The FCC has attempted to address concerns about reduced competition. The commission included provisions within a 2016 Charter–Time Warner Cable deal that required Charter to expand broadband availability, including by establishing new cable lines in poorly served areas and providing affordable access to low-income families.8 Other conditions prohibited the companies from privileging their cable television services over online video competitors.9

Some state regulations undermine the operation of municipal or publicly owned broadband providers that could challenge market consolidation, deliver higher-quality and more affordable service, and reach underserved communities.10 The Institute for Local Self-Reliance reported in September 2021 that 17 states had restrictive legislation that was impeding the development of community broadband.11 However, legislation granting government entities the authority to offer broadband services was passed in Arkansas and Washington State in 2021.12

A5 (0–4 points)
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? Score: 3 / 4

The FCC is tasked with regulating radio and television broadcasting, interstate communications, and international telecommunications that originate or terminate in the United States. It is formally an independent regulatory agency, but critics on both sides of the political spectrum argue that it has become increasingly politicized in recent years.1

The agency is led by five commissioners nominated by the president and confirmed by the Senate, with no more than three commissioners from one party. Jessica Rosenworcel, a commissioner who was originally nominated by former president Barack Obama, was confirmed as the first woman chair of the FCC in December 2021. However, the nomination of Gigi Sohn to fill the remaining vacancy on the commission received significant opposition, contributing to a partisan impasse.2 The lack of a tie-breaking vote on the panel has limited regulatory progress on key internet freedom issues such as net neutrality.

Other government agencies, such as the Department of Commerce’s National Telecommunications and Information Administration (NTIA), play advisory or executive roles on telecommunications, economic, and technology policies. The Infrastructure Investment and Jobs Act tasked the NTIA with managing a grants program created by the law (see A1 and A2).3 The Federal Trade Commission (FTC) is an independent agency that oversees consumer protection and antitrust efforts, including in the technology sector. The Department of Agriculture is also an important source of funding for broadband initiatives and wields significant influence on policy.4

In 2017, the FCC repealed its 2015 Open Internet Order, often referred to as the net neutrality rule, weakening its regulatory authority over internet service providers (ISPs).5 The agency then instituted the Restoring Internet Freedom Order,6 effectively allowing ISPs to speed up, slow down, or restrict the traffic of selected websites or services at will. Civil society and public interest groups argued that these changes disadvantaged consumers in various ways,7 and that the FCC had abandoned its responsibility to protect a free and open internet (see B6).8

B Limits on Content

B1 (0–6 points)
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards? Score: 6 / 6

In general, the government does not force ISPs or content hosts to block or filter online material that would be considered protected speech under international human rights law.

B2 (0–4 points)
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards? Score: 3 / 4

The government does not directly compel content hosts to censor political or social viewpoints online, although intermediaries can face liability for not restricting certain types of content, such as copyright infringements and child sexual abuse material (CSAM), after becoming aware of it. Broadly speaking, content hosts and social media platforms are the primary decision-makers when it comes to the provision, retention, or moderation of prohibited online content in the United States (see B3).

In June 2021, President Biden rescinded the August 2020 orders by former president Trump that would have effectively banned WeChat, a messaging application, and TikTok, a short-video platform, on the grounds that they presented threats to national security; both are owned by China-based companies.1 Federal courts had already blocked implementation of Trump’s orders, citing free speech concerns.2 Biden’s new order directed the Department of Commerce to evaluate the potential national security risks associated with apps that are owned, controlled, or managed by “foreign adversaries.”3 In November 2021, the department released proposed rules that would require third-party audits of such apps;4 the rules remained under review as of May 2022. Similarly, the interagency Committee on Foreign Investment in the United States (CFIUS) continued its review of TikTok during the coverage period.5 In June 2022, after the coverage period, one FCC commissioner wrote to Google and Apple and urged them to remove TikTok from their respective app stores.6

Although there is no evidence that direct government pressure on users to remove online content is systematic or widespread, users have occasionally faced such demands. In June 2021, a police chief in Pennsylvania summoned a Facebook user to the local police station to discuss posts in which the user had criticized the department.7 The chief threatened to pursue spurious felony charges against the user if the posts were not taken down. In September 2021, the Texas Department of Family and Protective Services removed a page on its website that provided resources for LGBT+ youth after a candidate in the Republican Party’s gubernatorial primary election criticized the page on Twitter.8

People in the United States have occasionally had their content restricted based on requests from foreign governments. In one prominent case, the New York Times reported in June 2020 that the video-conferencing platform Zoom, acting on a request from the Chinese government, had temporarily suspended the account of a US-based Chinese activist who planned to host a meeting to commemorate the deadly 1989 crackdown on prodemocracy protests in Beijing’s Tiananmen Square.9

Section 230 of the Communications Act, as amended by the Telecommunications Act of 1996—commonly known as Section 230 of the Communications Decency Act—remained a subject of debate among policymakers during the coverage period (see B3). The law shields online providers and content hosts from legal liability for most material created by users, including lawsuits alleging defamation or injurious falsehoods.10 However, there are exceptions to this immunity under federal criminal law, intellectual-property law, laws to combat sex trafficking, and laws protecting the privacy of electronic communications. In July 2022, two separate judges—one in a case against the Snapchat messaging platform and another in a case against the video chat site Omegle—released conflicting decisions about whether Section 230 protected companies from legal liability as it relates to their product design, in addition to user-generated content.11 Section 230 also ensures legal immunity for social media companies and other content providers that act in good faith to remove content when it violates their terms and conditions of service or their community guidelines.12

The Allow States and Victims to Fight Online Sex Trafficking Act, also referred to as SESTA/FOSTA, was signed in 2018. The law established new liability for internet services when they are used to promote or facilitate the prostitution of another person.13 After the bill passed in the Senate, but before it became law, reports emerged of companies preemptively censoring content: Craigslist announced that it was removing the “personals” section from its website altogether.14 Civil society activists criticized the law for motivating companies to engage in excessive censorship in order to avoid legal action.15 Sex workers and their advocates also argued that the law threatened their safety, since the affected platforms had enabled sex workers to leave exploitive situations and operate independently, communicate with one another, and build protective communities.16 The law faces ongoing court challenges.17

Section 512 of the Digital Millennium Copyright Act (DMCA), enacted in 1998, created new immunity from copyright claims for online service providers. However, the law’s notice-and-takedown requirements have been criticized for impinging on speech rights,18 as they may incentivize platforms to remove potentially unlawful content without meaningful judicial oversight. Research has shown how DMCA complaints have been filed to take down criticism, commentary, political campaign advertisements, and other speech that should be protected under international free expression standards.19

B3 (0–4 points)
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? Score: 4 / 4

The government places few restrictions on online content, and existing laws do not allow for broad government blocking of websites or removal of content. Companies that host user-generated content, many of which are headquartered in the United States, have faced criticism for a lack of transparency and consistency when it comes to enforcing their own content moderation rules.

Section 230 of the Communications Decency Act generally shields online sites and services from legal liability for the activities of their users, allowing user-generated content to flourish on a variety of platforms (see B2).1 Despite robust legal and cultural support for freedom of speech in the United States, the scope of Section 230 has become a focus of criticism. Concerns about CSAM, defamation, cyberbullying and cyberstalking, terrorist content, and protection of children from harmful or indecent material have contributed to calls for reform of the platforms’ legal immunity for user-generated content, as have complaints that platforms are “over-moderating” certain political viewpoints.

Federal lawmakers have proposed numerous bills that would reform Section 230 and increase intermediaries’ liability for the content they host.2 The draft Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act was reintroduced in Congress in February 2022, reigniting backlash from tech experts and civil society organizations (see C4).3 The bill would require that providers adopt “best practices” for detecting and combating CSAM on their platforms, or otherwise risk losing Section 230 protections and being held liable for such content. Critics have warned that, as written, the legislation would incentivize providers to censor excessively and suppress online speech, and could also undermine companies’ use of end-to-end encryption (see C4).4

The draft Platform Accountability and Consumer Transparency (PACT) Act,5 initially introduced in 2020 and refiled in 2021 with a few changes, would require online platforms to provide expanded explanations of their content moderation practices and force them to adhere to court-mandated takedown orders.6 While the bill received recognition from some observers as a “serious” attempt to address content moderation concerns, civil society groups, industry representatives, and scholars have raised free speech concerns, warned that the legislation’s takedown provision could be used for censorship, and noted that smaller platforms might lack the resources to remain in compliance.7

The proposed Justice against Malicious Algorithms Act, introduced in October 2021, would limit Section 230 protections for services with over five million monthly users when they knowingly or recklessly recommend content via a personalized algorithm that leads to severe physical or emotional harm.8

In September 2022, after the coverage period, the Biden administration called for reforms to Section 230 for large tech platforms.9 In May 2021, the administration had rescinded an executive order by former president Trump that was meant to limit protections against intermediary liability.10 Trump’s order referred to accusations that social media platforms show “political bias” by deliberately censoring conservative views,11 despite scant supporting evidence and multiple studies debunking such claims.12

Lawmakers in several states—including Texas, Florida, Ohio,13 Kentucky, Arizona, and North Dakota14—have also proposed bills to regulate social media companies’ content moderation practices. In March 2021, Utah’s legislature passed a law requiring that mobile devices automatically filter out pornography.15 However, the rule will not go into effect unless five other states implement similar measures, and it will sunset in 2031 in the absence of such companion legislation.

In May 2022, the US Court of Appeals for the 11th Circuit struck down the majority of a Florida law that would have prevented social media companies from suspending the accounts of political candidates for more than 14 days, ruling that companies’ content moderation practices amount to speech that is protected under the First Amendment of the US constitution.16 Under the Florida legislation, platforms that suspended candidates running for statewide office in violation of the law would have faced fines of up to $250,000 per day.17 The appeals court upheld some of the law’s transparency provisions, which require companies to clearly disclose their rules to users. In September 2022, the state attorney general petitioned the Supreme Court to review the ruling.18

In September 2022, after the coverage period, the US Court of Appeals for the Fifth Circuit upheld a Texas law that allows Texans to sue social media platforms with over 50,000 active users that allegedly moderate content based on “the viewpoint” of a user. The decision categorizes social media companies as “common carriers.” Many legal experts, industry groups, and civil society organizations condemned the court’s decision and the law.19 The case was expected to be appealed to the Supreme Court, especially in light of the apparently conflicting ruling on the Florida law.

After the coverage period in October 2022, the Supreme Court agreed to hear Gonzalez v. Google LLC, a case about the scope of Section 230 in relation to algorithmically recommended content.20 The court also accepted Twitter v. Taamneh, a related case that has implications for Section 230.

Following the Supreme Court’s June 2022 decision to overturn the constitutional right to abortion in Dobbs v. Jackson Women’s Health Organization, state lawmakers in South Carolina proposed a bill that would outlaw “providing information to a pregnant woman, or someone seeking information on behalf of a pregnant woman,” online if the information would be or is “reasonably likely” to be used for obtaining an abortion.21 The measure would also prohibit hosting or maintaining websites or providing access to a website for the same purpose. In August, the state’s governor reportedly clarified that the bill would not be advanced due to First Amendment obstacles.22

However, several news outlets reported that in the wake of the Dobbs decision, Facebook and Instagram had removed posts that discussed abortion pills, including general information on how to legally obtain the medication through the mail as well as offers from users to provide the pills to people who live in states with restrictive abortion policies.23 A spokesperson for Meta, the two platforms’ parent company, acknowledged incorrect enforcement of its policies, clarifying that posts discussing how to access abortion medication are allowed, while posts offering to provide the medication are not.24 NBC News reported that Instagram had also limited search results for posts that included the terms or hashtags “abortion pills” and “mifepristone,” a common abortion medication.25

Tech companies have successfully argued that moderation decisions are an exercise of their own constitutionally protected right to set platform policies, allowing them to remove content and accounts that violate their rules. After former president Trump repeatedly violated platform policies by posting baseless claims about mail-in ballots and voter fraud,26 and by spreading COVID-19 misinformation,27 among other infractions, several major social media firms permanently or temporarily banned him from their services in January 2021.28 In response to a decision by Facebook’s Oversight Board—a structurally independent entity composed of global experts who review content moderation decisions and assess whether they align with the company’s policies, its values, and international human rights norms29—the platform later specified that Trump’s ban would last two years and that he would only be allowed back if “the risk to public safety has receded.”30 In September 2022, Nick Clegg, the president for global affairs at Meta, confirmed that a decision on the matter would be made in January 2023.31

Facebook, Twitter, YouTube, and other major platforms have faced criticism for insufficient transparency regarding the enforcement of their respective community standards, as well as for the effects of this enforcement on marginalized populations.32 Several studies and independent audits have identified cases of racial, gender, and other forms of discrimination in content moderation and advertising policies that affect the speech of people in the United States.33

The “Facebook Files,” an investigative report published in September 2021 by the Wall Street Journal, revealed the extent to which Meta had failed to address harmful content on its platforms, particularly for teenagers and younger children on Instagram. The investigation also found that the company treated certain highly visible users, including politicians, differently, exempting them from existing content moderation rules.34 The report prompted several congressional hearings and renewed efforts to develop state and federal legislation.35

Companies that serve as providers of internet infrastructure also enforce their own discretionary speech policies. Apple, Amazon, and Google removed the social media platform Parler from their app stores and hosting services because of violent content on the app in relation to the January 2021 attack on the Capitol in Washington, DC.36 In September 2022, the web-security and content-delivery firm Cloudflare blocked access to Kiwifarms, an online forum that had facilitated egregious harassment and contributed to offline harms including suicide.37 Kiwifarms was an offshoot of 8chan, another site for which Cloudflare dropped services in 2019 and one that hosted manifestos written by the perpetrators of mass shootings.38

B4 (0–4 points)
Do online journalists, commentators, and ordinary users practice self-censorship? Score: 3 / 4

Reports of self-censorship among journalists, commentators, and ordinary internet users are not pervasive in the United States. Women and members of marginalized communities are frequent targets of online harassment and abuse, which can encourage self-censorship (see C7). Government surveillance practices may also contribute to self-censorship.1

B5 (0–4 points)
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? Score: 2 / 4

False, manipulated, and misleading information is disseminated by both foreign and domestic entities in the United States. While actors from across the political spectrum deliberately spread false or misleading information,1 multiple academic studies and civil society research have shown that the tactic is disproportionately utilized by powerful actors on the right.2

The nonpartisan Election Integrity Partnership (EIP) found that misleading or false claims around the November 2020 elections contributed to a single, larger metanarrative about a “stolen election.”3 Researchers determined that false electoral narratives were primarily spread by right-wing social media influencers; hyperpartisan and fringe media outlets; right-leaning mainstream media outlets; and political figures, including former president Trump and his family members.4

The EIP concluded that the surge in baseless allegations of electoral fraud online helped to propel the assault on the Capitol in Washington on January 6, 2021. Following the attack on the Capitol by Trump supporters, political figures and both mainstream and fringe media sites continued to propagate false or misleading information, including claims that the violence was a “false flag” operation orchestrated by leftists or Trump opponents within the government.5 The bipartisan House of Representatives’ Select Committee to Investigate the January 6th Attack on the US Capitol, launched in June 2021 and active throughout the coverage period, also investigated online false and misleading information spread by former president Trump and his allies, reaching conclusions similar to those of the EIP.6

During the coverage period, political actors continued to spread false and misleading information about voting, electoral administration, and electoral integrity (see B7).7 Specifically, false narratives coalesced around electronic voting machines, counting procedures, vote-by-mail procedures, voting locations, and voting requirements. For example, according to the Washington Post, more than 100 Republican Party nominees for Congress or statewide office embraced false narratives about the 2020 presidential election result ahead of the November 2022 midterm elections. Researchers at New York University’s Center for Social Media and Politics found that more politicians were spreading unreliable information online in the run-up to the 2022 midterms compared with the 2020 general elections.8

False and misleading information about the Russian government’s February 2022 invasion of Ukraine also permeated the online space. Narratives emanating from Russian sources have been shared by some users in the United States, including prominent media and political figures.9 For example, former Trump administration national security adviser Michael Flynn, Fox News television host Tucker Carlson, and former Democratic congresswoman Tulsi Gabbard spread or insinuated their own support for Kremlin-backed conspiracy theories, such as one alleging that the US government builds bioweapons in Ukraine.10 Some agencies of the US government sought to proactively combat disinformation by declassifying intelligence and releasing information ahead of the invasion.11 Major technology companies including Facebook, Google, and Twitter adjusted content moderation practices in an effort to limit Russian disinformation.12

Political actors have spread manipulated information about COVID-19 since the outbreak of the pandemic in early 2020.13 A report by the Center for Countering Digital Hate found that 65 percent of antivaccine content across Facebook and Twitter between February 1 and March 16, 2021, could be attributed to only 12 users with large followings, including longtime antivaccine advocate Robert F. Kennedy Jr.14 Similarly, the Virality Project, a coalition of experts led by the Stanford Internet Observatory, found that the main spreaders of false or misleading information related to COVID-19 between February and August 2021 were antivaccine and wellness influencers, popular conspiracy theorist accounts, right-leaning political figures, and “media freedom” influencers.15 In January 2022, Twitter permanently suspended the personal account of Republican congresswoman Marjorie Taylor Greene for spreading false information about the virus.16

US officials and experts have been particularly concerned about influence operations carried out by actors based in Russia, China, and Iran, including ahead of the November 2022 midterm elections.17 The 2021 annual threat assessment from the Office of the Director of National Intelligence stated that the Chinese government was “intensifying” its effort to mold public discourse in the United States.18 In September 2021, a report from the cybersecurity firm Mandiant Threat Intelligence detailed pro-Beijing disinformation that was aimed at exploiting US partisan divisions over the COVID-19 pandemic and encouraging Americans to protest.19

Online news outlets in the United States are generally free from either formal arrangements or coercive mechanisms that compel them to provide favorable coverage of the government. Yet political and economic factors can sometimes intersect to incentivize a close relationship between a political party and a given news organization.20

Hyperpartisan news sites have played a central role in spreading false, misleading, and conspiratorial information to US audiences. For instance, in September 2021 the far-right site Breitbart ran a column alleging that Democratic leaders wanted Republican voters to not receive the COVID-19 vaccine.21 The right-wing Epoch Media Group spread Chinese-language misinformation about the 2020 elections, according to the EIP.22

Various reports have also alleged that private companies use coordinated teams of commentators to spread information for their commercial gain. In March 2021, the Intercept reported that Amazon created a group of employees called “ambassadors” to defend the company and its founder, Jeff Bezos, from criticism on social media, particularly in connection with employees’ efforts to unionize.23 In May 2021, New York’s Office of the Attorney General concluded that major service providers, via the Broadband for America advocacy group, spent $4.2 million to oppose net neutrality protections in 2017, including by generating 80 percent of the 22 million public comments on the topic that were sent to the Federal Communications Commission (FCC).24

B6 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? 3 / 3

There are no government-imposed economic or regulatory constraints on internet users’ ability to publish content. Online outlets and blogs generally do not need to register with, or have favorable connections to, the government to operate. Media sites can accept advertising from both domestic and foreign sources.

The Foreign Agents Registration Act (FARA) does not entail any direct restrictions on an outlet’s content or the ability to publish online, but it does require those that qualify as foreign agents to disclose their organizational structures and finances. US federal agencies have identified certain Chinese and Russian state media companies as “foreign missions” or “foreign agents,” and both designations come with certain reporting requirements and other limitations.1 In August 2021, the Justice Department required Sing Tao, a Hong Kong newspaper known for its pro-Beijing stance, to register as a foreign agent.2

Experts argue that the FCC’s 2017 repeal of the 2015 Open Internet Order could result in new constraints for those wishing to publish online (see A5).3 Under President Biden, proponents of net neutrality have been guardedly optimistic about the principle’s potential revival. In July 2021, Biden signed an executive order that contained several directives to develop stronger regulations related to net neutrality.4 In July 2022, after the coverage period, Senator Edward Markey, Senator Ron Wyden, and Representative Doris Matsui introduced the draft Net Neutrality and Broadband Justice Act, which would classify broadband access as a telecommunications service, providing the FCC with the authority to restore net neutrality protections.5

Since 2018, numerous state legislatures, attorneys general, and civil society groups have also sought to restore net neutrality.6 In October 2019, a federal appeals court upheld the FCC’s repeal of the Open Internet Order,7 but it also ruled that the commission could not preemptively block states from enacting their own laws to safeguard net neutrality. Several states have successfully adopted laws or executive orders to that end.8

B7 0-4 pts
Does the online information landscape lack diversity and reliability? 3 / 4

As a whole, the online environment in the United States is dynamic and diverse. Users can easily find and publish content on a range of issues, covering a variety of communities, and in multiple languages. However, an upsurge of misinformation, hyperpartisan speech, and conspiracist content has threatened the information ecosystem in recent years, weakening trust in traditional media and government institutions and eroding the visibility and readership of more credible sources. Reports have also explored the ways in which the policies and algorithms of major platforms—including Facebook, Twitter, YouTube, and TikTok—have contributed to the promotion of misinformation.1

The integrity and reliability of online information has been undermined by the spread of electoral disinformation, first surrounding the 2020 elections and then ahead of the 2022 midterm elections (see B5).2 False and misleading information has driven calls to change how elections are administered at the state and local level. For example, a blog published by the far-right Gateway Pundit falsely claimed that the Electronic Registration Information Center (ERIC), which safely shares voter registration information across states and strengthens electoral security,3 was a left-wing effort to undermine voting. Louisiana secretary of state Kyle Ardoin later withdrew his state from the program, while Alabama secretary of state candidate Wes Allen vowed to remove his state from the program if elected.4

Research efforts have drawn the connection between online misinformation and weakening confidence in US elections. According to a poll by the Cable News Network (CNN) that was released in September 2021, some 78 percent of Republicans believe President Biden did not win the 2020 election, and 52 percent of Americans lack confidence that elections reflect the will of the people.5 An investigation by the website FiveThirtyEight found that election deniers would be listed as candidates on 60 percent of ballots for the 2022 midterms.6

COVID-19 misinformation has led to offline harms or changed behavior. Several studies have found linkages between exposure to misinformation and vaccine hesitancy or refusal among US residents.7 According to doctors, some COVID-19 patients who accepted false claims that the virus was a hoax failed to take their illnesses seriously and sought medical care too late.8

The rise of conspiracist content online in the United States is a multiyear trend.9 According to a 2022 poll from the Public Religion Research Institute, nearly one in five Americans believe in QAnon—an online conspiracist movement alleging that key Democrats and other elites are part of an international cabal of pedophiles, and that former president Trump is a heroic leader against these forces of evil.10 In May 2022, an 18-year-old man allegedly attacked a supermarket in Buffalo, New York, and killed 10 people; he was later linked to an online manifesto that repeatedly cited the White supremacist, racist, and antisemitic “great replacement” conspiracy theory.11

B8 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? 6 / 6

Score Change: The score improved from 5 to 6 because there were fewer reported incidents of surveillance, harassment, and arrests for protest-related online activities, following a spike in such cases during nationwide demonstrations in support of racial justice in 2020.

There are no technical or legal restrictions on individuals’ use of digital tools to organize or mobilize for civic activism. However, growing surveillance of social media and communication platforms, targeted harassment and threats, and high costs and other barriers to internet access have sometimes undermined people’s ability to engage in online activism.

After the Supreme Court’s Dobbs decision was leaked in draft form in May 2022 and then formally released in June, many users took to social media to share their personal experiences with abortion and pregnancy, to express their opposition to or support for the decision, and to coordinate civic mobilization.1 During protests against the decision that erupted in major cities across the country, some online journalists reported facing police violence (see C7).

During the previous coverage period, many Americans organized online protests against racial injustice and to provide support for the Black Lives Matter movement after the police killings of Black civilians Breonna Taylor in Kentucky and George Floyd in Minnesota in 2020.2 Federal, state, and local law enforcement agencies increased their social media surveillance amid the protests (see C5).3 Media reports in June 2020 revealed that agents from a Federal Bureau of Investigation (FBI) terrorism task force had appeared at homes or workplaces to question four people in Cookeville, Tennessee, who were involved in planning Black Lives Matter rallies on Facebook.4

Surveillance of protests and other organized civic movements has chilled some people’s willingness to use digital tools to associate and assemble. For example, citing concerns that online information would be used by hostile nonstate actors to disrupt or exploit planned gatherings, a number of Minneapolis residents instituted self-imposed restrictions on live streaming and sharing of information on social media during the 2020 racial justice protests.5 A photographer and activist in Philadelphia stopped posting to social media amid the protests in June 2020, arguing that the move was necessary to protect demonstrators from police retaliation.6

Despite strong constitutional protections for the freedom to assemble, the International Center for Not-for-Profit Law tracked numerous federal and state initiatives aimed at restricting that right from early 2021 to the end of the coverage period, including one ultimately unsuccessful Alabama legislative proposal that broadly defined incitement to riot and could have criminalized legitimate online activity.7

C Violations of User Rights

C1 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? 6 / 6

The First Amendment of the federal constitution includes protections for free speech and freedom of the press. The Supreme Court has long maintained that online speech has the highest level of constitutional protection.1

In June 2021, the Supreme Court ruled in favor of a high school student who was suspended after posting, while not on school grounds, an image on Snapchat that used vulgarities to express frustration with her school and its cheerleading squad.2 The nearly unanimous decision found that the student’s speech was protected under the First Amendment, but the justices acknowledged some leeway for schools to regulate speech when it is genuinely disruptive in order to deal with bullying and related problems.3

A 2017 Supreme Court decision had also affirmed the protected status of online speech, arguing that to limit a person’s access to social media “is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”4

In 2021, a federal district court dismissed a lawsuit that invoked a First Amendment right to record and live-stream police activity. An appeal in the case, which was originally filed by a Black activist in 2018 after officers threatened to arrest him if he persisted in recording them, was pending at the end of the coverage period.5 In 2017, other federal courts had upheld the right of bystanders to use their smartphones to record police actions.

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards? 2 / 4

Despite significant constitutional safeguards, laws such as the Computer Fraud and Abuse Act (CFAA) of 1986 have sometimes been used to prosecute online activity and impose harsh punishments. State-level laws also penalize online activity.

Aggressive prosecution under the CFAA has fueled criticism of the law’s scope and application. The act prohibits accessing a computer without authorization, but fails to define the terms “access” or “without authorization,” leaving the provision open to interpretation in the courts.1 Until recently, reform efforts were largely unsuccessful.2 In April 2020, however, a court narrowed the scope of the CFAA by ruling in favor of researchers who were concerned that their work, which involved scraping data from websites, ran afoul of the law.3

In June 2021, the Supreme Court further limited the application of the CFAA and clarified the meaning of “unauthorized access.”4 The case, Van Buren v. United States, involved the conviction of a police officer who had accessed police databases for unofficial purposes.5 Following the decision, the US Court of Appeals for the Ninth Circuit ruled in hiQ v. LinkedIn that the CFAA likely does not bar people from scraping data from a public website, even if the website owner does not consent.6

Certain states have criminal defamation laws in place, with penalties ranging from fines to imprisonment.7 Among other state-level restrictions, Arizona governor Doug Ducey signed a law in July 2022—after the coverage period—that made it a misdemeanor offense to film police from less than eight feet away following a verbal warning.8

C3 0-6 pts
Are individuals penalized for online activities, particularly those that are protected under international human rights standards? 4 / 6

Prosecutions or detentions for online activities are neither frequent nor systematic. However, local police have investigated, arrested, and charged users for some actions. Due to strong legal protections for free expression, such cases are often dropped by prosecutors.

Police have periodically detained or retaliated against individuals for using their mobile devices or social media accounts to document law enforcement activity; most face charges such as obstruction or resisting arrest. In September 2021, the Los Angeles Police Department (LAPD) arrested William Gude, who runs the popular Twitter account @FilmThePoliceLA as well as Instagram and YouTube accounts where he posts recordings and reports on police interactions.1 Local activists and Gude argued that his arrest—for allegedly making criminal threats—was retaliation for his work. In January 2022, he was also charged with violating a California eavesdropping law after he used Twitter to publish a recording of a conversation he had with police about a misleading sign at an LAPD station.2

In January 2021, user Joshua Andrew Garton of Tennessee was detained for two weeks and charged with harassment for posting a doctored image on social media that depicted two police officers urinating on a gravestone. The charges were later dropped, and Garton sued local and state officials for violating his First Amendment rights. In June 2020, five people in New Jersey were charged with online harassment, a felony, for a Twitter post seeking to identify a masked police officer.3 One man was charged for publishing the post, while the other four had simply shared it. The charges were dismissed that August.4

Online journalists have been investigated, arrested, or charged while covering protests. During protests in support of the right to abortion following the Supreme Court’s Dobbs decision in June 2022, a few online journalists were temporarily detained by police, including a correspondent for the conservative site El American.5 In April 2022, an independent online photojournalist in Los Angeles was arrested and charged with ignoring police orders while documenting protests against the police killing of Kurt Reinhold, an unhoused Black man.6 During protests in response to the fatal police shooting of Black civilian Daunte Wright in Minnesota in April 2021,7 numerous journalists were arrested or detained, including Naasir Akailvi and Tracy Gunapalan of the Neighborhood Reporter, a social media news outlet.

Politicians have occasionally retaliated against journalists and social media users. Missouri governor Michael Parson referred St. Louis Post-Dispatch journalist Josh Renaud to a local prosecutor for criminal charges after Renaud found and reported that a state website made the Social Security numbers of teachers and administrators publicly available. The prosecutor determined that charges should not be filed.8

At times, politicians have attempted to use legal cases to identify anonymous critics on the internet. In March 2019, then congressman Devin Nunes, a Republican from California, sued Twitter and the users of three anonymous accounts, alleging defamation and seeking $250 million in damages; a Virginia judge overseeing the case ruled in June 2020 that Twitter was immune from liability under Section 230 of the Communications Decency Act, though the individual users were not protected by this ruling. The New York Times revealed in May 2021 that the Justice Department under the Trump administration had issued a subpoena to Twitter in a bid to identify the user behind one of the anonymous accounts.

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption? 3 / 4

There are no laws restricting anonymity on the internet, in keeping with constitutional protections for the right to anonymous speech in many other contexts. At least one state law that stipulates journalists’ right to withhold the identities of anonymous sources has been found to apply to bloggers.1

Online anonymity has been challenged in cases involving hate speech, defamation, and libel. In 2015, a Virginia court tried to compel the customer-review platform Yelp to reveal the identities of anonymous users, but the state’s Supreme Court ruled that Virginia courts lacked the authority to compel the company to do so.2 In 2019, a federal court ruled that Reddit did not need to reveal the identity of one of its users to a plaintiff who was suing for copyright infringement.3

No legal limitations apply to the use of encryption, but both the executive and legislative branches have at times moved to undermine the technology.4 In 2020, the Justice Department issued a joint statement with governments from the United Kingdom, Australia, New Zealand, Canada, India, and Japan, calling on Facebook and other tech companies to help enable government access to encrypted messages.5

The proposed EARN IT Act was reintroduced in Congress in February 2022, despite strong opposition from civil society (see B3).6 According to the 2022 draft, an intermediary cannot lose immunity under Section 230 for providing end-to-end encryption, but its adoption of the technology can be used as evidence in court if the intermediary is suspected of hosting child sexual abuse material (CSAM). Civil society, academic, and technical experts argued that, in effect, the bill would discourage encryption by imposing greater legal liability on intermediaries that offer it.7

The degree to which courts can force technology companies to alter their products to enable government access is unclear. The Communications Assistance for Law Enforcement Act (CALEA) of 1994 requires telephone companies, broadband providers, and interconnected Voice over Internet Protocol (VoIP) providers to design their systems so that communications can be easily intercepted when government agencies have legal authority to do so, although it does not cover online communication tools such as Gmail, Skype, and Facebook.8

Federal law enforcement agencies sought to compel Apple to unlock the encrypted smartphones of alleged perpetrators following a terrorist attack in San Bernardino, California, in 2015,9 and an attack on a Navy facility in Florida in 2019.10 In both cases Apple resisted, and agents gained access by other means. A federal judge ruled in 2016 that CALEA did not allow the government to compel Apple to unlock an iPhone.11

In June 2021, the Justice Department announced that the FBI had intercepted more than 20 million messages on the encrypted platform Anom, which was specifically designed to attract transnational criminal organizations, as part of an elaborate sting operation. The bureau worked with the Australian government and an informant to covertly operate the platform,12 which rerouted messages to an undisclosed country for decrypting.13 Some surveillance law experts have suggested that the FBI worked with an additional country because surveillance of this kind would be unlawful in the United States.14

Some advocates have called for explicit legal protections for encryption.15 The draft ENCRYPT Act, most recently introduced with bipartisan support in May 2021, would block state and local governments from requiring “backdoor” access mechanisms in tech products and services.16

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2 / 6

The legal framework for government surveillance in the United States is open to abuse, and authorities have engaged in certain forms of monitoring, particularly on social media, with minimal oversight or transparency.

The government’s search and seizure powers are generally limited by the constitution’s Fourth Amendment, but federal authorities claim to have much greater leeway to conduct searches without a warrant in “border zones”—defined as up to 100 miles from any US border, an area encompassing about 200 million residents.1 Under Directive No. 3340-049a of 2018, US Customs and Border Protection (CBP) asserts broad powers to conduct device searches and requires travelers to provide their device passwords to CBP agents.2 Courts remain split on the legality of the searches, however.3 In February 2021, a federal appeals court in Boston found the practice constitutional,4 but a federal appeals court in San Francisco had significantly narrowed CBP’s ability to conduct warrantless searches in 2019, limiting it to cases that relate to digital contraband.5

Between October 2021 and June 2022, CBP reported 34,964 electronic device searches.6 In an example from January 2021, an immigration lawyer in Texas reported that CBP officers confiscated and searched his phone without a warrant when he returned from a trip abroad.7 In September 2022, the Washington Post reported on a letter Senator Ron Wyden sent to CBP that revealed how information collected through these searches—including contact lists, call logs, photos, and messages—is collated into a searchable database called the Automated Targeting System and made accessible to CBP officers without a warrant.8

Federal, state, and local law enforcement bodies have access to a range of advanced tools for monitoring social media platforms and sharing the information they collect with other agencies,9 without clear oversight or safeguards for individual rights.10 For example, during the nationwide demonstrations against racial injustice in 2020,11 several federal agencies and local police departments collected information from posts, comments, live streams, and videos on social media (see B8).12 In September 2021, the Brennan Center for Justice revealed, via a public records request, that LAPD officers were authorized to collect social media account and email information from any civilian they interviewed.13 Local police have also created fake social media accounts to infiltrate users’ networks and gain access to more personal information.14

In 2019, the Department of State enacted a new policy that vastly expanded its collection of social media information.15 It required people applying for a US visa, numbering about 15 million each year, to provide social media details, email addresses, and phone numbers going back five years.16 In February 2022, the Department of Homeland Security (DHS) and CBP proposed requiring applicants for entry under the Visa Waiver Program to provide their social media handles.17 The White House had previously rejected a similar proposal in which DHS sought to expand its social media monitoring of people entering the country.18

In January 2022, a New York Times Magazine investigation revealed that the FBI had purchased and tested Pegasus spyware, a notorious surveillance product developed by the Israeli firm NSO Group, though there was no evidence that the tool had been deployed against people in the United States.19

The legal framework for foreign intelligence surveillance has in practice permitted the collection of data on US citizens and residents. Such surveillance is governed in part by the USA PATRIOT Act, which was passed following the terrorist attacks of September 11, 2001.20 In 2015, then president Obama signed the USA FREEDOM Act, which extended expiring provisions of the PATRIOT Act, including broad authority for intelligence officials to obtain warrants for roving wiretaps of unnamed “John Doe” targets and surveillance of lone individuals with no evident connection to terrorist groups or foreign powers.21 At the same time, the new legislation was meant to end the government’s bulk collection of domestic call detail records (CDRs)—the metadata associated with telephone interactions—under Section 215 of the 2001 law. The bulk collection program, detailed in documents leaked by former National Security Agency (NSA) contractor Edward Snowden in 2013,22 was ruled illegal by the US Second Circuit Court of Appeals in 2015.23

Under the USA FREEDOM Act, the NSA—which focuses on foreign intelligence collection—is permitted to access US call records held by phone companies after obtaining an order from the Foreign Intelligence Surveillance Court, also called the FISA Court in reference to the 1978 Foreign Intelligence Surveillance Act.24 Requests for such access require use of a “specific selection term” (SST) representing an “individual, account, or personal device” that is suspected of being associated with a foreign power or international terrorist activity;25 this mechanism is intended to prevent broad requests for records based on an area code or other imprecise indicators. The definitions of SSTs vary, however, depending on the authority used, and civil liberties advocates have criticized them as excessively broad.26

The USA FREEDOM Act requires the FISA Court to appoint an amicus curiae in any case that “presents a novel or significant interpretation of the law,” so that judges are not forced to rely on the arguments of the government alone in weighing requests. However, the court can waive the requirement at its discretion. The panel of amici curiae includes experts on privacy, civil liberties, and communications technology.27 Five people are currently designated to serve.28

Although the USA FREEDOM Act’s reforms to Section 215 of the PATRIOT Act were supposed to end bulk collection of CDRs, official statistics showed that a massive number were still being acquired.29 In 2019, the NSA recommended that the White House not seek reauthorization of the program because its operational complexities and legal liabilities outweighed the value of the intelligence gained.30

Collection under Section 215 was no longer authorized during the coverage period, as the provision expired in March 2020 and Congress did not reauthorize it.31 However, a “savings clause” allowed officials to continue using the authority for investigations that had begun before the expiration, or for new examinations of incidents that occurred before that date.32

Other components of the US legal framework allow surveillance by intelligence agencies, but often without adequate oversight, specificity, and transparency. Section 702, adopted in 2008 as part of the FISA Amendments Act, authorizes the NSA, acting inside the United States, to collect the communications of any foreigner overseas as long as a significant purpose of the collection is to obtain “foreign intelligence,” a term broadly defined to include any information that “relates to … the conduct of the foreign affairs of the United States.”33 Section 702 surveillance involves both “downstream” collection, in which stored communications data—including content—are obtained from US technology companies, and “upstream” collection, in which the NSA collects users’ communications as they are in transit over the internet backbone.34 Although Section 702 only authorizes the collection of information pertaining to foreign citizens outside the United States, Americans’ communications are inevitably swept up in this process in large amounts, and these too are stored in a searchable database.35 Under a 2018 reauthorization of Section 702, FBI agents must obtain a warrant to review the content of communications belonging to an American who is already the subject of a criminal investigation;36 the reauthorization also imposed additional transparency measures relating to the authority.37

The Section 702 protocols intended to limit official access to Americans’ communications are frequently violated. In October 2019, the FISA Court released three opinions in which it found that the communications data of tens of thousands of Americans had been subjected to improper searches by the FBI.38 The court also determined that the FBI had violated the law by not reporting the number of times it conducted “US person queries.”39 A subset of these violations has been attributed to the NSA’s collection of communications that merely mentioned information relating to a surveillance target (referred to as “about” collection), a practice the agency halted in 2017.40

Under Title I of FISA,41 the Justice Department may obtain a court order to conduct surveillance of Americans or foreigners inside the United States if it can show probable cause to suspect that the target is a foreign power or an agent of a foreign power. In March 2020, the department’s inspector general released a memorandum documenting pervasive errors in previous FISA applications, along with a failure to abide by internal procedures meant to ensure their accuracy.42

Originally issued in 1981, Executive Order (EO) 12333 is the primary authority under which US intelligence agencies gather foreign intelligence; essentially, it governs all such collection that is not governed by FISA, and it includes most collection that takes place overseas. The extent of current NSA practices that are authorized under EO 12333 is unclear and potentially overlaps with other surveillance authorizations.43 Although EO 12333 cannot be used to target a “particular, known” US person, the very fact that bulk collection is permissible under the order ensures that Americans’ communications will be incidentally collected, and likely in very significant quantities. Moreover, questions linger as to whether the government relies on EO 12333 to conduct any surveillance inside the United States that would not be subject to judicial oversight.44 A letter from two senators that was made public in February 2022 revealed that the Central Intelligence Agency (CIA) secretly conducted bulk data collection, authorized under EO 12333, in a manner that implicated Americans. Senators Ron Wyden and Martin Heinrich have called for more transparency regarding the kinds of records that are stored and the legal framework under which they are collected.45

In criminal probes, law enforcement authorities can monitor the content of internet communications in real time only if they have obtained an order issued by a judge, under a standard that is somewhat higher than the one established under the Constitution for searches of physical places. The order must reflect a finding that there is probable cause to believe a crime has been, is being, or is about to be committed.

Access to metadata for law enforcement, as opposed to intelligence, generally requires a subpoena issued by a prosecutor or investigator without judicial approval.46 Judicial warrants are required only in California, under the California Electronic Communications Privacy Act (CalECPA).47

According to one ruling in federal court, law enforcement officials must obtain a judicial warrant to access stored communications.48 However, the 1986 Electronic Communications Privacy Act (ECPA) states that the government can obtain access to email or other documents stored in the cloud with a subpoena, subject to certain conditions.49

Several government agencies, including the DHS, have purchased extraction technology from companies like the Israeli firm Cellebrite that allows agents to extract information stored on a device or online within seconds.50 An October 2020 report from the nonprofit Upturn revealed that more than 2,000 state and local law enforcement agencies also had such technology.51 In February 2022, the Intercept reported that all but one of the 15 US cabinet departments have Cellebrite products, including departments and agencies that have little association with intelligence collection, such as the Department of Agriculture and the Centers for Disease Control and Prevention.52

Dozens of law enforcement agencies have access to cell-site simulators or IMSI (international mobile subscriber identity) catchers—commonly known as “stingrays” after a prominent brand name—that mimic mobile network towers and cause nearby phones to send identifying information; the technology enables police to track targeted phones or determine the phone numbers of people in a given area. As of 2018, the American Civil Liberties Union (ACLU) had identified 75 agencies across the country that used stingrays.53 In May 2020, the ACLU also revealed that between 2017 and 2019, US Immigration and Customs Enforcement (ICE) had used stingray or similar devices at least 466 times.54 Several courts have affirmed that police must obtain a warrant before using stingray technology.55

In September 2022, the Associated Press reported on the extent to which local police have access to Fog Reveal, a subscription product that collects and analyzes huge amounts of commercially available location data generated by mobile applications, allowing authorities to reconstruct an individual’s movements.56

C6 0-6 pts
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy? 4 / 6

There are few legal constraints on the collection, storage, and transfer of data by private or public actors in the United States. ISPs and content hosts collect vast amounts of information about users’ online activities, communications, and preferences. This information can be subject to government requests for access, typically through a subpoena, court order, or search warrant.

In general, the country lacks a comprehensive federal data-protection law that would limit how private companies can use personal information and share it with government authorities, though a number of bills have been proposed.1 The draft American Data Privacy and Protection Act, introduced in June 2022, would minimize the personal data collected by companies, allow users to opt out of data transfers, and provide the FTC with enforcement power, among other provisions.2 The draft Fourth Amendment Is Not For Sale Act includes components that would prohibit law enforcement and intelligence agencies from buying sensitive personal information like geolocation data from private companies, and would instead require the agencies to obtain a warrant.3

Given the lack of a clear federal law, the FTC in August 2022 announced that it was seeking public comment on whether the agency should institute new regulatory restrictions to limit harmful commercial surveillance practices.4

Most legislative activity on data privacy has occurred at the state or local level.5 In 2022, at least 35 states considered new or updated privacy proposals.6 Two California laws, the 2018 California Consumer Privacy Act (CCPA) and the 2020 California Privacy Rights Act (CPRA),7 allow state residents to obtain information from businesses about how their personal data are collected, used, and shared.8 Among other powers granted to them under the CPRA, consumers can request that personal information held by a business be corrected, opt out of automated decision-making technology, and opt out of certain information sharing.9

Under the USA FREEDOM Act of 2015, companies are permitted to report granular detail on certain types of government requests, under some constraints.10 In 2019, a request under the Freedom of Information Act (FOIA) revealed that the FBI had used national security letters—a form of secret administrative subpoena that the bureau can issue to demand certain types of communications and financial records—to access personal data from a much broader group of entities than previously understood,11 including Western Union, Bank of America, Equifax, TransUnion, the University of Alabama at Birmingham, Kansas State University, major ISPs, and tech and social media companies.

Separately, the government may request that companies store targeted data for up to 180 days under the 1986 Stored Communications Act (SCA).12

In 2018, the Supreme Court ruled narrowly in Carpenter v. United States that the government is required to obtain a warrant in order to access seven days or more of subscriber location records from mobile service providers.13 The ruling also diminished, in a limited way, the third-party doctrine—the idea that Fourth Amendment privacy protections do not extend to most types of information that are handed over voluntarily to third parties, such as telecommunications companies.14

The scope of law enforcement access to user data held by companies was expanded earlier in 2018 under the Clarifying Lawful Overseas Use of Data (CLOUD) Act.15 The act stipulated that law enforcement requests sent to US companies for user data under the SCA would apply to records in the company’s possession, including overseas. The CLOUD Act also allowed certain foreign governments to enter into bilateral agreements with the United States and then petition US companies to hand over user data,16 bypassing the “mutual legal assistance treaty” (MLAT) process.17 In 2019, the United States and the United Kingdom signed the first Bilateral Data Access Agreement under the CLOUD Act, and in December 2021,18 the United States and Australia entered a similar pact.19

User information is otherwise protected under Section 5 of the Federal Trade Commission Act (FTCA), which has been interpreted to prohibit internet entities from deceiving customers about what types of personal information are being collected from them and how they are used.

Private companies may comply with both legal demands and voluntary requests for user data from the government. In October 2021, Vice News reported on an FBI document that clarified what data service providers collect and store, how the bureau and other law enforcement bodies can obtain location information from the providers without a warrant, and what tools agencies have to analyze the information provided.20 In November 2021, the transparency organization Property of the People released a previously unreported FBI document that showed the extent to which certain messaging platforms—like WhatsApp, Signal, iMessage, and Viber—store user data that can be accessed via warrants or subpoenas.21 In 2019, the Justice Department confirmed that a Drug Enforcement Administration (DEA) program had collected billions of phone records from AT&T without a court order.22

Government bodies have purchased phone location data to aid in investigations and law enforcement, sidestepping judicial and other forms of oversight.23 In July 2022, the ACLU published thousands of pages of records indicating that DHS agencies including CBP, ICE, the Secret Service, and the Coast Guard had purchased huge volumes of location information pulled from mobile apps.24 Similarly, Vice News reported in November 2020 that US military agencies tasked with counterterrorism initiatives had contracted a third-party data broker to provide personal information from a popular Muslim prayer and Quran application.25

The Dobbs decision reignited calls for Congress to pass a privacy law and for companies to limit the data they collect and share with state officials, particularly in states where abortion is now criminalized and such information could be used for prosecutions.26 A Gizmodo investigation identified 32 data brokers selling information from an estimated 2.9 billion profiles of people determined to be pregnant or who searched for maternity products online, as well as 478 million customer profiles categorized as “interested” in becoming or “intended” to become pregnant.27 Vice News similarly reported that the data broker SafeGraph was selling aggregated data, including location information, of people who visited abortion and reproductive health clinics.28 In response to these concerns, the FTC announced that it would, to the extent of its legal authority, protect Americans against companies that exploit health, location, and other sensitive information.29 Democratic Party senators also introduced the draft Health and Location Data Protection Act, which would ban data brokers from selling health and location data.30

Facebook complied with a search warrant sent in June 2022, after the coverage period, and provided Nebraska police with private messages between a mother and daughter as part of a felony case related to an alleged abortion.31 The incident prompted renewed calls for the platform to encrypt its messaging services.32

In May and June 2021, new disclosures revealed that the Justice Department under the Trump administration had secretly obtained the phone records of several Washington Post, CNN, and New York Times reporters while investigating leaks of classified information.33 Also targeted were members of Congress, their staff members, and their families,34 as well as former White House counsel Don McGahn and his wife.35 In response to these disclosures, the department in June 2021 announced that it would no longer secretly collect journalists’ records,36 and Senator Wyden introduced the draft Protect Reporters from Excessive State Suppression (PRESS) Act, which would create new federal protections for reporters’ phone and email records.37

Police issue “geofence” warrants to gain access to information from electronic devices within a given geographic area, raising due process and proportionality concerns. In May 2020, during protests in response to George Floyd’s murder, police in Minneapolis obtained a warrant compelling Google to deliver account data for anyone within a specified area of the city.38 In August 2020, two federal judges in separate opinions ruled that such broad location-based warrants violate the Fourth Amendment.39

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities? 3 / 5

Internet users are generally free from extralegal intimidation or violence by state actors. However, online harassment is a long-standing and growing problem in the United States. Women and members of marginalized racial, ethnic, and religious groups are often singled out for such threats and mistreatment. A 2021 report from the Pew Research Center found that 41 percent of adults in the United States have experienced online harassment, with 33 percent of women under 35 reporting that they have faced sexual harassment online.1

In recent years, people involved with election administration and certification have faced increasing online harassment, due in part to conspiracy theories about their role in supposed fraud schemes (see B5 and B7). A March 2022 poll conducted by the Brennan Center for Justice found that one in six officials had received threats—often via social media—due to their election work, and that election workers are leaving their jobs in growing numbers due to safety concerns.2 Reuters documented more than 850 messages threatening election officials and their families in relation to their work during the 2020 election period.3

In June 2022, Shaye Moss, a former Georgia election worker, testified before the House of Representatives’ January 6 Committee about how violent and racist messages, including death threats, sent via text and social media had upended her life. The threats began after former president Trump and his lawyer Rudolph Giuliani smeared Moss and her mother as part of a conspiracy theory about fake ballots, which was then shared by far-right online outlets.4 The FBI recommended that Moss and her family relocate for their own safety; Moss’s mother moved, Moss avoided leaving her home, and both women ceased using their names in public.5

Scientists and government health officials have also faced increased online harassment, including threats of violence, amid the COVID-19 pandemic.6 A state health director in Ohio resigned after she was subjected to online threats due to her recommendations on COVID-19 mitigation practices.7

In general, online harassment and threats, including doxing, disproportionately affect women and members of marginalized demographic groups.8 A 2018 Amnesty International study found that Black women were 84 percent more likely to be mentioned in abusive posts than White women.9 In 2021, the Wilson Center concluded that gendered and sexualized disinformation aimed at women politicians was widespread.10 Similarly, a 2022 report found that women mayors and mayors of color reported higher rates of abuse and harassment, including online, compared with their male and non-Hispanic White peers.11 For example, Mayor Lauren McLean of Boise, Idaho, released a statement in March 2022 that described how her family, including her children, had been threatened and had their activities tracked online.12

Online journalists are at times exposed to physical violence or intimidation by police, particularly while covering protests. During a protest in Los Angeles against the Dobbs decision in June 2022, a reporter for the online news site LA Taco was repeatedly shoved by police officers, despite clear identification as a member of the press.13 Numerous online journalists were physically assaulted by police or civilians while covering racial justice protests in 2020, despite making it clear that they were media workers.14

Beyond isolated cases of violence, US-based journalists have faced growing online harassment. In a PEN America survey conducted between June and October 2021, 58 percent of more than 1,000 journalists and editors reported experiencing one or more forms of harassment, most often online, including via emails, trolling, doxing, or “catfishing.”15 The Committee to Protect Journalists found that 90 percent of US respondents to a 2019 survey of female and gender-nonconforming journalists cited online harassment as the “biggest threat” to safety associated with their jobs.16 In October 2020, after Fox News host Tucker Carlson disparaged NBC News reporter Brandy Zadrozny on his show, she received such severe and specific threats online that she required armed security for two weeks.17

In June 2022, the White House established the Task Force to Address Online Harassment in an attempt to tackle the problem, particularly with respect to women, LGBT+ people, civic and government leaders, journalists, and activists. The task force aimed to strengthen coordination across departments and agencies, improve data collection and research, develop new programs and policies, and increase access to services for those affected by harassment.18

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 1 / 3

Cyberattacks pose an ongoing threat to the security of websites and networks in the United States. Civil society groups, journalists, and politicians have also been subjected to targeted technical attacks. During the coverage period, federal agencies and security experts warned of, monitored, and prepared for potential cyberattacks by Russian actors following the Russian military’s invasion of Ukraine and related US sanctions.1 The Supreme Court’s Dobbs decision renewed digital security concerns among many Americans and raised the risk of technical attacks on websites associated with pro-choice and anti-abortion groups.2

In January 2022, an attack against the News Corporation media conglomerate was discovered, with hackers apparently gaining access to the emails and documents of some journalists at the Wall Street Journal, its parent company Dow Jones, and the New York Post. The cybersecurity firm Mandiant determined that those behind the attack were connected to China and seemed to be interested in reporting related to Taiwan, China’s Uyghur population, US military activity, US tech regulation, and White House officials including President Biden and Vice President Kamala Harris.3

Security experts and agencies warned of technical attacks before and during the November 2022 midterm elections, particularly by actors aligned with the governments of Russia, China, and Iran.4 Ahead of the 2020 elections, Microsoft announced that a hacking unit associated with the Russian military intelligence service had targeted at least 200 US organizations, including national and state political parties and political consultants. Iranian and Chinese hackers also targeted people associated with Trump’s and Biden’s presidential campaigns.5

In May 2021, President Biden issued an executive order designed to bolster federal cybersecurity networks.6 The move came after hackers suspected of affiliation with the Russia-based group DarkSide carried out a ransomware attack on the Colonial Pipeline, which disrupted fuel delivery to significant portions of the East Coast.7

In one of the largest and most sophisticated attacks in recent years, SolarWinds, a prominent information technology company, was compromised by an extensive infiltration attributed to the Russian government; it was first reported publicly in late 2020.8 The attackers used SolarWinds as a vehicle to penetrate federal government agencies, private-sector networks, think tanks, and civil society organizations, as the company’s software updates were installed by more than 18,000 users.9 The Biden administration responded with sanctions against the Russian government in April 2021.10

In June 2020, the Toronto-based research center Citizen Lab revealed that Dark Basin, a hack-for-hire group, had used phishing and other attacks against US nongovernmental organizations working on issues related to net neutrality and a climate-change campaign called #ExxonKnew.11 Several journalists from major news outlets faced technical attacks emanating from the same group.

Cyberattacks against state and local governments are increasingly common. According to one analysis, between 2017 and August 2020, cyberattacks on state, local, tribal, and territorial governments rose by an average of nearly 50 percent.12

Ransomware attacks have siphoned resources from institutions and individual users. As of April 2022, some former Maryland teachers affected by a 2020 ransomware attack were still not able to change their medical insurance payments, meaning some were owed thousands of dollars while others were underpaying for benefits.13 Separately, a December 2021 ransomware attack on Lincoln College, a predominantly Black educational institution, limited its access to data, halted its fundraising campaigns, and paused its student retention and recruitment efforts. The attack’s effects, combined with those of the pandemic, contributed to the school’s closure after 157 years in operation.14
