United States

A Obstacles to Access: 21 / 25
B Limits on Content: 30 / 35
C Violations of User Rights: 25 / 40
Total Score: 76 / 100
Last Year's Score & Status: 77 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

Internet freedom in the United States declined for the fourth consecutive year. Federal, state, and local authorities in many cases responded to nationwide protests for racial justice with intrusive surveillance, intimidation, and harassment, and there were some arrests for online activities. The online sphere was flooded with politicized disinformation, inflammatory content, and dangerous misinformation related to COVID-19, the November 2020 elections, and protests, among other topics. While the internet in the United States remains vibrant, diverse, and largely free from state censorship, an executive order signed by President Donald Trump in May 2020 marked a shift away from the robust intermediary-liability protections that have long been synonymous with the US internet freedom model. After the coverage period, the president also ordered US individuals and entities to halt transactions with TikTok and WeChat, potentially forcing the popular Chinese-owned social media platforms to sell or shutter US operations that have an estimated 50 million and 19 million users, respectively.

The people of the United States benefit from an open and competitive political system, a strong rule-of-law tradition, robust freedoms of expression and religious belief, and a wide array of other civil liberties. However, in recent years the country’s democratic institutions have suffered erosion. Issues both longstanding and new include partisan manipulation of the electoral process, racism and inequality in the criminal justice system, flawed and discriminatory policies on immigration and asylum seekers, and growing disparities in wealth, economic opportunity, and political influence.

Key Developments, June 1, 2019 – May 31, 2020

  • After surviving legal challenges, the merger between T-Mobile and Sprint concluded in April 2020. Industry watchers and open-internet advocates have voiced concern over market concentration among mobile service providers (see A4).
  • In October 2019, a federal appeals court upheld a decision by the Federal Communications Commission (FCC) to repeal its 2015 Open Internet Order, which aimed to establish the principle of net neutrality. However, the court prohibited preemptive blocking of state-level action to protect net neutrality in the wake of the FCC’s repeal (see A5).
  • In May 2020, President Trump signed an executive order titled “Preventing Online Censorship” to limit protections against intermediary liability within Section 230 of the Communications Act, as amended by the Telecommunications Act of 1996. The order came after Twitter fact-checked the president’s baseless claims about the risk of fraud through mail-in voting, and was just one of many government efforts to reform Section 230 (see B2 and B3).
  • In an unprecedented action, the government moved to effectively ban two popular Chinese-owned social media platforms; President Trump cited national security concerns when issuing executive orders in August 2020, after the coverage period, that prohibited any transaction from the United States with WeChat and TikTok. The decision came after the Treasury Department launched an investigation into TikTok in November 2019 (see B2 and B3).
  • Amid nationwide protests for racial justice that surged in May 2020 and continued in the months after the coverage period, enhanced government surveillance—as well as intimidation, harassment, and arrests linked to online activity—infringed on people’s freedom to use digital technology to associate and assemble (see B8, C3, C5, and C7).
  • In April 2020, Puerto Rico amended its public security law to make it a felony to “transmit or allow the transmission” of “false” statements about government proclamations or orders related to emergencies, including those pertaining to COVID-19, with penalties of up to $5,000 in fines and six months in jail (see C2 and C3).

A Obstacles to Access

Reliable internet service is accessible for the majority of residents in the United States. However, challenges remain in rural areas and tribal lands. Concentration in the telecommunications industry continues, with only three major mobile service providers in the country. While the federal government has turned away from net neutrality principles, a growing number of states have implemented their own rules.

A1 (0-6 pts)
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? Score: 6 / 6

The United States has the third largest number of internet users in the world,1 although the overall connection speeds and access rates for broadband networks remain lower than in many peer countries. In June 2019, the Pew Research Center estimated that 90 percent of US adults use the internet.2 Similarly, the International Telecommunication Union found that 87 percent of the US population used the internet in 2017, earning the United States a ranking of 29th out of 207 countries.3 The FCC’s most recent Broadband Deployment Report determined that, as of 2017, more than 93 percent of the population lived in an area with access to both fixed-line and mobile broadband services.4

Comparatively speaking, the United States leads other member states of the Organisation for Economic Co-operation and Development (OECD) in terms of total fixed broadband and total mobile broadband subscriptions.5 On fixed broadband subscriptions per capita, the United States ranks 18th out of 37 countries, lagging well behind OECD leaders Switzerland, France, Denmark, Germany, the Netherlands, and South Korea. With respect to mobile broadband subscriptions per capita, the United States ranks fourth behind Japan, Finland, and Estonia. In a global comparison, the country ranked 15th in broadband speeds in 2019.6 Nearly all adults in the United States (96 percent) reported owning a mobile phone in 2019.7 The US mobile broadband market is the largest among OECD countries.8

The speed-testing company Ookla reported the US mobile internet download speed to be 44.06 Mbps in July 2020, ranking it 33rd worldwide, and the fixed broadband download speed to be 152.6 Mbps, or ninth worldwide.9

Despite the country’s high penetration rates, infrastructural problems during the coverage period limited access; the incidents were often related to extreme weather events exacerbated by climate change. In the fall of 2019, California’s largest electricity provider imposed temporary blackouts to stymie the spread of wildfires,10 affecting telephone and internet service.11

Major broadband providers continue to expand fifth-generation (5G) mobile networks, and “in aggregate” 5G networks are available to the majority of the population, although coverage is skewed toward urban areas.12 China’s position as a global leader in 5G development has raised concerns among US policymakers about the national security,13 economic, and internet freedom implications of embracing 5G technology built by Chinese companies. During the coverage period, the US government was working toward a method for safeguarding the operation of 5G networks, with authorization under the Secure 5G and Beyond Act of 2020.14 In June 2020, after the coverage period, the FCC designated two Chinese telecommunications companies and 5G technology providers—Huawei and ZTE—as national security threats.15

A2 (0-3 pts)
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? Score: 2 / 3

Despite its relatively strong infrastructure for internet provision, the country suffers from disparities in access, with older members of the population, those with lower household incomes, and residents of rural areas or tribal lands facing the greatest obstacles to connectivity.1

Tribal communities are among the least connected populations in the United States.2 More than 27 percent of people living in tribal areas lack access to fixed terrestrial broadband.3 A study by the American Indian Policy Institute at Arizona State University found that only 69 percent of respondents living on tribal lands had stable mobile internet access at all times.4

More than 26 percent of rural residents do not have access to high-speed internet, while others in this demographic often suffer from unreliable connections and unaffordable prices.5 Created in 2018, the Department of Agriculture’s ReConnect initiative aims to build internet infrastructure in rural areas that have limited broadband access.6 In 2020, the department offered $550 million in grants, loans, or a combination of the two to private entrepreneurs seeking to participate in the program.7

Internet access rates for individuals aged 65 and older have steadily increased over the past decade, reaching 73 percent in 2019, according to the Pew Research Center.8 Mobile technologies are also helping to reduce the broadband digital divide.9 Younger adults, people of color, and those with lower household incomes are especially prone to being “smartphone dependent,” with limited alternatives for their internet access.10

The cost of broadband internet access in the United States exceeds that in many countries with similar penetration rates, creating an “affordability crisis,” according to New America’s Open Technology Institute.11 One comparative analysis from Cable.co.uk found that the United States was more expensive than 118 other countries in 2020 with respect to the average cost of a fixed-line broadband package per month.12

In 2016, the FCC announced the expansion of its Lifeline program—which allows companies to offer subsidized phone plans to low-income households—to include broadband internet access as a subsidized utility.13 However, the commission, under Ajit Pai’s leadership beginning in 2017 (see A4), has taken steps to downsize the program.14 Changes passed that year would have limited the availability of certain price discounts to people living on tribal lands, with Pai claiming the change would incentivize service providers to create networks in the areas.15 In 2019, an appeals court vacated these changes, referring to them as “arbitrary and capricious.”16

As of 2018, more than 9.6 million Americans relied on Lifeline for internet access.17 The Center for Public Integrity and USA Today reported in November 2019 that enrollment in the program had fallen 21 percent since 2017.18 In 2019, the FCC announced administrative adjustments for the purpose of combating fraud in the program and increasing minimum service requirements.19 Among the changes were new financial disclosure obligations and data usage monitoring.20 A number of civil society groups objected, citing invasive practices that put Lifeline subscribers’ privacy at risk.21 Low-income users who depend on Lifeline may be significantly disadvantaged by data caps and other administrative and technical restrictions within the structure of the program.

In March 2020, during the COVID-19 pandemic, more than 200 organizations wrote to the FCC outlining public harms that might arise from internet access limitations.22 The FCC took some steps to reduce the impact of Lifeline program restrictions as the pandemic continued, such as waiving certain recertification requirements for enrollees.23

Stay-at-home policies implemented during the health crisis have highlighted and magnified long-standing inequalities in access.24 Some new efforts sought to protect households and individuals from loss of service related to financial difficulties. The FCC’s Keep Americans Connected Initiative calls on internet service providers (ISPs) to voluntarily pledge (with no penalties for abrogation) not to shut down connections for customers who are unable to pay their bills. It also encourages companies to waive late-payment fees for those facing coronavirus-related hardships, to open Wi-Fi hotspots for more Americans, and to offer temporary free internet service to students without home access, among other recommendations.25 Many service providers also temporarily waived data caps or expanded data plans for consumers during the pandemic.26 At the end of the coverage period, some companies opted to voluntarily extend their Keep Americans Connected pledges in various ways, while others resumed charging consumers for excess data usage.27

A3 (0-6 pts)
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? Score: 6 / 6

The government imposes minimal restrictions on the public’s ability to access the internet. Private telecommunications companies, including AT&T and Verizon, own and maintain the backbone infrastructure. A government-imposed disruption of service would be highly unlikely and difficult to achieve due to the multiplicity of connection points to the global internet.

However, federal law enforcement agencies have occasionally limited wireless internet connectivity in emergency situations. The federal government also has a secret protocol for shutting down wireless internet connectivity in response to particular events, some details of which came to light following a lawsuit brought under the Freedom of Information Act (FOIA) in 2013.1 The protocol, known as Standard Operating Procedure (SOP) 303, was established in 2006 on the heels of the 2005 London transit bombings, which raised fears that explosive devices could be triggered by cellular signals. SOP 303 codifies the “shutdown and restoration process for use by commercial and private wireless networks during national crises.” Just what constitutes a “national crisis,” and what safeguards exist to prevent abuse, remain largely unknown. The full SOP 303 documentation has not been publicly released.2

State and local law enforcement agencies also have tools to jam wireless internet service.3 However, in 2014, the FCC issued an enforcement advisory clarifying that it is illegal to jam mobile networks without federal authorization.4

A4 (0-6 pts)
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? Score: 4 / 6

While a variety of broadband service providers operate in the United States, the industry has trended toward consolidation. Many consumers have only one provider in their area, particularly for fixed-line service, and these de facto local monopolies have exacerbated concerns about high cost and accessibility.1 To some extent, the recognized deficiencies of the country’s internet infrastructure can be attributed to insufficient competition.2 Broadband Now identified 22 states with legal, regulatory, or economic barriers to the establishment of municipal broadband networks.3

From 2018 to 2019, the number of subscribers to the nation’s largest fixed-line broadband internet providers grew by about 2.5 million.4 Comcast leads the market with more than 28.5 million subscribers overall, while its primary rival, Charter Communications, reports approximately 26.6 million subscribers. Cox, the next closest competitor, has an estimated 5 million subscribers. In 2016, the FCC approved Charter Communications’ acquisition of Time Warner Cable and Bright House Networks; the transactions were subsequently greenlighted by the California Public Utilities Commission.5

Further consolidation of the telecommunications sector threatens to limit consumer access to information and communication technology (ICT) services and content. In June 2018, mobile service provider AT&T announced that it had acquired media and entertainment company Time Warner, a major content producer that is not affiliated with the broadband provider Time Warner Cable.6 In July 2018, the Justice Department announced that it would appeal the court decision that had allowed the merger to proceed, arguing that there would be harm to consumers.7 In February 2019, the US Court of Appeals for the District of Columbia Circuit upheld the lower court’s decision, and the Justice Department indicated that it was not planning another appeal.8 More than a year after the merger, there have been reports of financial problems at AT&T, with consumers facing price increases.9

The FCC has made some attempts to address concerns about reduced competition and limited consumer access in recent merger approvals. For example, the commission included provisions within the 2016 Charter–Time Warner Cable deal that required Charter Communications to expand broadband availability to close the digital divide, including by establishing new cable lines in poorly served areas of California and providing affordable access to at least 525,000 low-income families.10 Other conditions prohibit the companies from taking steps that would privilege their cable television services over online video competitors, such as the imposition of data caps on online content that would discourage subscribers from streaming video.11 In 2015, regulators had blocked a proposed merger between Time Warner Cable and Comcast, citing concerns about Comcast’s ability to interfere with over-the-top services (such as Netflix) as well as increased market concentration.12

The deployment of “long-term evolution” (LTE) networks, combined with the gradual phase-in of 5G technologies across the country (see A1),13 has accelerated the public’s reliance on mobile providers for internet access. Following a decade of consolidation, three national providers—AT&T, Verizon, and T-Mobile—now dominate the market. Verizon leads the group with approximately 119 million subscribers, followed by T-Mobile with 98.3 million and AT&T with 93 million.14

The FCC approved a merger between Sprint and T-Mobile in May 2019. Subsequently, in July 2019, the Justice Department gave its approval after reaching a settlement requiring Sprint to divest its prepaid mobile services to Dish Network.15 By September 2019, however, 17 state attorneys general had filed a lawsuit to block the merger,16 a move that was consistent with the federal government’s previous position against further consolidation of mobile networks. As far back as 2011, federal regulators had halted a proposed merger between AT&T and T-Mobile. In 2014, regulators signaled again their intent to prevent a rumored merger between the two companies.17

Despite the legal hurdles, T-Mobile and Sprint eventually concluded their $30 billion merger in April 2020.18 Ruling in favor of the acquisition, US District Court judge Victor Marrero concluded that “the merger is not reasonably likely to substantially lessen competition” in the mobile sector.19 In March 2020, 12 states and Washington, DC, ended their opposition to the merger after winning concessions related to new job opportunities, consumer protections, and broadband expansions from the companies.20

To promote affordable access,21 the FCC in 2016 began buying back spectrum that was previously set aside for television broadcasters and auctioning it for use by mobile broadband providers.22 Between December 2019 and March 2020, the FCC completed the large-scale Auction 103 for 5G millimeter-wave spectrum,23 which was instrumental for the rollout of 5G service.

In 2015, then president Barack Obama announced an initiative to encourage the development of community-based broadband services in small and remote municipalities and asked the FCC to remove barriers to local investment.24 The FCC quickly preempted state laws in Tennessee and North Carolina that restricted local broadband services.25 In 2016, a federal court ruled that the FCC did not have the authority to preempt such laws,26 which were also on the books in many other states.

A5 (0-4 pts)
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? Score: 3 / 4

The FCC is tasked with regulating radio and television broadcasting, interstate communications, and international telecommunications that originate or terminate in the United States. It is formally an independent regulatory body, but critics on both sides of the political spectrum argue that it has become increasingly politicized in recent years.1

The body is led by five commissioners who are nominated by the president and confirmed by the Senate, with no more than three commissioners from one party. President Donald Trump nominated Republican commissioner Ajit Pai to serve as chair in January 2017, and he was duly confirmed in office.2 A Republican majority currently controls the FCC.

Other government agencies, such as the Commerce Department’s National Telecommunications and Information Administration (NTIA), play advisory or executive roles on telecommunications, economic and technology policies, and related regulations.

The FCC under Pai’s leadership has taken a number of steps toward deregulating the telecommunications industry. In March 2017, the commission froze broadband privacy guidelines that were adopted the previous October.3 The rules would have required broadband providers to obtain opt-in consent from consumers before using and sharing information such as a user’s browsing history and application usage data. They would also have given consumers the ability to opt out of the use and sharing of other types of personally identifiable information.4 Using a law that facilitates the dismissal of regulations recently enacted by an outgoing administration, Congress confirmed these moves by repealing the broadband privacy guidelines supported by President Obama.5 In February 2017, the FCC ended its review of zero-rating practices—which provide free internet access to consumers under certain limited conditions—as part of its movement away from the principles of net neutrality.6 Critics argue that the perpetuation of zero-rating services, while modestly expanding internet access, has the potential to harm consumers by stifling market competition and limiting the diversity of online content available to some users.7

In December 2017, the FCC repealed its 2015 Open Internet Order, often referred to as the net neutrality rule, which had reclassified broadband internet access services from “information services” to “telecommunications services.”8 This reclassification gave the FCC greater legal authority to prohibit “unreasonable discrimination,” meaning network operators would not be allowed to give preferential treatment to certain types of online traffic on fixed or mobile networks.

The 2017 decision, known as the Restoring Internet Freedom Order,9 went into effect in June 2018. It effectively allowed ISPs to speed up, slow down, or restrict the traffic of selected websites or services at will. The repeal also weakened the FCC’s previous regulatory authority over broadband ISPs.10 Pai argued for the change as a “light-touch” regulatory model that would promote innovation and serve consumer interests.11 Yet several civil society and public interest groups have argued that it would disadvantage consumers in various ways,12 abandon the FCC’s responsibility to protect freedom of expression online,13 and likely result in a less free and open internet.14 According to opinion polls, a majority of Americans support the idea of net neutrality and its goals.15

Since 2018, numerous state legislatures, attorneys general, and civil society groups have taken up the fight to restore net neutrality (see B6). Twenty-one state attorneys general filed a lawsuit with the US Court of Appeals for the District of Columbia Circuit claiming that the FCC’s decision was “arbitrary and capricious” and violated several aspects of federal law.16 Civil society groups and nonprofits—including Mozilla,17 Public Knowledge,18 the Open Technology Institute,19 and Free Press20—filed protective petitions urging the US Courts of Appeals for the First and District of Columbia Circuits to review the FCC’s decision. The proposed Save the Internet Act, which would reinstate the internet regulations that were rolled back under Pai, passed the House of Representatives under Democratic Party leadership in 2019, but it had yet to be taken up by the Republican-controlled Senate as of 2020.21 In October 2019, a federal appeals court upheld the FCC’s repeal of the Open Internet Order,22 though it ruled that the FCC cannot preemptively block states from instituting their own laws intended to safeguard net neutrality.

A number of states, including California,23 Oregon,24 Vermont,25 Washington,26 Colorado,27 Maine,28 and New Jersey,29 have enacted net neutrality laws. Twenty other states, along with the District of Columbia and Puerto Rico, introduced net neutrality legislation during 2020.30 A proposed bill surfaced in New York in March 2020, but it was rebuked by the Electronic Frontier Foundation for insufficiently addressing zero-rating policies and related issues.31 The governors of Montana and New York have signed executive orders barring state agencies from conducting business with ISPs that violate net neutrality.32

The importance of internet access in daily life has increased under stay-at-home policies implemented in 2020 to limit the spread of COVID-19 (see A2). Responding to the FCC’s recent call for comments on the repeal of net neutrality rules, a number of groups allied behind the position that “without internet access it is virtually impossible for adults to telework, children to keep up with their classes via e-learning, and for people to try and stay healthy with telehealth.”33 As of May 2020, the FCC had not materially altered its regulatory stance in consideration of the special public needs associated with the pandemic.

B Limits on Content

The government places few restrictions on internet content and communication. However, some federal lawmakers and the Trump administration have proposed reforms to long-standing laws related to free expression online. Disinformation, from sources including foreign powers and US political leaders, continues to undermine the quality of the online environment, a problem that becomes especially acute during contentious political events like elections and protests.

B1 (0-6 pts)
Does the state block or filter, or compel service providers to block or filter, internet content? Score: 6 / 6

In general, the government does not force ISPs or content hosts to block or filter online material that would be considered protected speech under international human rights law. This includes political speech.

B2 (0-4 pts)
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content? Score: 3 / 4

The government does not directly censor any particular political or social viewpoints online, although legal rules do restrict certain types of content. Intermediaries can face copyright liability if they do not honor notice-and-takedown provisions of the Digital Millennium Copyright Act (DMCA). They can also face federal criminal liability for failing to remove content such as child sexual abuse imagery once they become aware of it. Proposals to reform Section 230 of the Communications Act, as amended by the Telecommunications Act of 1996 (commonly known as Section 230 of the Communications Decency Act)—a long-standing rule meant to protect freedom of speech online—drew intense scrutiny during the coverage period (see B3).

In an unprecedented development, the government moved to ban the US operations of two popular Chinese-owned social media apps (see B3). In November 2019, citing national security concerns, the Treasury Department’s Committee on Foreign Investment launched an investigation into TikTok, a short-video platform owned by the Chinese company ByteDance.1 In August 2020, after the coverage period, the Trump administration issued two executive orders that prohibited people, companies, or any other actor within the United States from engaging in any transaction with ByteDance or another Chinese company, Tencent, in connection with its WeChat app, unless the platforms were no longer owned by a Chinese firm.2 The Commerce Department announced that the two apps would be removed from app stores in the United States in late September 2020,3 although federal courts issued preliminary injunctions to pause enforcement of the orders, citing serious implications for freedom of speech.4 Also in September, the government and the companies involved discussed one proposed solution that would give the US-based firms Oracle and Walmart a 20 percent stake in a new entity called TikTok Global.5

Broadly speaking, content hosts and social media platforms are the primary decision-makers when it comes to the provision, retention, or moderation of online content, so long as the content is not prohibited under existing legal guidelines. Companies have successfully argued that moderation decisions are an exercise of their own constitutionally protected right to set platform policies, allowing them to remove content and accounts that violate their rules and terms of service. During the coverage period, the deluge of disinformation and incendiary content related to COVID-19, nationwide protests, the upcoming 2020 general elections, and other high-profile political events prompted more civil society groups, political actors, and ordinary members of the public to demand that social media companies label or remove the harmful material.6

There were several instances in which social media platforms identified posts by President Trump and his administration as having violated company policies.7 In May 2020, Twitter added an informational label to a post by the president that contained baseless claims about mail-in ballots and voter fraud (see B3). The company explained that the post violated its rules on civic integrity.8 Also in May, Twitter flagged a post from President Trump’s account for “glorifying violence” in response to nationwide protests against racial injustice that were prompted by the widely publicized police killing of Minneapolis resident George Floyd; the company did not remove the post, finding that it may be in the public’s interest to keep it accessible (see C7).9 Separately, in August 2020, after the coverage period, both Facebook and Twitter removed a post shared by the president for violating their rules on spreading COVID-19 misinformation.10

In some cases, providers of internet infrastructure enforce their own discretionary speech policies. Despite having no legal obligation to do so, the chief executive of the internet infrastructure provider Cloudflare decided in August 2019 to drop services for 8chan, an online forum that was notorious for hosting manifestos written by the perpetrators of mass shootings.11 As the company noted in a formal securities filing, such choices may be driven by pressure from customers or risk of adverse commercial consequences, making provision of services to such customers an economic risk factor for the company and its investors.12

Politicians and government officials have at times appealed to social media companies to remove, restore, or flag specific content. In September 2019, a group of senators criticized Facebook for a fact-checking review in which three doctors flagged an antiabortion group’s videos claiming that abortion is never medically necessary to save a woman’s life.13 Facebook removed the review after the senators accused the platform of bias against conservative viewpoints. In October 2019, Facebook denied a request from Democratic Party presidential candidate Joseph Biden’s campaign to remove a Trump campaign ad promoting unsubstantiated claims of corruption in US policy on Ukraine when Biden was vice president.14 After the coverage period in June 2020, acting homeland security secretary Chad Wolf called on Facebook, Google, Apple, Twitter, and Snap to more aggressively remove content for promoting, or intending to organize, violence or looting in connection with recent protests.15

Although there is no evidence that direct government pressure on users to remove online content is systematic or widespread, at least one user experienced such pressure during the coverage period. In March 2020, a police officer in Wisconsin threatened to charge a teenager and her family with disorderly conduct and take them to jail unless she deleted Instagram posts about her COVID-19 infection.16

People in the United States have occasionally had their content restricted based on requests from foreign governments. In one prominent case, the New York Times reported that the video-conferencing platform Zoom, acting on a request from the Chinese government, temporarily suspended the account of a US-based Chinese activist who planned to host a meeting to memorialize the 1989 crackdown on prodemocracy protests in Beijing’s Tiananmen Square.17 Chinese social media platforms used by US residents have also come under scrutiny for content moderation decisions or their use of automated filtering.18 The Toronto-based group Citizen Lab found that moderators censored content containing over 2,000 keywords related to the COVID-19 pandemic on the communication platform WeChat and live-streaming platform YY, including criticism of the Chinese Communist Party and more generalized health information.19

In the absence of comprehensive regulatory mandates, tech companies have their own strategies for identifying and removing terrorist content. The Global Internet Forum to Counter Terrorism (GIFCT) is a cross-company “hash-sharing database” initially organized by Facebook, Microsoft, Twitter, and YouTube and now used by at least 13 companies to automatically flag over 200,000 images and videos.20 Content is included in the database for violating platforms’ own policies, not laws. A civil society letter criticizing GIFCT noted examples of apparent filtering errors such as YouTube’s deletion of over 100,000 videos stored by the Syrian Archive, a civil society organization dedicated to preserving evidence of human rights abuses in Syria. The letter noted that “almost nothing is publicly known about the specific content that platforms block using the Database, or about companies’ internal processes or error rates.”21 In response to critiques that the initiative was marked by secrecy and a lack of strong public involvement, among other concerns, GIFCT formally announced that it would become an independent organization at the end of 2019.22 That process was itself criticized by civil society groups as opaque and insufficiently protective of free expression and other human rights.23
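The mechanics of such a database can be sketched in a few lines: each member platform contributes a fingerprint (hash) of content it has removed under its own policies, and other members check new uploads against the shared set. The sketch below is purely illustrative, not GIFCT's actual implementation; real systems rely on perceptual hashes (such as Facebook's open-sourced PDQ algorithm) that tolerate re-encoding and minor edits, whereas the cryptographic hash used here matches only byte-identical copies.

```python
import hashlib

# Illustrative sketch of a cross-company "hash-sharing database."
# Real deployments use perceptual hashes, which survive minor edits;
# SHA-256 is a simple stand-in here, so any change to the file
# produces a different digest and misses the match.

def fingerprint(content: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

class HashSharingDatabase:
    """Shared set of fingerprints flagged under member platforms' policies."""

    def __init__(self):
        self._flagged = set()

    def contribute(self, content: bytes) -> None:
        # A member platform flags content under its own rules, not under law.
        self._flagged.add(fingerprint(content))

    def is_flagged(self, content: bytes) -> bool:
        # Other members check new uploads against the shared set.
        return fingerprint(content) in self._flagged

db = HashSharingDatabase()
db.contribute(b"example flagged media bytes")
print(db.is_flagged(b"example flagged media bytes"))  # exact copy: True
print(db.is_flagged(b"the same media, re-encoded"))   # altered bytes: False
```

The false-positive critiques cited above arise because flagging is only as accurate as the contributed fingerprints: once an erroneous entry enters the shared set, every member platform inherits the error automatically.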

In 2019, citing free speech concerns,24 the Trump administration announced that the United States would not sign on to the Christchurch Call, an agreement between social media companies and numerous national governments to combat terrorist content online. The pledge had been organized after a gunman live-streamed his attacks on mosques in Christchurch, New Zealand,25 and signatories included Amazon, Google, Facebook, Microsoft, and Twitter.26

Section 230 of the Communications Decency Act shields providers and content hosts from legal liability for most material created by users, including lawsuits alleging defamation or injurious falsehoods.27 However, there are exceptions to this immunity, including under federal criminal law, intellectual property law, laws to combat sex trafficking, and electronic communications privacy laws. Section 230 also ensures legal immunity for social media companies and other content providers that act in good faith to remove content when it violates their terms and conditions or their community guidelines.28 This policy design was considered instrumental in advancing the goals of economic growth, innovation, and freedom of expression during the early internet era.29 Many concerns regarding excessive or insufficient moderation of content on internet platforms center on the issue of how companies enforce their own rules (see B3).

The Allow States and Victims to Fight Online Sex Trafficking Act, also referred to as SESTA/FOSTA, was signed in April 2018. The law established new liability for internet services when they are used to promote or facilitate the prostitution of another person.30 While the law’s goal of aiding victims of sex trafficking is laudable, plaintiffs including advocates for sex workers’ rights and the Internet Archive have challenged it as a violation of the Constitution’s First Amendment speech protections. In 2019, a court of appeals permitted the case to go forward.31 After the bill passed in the Senate, but before it became law, reports surfaced of companies preemptively censoring content: Craigslist announced that it was removing the “personals” section from its website altogether.32 Civil society activists criticized the law for motivating companies to engage in excessive censorship in order to avoid legal action.33 Sex workers and their advocates also argued that the law threatened their safety, since the affected platforms had made it possible for sex workers to leave exploitive situations and operate independently, communicate with one another, and build protective communities.34 In December 2019, members of Congress introduced the SAFE SEX Workers Study Act, which, if passed, would mandate a study of the impact of SESTA/FOSTA on the health and safety of sex workers in the country.35

Section 512 of the Digital Millennium Copyright Act (DMCA), enacted in 1998, created new immunity from copyright claims for online service providers. However, the law’s notice-and-takedown requirements have been criticized for impinging on speech rights,36 as they incentivize platforms to remove potentially unlawful content without meaningful judicial oversight. Early research on the DMCA found that notice-and-takedown procedures were sometimes used “to stifle criticism, commentary, and fair use.”37 In other instances, overly broad or fraudulent DMCA claims resulted in the removal of content that would otherwise be excused under provisions for free expression, fair use, or education.38 DMCA complaints have also been exploited to take down political campaign advertisements, since their immediate removal means they will be unavailable during the electoral period, and the claims are unlikely to be challenged in court after the campaign ends.39 In May 2020, one major news outlet reported hundreds of fraudulent DMCA claims by people or companies, often using pseudonyms, to force the removal of links to content containing unfavorable news coverage about them.40 Despite such reports, in 2020 a long-awaited study by the US Copyright Office largely dismissed concerns about the law’s impact on online speech.41 Upon release, the research was criticized by some members of the legal and advocacy communities.42

B3 (0-4 pts)
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? 4 / 4

The government does not place onerous restrictions on online content, and domestic laws do not allow for broad government blocking of websites or removal of content. However, companies that host user-generated content, many of which are headquartered in the United States, have faced criticism in recent years for a lack of transparency and consistency when it comes to enforcing their own rules on content moderation.

Section 230 of the Communications Decency Act generally shields online sites and services from legal liability for the activities of their users, allowing user-generated content to flourish on a variety of platforms (see B2).1 Despite robust legal and cultural support for freedom of speech within the United States, the scope of Section 230 has recently come under criticism. Concerns about child sexual abuse imagery, protection of minors from harmful or indecent content, defamation, cyberbullying and cyberstalking, illegal gambling, financial crime, and terrorist content all contribute to the desire for reform of platforms’ legal immunity for user-generated content; in some cases, concerns have been driven by a misperception that platforms have no legal obligations to remove content, even for federally criminalized content like child sexual abuse material. At the same time, worries about viewpoint discrimination by the platforms have driven demands to limit their discretion to define and enforce content moderation rules.

SESTA/FOSTA of 2018 has been the only recent legislative change affecting Section 230 (see B2). However, during the coverage period, new efforts to overhaul Section 230 were initiated by the Trump administration. In February 2020, the Department of Justice hosted a workshop to examine whether and how Section 230 should be updated.2 In May, President Trump signed an executive order titled “Preventing Online Censorship” that aimed to limit protections against intermediary liability. Among other things, it directed the National Telecommunications and Information Administration (NTIA) to petition the Federal Communications Commission (FCC) to clarify regulatory rules pertaining to Section 230.3 A broad coalition of civil society members, academics, and tech companies criticized the order as harmful to online speech,4 and some also argued that it had little legal standing and served mainly a performative function.5 The administration has been sued by at least one civil society group in response to the order.6

The Preventing Online Censorship order was issued after Twitter fact-checked the president’s baseless claims about mail-in voting (see B2). Moreover, the order’s text cites Twitter’s alleged “political bias,” alluding to accusations from certain conservatives that social media platforms deliberately censor politically conservative views, despite scant evidence of any such practice.7 In August 2019, Facebook released an inconclusive report on political bias after commissioning an audit into the issue conducted by a former Republican senator in collaboration with a private law firm.8 Also that summer, the White House held a “social media summit” to promote discussion of anticonservative bias claims, with attendees ranging from mainstream conservative figures to online personalities who have peddled far-right conspiracy theories.9 In May 2020, the Wall Street Journal reported that President Trump was discussing the creation of a panel to investigate complaints of bias against conservatives on social media.10

Beyond the May 2020 executive order, more than 10 different proposals related to Section 230 reform were brought forward over the past year.11 In March 2020, lawmakers introduced the draft Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act. The bill, as of September 2020, would abrogate Section 230 immunities for platforms that host or facilitate child sexual abuse images in a “reckless” or “negligent” way, allowing states to impose criminal or civil liability.12 Civil society groups, legal experts, and technologists have raised concerns about the bill’s impact on encryption (see C4),13 free speech, and privacy, arguing that its provisions allow companies to be held liable for content even if they do not know it is present on their platforms, may ultimately lead to suppression of evidence in prosecutions of the accused, and would not effectively tackle the spread of child sexual abuse images online.14

An earlier proposal, the Ending Support for Internet Censorship Act, introduced in June 2019, would dismantle the liability shields for major online platforms “unless they submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.”15 It would compel social media services to undergo certification by the Federal Trade Commission (FTC) to keep Section 230 immunities.16 Civil society organizations and tech companies have criticized the proposal for, among other reasons, undermining free speech protections and imposing broad, “unmeasurable,” and “artificial” standards on companies’ content moderation practices.17

Other bills on Section 230 that were proposed after the coverage period include the Platform Accountability and Consumer Transparency (PACT) Act,18 the Stopping Big Tech’s Censorship Act,19 and the Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act.20 In September 2020, the Justice Department announced that it had submitted separate legislation to Congress to reform the law.21

The August 2020 executive orders that would effectively ban TikTok and WeChat quickly came under legal scrutiny (see B2). In September, federal courts issued preliminary injunctions against the compelled removal of the apps from app stores, citing serious questions about the First Amendment.22 A coalition of civil society groups and experts has also argued that the bans were a disproportionate response that failed to address genuine data privacy and security concerns while harming free speech and internet freedom.23 Supporters of the executive orders argued that they would help counter surveillance and other rights abuses by the Chinese government.24

The Children’s Internet Protection Act (CIPA) of 2000 requires public libraries that receive certain federal government subsidies to install filtering software that prevents users from accessing child sexual abuse images or other visual materials deemed obscene or harmful to minors. Libraries that do not receive the specified subsidies are not obliged to comply with CIPA. The Supreme Court’s interpretation of the statute permits adult users to request that the filtering be removed without having to provide a justification. However, not all libraries allow this option, arguing that decisions about filtering should be left to the discretion of individual libraries.25

Facebook, Twitter, and YouTube have all faced criticism for a lack of transparency regarding the enforcement of their respective community standards or terms of service, and for the effects of this enforcement on marginalized populations.26 Some observers have alleged that rules against hate speech, for example, lead to the removal of comparatively mild content posted by people of color, while other speech that appears more inflammatory remains accessible.27 The final report from an independent civil rights audit of Facebook, released in July 2020 after two years of work, raised similar concerns over hate speech, algorithmic bias, and content moderation policies.28 Separately, in August 2019, a group of LGBT+ content creators filed a lawsuit against YouTube on First Amendment and civil rights grounds. The complaint alleged that the company unevenly regulates and suppresses LGBT+ content, disproportionately filtering, demonetizing, and restricting access to such videos.29 Black activists also reported that their content was temporarily removed or inaccessible amid nationwide protests in 2020 (see B8).30

A number of platforms, including Facebook,31 Twitter,32 and YouTube,33 report data about the volume of content removed for violating community standards or terms of service. In 2020, amid increased attention on content removals and aggressive content moderation during the COVID-19 pandemic,34 many social media companies have improved transparency with regularly updated policy statements, including acknowledging increased reliance on automated tools rather than human judgment in takedown decisions.35 In addition, platforms are collaborating with industry and governmental partners to elevate the visibility of trusted sources for public health information.36

Facebook recently adopted a new mechanism for platform self-governance, although its effectiveness remains to be seen. A structurally independent Oversight Board will review and make recommendations regarding the company’s content decisions and platform policies.37 The board’s charter, which was released in September 2019, details its structure and operations.38 There are provisions for management via a separate trust, selection of an internationally inclusive membership panel, and the establishment of a quasi-judicial process for hearings, collaborative deliberation, and public judgments.39 The Facebook Oversight Board has a narrow mandate to adjudicate a small body of appeals, made by users who wish to have takedowns reviewed, and to assess whether Facebook accurately applied its existing policies. However, the board will not, for now, have power to change Facebook’s policies or review cases in which Facebook declined to remove content, nor will it evaluate content removals that Facebook determines to be required by law.40

B4 (0-4 pts)
Do online journalists, commentators, and ordinary users practice self-censorship? 3 / 4

Reports of self-censorship among journalists, commentators, and ordinary internet users are not widespread in the United States. Women and members of marginalized communities are frequently targets of online harassment and abuse, which can influence self-censorship (see C7). A 2017 Amnesty International survey of women across eight countries, including the United States, who had experienced online harassment found that 76 percent changed how they used social media as a result.1 Research released by the US Department of Education in 2019 also documented increases in online harassment and found that girls are three times more likely than boys to report being cyberbullied.2 Despite such prevalence, it remains unclear precisely how often the many forms of online harassment and abuse may lead journalists, commentators, and others to engage in self-censorship.

Social media users may change their behavior in line with their perceptions of government surveillance. A 2016 study in Journalism & Mass Communication Quarterly found that priming participants with subtle reminders about mass surveillance had a chilling effect on their willingness to publicly express dissenting opinions online.3 Another study from October 2018 reaffirmed the impact of online surveillance on self-censorship.4 Human Rights Watch has voiced concern about the extent to which Chinese citizens in the United States may self-censor on WeChat given the reach of the Chinese government’s digital surveillance regime.5

Various studies in recent years have concluded that aggressive leak investigations by the Justice Department, as well as US government surveillance programs, cause journalists and other writers to self-censor and to doubt their ability to protect the confidentiality and safety of their sources.6

B5 (0-4 pts)
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? 2 / 4

False, manipulated, misleading, or conspiracist information, disseminated online by both foreign regimes and domestic actors, is a major problem in the United States. The failure of US institutions to effectively respond to disinformation surrounding the 2016 elections, combined with US political leaders’ willingness to spread such content themselves, has created an unreliable information environment ahead of the November 2020 general elections.

Foreign actors continue to orchestrate disinformation campaigns.1 In September 2020, Facebook and Twitter announced that the Kremlin-backed Internet Research Agency, which spread disinformation during the 2016 presidential election, was running a network of fake accounts and a website purporting to be a left-wing news outlet, even hiring US journalists to write for the site.2 Such evidence confirmed previous warnings from intelligence agencies that foreign governments would spread disinformation ahead of the 2020 elections. Similar reports also warn that Russian groups have refined their strategies since 2016, leaning more on English-language sites and shifting away from using fake social media accounts.3 The newer methods were on display in August 2020, when Kremlin-backed outlets Russia Today and Ruptly were reported to have manipulated and spread across Twitter a story alleging that racial justice protesters had burned Bibles and the American flag; it was eventually shared by several high-profile Republican leaders.4

In February 2020, Facebook removed a small network of Iranian-based accounts spreading news on topics relating to US immigration policy, Middle East policy, and US-Iranian tensions;5 in October 2019 the platform had similarly removed accounts originating in Russia and Iran that targeted the United States.6 The State Department has revealed that the Russian, Chinese, and Iranian governments each engaged in spreading anti-American COVID-19 conspiracy theories during 2020.7

Disinformation campaigns are increasingly homegrown and at times encouraged by government officials and politicians.8 Manipulated content and conspiracy theories continue to flood online discussions around major political, cultural, and social topics, including the COVID-19 pandemic,9 racial justice protests,10 the impeachment of President Trump,11 the November 2020 elections,12 and the death of Supreme Court justice Ruth Bader Ginsburg.13

False and misleading political content is often propagated by President Trump himself on his official social media accounts,14 through both original and shared posts, including those from known conspiracist sources.15 The content has touched on a range of issues, such as claims that the official US death toll due to COVID-19 is inaccurate and baseless assertions about the risk of fraud through mail-in voting (see B2).16 The lead author of a Cornell University analysis of 38 million English-language articles about the pandemic said in September 2020 that Trump was “the single largest driver of misinformation” around COVID-19.17

Beginning in April 2020, ProPublica and First Draft analyzed Facebook posts using voter-related keywords, finding that the platform was rife with falsehoods, misleading information, or a blend of opinion and factual errors related to the upcoming November elections.18 Claims about mail-in ballots and voter fraud frequently came from right-wing users, while concerns about “stealing the election” were common among users across the ideological spectrum.

The Washington Post reported in September 2020 on a campaign in which accounts used spam-like behavior to spread false or partisan content.19 The newspaper linked the accounts to teenagers in Arizona who were being paid and managed by Turning Point Action, an affiliate of the well-known pro-Trump youth organization Turning Point USA. People involved in the campaign reported that leaders in the group coordinated what information would be shared and when. Both Facebook and Twitter have since removed a number of accounts connected to the campaign. In October 2020, Facebook announced it would permanently ban the marketing firm Rally Forge, which was working with Turning Point Action, from its platform.20

During the previous coverage period, the New York Times reported in December 2018 that in the lead-up to a 2017 special Senate election in Alabama between Democrat Doug Jones and Republican Roy Moore, a group of Democrat-aligned operatives with no connection to Jones’s campaign created a fraudulent Facebook page and used Twitter accounts to support Jones’s candidacy and harm Moore.21

A study by the Oxford Internet Institute (OII) found that ahead of the 2018 midterm elections, domestic alternative online outlets were the main purveyors of “junk news,” a term defined broadly to encompass deliberately incorrect, deceptive, or misleading information, including content that is propagandistic, ideologically extreme, hyperpartisan, or conspiracist.22 In an examination of 2.5 million Twitter posts and nearly 7,000 pages on Facebook, OII found that far-right and conservative pages spread more junk news than all other source categories combined.23

The deluge of disinformation has led to offline harms. During the COVID-19 outbreak in 2020, doctors reported that some patients failed to take their illness seriously due to false claims that the virus was a hoax or no worse than the flu, meaning they sought medical care too late.24 A number of people also reportedly ingested cleaning fluids and other toxic or unproven treatments based on statements by President Trump that were amplified online.25

Online news outlets in the United States are generally free of either formal arrangements or coercive mechanisms meant to force them to provide favorable coverage of the government. Political and economic factors can sometimes align to incentivize a close relationship between a political party and a given news organization.26

Some domestic news outlets have been found to run covert campaigns of misleading content or disinformation. In August 2019, Facebook restricted the ability of the right-wing news outlet Epoch Times to purchase advertisements on its platform. The decision came after reporting revealed that the outlet was pushing conspiracist content to a vast number of Americans under page names that were not explicitly associated with the media group, allowing it to bypass the platform’s transparency rules on political ad spending.27 In December 2019, Facebook again removed hundreds of accounts, pages, and groups linked to the Epoch Media Group that used fake profile photos created with assistance from artificial intelligence.28 The accounts shared political information on topics such as religion, President Trump’s impeachment, and conservative ideology. In another example, the New York Times reported that Ken LaCorte, a former Fox News executive, had set up a digital operation that published inflammatory, misleading, and conspiracist content across websites with names such as Liberal Edition News and Conservative Edition News.29

B6 (0-3 pts)
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? 3 / 3

There are no government-imposed economic or regulatory constraints on users’ ability to publish content. Online outlets and blogs generally do not need to register with, or have favorable connections to, the government to operate. Media sites can accept advertising from both domestic and foreign sources.

Experts argue that the FCC’s repeal of the 2015 Open Internet Order will result in new constraints for those wishing to publish online (see A5).1 Democratic Party lawmakers have pushed to enshrine net neutrality principles in federal legislation. The draft Save the Internet Act passed the House of Representatives in April 2019, but it had not been taken up for a vote in the Republican-controlled Senate by the end of the coverage period.2

In February and June 2020, the State Department designated nine Chinese state media companies as “foreign missions,” requiring them to report information on staffing and real estate holdings and limiting the number of employees they can post in the United States.3 In previous years, several Chinese and Russian state media outlets had been designated as “foreign agents,” a status with other transparency requirements attached.4 In September 2020, after the coverage period, the Justice Department declared Al-Jazeera Media Network to be an “agent of the Government of Qatar,” requiring its US-based social media division to register as a foreign agent.5 Al-Jazeera, which is privately held but reportedly subsidized by the Qatari government, expressed concerns that the registration would damage its journalistic work. However, neither “foreign agent” nor “foreign mission” designations entail any direct restrictions on an outlet’s content or ability to publish online.

B7 (0-4 pts)
Does the online information landscape lack diversity? 4 / 4

The online environment in the United States remains vibrant and diverse, and users can easily find and publish content on a range of issues and in multiple languages. However, the growing prevalence of disinformation and hyperpartisan content over the past several years has affected the information landscape, eroding the visibility and readership of more balanced or objective sources.1 In addition, online harassment and abuse targeting women and members of marginalized communities who speak out on social media are a persistent threat to the diversity of information and viewpoints (see B4 and C7).

Newsroom closures driven by the economic fallout from COVID-19 could also pose a threat to the diversity of information available online, though there had not yet been a significant impact by the end of the coverage period. The nonprofit Poynter Institute, which maintains an updated list of newsroom closures, layoffs, and operational changes associated with the coronavirus, documented just under 200 newspapers, weeklies, magazines, television stations, and radio stations experiencing organizational disruption as of May 2020.2 National digital media outlets like Vice News, BuzzFeed, and Vox have all laid off staff. The majority of those affected, however, are local outlets like Seattle’s alternative weekly the Stranger and the Pulitzer Prize–winning Charleston Gazette-Mail in West Virginia, many of which maintain online news presences.3

B8 (0-6 pts)
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? 5 / 6

Score Change: The score declined from 6 to 5 because enhanced surveillance, intimidation, and harassment during nationwide protests for racial justice infringed on people’s freedom to use digital technology to associate and assemble.

There are no technical or legal restrictions on individuals’ use of digital tools to organize or mobilize for civic activism. However, growing surveillance of social media and communications platforms, as well as targeted harassment and threats, have infringed on people’s freedom to engage in such activism.

Nationwide protests for racial justice and in support of the Black Lives Matter movement surged in mid-2020, after the police killings of Breonna Taylor in Kentucky and George Floyd in Minnesota in March and May, respectively. People flocked to internet services, communication platforms, and other digital tools to organize, mobilize, and share information. Such engagement, however, was met with enhanced surveillance,1 intimidation and harassment, and at times arrests for online activity (see C3, C5, and C7).

Federal and local law enforcement agencies increased their surveillance of social media platforms amid the protests (see C5).2 The Intercept reported in June 2020 that agents from a Federal Bureau of Investigation (FBI) terrorism task force appeared at homes or workplaces to question four people in Cookeville, Tennessee, who were involved in planning Black Lives Matter rallies on Facebook.3 One university student was questioned about her offer to provide transportation to and from a rally, as well as her private Facebook posts. An early coordinator of Cookeville’s protests was also interrogated, even after she ceased her activity in response to online death threats. Separately in North Carolina, the FBI questioned a man and his mother two days after he jokingly posted on Twitter that he was a local leader of antifa, a left-wing antifascist movement.4

Reports also suggest that law enforcement authorities had access to protesters’ electronic devices and private communications. The Lawyers Guild in Ohio cited cases in which local police handed over protesters’ phones to federal agents.5 Separately, the Washington Post reported on an internal Department of Homeland Security (DHS) document that discussed how the department had access to Portland, Oregon, protesters’ electronic messages, including from encrypted platforms like Telegram, and then disseminated such information to federal, state, and local agencies.6 The information reportedly focused largely on discussions among demonstrators about how to avoid being arrested or facing police violence during protests. Moreover, the New York Times reported in October 2020 that lawmakers on the Intelligence Committee said that DHS officers considered extracting data from protesters’ phones in Portland.7

Surveillance coupled with targeted harassment chilled some people’s willingness to use digital tools to associate and assemble. For example, citing concerns that online information would be used by hostile nonstate actors to disrupt or exploit planned gatherings, some Minneapolis residents instituted self-imposed restrictions on live streaming and sharing of information on social media.8 A photographer and activist in Philadelphia stopped posting to social media amid the protests, citing the need to protect demonstrators from police retaliation.9

Platforms have also restricted content relating to digital organizing. In June 2020, the civil society group Color of Change said that within a few weeks it had collected hundreds of reports of Facebook restricting or removing Black Lives Matter and antiracist content.10 For example, activist and educator Louiza Doran reported that while she was leading an antiracist teleconference workshop, Instagram temporarily removed posts that mentioned her name, linked to her work, or included pictures of her, for allegedly violating community standards. She also said Facebook had suspended her ability to stream live video. The platform explained that its automated system had incorrectly flagged her personal site as spam. Separately, in April 2020, Facebook announced that it would remove “events” advertising protests against COVID-19 lockdowns if the gatherings would violate a state government’s rules on social distancing.11

In recent years, despite such obstacles, some of the country’s most visible social movements have successfully combined on-the-ground organizing with social media efforts. Researchers who analyzed more than 40 million posts about the Black Lives Matter movement in 2016 identified Twitter as a powerful tool for connecting activist communities across the United States.12 Another study of Twitter, by Crimson Hexagon and the PEORIA Project at George Washington University in the fall of 2017, found that the #MeToo hashtag was used to comment on the problem of sexual harassment and assault more than seven million times.13

C Violations of User Rights

The legal framework provides strong safeguards for free expression and press freedom online, but it lacks robust protections for user privacy. Federal, state, and local government agencies continued to engage in surveillance of social media with limited oversight and transparency during the coverage period, most notably in the context of nationwide protests calling for racial justice. Technical attacks linked to foreign authoritarian regimes also continued, threatening to distort the November 2020 general elections.

C1 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? 6 / 6

The First Amendment of the US constitution includes protections for free speech and freedom of the press. The Supreme Court has long maintained that online speech has the highest level of constitutional protection.1 In a 2017 decision, the court reaffirmed this position, arguing that to limit a citizen’s access to social media “is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”2 Lower courts have consistently struck down government attempts to regulate online content, with some exceptions for illegal material such as copyright infringement or child sexual abuse images.

In May 2018, a federal judge ruled that President Trump’s practice of blocking critics from following his Twitter account was unconstitutional, finding that the president’s Twitter feed serves as a public forum and that preventing members of the public from interacting with it violated the First Amendment.3 In July 2019, the US Court of Appeals for the Second Circuit upheld the decision.4

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities? 2 / 4

Despite significant constitutional safeguards, laws such as the Computer Fraud and Abuse Act (CFAA) of 1986 have sometimes been used to prosecute online activity and impose harsh punishments. Certain states have criminal defamation laws in place, with penalties ranging from fines to imprisonment.1

In April 2020, Puerto Rico amended its public security law to make it a felony to “transmit or allow the transmission” of “false” statements about government proclamations or orders related to emergencies, including those pertaining to COVID-19.2 Penalties include fines of up to $5,000 and six months of jail time. In May, the American Civil Liberties Union (ACLU) sued the Puerto Rican government, asserting that the law violated constitutionally protected rights to free speech, a free press, and due process.3 Separately, officials in Newark, New Jersey, warned in March 2020 that “any false reporting of the coronavirus in [the] city [would] result in criminal prosecution,” specifically including false information on social media.4

Instances of aggressive prosecution under the CFAA have fueled criticism of the law’s scope and application. It prohibits accessing a computer without authorization, but fails to define the terms “access” or “without authorization,” leaving the provision open to interpretation in the courts.5 In one prominent case from 2011, programmer and internet activist Aaron Swartz secretly used Massachusetts Institute of Technology servers to download millions of files from JSTOR, a service providing academic articles. Prosecutors sought harsh penalties for Swartz under the CFAA, which could have resulted in up to 35 years’ imprisonment.6 Swartz died by suicide in 2013 before he was tried. After his death, a bipartisan group of lawmakers introduced “Aaron’s Law,” legislation that would prevent the government from using the CFAA to prosecute terms-of-service violations and stop prosecutors from bringing multiple, redundant charges for a single crime.7 The bill was reintroduced in Congress in 2015, but did not garner enough support to move forward.8 A number of states also have laws on computer hacking or unauthorized access, and several smaller legal cases have highlighted the shortcomings and lack of proportionality of these measures.9

In April 2020, the Supreme Court agreed to hear a case involving the CFAA that led to the conviction of a police officer who accessed police databases for unofficial purposes.10 An amicus brief filed by a group of nongovernmental organizations (NGOs)—the Electronic Frontier Foundation, the Center for Democracy and Technology, and the Open Technology Institute—argued for a narrow interpretation of the CFAA, asserting that a lower court’s decision in the case “broadens the CFAA’s scope and transforms it into an all-purpose mechanism for policing objectionable or simply undesirable behavior.”11 A decision was expected in 2021.

C3 0-6 pts
Are individuals penalized for online activities? 4 / 6

Prosecutions or detentions for online activities are relatively infrequent. However, the coverage period featured several cases in which people were arrested, charged, or threatened with criminal charges due to their online communications, notably during the COVID-19 pandemic and nationwide protests for racial justice in 2020. There have also been arrests related to the recording or live streaming of police interactions.

Members of the public and online journalists have been investigated, arrested, and charged in connection with online activities related to the ongoing racial justice protests, which ramped up in May 2020:1

  • At the end of May and into the first week of June, at least four people were charged with incitement to riot based solely on their social media posts, including content that constitutes protected speech under the First Amendment. Charges against at least one of the accused were soon dropped.2
  • On May 29, Bridget Bennett, on assignment for the news agency Agence France-Presse, and Ellen Schmidt, a photojournalist for the Las Vegas Review-Journal, were arrested in Las Vegas and charged with “failure to disperse.”3 While people facing that charge are supposed to be released from jail immediately, the two journalists were held in detention overnight.
  • Online journalists were arrested for reporting on protests after citywide curfews, despite displaying press credentials. On May 30, Vice News reporter Alzo Slade was arrested for allegedly breaking curfew in Minneapolis despite identifying himself as a reporter to police.4 Paige Cushman, a KATV journalist in Little Rock, was arrested and detained after streaming video live on Facebook; she was accused of violating curfew even after identifying herself as a member of the press.5
  • In June, after the coverage period, five people in New Jersey were charged with online harassment, a felony, for a Twitter post seeking to identify a masked police officer. One man was charged for publishing the post, while the other four had simply shared it.6

Users were also arrested for online activity related to the COVID-19 pandemic:

  • In April 2020, arrest warrants were issued for a journalist with the online outlet ProPublica and a freelance photographer after they traveled to Virginia’s Liberty University to report on its controversial COVID-19 policies.7 They were accused of trespassing.
  • In May, a pastor in Puerto Rico was charged for allegedly violating the territory’s “fake news” law after using WhatsApp to discuss a rumor about an executive order related to COVID-19 (see C2).8 Charges were later dropped.
  • Connecticut prosecutors charged a teenager in April with violating computer crime laws by intentionally disrupting a high school’s virtual classes with “obscene language and gestures.”9

Other arrests in recent years in connection with online reporting or speech include the following:

  • Robert Frese of Exeter, New Hampshire, was arrested and charged in May 2018 with criminal defamation for posting in the comments section of a local newspaper’s website that the Exeter police chief had “covered up for a dirty cop.”10 The charges were dropped after the state attorney general raised First Amendment concerns.11 On behalf of Frese, the ACLU filed a federal lawsuit in December 2018 to challenge New Hampshire’s criminal defamation law.12
  • In June 2019, photojournalist Michael Nigro was arrested for trespassing in New York City while photographing and live-streaming a demonstration on climate change.13 He subsequently claimed that he was wearing press credentials at the time of arrest. The charges were dropped in July.
  • A trial was ongoing at the end of the coverage period in the case of seven students at the University of Puerto Rico who were originally charged in 2017 with illegal assembly and intimidation of public officers, having participated in and live-streamed protests at the university.14 Facebook handed over private conversations on its platform to prosecutors in the case (see C6).

Police have periodically detained individuals who use their mobile devices to upload images or stream live video of law enforcement activity.15 Most of the arrests have been made on unrelated charges, such as obstruction or resisting arrest, since openly recording police activity is a protected right. In 2017, federal courts upheld the right of bystanders to use their smartphones to record police actions.16 In Miami in September 2019, Emmanuel David Williams was charged with disorderly conduct and resisting an officer without violence; reporting about the incident suggested that Williams’s decision to film his interaction with police contributed to their decision to arrest him.17 Williams subsequently uploaded the content to social media. In 2016, officers in Louisiana detained store owner Abdullah Muflahi for six hours and confiscated his mobile phone after he recorded a fatal shooting by police.18 Chris LeDay, a Georgia-based musician who shared another video of the same incident on Facebook, was arrested soon afterward for unpaid traffic fines.19

In April and May 2019, federal authorities charged WikiLeaks founder Julian Assange with “conspiracy to commit computer intrusion” under the CFAA,20 and with 17 violations of the Espionage Act, for his role in publishing classified documents in 2010.21 Press and internet freedom advocates have expressed concern about these types of charges, and the Assange case specifically,22 arguing that they could have ramifications for the legitimate work of journalists.23 Assange remained in detention in the United Kingdom as of May 2020. Hearings on the US government’s request for his extradition were pending at the end of the coverage period.24

In March 2019, Republican congressman Devin Nunes sued Twitter and the users operating three anonymous accounts for $250 million in damages, alleging defamation.25 In June 2020, a Virginia judge ruled that Twitter was immune from liability under Section 230 of the Communications Decency Act, though the individual users in the case were not protected by this ruling.26

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption? 3 / 4

There are no legal restrictions concerning user anonymity on the internet, and constitutional precedents protect the right to anonymous speech in many contexts. At least one state law that stipulates journalists’ right to withhold the identities of anonymous sources has been found to apply to bloggers.1

The terms of service and other contracts enforced by some social media platforms can require users to register under their real names.2 Online anonymity has been challenged in cases involving hate speech, defamation, and libel. In 2015, a Virginia court tried to compel the customer-review platform Yelp to reveal the identities of anonymous users, but the Supreme Court of Virginia ruled that state courts lacked the authority to enforce the subpoena against the out-of-state company.3 In May 2019, a court ruled that Reddit did not need to reveal the identity of one of its users to a plaintiff who was suing for copyright infringement.4

There are no legal limitations on encryption technology, but both the executive and legislative branches have at times moved to undermine encryption.5 In June 2019, Politico reported that the Trump administration was considering a legal ban on any encryption technology that would not allow law enforcement access.6 The following month, Attorney General William Barr argued that “warrant-proof encryption” degrades law enforcement agencies’ ability to detect, prevent, and investigate crimes.7 Subsequently, Barr requested that Facebook delay plans to encrypt messaging across its major products, citing public safety.8

In its original form, the draft EARN IT Act would have paved the way for weakening encryption, although the bill has since been changed to address these concerns in part (see B3).9 While a Senate committee approved the EARN IT Act in July 2020 with an amendment that would “exclude encryption” from the measure’s broader changes to Section 230 liability,10 advocates and some legal scholars still argued that the bill could set the stage for restrictions on the use of encryption across online platforms.11 In May 2020, in response to the shortcomings of the EARN IT Act, a group of Democratic senators introduced the Invest in Child Safety Act of 2020, an alternative bill that would maintain existing encryption standards while addressing online child exploitation.12

The proposed Lawful Access to Encrypted Data (LAED) Act, introduced in June 2020,13 goes significantly further than EARN IT to weaken encryption. The LAED Act would require the creation of a back door to encryption systems, so that both device manufacturers and service providers could decrypt devices and information at the request of law enforcement.14 Civil society groups, technical experts, and cybersecurity advocates have expressed strong opposition to the proposal.15

The degree to which courts can force technology companies to alter their products to enable government access is unclear. Following a terrorist attack in San Bernardino in 2015, the federal government obtained a court order that would have required Apple to create new software enabling the FBI to access the locked phone of one of the perpetrators.16 Apple resisted, and the case was dropped after the FBI gained access by other means. An inspector general’s report on the matter indicated that the FBI deliberately reduced its efforts to find a technical solution in an apparent attempt to set up a favorable and public legal confrontation with Apple.17

Similarly, in December 2019, the Justice Department asked Apple to unlock two phones used by a gunman who attacked a Navy facility in Florida.18 Although the company did not comply, the government announced in May 2020 that it had broken into the phones. Security experts and civil rights groups expressed concern over the department’s request, arguing that forcing companies to create back doors for law enforcement undermines security, privacy, and public trust.19

The broader legal questions around encryption remain unresolved.20 Some efforts have been made to codify rules barring the government from requiring back doors for surveillance. In June 2018, a bipartisan group of lawmakers once again attempted to pass a bill first introduced in 2016 that would prohibit state and local governments from mandating backdoor access to devices.21 As of May 2020, it had not been voted on in either the House or the Senate.

The Communications Assistance for Law Enforcement Act (CALEA) currently requires telephone companies, broadband providers, and interconnected Voice over Internet Protocol (VoIP) providers to design their systems so that communications can be easily intercepted when government agencies have the legal authority to do so, although it does not cover online communications tools such as Gmail, Skype, and Facebook.22 Calls to update CALEA to cover online applications have not been successful. In 2013, 20 technical experts published a paper explaining why such an expansion (known as CALEA II) would create significant internet security risks.23

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2 / 6

The legal framework for government surveillance has been open to abuse. Government authorities’ monitoring of social media continued during the coverage period with minimal oversight and transparency, and the targets included US citizens and residents engaged in constitutionally protected activity like protests and journalism.

Using a range of monitoring tools, federal and local authorities surveilled protesters during nationwide demonstrations for racial justice in 2020 (see B8).1 Elements of the DHS and other agencies have reportedly accessed and analyzed demonstrators’ private communications, through processes that appeared to lack judicial oversight and other democratic safeguards.2 Drones conducting surveillance in some cities may have been equipped with “dirtboxes,” which collect mobile-phone location information and other data.3 Moreover, some agencies were granted expanded surveillance powers.4 In June 2020, the Drug Enforcement Administration (DEA) was revealed to have been granted new capacities to “conduct covert surveillance” on protesters.5

Information that might clarify the extent of the protest-related surveillance and its impact on free association and assembly has largely remained secret, in part due to a lack of legal requirements for oversight and transparency. News reports have uncovered the use of some tools. In July 2020, the Intercept reported that certain police agencies were given intelligence packages filled with content and data that the private intelligence firm Dataminr pulled from Twitter.6 For example, posts discussing the officers involved in George Floyd’s death were sent to Dataminr’s clients. More broadly, media reports have confirmed that federal agencies or local police departments across the country, including in Pittsburgh, Minneapolis, Los Angeles, and Washington, DC, were monitoring and collecting information from posts, comments, live streams, images, and videos shared on Facebook, Instagram, Twitter, and other platforms.7 Such monitoring led to several arrests (see C3).

Outside of the Black Lives Matter protests, government agencies, including the FBI,8 DHS,9 Navy,10 and local police departments, are increasingly monitoring or proposing to monitor social media via numerous techniques. The information collected is subsequently stored in massive databases and can be shared with local, state, and federal authorities as well as multilateral government organizations and foreign states. The Brennan Center for Justice has detailed the extent to which DHS components—specifically Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), US Citizenship and Immigration Services (USCIS), and the Transportation Security Administration (TSA)—monitor social media.11 In 2016, the Brennan Center also identified 151 cities, counties, and local and state agencies that had purchased products for monitoring social media activities.12 These programs often employ automated systems, including advanced technology obtained from private contractors such as Palantir and Giant Oak.13 Investigators have also created fake Facebook accounts to gain access to targets’ personal networks and to follow their online activity. This technique has sometimes been used against civic activists.14

Peaceful protest movements and civic groups were specifically targeted for social media surveillance in the years prior to the 2020 protests. In 2018 and 2019, there were multiple reports of the FBI, ICE, and CBP gathering information on immigration-related activist groups, environmentalist protesters, anti-Trump demonstrators, and journalists and lawyers active in the southern border region.15 In May 2020, Vice News reported that police at the University of California at Santa Cruz employed military surveillance technology borrowed from the California National Guard to monitor a graduate student worker strike, including the social media activities of the strikers.16

In May 2019, the State Department enacted a new policy that vastly expanded its collection of social media information,17 requiring people applying for a US visa, of whom there are about 15 million each year, to provide social media details, email addresses, and phone numbers going back five years.18 In December 2019, the Brennan Center, the Knight First Amendment Institute at Columbia University, and the law firm Simpson Thacher & Bartlett LLP filed a lawsuit on behalf of two nonprofit documentary filmmaker organizations to challenge the requirements.19

Warrantless searches of electronic devices at the border have escalated in recent years. In fiscal year 2019, CBP reported 40,913 device searches, up from 33,295 in 2018;20 only 5,085 were reported in 2012.21 Federal authorities claim to have expansive search and seizure powers, which are traditionally limited by the constitution’s Fourth Amendment, within “border zones”—defined as up to 100 miles from any border, an area encompassing about 200 million residents. The 2018 Directive No. 3340-049a provides CBP with broad powers to conduct device searches and requires travelers to provide their device passwords to CBP agents. A “basic search” under this authority can be conducted “with or without suspicion” on any person’s device, at any time, for any reason, or for no reason at all, without a warrant.22 During the search, CBP is supposed to examine only what is physically “resident” on the device—such as pictures or texts—and not content accessible through the internet.23 However, even disconnected devices can contain sensitive information associated with, for example, social media activity.24 The directive also gives CBP the power to conduct an “advanced search,” with the use of external equipment to “review, copy, and/or analyze” the device’s contents. Advanced searches require “reasonable suspicion” of illegal activity or a “national security concern,” and agents must obtain supervisory approval. CBP has purchased technology from the Israeli company Cellebrite that allows agents to extract information stored on a device or online within seconds.25 This information can then be stored in interagency databases that aggregate data from other monitoring programs.26

Federal appeals courts are currently split on the legality of warrantless electronic device searches in border zones.27 In 2017 and again in May 2019, a bipartisan group of senators introduced legislation requiring agents to obtain a warrant before searching the electronic devices of US citizens or permanent residents, and forbidding them from detaining people for more than four hours while trying to persuade them to unlock their phones.28 Civil rights groups have also challenged the searches in court,29 filing a lawsuit on behalf of 10 US citizens and one legal permanent resident. In November 2019, a federal court in Boston ruled in favor of the plaintiffs, holding that border agents must have reasonable suspicion of a crime before conducting a search.30 The government is appealing the decision.

In January 2020, amid rising tensions with Iran after the US assassination of a senior Iranian general in Iraq, CBP reportedly confiscated the phones of several detained Iranians and Iranian-Americans at the US-Canada border and ordered them to provide social media passwords.31 Separately, the Financial Times reported in September 2020 that several Chinese students were pressured to hand their electronic devices to CBP agents when leaving the United States.32 In 2018, the Committee to Protect Journalists and Reporters Without Borders found 20 cases in which border agents conducted warrantless searches of journalists’ electronic devices.33 One reporter wrote in the Intercept in June 2019 that a CBP official at the US-Mexico border spent three hours searching through his phone, reviewing and asking questions about photos, emails, personal and professional files, videos, calls, texts, and messages on encrypted communication apps, including conversations with colleagues and journalistic sources.34

The legal framework for foreign intelligence surveillance has in practice permitted the collection of data on US citizens and residents. Such surveillance is governed in part by the USA PATRIOT Act, which was passed following the terrorist attacks of September 11, 2001, and expanded official surveillance and investigative powers.35 In 2015, President Barack Obama signed the USA FREEDOM Act, which extended expiring provisions of the PATRIOT Act, including broad authority for intelligence officials to obtain warrants for roving wiretaps of unnamed “John Doe” targets and surveillance of lone individuals with no evident connection to terrorist groups or foreign powers.36 At the same time, the new legislation was meant to end the government’s bulk collection of domestic call detail records (CDRs)—the metadata associated with telephone interactions—under Section 215 of the 2001 law. The bulk collection program was detailed in documents leaked by former National Security Agency (NSA) contractor Edward Snowden in 2013,37 and it was ruled illegal by the US Second Circuit Court of Appeals in 2015.38

The USA FREEDOM Act replaced the domestic bulk collection program with a system that allows the NSA to access US call records held by phone companies after obtaining an order from the Foreign Intelligence Surveillance Court, also called the FISA Court, in reference to the 1978 Foreign Intelligence Surveillance Act.39 Requests for such access require use of a “specific selection term” (SST) representing an “individual, account, or personal device,”40 which is intended to prevent broad requests for records based on a zip code or other imprecise indicators. An SST provision also applies when intelligence agents use Section 215 to obtain information other than CDRs, or when they use FISA pen registers and trap-and-trace devices (instruments that capture a phone’s outgoing or incoming call records) or national security letters (secret administrative subpoenas used by the FBI to demand certain types of communications and financial records).41 The definition of SST, however, varies depending on the authority used, and civil liberties advocates have noted that the SST provision for non-CDR Section 215 requests, in particular, is very broad.42

Another component of the USA FREEDOM Act established a panel of amici curiae with expertise in “privacy and civil liberties, intelligence collection, communications technology, or any other area that may lend legal or technical expertise” to the FISA Court, so that the judges are not forced to rely on the arguments of the government alone in weighing requests. The court must appoint an amicus in any case that, “in the opinion of the court, presents a novel or significant interpretation of the law.” However, the court can waive this requirement by issuing “a finding that such appointment is not appropriate.”43 Five people are currently designated to serve as amici curiae.44

The reforms to Section 215 under the USA FREEDOM Act were an improvement, but numerous problems have since come to light.45 Although the law was supposed to end bulk collection of CDRs, official statistics showed that a massive amount was still being acquired: the number of “received” CDRs increased from over 151 million in 2016 to more than 434 million in 2018.46 In April 2018, the NSA revealed that technological problems had caused it to collect data about phone calls it was not authorized to collect.47 As a result, the NSA purged all of the records it had obtained over the preceding three years. The NSA paused the program a few months later, and in April 2019, the agency recommended that the White House not seek reauthorization of the program because its operational complexities and legal liabilities outweighed the value of the intelligence gained.48 Nonetheless, the administration asked Congress to permanently reauthorize the CDR program. Government watchdogs maintain that the CDR program authority should not be renewed because it is “highly invasive,” it lacks evidence of efficacy in protecting the country from security threats,49 and it is technically dysfunctional.50

Section 215, the roving wiretaps provision, and the lone-wolf amendment all expired on March 15, 2020, after the Senate declined to take up a House-passed reauthorization bill.51 However, a “savings clause” allowed officials to continue using the authorities for investigations that had begun before the expiration, or for new examinations of incidents that occurred before that date.52

The Senate passed the draft USA FREEDOM Reauthorization Act on May 14, 2020.53 Although an amendment to the bill prohibiting the warrantless gathering of users’ internet browsing histories was defeated,54 senators approved an amendment that would strengthen the role of amici curiae by giving them greater access to information, granting them new authority to bring matters to the FISA Court, and adding to the categories of cases in which there should be a presumption that they will participate.55 The House, however, canceled a floor vote on the Senate-passed bill when Republicans pulled their support.56 No further progress on the bill had been reported as of August 2020.

Various other components of the legal framework still allow surveillance by intelligence agencies that lacks oversight, specificity, and transparency:

  • Section 702 of FISA: Section 702, adopted in 2008 as part of the FISA Amendments Act, authorizes the NSA, acting inside the United States, to collect the communications of any foreigner overseas as long as a significant purpose of the collection is to obtain “foreign intelligence,” a term broadly defined to include any information that “relates to … the conduct of the foreign affairs of the United States.”57 Section 702 surveillance involves both “downstream” (also known as PRISM) collection, in which stored communications data—including content—is obtained from US technology companies, and “upstream” collection (see below), in which the NSA collects users’ communications as they are in transit over the internet backbone.58 Although Section 702 only authorizes the collection of information pertaining to foreign citizens outside the United States, Americans’ communications are inevitably swept up in this process in large amounts, and these too are stored in a searchable database.59

    In 2016, during the FISA Court’s annual review and reauthorization of surveillance conducted under Section 702, the government notified a FISA Court judge of widespread violations of protocols intended to limit NSA analysts’ access to Americans’ communications.60 Upstream collection is more likely than other programs to incidentally collect communications sent between US citizens.61 The government’s disclosure showed that analysts had failed to take steps to ensure that they were not improperly searching the upstream database when conducting certain types of queries.

    In response, the court delayed reauthorizing the program, and in 2017 the NSA director recommended that the agency halt its collection of communications if they merely mentioned information relating to a surveillance target (referred to as “about” collection), and instead only collect communications to and from the target.62 Privacy advocates welcomed the decision, but emphasized that the incident underscored the need for legislative reform of Section 702.

    Section 702 was reauthorized for six years in January 2018.63 Despite robust advocacy efforts from civil liberties and privacy watchdogs, there were few legal changes. The renewal legislation did not prohibit “about” collection, meaning the NSA could legally attempt to resume the practice as long as it first obtained the FISA Court’s approval and gave Congress advance notice. Congress also rejected calls to require the government to obtain a warrant before searching through data obtained under Section 702 to find the communications of US persons.64

    The final bill did contain a provision requiring a warrant when FBI agents seek to review the content of communications belonging to an American who is already the subject of a predicated criminal investigation not relating to national security. Observers noted that these criteria were too narrow to require a warrant in most cases.65 The 2018 reauthorization also included measures to increase transparency, such as a requirement that the attorney general brief members of Congress on how the government uses information collected under Section 702 in official proceedings such as criminal prosecutions.66

    In October 2019, the FISA Court released three opinions originally issued in 2018.67 In the decisions, the court found that the FBI had conducted a “large number” of searches of Section 702 data that did not comply with internal rules, the statute, or the Fourth Amendment, because they were not “reasonably likely to return foreign-intelligence information or evidence of a crime.”68 Tens of thousands of Americans had been subject to these improper searches. The court also found that the FBI violated the law by not reporting the number of times it searched for Americans’ data.69

    In May 2020, the intelligence community released its annual Statistical Transparency Report, which details the frequency with which the government uses certain national security powers. It documented an increase in Section 702 surveillance targets from 89,128 in 2013 to 204,968 in 2019.70 The report also contained evidence of the government’s noncompliance with another provision of the reauthorization law. In six instances in 2018 in which the FBI reviewed the contents of Americans’ communications after conducting a search in a criminal, non–national security case, the bureau failed to obtain a warrant as required by law.71 In combination with the FISA Court opinions released in October 2019, this revelation added to ongoing concerns about insufficient oversight of such surveillance programs.72

  • FISA Title I: Under Title I of FISA,73 the Justice Department may obtain a court order to conduct surveillance of Americans or foreigners inside the United States if it can show probable cause to suspect that the target is a foreign power or an agent of a foreign power. In December 2019, the department’s inspector general released a report finding that the applications to conduct surveillance of former Trump campaign aide Carter Page had been riddled with errors and omissions.74 The inspector general followed up by reviewing a sample of 29 applications filed in other cases, and in March 2020 released a memorandum finding pervasive errors in these cases, along with a failure to abide by internal procedures meant to ensure the accuracy of applications.75
  • Executive Order 12333: Originally issued in 1981, Executive Order (EO) 12333 is the primary authority under which US intelligence agencies gather foreign intelligence; essentially, it governs all collection that is not governed by FISA, and it includes most collection that takes place overseas. The extent of current NSA practices authorized under EO 12333 is unclear, but documents leaked in 2013 suggest that it was the legal basis for the so-called MYSTIC program, in which all of the incoming and outgoing phone calls of one or more target countries were captured on a rolling basis. The Intercept identified the Bahamas, Mexico, Kenya, and the Philippines as targets in 2014.76 Although EO 12333 cannot be used to target a “particular, known” US person (because any such acquisition must comply with FISA), the very fact that bulk collection is permissible under the order ensures that Americans’ communications will be incidentally collected, and likely in very significant numbers. Moreover, questions linger as to whether the government relies on EO 12333 to conduct any surveillance inside the United States—a worrisome prospect given that surveillance under the order is not subject to any judicial oversight.77 These questions took on new salience in March 2020, when Senator Richard Burr, chairman of the Senate Select Committee on Intelligence, warned that if Congress failed to reauthorize Section 215 of the USA PATRIOT Act, the president would obtain the same information using EO 12333.

A law passed in 2014 required the NSA to develop “procedures for the retention of incidentally acquired communications” of Americans collected pursuant to EO 12333, and stipulated that such communications may not be retained for more than five years, subject to certain broad exemptions.78 In 2015, the Obama administration updated a 2014 policy directive that put in place new restrictions relevant to EO 12333 on the use of information collected in bulk for foreign intelligence purposes.79 Civil society groups, however, continue to campaign for more information about how the executive order is being used and for comprehensive reform.80

Access to metadata for law enforcement, as opposed to intelligence, generally requires a subpoena issued by a prosecutor or investigator without judicial approval.81 Judicial warrants are only required in California under the California Electronic Communications Privacy Act (CalECPA), which has been in effect since 2016.82 In criminal probes, law enforcement authorities can monitor the content of internet communications in real time only if they have obtained an order issued by a judge, under a standard that is somewhat higher than the one established under the Constitution for searches of physical places. The order must reflect a finding that there is probable cause to believe that a crime has been, is being, or is about to be committed.

More uncertain is the status of stored communications. One federal appeals court has ruled that constitutional protections apply to stored communications, so that a judicial warrant is required for government access.83 However, the 1986 Electronic Communications Privacy Act (ECPA) states that the government can obtain access to email or other documents stored in the cloud with a subpoena, subject to certain conditions.84 In 2016, the House of Representatives passed the Email Privacy Act, which would require the government to obtain a probable cause warrant before accessing email or other private communications stored with cloud service providers.85 The bill was reintroduced in 2017 and again passed the House, but it failed to pass the Senate during the 2017–18 Congress.86

Other legal implications of law enforcement access to devices have been debated in the courts. In 2016, a Maryland state appellate court ruled that law enforcement bodies must obtain a warrant before using “covert cell phone tracking devices” known by the product name Stingray.87 Several other court decisions subsequently affirmed that police must obtain a warrant before using this technology.88 Stingray devices mimic mobile network towers, causing nearby phones to send identifying information and thus allowing police to track targeted phones or determine the phone numbers of people in the area. In its decision, the Maryland court rejected the argument that because turning on one’s phone allows the service provider’s towers to send and receive signals from the device, individuals are effectively “volunteering” their private information for use by other third parties.89 As of November 2018, the ACLU had identified 75 agencies across the country that use Stingray devices.90 In May 2020, the organization also revealed that between 2017 and 2019, ICE used similar devices at least 466 times.91

In 2017, the Detroit News obtained court documents showing that federal agents used Stingray devices to find and arrest an undocumented immigrant.92 Privacy advocates argue that because the devices collect information from mobile phones in the area surrounding the target, and thus constitute mass surveillance, their use by law enforcement agencies should be limited to serious cases involving violent crimes, not immigration violations.93 In December 2019, the ACLU sued CBP and ICE, alleging that they had “failed to produce records” and data on their use of Stingray devices.94

C6 0-6 pts
Are service providers and other technology companies required to aid the government in monitoring the communications of their users? 4 / 6

There are few legal constraints on the collection, storage, and transfer of data by private or public actors in the United States. ISPs and content hosts collect vast amounts of information about users’ online activities, communications, and preferences. This information can be subject to government requests for access, typically through a subpoena, court order, or search warrant.

In general, the country lacks a robust federal data-protection law, though a number of bills have been proposed.1 In 2017, President Trump signed SJ Resolution 34,2 which rolled back FCC privacy regulations introduced in 2016 that would have given consumers more control over how their personal information is collected and used by broadband ISPs (see A5). At the end of 2019, three different privacy bills had been submitted to Congress.3 Two of them, the Consumer Online Privacy Rights Act (COPRA) and the US Consumer Data Privacy Act (CDPA), contain provisions requiring affirmative (opt-in) consent prior to the processing of sensitive consumer data, mandate that entities covered by the laws carry out privacy assessments, and afford users “rights to access, correction, deletion and data portability.”4

To date, most legislative activity on data privacy has occurred below the federal level, with many states considering or passing laws.5 The California legislature enacted AB 375,6 also known as the California Consumer Privacy Act (CCPA) of 2018, which allows Californians to demand information from businesses in the state about how their personal data are collected, used, and shared.7 By 2019, at least 25 states and Puerto Rico were considering such proposals.8 A Vermont law implemented in February 2019 requires companies that buy or sell the personal data of state residents to register with the state government and disclose whether affected users can opt out of data collection.9 In Maine, a law passed in June 2019 requires ISPs to obtain consent from customers before using, selling, or distributing their data.10 Nevada also passed CCPA-inspired amendments to its existing privacy law that went into effect in October 2019.11

The USA FREEDOM Act in 2015 changed the way private companies publicly report on certain types of government requests for user information. Prior to the law, the Justice Department restricted the disclosure of information about national security letters, including within the transparency reports voluntarily published by some internet companies and service providers.12 In 2014, the department had reached a settlement with Facebook, Google, LinkedIn, Microsoft, and Yahoo that permitted the companies to disclose the approximate number of government requests they receive, using aggregated bands of 250 or 1,000 rather than precise figures.13 Twitter, not a party to the settlement, sued on the grounds that the rules amounted to a prior restraint violating the company’s First Amendment rights.14 A judge partially dismissed Twitter’s case in 2016.15 Meanwhile, the USA FREEDOM Act granted companies the option of more granular reporting, though reports containing more detail are still subject to time delays, and their frequency is limited.16

Despite the USA FREEDOM Act’s aim of improving transparency, government requests continue to be made in secret. In September 2019, documents released in response to a FOIA request by the Electronic Frontier Foundation revealed that the FBI had been accessing personal data through national security letters from a much broader group of entities than previously understood.17 Western Union, Bank of America, Equifax, TransUnion, the University of Alabama at Birmingham, Kansas State University, major ISPs, and tech and social media companies had all received such letters. In February 2020, it also came to light that the New York Police Department had relied on a little-used provision in the USA PATRIOT Act to subpoena Twitter data from the editor of the New York Post.18

The government may request that companies store targeted data for up to 180 days under the 1986 Stored Communications Act (SCA). Practices for general collection and storage of communications content and records vary by company.19

In June 2018, the Supreme Court ruled narrowly in Carpenter v. United States that the government is required to obtain a warrant in order to access seven days or more of subscriber location records from mobile providers.20 Privacy advocates lauded the decision, noting that location information could have a greater impact on privacy than other types of user data collected by private companies.21 The ruling also diminished, although in a limited way, the third-party doctrine—the idea that Fourth Amendment privacy protections do not extend to most types of information that are handed over voluntarily to third parties, such as telecommunications companies.22

The scope of law enforcement access to user data held by companies was expanded under the Clarifying Lawful Overseas Use of Data (CLOUD) Act,23 signed into law in March 2018 as part of a government spending bill.24 Introduced as an update to the SCA with regard to the policies governing cross-border data transfers,25 the CLOUD Act determined that law enforcement requests sent to US companies for user data under the 1986 law would apply to records in the company’s possession regardless of storage location, including overseas. Previous requests had been limited to user data stored within the jurisdiction of the United States. The CLOUD Act also allows certain foreign governments to enter into an executive agreement with the United States and then petition US companies to hand over user data.26 Proponents of the law, including several large US tech firms,27 argued that the previous legal framework was outdated and cumbersome. It required law enforcement personnel to go through a potentially lengthy “mutual legal assistance treaty” (MLAT) process between countries to obtain information pertaining to local crimes because the data were stored overseas.28 Civil liberties advocates complained that the law further undermined user privacy.29 In 2019, the United States and the United Kingdom signed the first bilateral data access agreement under the CLOUD Act.30 A coalition of civil society groups expressed concern about the deal.31 As of May 2020, the United States was in talks with Australia regarding a similar pact.32

Private companies may comply with both legal demands and voluntary requests for user data from the government. In response to a request from Puerto Rican authorities, Facebook turned over the data and communications of three news outlets’ pages that live-streamed 2017 protests against austerity measures (see C3).33 The information sought included private communications, location data, and credit card information. In March 2019, the Justice Department confirmed that a DEA program had collected billions of phone records from AT&T without a court order.34 Information and communication platforms may also monitor the communications of their users for the purpose of identifying unlawful content to share with law enforcement (see B2).

User information is otherwise protected under Section 5 of the Federal Trade Commission Act (FTCA), which has been interpreted to prohibit internet entities from deceiving users about what personal information is being collected and how it is being used. The law also prevents the use of personal information in ways that harm users without offering countervailing benefits. In addition, the FTCA has been interpreted to require entities that collect users’ personal information to adopt reasonable security measures to safeguard it from unauthorized access. Laws in 47 states and the District of Columbia also require entities that collect personal information to notify consumers—and, usually, consumer protection agencies—when they suffer a security breach that exposes such information.

Section 222 of the Communications Act, as amended by the 1996 Telecommunications Act, prohibits telecommunications firms from sharing or using information about customers, beyond the bounds set by existing rules, without acquiring consent. This provision had historically only applied to phone companies’ records about phone customers, but under the FCC’s 2015 Open Internet Order, it also applied to ISPs’ records about broadband customers.35 Following the FCC’s 2017 decision to repeal the order, some have suggested that providers may continue operating under Section 222 but without FCC guidance or enforcement.36

In February 2019, Vice News revealed that mobile service providers had sold their customers’ real-time location data, in a seeming violation of FCC regulations.37 Following an investigation, FCC chairman Ajit Pai announced in January 2020 that such companies were in violation of federal law and would be penalized.38 Government bodies have also reportedly purchased phone location data to aid in investigations and law enforcement. In 2020, the Internal Revenue Service (IRS),39 ICE,40 and the Secret Service all reportedly engaged in the practice.41

In May 2020, during the COVID-19 pandemic, Apple and Google launched an Exposure Notification System (ENS), for phones using their respective operating systems, to assist with contact tracing.42 ENS uses Bluetooth signals to determine proximity to other smartphones and track potential exposure to the coronavirus. Despite efforts to design the system with a decentralized architecture for collecting data, there were concerns over privacy and doubts about whether the tool could be entirely secure from vulnerabilities or exploitation.43 Separately, the mobile advertising industry handed over aggregated and anonymized location data to federal, state, and local governments. Authorities aimed to centralize the location data on people in over 500 cities in order to analyze how the disease might be spreading.44

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in retribution for their online activities? 3 / 5

Internet users generally are not subject to extralegal intimidation or violence by state actors. However, online journalists are at times exposed to physical violence or intimidation by police, particularly while covering protests. Women and members of marginalized racial, ethnic, and religious groups are often singled out for threats and harassment by other users online.

A number of online journalists covering or live-streaming protests for racial justice that ramped up in May 2020 were physically assaulted by police, despite making it clear that they were members of the press.1 A photojournalist for Syracuse.com in New York State was shoved to the ground by a police officer on May 30.2 Also that day, Vice News reporter Michael Adams was pushed to the ground, held down, and hit with pepper spray while reporting via social media from Minneapolis,3 and Al Tompkins, a journalist for Poynter, was tear-gassed alongside other journalists while covering demonstrations in the city.4

Ordinary users who showed support for or shared information about protests also faced harassment and intimidation. The young woman who recorded and posted a widely disseminated video of the police killing of George Floyd later reported harassment from users on social media.5 In another example, a New York police union used Twitter to dox the daughter of New York City mayor Bill de Blasio, revealing sensitive personal information about her after she was arrested while protesting on May 30.6 Twitter later removed the post and suspended the union’s account.

The online harassment, threats, and at times physical attacks associated with the 2020 protests were not isolated incidents. Researcher Dragana Kaurin interviewed people who had recorded and shared high-profile videos of violent arrests and police killings of Black Americans—including Freddie Gray, Eric Garner, Walter Scott, Philando Castile, and Alton Sterling—over several years. Kaurin documented numerous reports of police retaliation, harassment, physical violence, doxing, and other forms of intimidation aimed at deterring community members from sharing evidence of police brutality.7

President Trump has directly contributed to online harassment and intimidation,8 and those who speak out against the administration are often targeted for harassment by his supporters.9 An analysis of the president’s Twitter account by the US Press Freedom Tracker found nearly 1,900 posts from 2015 to the end of 2019 that used inflammatory language toward news outlets and individual journalists. In May 2020, after Twitter fact-checked his posts on mail-in voting (see B2), Trump singled out a company employee in a post. The employee then received a barrage of harassing messages, including posts from members of the president’s reelection campaign.10 Separately the same month, in response to clashes between protesters and police during racial justice demonstrations, Trump issued a series of threatening posts, with one including a warning that “when the looting starts, the shooting starts.”11 Twitter flagged the post for “glorifying violence” (see B2).

Some journalists working for online outlets have been subjected to harassment at the border, including through warrantless searches of their electronic devices (see C5). A CBP officer in October 2019 repeatedly asked Ben Watson, news editor for the online outlet Defense One, whether he wrote propaganda.12 Watson reported that he was not handed back his passport until he agreed to answer in the affirmative. In February 2019, David Mack, a reporter for the online news outlet BuzzFeed, said that a CBP officer aggressively questioned him about his organization’s coverage of special counsel Robert Mueller and President Trump as he passed through a New York City airport.13 CBP’s assistant commissioner for public affairs later apologized to Mack.

In general, online harassment and threats, including doxing, disproportionately affect women and other members of marginalized groups.14 In a 2019 Committee to Protect Journalists survey of 115 female and gender-nonconforming journalists in the United States and Canada, 90 percent of the US respondents cited online harassment as the “biggest threat” to safety associated with their jobs.15 A December 2018 Amnesty International study of abuse targeting female journalists and politicians on Twitter found that Black women were 84 percent more likely to be mentioned in abusive posts than White women.16 The Pew Research Center found in 2017 that one in four Black Americans has faced online harassment because of their race or ethnicity.17

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 1 / 3

Cyberattacks are an ongoing threat to the security of networks and databases in the United States. Civil society groups, journalists, and politicians have also been subjected to targeted technical attacks.

In June 2020, research from the Toronto-based group Citizen Lab revealed that Dark Basin, a hack-for-hire group, had used phishing and other attacks against US NGOs working on issues related to net neutrality and a climate-change campaign called #ExxonKnew.1 Several journalists from major news outlets similarly faced technical attacks emanating from the group.

From the police killing of George Floyd in late May 2020 through the beginning of June, cyberattacks against advocacy groups increased 1,120-fold, including DDoS attacks against projects to raise bail funds for jailed protesters.2 Separately, a DDoS attack at the end of May on the Minneapolis Police Department’s website was attributed to the hacker collective Anonymous.3

Ahead of the general elections in November 2020, Microsoft announced in September that a hacking unit associated with Russian military intelligence had targeted at least 200 organizations, including national and state political parties and political consultants. Iranian and Chinese actors have also targeted people associated with Trump’s and Biden’s presidential campaigns.4 In October 2019, Microsoft discovered “significant cyber activity” coming from Phosphorus, a group believed to have links with the Iranian government.5 Within 30 days, Microsoft found more than 2,700 attempts to identify email accounts belonging to current and former US government officials, journalists, prominent Iranian expatriates, and President Trump’s reelection campaign, along with attacks on 241 of those accounts. Previously in January 2019, the Democratic National Committee reported that some of its email addresses were targeted in a spear-phishing campaign strikingly similar to those conducted in 2016 by the Russian intelligence-linked hacking group known as Cozy Bear.6

In May 2020, the US government revealed that foreign actors, including foreign governments, had carried out cyberattacks against the Department of Health and Human Services, hospitals, research laboratories, health care providers, and other institutions in the medical industry.7 Groups linked to the Russian and Chinese governments have been identified as likely culprits behind attacks meant to steal COVID-19 medical information and research.8

Cyberattacks against state and local governments are increasingly common. In June 2019, Lake City, Florida, was hit by a ransomware attack that cost it nearly $460,000.9 In August 2019, hackers mounted 22 simultaneous cyberattacks on state and local governments in Texas. That state was hit twice more in May 2020—first with an attack on its judicial agencies and then with an attack on the Department of Transportation.10 Municipal governments in New Bedford, Massachusetts;11 Pascagoula, Mississippi;12 and New Orleans, Louisiana;13 as well as the Louisiana state government,14 all experienced cyberattacks during the coverage period.

The United States has launched a series of legal and policy initiatives to address the growing threat of cyberattacks. In May 2020, the Senate Commerce Committee approved the Cybersecurity Competitions to Yield Better Efforts to Research the Latest Exceptionally Advanced Problems (CYBER LEAP) Act of 2020. It would establish incentives for the development of new practices and technology related to the economics of cyberattacks, cyber training, and federal agency resilience to cyberattacks.15 In September 2018, the White House released a National Cyber Strategy, the country’s first in 15 years, with goals that included securing critical infrastructure and partnering with the private sector.16 Critics of the strategy argued that it was “reckless,” as it called for an increase in preemptive, effectively offensive cybersecurity operations rather than focusing on defensive measures, which could “escalate conflicts” and prove counterproductive.17

In 2015, President Obama signed the Cybersecurity Information Sharing Act as part of an omnibus bill. The law requires DHS to share information about threats with private companies, and allows companies to voluntarily disclose information to federal agencies without fear of being sued for violating user privacy (see C6).18 Critics said that the measure’s privacy protections were not strong enough, and that the voluntary disclosure of data to any federal agency—which could include the Department of Defense—threatened to undermine civilian control of cybersecurity programs and blur the line between cybersecurity and law enforcement applications for the information.19
