United Kingdom

A Obstacles to Access: 23 / 25
B Limits on Content: 30 / 35
C Violations of User Rights: 25 / 40
Last Year's Score & Status: 78 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

The internet remained free for users in the United Kingdom (UK), with few major constraints on access or content. Following the lead of its European counterparts, the government introduced a bill to regulate online platforms and content hosts in an effort to compel them to remove content that is illegal or deemed harmful. The government also banned the purchase of fifth-generation (5G) mobile technology from Huawei, the Chinese telecommunications company, over security concerns.

The UK—comprising England, Scotland, Northern Ireland, and Wales—is a long-standing democracy that regularly holds free elections and is home to a vibrant media sector. While the government enforces robust protections for political rights and civil liberties, recent years have featured concerns about increased government surveillance of residents as well as rising Islamophobia and anti-immigrant sentiment. UK referendum voters in 2016 narrowly supported leaving the European Union (EU), a process known colloquially as Brexit, and an agreement to finalize the UK’s departure from the bloc was reached in December 2020.

Key Developments, June 1, 2020 - May 31, 2021

  • In July 2020, the government banned the purchase of Huawei technology, citing security concerns, and ordered existing Huawei equipment removed from UK infrastructure by the end of 2027 (see A1).
  • The government introduced the Online Safety Bill, which empowers the telecommunications regulator to impose penalties on online services that neglect to remove certain online content, namely illegal content and content that the regulator deems “harmful” (see B3, B5, and B6).
  • In March 2021, the Scottish parliament passed a hate crimes law that rights groups criticized for overly broad provisions that may limit online freedom of expression (see C2 and C3).
  • A March 2021 report revealed that the government had ordered two internet service providers (ISPs) in 2019 to install surveillance technology that would record users’ web history (see C6).

A Obstacles to Access

A1 (0-6 pts)
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? 6 / 6

Access to the internet is considered a key element of societal and democratic participation in the UK. Broadband access is almost ubiquitous, and nearly 100 percent of households are within range of ADSL (asymmetric digital subscriber line) connections. All national mobile service providers offer fourth-generation (4G) network technology.

The Digital Economy Act 2017 obliges service providers to offer minimum connection speeds of 10 megabits per second (Mbps).1 Access to “superfast” broadband connections, with advertised speeds of at least 30 Mbps, continues to expand.2 While the geographical coverage of superfast broadband networks has reached more than 95 percent of the UK, the communications regulator, the Office of Communications (Ofcom), has noted that the country lags behind in fully fiber-optic broadband.3 In March 2018, the government launched a new voucher scheme that provides up to £3,000 ($4,100) toward installation costs for full-fiber broadband connections serving small and medium-sized enterprises.4 In October 2018, £200 million ($271.5 million) was allocated to enable an “outside-in” approach to providing full-fiber broadband in rural areas.5 As of December 2019, 69 percent of all residential lines were superfast broadband, representing a 10 percent increase from 2018.6 Additionally, 3.5 million households had access to full-fiber broadband with speeds up to 1 gigabit per second (Gbps).7

Mobile telephone penetration is extensive. In 2019, 98 percent of households had mobile phone connections.8 In 2017, some 73 percent of surveyed adults used mobile phones to access the internet, up from 36 percent in 2011.9 In 2018, 78 percent of adults viewed their mobile device as their primary means of internet access,10 with 92.1 million mobile subscriptions across the country.11 For those over age 54, laptop computers remained the most popular devices.12

In July 2020, the UK government banned the purchase of 5G technology from Chinese telecommunications company Huawei from 2021 onward and ordered existing Huawei equipment removed by the end of 2027 due to security concerns.13 The major telecommunications providers, including EE, Three, and Vodafone, all use some elements of Huawei equipment in their 5G infrastructure; the cost of that removal was estimated at over £500 million ($667 million).14

The policy, which followed United States (US) sanctions against the company, reversed a January 2020 announcement that the UK would permit the use of Huawei equipment in certain components of the country’s developing 5G network.15 The UK has barred Huawei from providing “core” elements of the 5G network since April 2019.16

The UK government had previously awarded contracts to Huawei in 2010 to provide infrastructure for fixed-line and mobile internet service, including 5G mobile technology. In order to allay security concerns, national security and intelligence agencies such as Government Communications Headquarters (GCHQ) arranged to monitor for any problems through regular audits and other measures. In July 2018, the fourth annual report on this monitoring raised significant concerns about Huawei-provided infrastructure, especially with respect to third-party equipment suppliers such as ZTE, another Chinese telecommunications firm.17 In 2017, China had passed national intelligence legislation that gave government agencies significant authority to interfere with Chinese manufacturers of telecommunications hardware.18

A2 (0-3 pts)
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? 2 / 3

Internet access continues to expand, gradually reducing regional and demographic disparities.

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries. The average monthly price for a typical mobile data package was £12.57 ($17.07) in 2019,1 while the most affordable fixed-line broadband packages cost a little over £23.50 ($31.91) a month.2 Median gross weekly earnings for full-time workers were £569 ($773) in 2018.3 In September 2020, Ofcom reported that the average household in 2019 spent £77.50 ($105) per month, or 3 percent of total monthly expenses, on internet services.4

According to 2020 data from the UK’s Office for National Statistics, virtually all residents between the ages of 16 and 44 are internet users, with most accessing the internet through their mobile devices.5 Ninety-six percent of households have access to the internet.6 However, those in the lowest income groups are significantly less likely to have home internet subscriptions, and the gap between socioeconomic groups has remained unchanged for the past few years. Some 22 percent of people with disabilities have no internet access, though in 2020, around 11 million individuals with disabilities accessed the internet, the highest number recorded since the survey began in 2011.7 There was almost no general gender gap in internet use in 2020—93 percent of men used the internet compared to 91 percent of women—though about 43 percent of women over 75 have never used the internet. Women between the ages of 65 and 74 have experienced the largest rise in internet usage, from 47 percent in 2011 to 84 percent in 2020.

A3 (0-6 pts)
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? 6 / 6

The government does not exercise control over the internet infrastructure and does not routinely restrict connectivity. On April 17, 2019, however, British transport police ordered the ISP Virgin Media to shut off Wi-Fi service in some London Underground stations.1 The restriction came in response to protests and peaceful civil-disobedience actions by the environmentalist group Extinction Rebellion, which called on the government to reduce carbon emissions and combat climate change more aggressively.2 The group had publicized its plans to peacefully disrupt Underground service. Appendix 1 of the Wi-Fi Operational Agreement between Virgin Media and Transport for London details the process for implementing orders from police or security agencies to temporarily cut or restrict Wi-Fi service.3

The government does not place limits on the amount of bandwidth that ISPs can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services, such as peer-to-peer file sharing and television streaming. Mobile providers have cut back on previously unlimited access packages for smartphones, reportedly because of concerns about network congestion.

A4 (0-6 pts)
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? 5 / 6

There are few obstacles to the establishment of service providers in the UK, allowing for a competitive market that benefits users.

Major ISPs include Virgin Media with a 24 percent market share, BT (formerly British Telecom) with 23 percent, Sky with 17 percent, TalkTalk with 9 percent, and others constituting the remaining 27 percent.1 Ofcom continues to use regulations to promote the unbundling of services so that incumbent owners of infrastructure invest in their networks while also allowing competitors to make use of them.2

ISPs are not subject to licensing, but they must comply with general conditions set by Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.3

Among mobile service providers, EE, which has been owned by BT since 2016, leads the market with 22 percent of subscribers, followed by O2 with 19 percent, Vodafone with 15 percent, Three with 10 percent, and Tesco with 7 percent.4 Mobile virtual network operators including Tesco provide service using the infrastructure owned by one of the other companies.

A5 (0-4 pts)
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? 4 / 4

The various entities responsible for regulating internet service and content in the UK generally operate impartially and transparently.

Ofcom, the primary telecommunications regulator, is an independent statutory body. It has broadly defined responsibility for the needs of “citizens” and “consumers” regarding “communications matters” under the Communications Act 2003.1 It is overseen by Parliament and also regulates the broadcasting and postal sectors.2 Ofcom has some authority to regulate content with implications for the internet, such as regulating video content in keeping with the EU AudioVisual Media Services Directive.3 The government has announced that Ofcom will enforce the Online Harms Bill (see B3, B5, and B6).4

Nominet, a nonprofit company operating in the public interest, manages access to the .uk, .wales, and .cymru country domains. In 2013, Nominet implemented post-registration domain name screening to suspend or remove domain names that encourage serious sexual offenses.5

Other groups regulate services and content through voluntary ethical codes or coregulatory rules under independent oversight. In 2012, major ISPs published a Voluntary Code of Practice in Support of the Open Internet.6 The code commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade the services of a competitor. The code was amended in 2013 to clarify that signatories could deploy content filtering or provide such tools where appropriate for public Wi-Fi access.7 Ofcom also maintains voluntary codes of practice related to internet speed provision, dispute resolution, and the sale and promotion of internet services.8

Criminal online content is addressed by the Internet Watch Foundation (IWF), an independent self-regulatory body funded by the EU and industry associations (see B3).9 The Advertising Standards Authority and the Independent Press Standards Organization regulate newspaper websites. With the exception of child-abuse content, these bodies eschew prepublication censorship and operate post-publication notice and takedown procedures within the E-Commerce Directive liability framework (see B3).

B Limits on Content

B1 (0-6 pts)
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards? 5 / 6

Blocking generally does not affect political and journalistic content or other internationally protected forms of online expression. Service providers do block and filter some content that falls into one of three categories: copyright-infringing material, the promotion of terrorism, and depictions of child sexual abuse. Optional filtering can be applied to additional content, particularly material that is considered unsuitable for children.

In October 2019, the government dropped plans for automated age verification for online pornography, deeming it technically infeasible.1 The Digital Economy Act 2017 includes provisions that allow blocking of “extreme” pornographic material, setting standards that critics said were poorly defined and could be unevenly applied.2 The measures were originally set to come into force in April 2019, but then later postponed.3 The measures and goals of the age verification system were not incorporated into the Online Harms Bill (see B3, B5, and B6) in May 2021, contrary to what some advocates of the bill had expected.4

ISPs are required to block domains and URLs found to be hosting material that infringes copyright when ordered to do so by the High Court (see B3).5

Overseas-based URLs hosting content that has been reported by police for violating the Terrorism Act 2006, which prohibits the glorification or promotion of terrorism, are included in the optional child filters supplied by many ISPs.6 “Public estates” like schools and libraries also block such URLs.7 The content can still be accessed on private computers.8

ISPs block URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content in accordance with the Internet Services Providers’ Association’s voluntary code of practice (see A5). Mobile service providers also block URLs identified by the IWF as containing such content.

All mobile service providers and some ISPs that provide home service filter legal content that is considered unsuitable for children. Mobile service providers enable these filters by default, requiring customers to prove that they are over age 18 to access the unfiltered internet. In 2013, the four largest ISPs agreed with the government to present all customers with an “unavoidable choice” about whether to enable parentally controlled filters.9

Mobile UK, an industry group consisting of Vodafone, Three, EE, and O2,10 introduced filtering of content considered unsuitable for children in a code of practice published in 2004 and updated in 2013.11 Content considered suitable for adults only includes “the promotion, glamorization or encouragement of the misuse of illegal drugs”; “sex education and advice which is aimed at adults”; and “discriminatory language or behavior which is frequent and/or aggressive, and/or accompanied by violence and not condemned,” among other categories (see B3).

The four largest ISPs—BT, Sky, Virgin Media, and TalkTalk—offer all customers the choice to activate similar optional filters to protect children. The relevant categories vary by provider, but can include social networking, games, and sex education.12 Website owners can check whether their sites are filtered under one or more category, or report overblocking, by emailing the industry-backed nonprofit group Internet Matters,13 though the process and timeframe for correcting mistakes varies by provider.

These optional filters can affect a range of legitimate content pertaining to public health, LGBT+ topics, drug awareness, and even information published by civil society groups and political parties. In 2012, O2 customers were temporarily unable to access the website of the far-right British National Party.14 Civil society groups have criticized the subjectivity of the filtering criteria. A 2014 magazine article noted that all ISPs had blocked dating sites with the exception of Virgin Media, which operates one.15 An Ofcom report found that ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”16

Blocked, a site operated by the Open Rights Group, allows users to test the accessibility of websites and report excessive blocking and optional filtering by both home broadband and mobile internet providers.17 As of March 2021, more than 775,000 sites were reported to be blocked or filtered, over 21,000 of which were suspected to be blocked inadvertently.18 These include sites related to advice for abuse victims, addiction counseling, LGBT+ subjects, and school websites.19

B2 (0-4 pts)
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards? 3 / 4

Political, social, and cultural content is generally not subject to forced removal, though excessive enforcement of rules against illegal content can affect protected speech (see B1). The government continues to develop regulations that would compel platforms to restrict content that is deemed harmful but not necessarily illegal (see B3, B5, and B6).

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful to terrorists, or incites people to carry out or support terrorism. As of April 2019, the police’s Counter-Terrorism Internet Referral Unit (CTIRU), which compiles lists of such content, reported that 310,000 pieces of material had been taken down since 2010.1

In February 2018, the government announced that it had developed software that automatically detected and labelled online content associated with the Islamic State (IS) militant group. The technology is aimed at smaller platforms and services that may not have sufficient resources to carry out similar functions.2

When child sexual abuse images or criminally obscene adult materials are hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have it taken down. For content that is hosted on servers overseas, the IWF coordinates with international hotlines and police to have the offending content taken down in the host country.

Similar processes are in place under the oversight of TrueVision, a site that is managed by the police, for the investigation of online materials that incite hatred.3 The government has accused social media platforms of not doing enough to combat hate speech.4 The government in 2017 announced plans for a national hub that would monitor online hate speech and help refer certain content to online platforms for their removal.5 Between April 2017 and December 2019, the unit logged 1,581 cases, fewer than 1 percent of which resulted in charges, raising concerns about efficacy. The unit’s budget totalled over £1 million ($1.3 million) from 2017 to 2020.6

In September 2019, the European Court of Justice (ECJ) ruled that search engines do not have to apply the right to be forgotten—the removal of links from search results at the request of individuals if the stories in question are deemed to be inadequate or irrelevant—for all global users after receiving an appropriate request to do so in Europe.7 The ECJ had previously ordered search engines to apply the right to be forgotten in 2014.8 In April 2018, the UK’s High Court ordered Google to delist search results about a businessman’s past criminal conviction in its first decision on the right to be forgotten. In another case, the court rejected a similar claim made by a businessman who was sentenced for a more serious crime.9

Despite the UK’s departure from the EU, the British government and the data protection regulator, the Information Commissioner’s Office (ICO), committed to implementing the EU’s General Data Protection Regulation (GDPR),10 which came into force in May 2018 (see C6). The right to be forgotten, along with other rights enshrined in the GDPR, continues to apply in the UK under the Data Protection Act 2018.

During the COVID-19 pandemic, the Cabinet Office Rapid Response Unit (RRU) began an aggressive campaign to address “misinformation narratives” concerning the virus.11 The RRU has worked with social media platforms to remove certain content identified as “misinformation.” In some cases, the RRU flags the content or posts a rebuttal.

B3 (0-4 pts)
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? 3 / 4

The regulatory framework and actual procedures for managing online content are largely proportional, transparent, and open to correction. However, the optional filtering systems operated by ISPs and mobile service providers—particularly those meant to block material that is unsuitable for children—have been criticized for a lack of transparency, inconsistency between providers, and excessive application that affects legitimate content. Separately, the government has introduced a bill to regulate additional content that it considers harmful but not necessarily illegal.

In May 2021, the government published the Online Safety Bill (see B5 and B6), which proposes a new regulatory framework to compel search engines and online platforms to remove illegal and harmful content under the statutory duty of care, defined as an obligation “to moderate user-generated content in a way that prevents users from being exposed to illegal and harmful content online.”1 The bill had not been approved by the end of the coverage period.2 The bill would apply to illegal content, “content that is harmful to adults,” and “content that is harmful to children.” Illegal content includes child sexual exploitation and abuse (CSEA) offenses, terrorist content, and additional content that will be specified by the secretary of state. The bill broadly defines “content that is harmful to adults” and “content that is harmful to children” as content that presents a “material risk of… having, or indirectly having, a significant adverse physical or psychological impact.” While the bill does not expressly require online services to use automated content moderation tools to remove content, online services would be required to use “accredited technology” to remove terrorist and CSEA content, and to “swiftly take down content.”3

The services targeted under the proposed legislation include search engines and “user-to-user” services, defined as an internet service that hosts user-generated content or facilitates public or private interaction between at least two people. Under the bill, certain services would be designated by the secretary of state based on their function and number of users to take additional action to combat harmful content.4 Those services would also be required to protect “journalistic content,” which seemingly includes independent journalists, and “content of democratic importance,” broadly defined as content that “appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom.” Protections for “journalistic content” would include an expedited content removal appeals process for journalists.5

For services not in compliance with the regulation, Ofcom would be empowered to issue notices and fines of up to £18 million ($24.4 million) or 10 percent of global turnover, whichever is higher. In a December 2020 response to the white paper consultation, the government noted that these fines would make it “less commercially viable” for noncompliant services to operate in the UK, thus forcing companies to comply.6 Additionally, Ofcom would have the ability to order a suspension of service for platforms that do not comply, and an interim suspension of service in extreme cases. If suspension orders are deemed ineffective and companies still refuse to comply, Ofcom would be empowered to issue interim and long-term access restriction orders. Ofcom would also be responsible for drafting “codes of practice” explaining how to properly manage harmful content.7

The bill follows the government’s publication of the Online Harms White Paper in April 2019,8 the government’s initial response to industry stakeholders in February 2020,9 and the full government response in December 2020.10 The April 2019 White Paper noted that harmful content encompasses disinformation, trolling, sale of illegal items, the nonconsensual distribution of intimate images, hate speech, harassment, promotion of self-harm, and content uploaded by those who are incarcerated.11

In February 2019, Parliament’s Digital, Culture, Media, and Sport Committee released its 2018 report on disinformation and “fake news” (see B5),12 which recommended increasing legal consequences for hosting disinformation.13

Under the Digital Economy Act 2017, ISPs are legally empowered to use both blocking and filtering methods, if allowed by their terms and conditions of use.14 The Digital Economy Act also imposed a number of requirements on ISPs and content providers, notably Section 14(1), which obliges content providers to verify the age of users accessing online pornography. Implementation of Section 14(1) was abandoned in October 2019 because of technical limitations (see B1). In February 2018, the British Board of Film Certification (BBFC) was designated as the age-verification regulator, and it launched a public consultation to develop guidance on the means and mechanisms for providers to achieve compliance.15

Civil society groups have criticized the default filters used by ISPs and mobile service providers to review content deemed unsuitable for children, arguing that they lack transparency and affect too much legitimate content, which makes it difficult for consumers to make informed choices and for content owners to appeal wrongful blocking.

ISPs block URLs using content-filtering technology known as Cleanfeed, which was developed by BT in 2004.16 In 2011, a judge described Cleanfeed as “a hybrid system of IP address blocking and DPI-based URL blocking which operates as a two-stage mechanism to filter specific internet traffic.” While the process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables blocking of individual URLs rather than entire domains, it does not entail “detailed, invasive analysis of the contents of a data packet,” according to the judge’s description. Similar systems adopted by ISPs other than BT are “frequently referred to as Cleanfeed,” the judge wrote.17

ISPs are notified about websites hosting content that has been determined to violate or potentially violate UK law under at least three different procedures:

  • The IWF compiles a list of specific URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content; the list is distributed to ISPs and other industry stakeholders that support the foundation through membership fees.18 ISPs block those URLs in accordance with a voluntary code of practice set forth by the Internet Services Providers’ Association (see A5). IWF analysts evaluate material for potential violations of a range of UK laws,19 in accordance with a Sexual Offences Definitive Guideline published by the Sentencing Council under the Ministry of Justice.20 The IWF recommends that ISPs notify customers about why a blocked site is inaccessible,21 but some simply display error messages.22 The IWF website allows site owners to appeal their inclusion on the list. Citizens can also report criminal content via a hotline. An independent 2014 judicial review of the human rights implications of the IWF's operations found that the body’s work was consistent with human rights law.23 The IWF appointed a human rights expert in accordance with one of the review’s recommendations, but it deferred action on a recommendation to limit its remit to child sexual abuse.24 The list of sites blocked for hosting child sexual abuse imagery is not public.
  • The CTIRU, created in 2010, compiles a list of URLs hosted overseas that contain material considered to glorify or incite terrorism under the Terrorism Act 2006,25 and these are filtered on public-sector networks. The blacklist is not made public on the grounds that releasing it would facilitate access to unlawful materials.
  • The UK High Court can order ISPs to block websites found to be infringing copyright under the Copyright, Designs, and Patents Act 1988.26 The High Court has held that publishing a link to copyright-infringing material, rather than actually hosting it, does not amount to an infringement;27 this approach was confirmed by the Court of Justice of the European Union.28 A new intellectual property framework adopted in 2014 included exceptions for making personal copies of protected work for private use, as well as for “parody, caricature, and pastiche.”29 Copyright-related blocking has been criticized for its inefficiency and lack of transparency.30 In 2014, after lobbying from the London-based Open Rights Group, BT, Sky, and Virgin Media began informing visitors to sites blocked by court order that the order can be appealed at the High Court.31 While High Court orders are not kept from the public, they can be burdensome to obtain in practice.32

The processes by which mobile service providers block content that the industry group Mobile UK deems unsuitable for children lack transparency, and their effects vary across providers. In some cases, the filtering activity may be outsourced to third-party contractors, further limiting transparency.33 Child-protection filters are enabled by default in mobile internet browsers, though users can disable them by verifying that they are over age 18. Mobile virtual network operators are believed to “inherit the parent service's filtering infrastructure, though they can choose whether to make this available to their customers.”34 O2 allows its users to check how a particular site has been classified.35 The filtering is based on a classification framework for mobile content published by the BBFC.36 The BBFC adjudicates appeals from content owners and publishes the results quarterly.37

Website owners and companies that knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users, according to EU Directive 2000/31/EC (the E-Commerce Directive).38 Updates to the Defamation Act effective since 2014 limit companies’ liability for user-generated content that is considered defamatory. However, the Defamation Act offers protection from private libel suits based on third-party postings only if the plaintiff is able to identify the user responsible for the allegedly defamatory content.39 The act does not specify what sort of information the website operator must provide to plaintiffs, but it raised concerns that websites would register users and restrict anonymity in order to avoid civil liability.40

B4 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship? 3 / 4

Self-censorship, though difficult to assess, is not understood to be a widespread problem in the UK. However, due to factors including the government’s extensive surveillance practices, it appears likely that some users censor themselves when discussing sensitive topics to avoid potential government intervention or other repercussions (see C5).1

B5 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? 3 / 4

Concerns about content manipulation have increased in recent years, with foreign, partisan, and extremist groups allegedly using automated “bot” accounts, fabricated news, and altered images to shape discussions on social networks.

During the December 2019 general election, the governing Conservative Party and opposition Labour Party spread misleading content and disinformation on social media, including doctored videos shared by the Conservative Party.1 Google banned eight different ads from the Conservative Party for “violating Google’s advertising policy”; six of the ads related to a website created by Conservative Party officials that imitated the Labour Party’s election manifesto.2

The online environment was allegedly manipulated surrounding the 2016 Brexit referendum and the June 2017 elections, adding to the polarization of online political discourse. In the lead-up to the referendum, targeted online ads distributed by Leave campaign groups on Facebook included misleading statistics and wild claims, with one even accusing the EU of seeking to ban teakettles.3 In May 2017, Facebook reported that it had removed tens of thousands of fake accounts to limit the impact of deliberately misleading information disguised to look like news reports; such accounts had spread online prior to the elections.4 It was not clear whether those circulating the fake reports had a coherent agenda or how significant their influence was. One group accused Facebook and Twitter of failing to curb disinformation that depicted Muslims and migrants in a negative light.5 Separately, a 2017 study by the group Hope Not Hate examined anti-Muslim activists’ exploitation of terrorist attacks in the UK to spread their views on social media.6

There have been a number of reports about the influence of foreign states, especially Russia, on the Brexit referendum. Platforms such as Facebook and Twitter initially denied that there was substantial interference.7 However, those denials were met with skepticism. In July 2020, the UK government said that the Russian government had tried to influence the 2019 election by illicitly acquiring sensitive US-UK trade documents and distributing them on the social media platform Reddit.8

After accusing Russia of waging a disinformation campaign to sow discord in democratic countries,9 Prime Minister Theresa May in January 2018 announced plans to establish a national security communications unit tasked with combating disinformation by state actors and others.10 The Department for Digital, Culture, Media and Sport’s Centre for Data Ethics and Innovation published a paper on audio and visual deepfakes in September 2019 that called for public education to mitigate the spread of misinformation as well as further research.11

The May 2021 Online Safety Bill would empower Ofcom to establish an advisory committee on disinformation and misinformation (see B3 and B6).12

A parliamentary inquiry into social media interference intensified in March 2018, when Christopher Wylie, a former employee at the data analytics company Cambridge Analytica, claimed that the firm had illegally obtained information from Facebook on millions of accounts and had developed techniques for categorizing and influencing potential voters.13

The report on disinformation and “fake news” published in February 2019 by Parliament’s Digital, Culture, Media, and Sport Committee specifically addressed Facebook’s purported role in facilitating the spread of disinformation.14 The report called for electoral law reforms as well as the establishment of intermediary liability to help address the problem (see B3).

The government also runs a counter-disinformation campaign called SHARE—previously known as “Don’t Feed the Beast”—that provides users with a checklist of features to note before sharing posts and media online.15

B6 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? 3 / 3

Online media outlets face economic constraints that negatively impact their financial sustainability, but these are the result of market forces, not political intervention.

Publishers have struggled to find a profitable model for their digital platforms, though more than half of the population reportedly consumes news online. In 2018, a survey found that 64 percent of adults used the internet to access news, with social media being the most popular online source.1

Ofcom is responsible for enforcing the EU’s 2015 Open Internet Regulation, which includes an obligation for ISPs to ensure net neutrality—the principle that internet traffic should not be throttled, blocked, or otherwise disadvantaged on the basis of content. It remains to be seen whether the post-Brexit process will lead the UK government to change its policy on net neutrality or maintain its current approach.

The May 2021 Online Safety Bill (see B3 and B5) would empower Ofcom to fine online services the greater of £18 million ($24.4 million) or 10 percent of a service’s global turnover if they do not comply with the bill’s provisions, which could negatively impact online services’ ability to operate in the United Kingdom.

B7 0-4 pts
Does the online information landscape lack diversity and reliability? 4 / 4

The online information landscape is diverse and lively. Users have access to the online content of virtually all national and international news organizations. While a range of sources present diverse views and appeal to various audiences and communities, the ownership of leading news outlets is relatively concentrated,1 and particular media groups have been accused of political bias.

The publicly funded British Broadcasting Corporation (BBC), which maintains an extensive online presence, has an explicit diversity and inclusion strategy, aiming to increase the representation of women and LGBT+ people, and the representation of different age ranges and ethnic and religious groups.2 Similar models have been adopted by other national broadcasters.3

B8 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? 6 / 6

Online mobilization tools are freely available, and collective action continues to grow in terms of both numbers of participants and numbers of campaigns. Some groups use digital tools to document and combat bigotry, including Tell MAMA (Measuring Anti-Muslim Attacks), which tracks reports of attacks or abuse submitted by British Muslims online.1 Petition and advocacy platforms such as 38 Degrees and AVAAZ have emerged, and civil society organizations view online communication as an indispensable part of any campaign strategy, though the efficacy of online mobilization per se remains subject to debate.

During the summer of 2020, people in the UK organized Black Lives Matter protests, largely through social media. In Bristol, protestors tore down a statue of Edward Colston, a merchant who had dealings in the Atlantic slave trade, amid an ongoing reappraisal of Colston’s commemoration in the city.2 Protestors in London called for the removal of a statue of Winston Churchill due to the former prime minister’s racist views,3 while Black Lives Matters protestors in Oxford amplified a campaign first initiated in South Africa in 2015 to remove a statue of Cecil Rhodes because of his racist and imperialist views.4

In March 2021, people organized vigils and protests to commemorate Sarah Everard, a London woman who was murdered by an off-duty police officer in a case that activists say highlights the dangers faced by women in British society. The events were organized online and took place under lockdown conditions,5 while outrage over the case was amplified by social media, particularly after police officers violently dispersed a memorial for Everard.6

C Violations of User Rights

C1 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? 5 / 6

The UK does not have a written constitution or similarly comprehensive legislation that defines the scope of governmental power and its relation to individual rights. Instead, constitutional powers and individual rights are addressed in various statutes, common law, and conventions. The provisions of the European Convention on Human Rights were adopted into law via the Human Rights Act 1998. In 2014, Conservative Party officials announced their intention to repeal the Human Rights Act in favor of a UK Bill of Rights in order to give British courts more control over the application of human rights principles.1 During the 2017 election campaign, Prime Minister Theresa May initially scaled back those ambitions.2 However, in June 2017 she reopened the possibility of significantly amending human rights legislation to allow more aggressive measures against terrorism in light of high-profile attacks in Manchester and London.3 No such legal changes were enacted during the coverage period, and as of September 2021, this point seems to have disappeared from the government’s legislative agenda.

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards? 2 / 4

Political expression and other forms of online speech or activity are generally protected, but there are legal restrictions on hate speech, online harassment, and copyright infringement, and some measures—including a 2019 counterterrorism law—could be applied in ways that violate international human rights standards.

The Counter-Terrorism and Border Security Act, which received royal assent in February 2019, included several provisions related to online activity (see C5).1 The legislation, intended to update the Terrorism Act 2000, came in response to attacks in London and Manchester in 2017, among other events.2 The new provisions make it an offense to view terrorist material (as defined in the act) over the internet. Individuals can face up to 15 years in prison for viewing or accessing material that is useful or likely to be useful in preparing or committing a terrorist act, even if there is no demonstrated intent to commit such acts. The law includes exceptions for journalists or academic researchers who access such materials in the course of their work, but it does not address other possible circumstances in which access might be legitimate.3 “Reckless” expressions of support for banned organizations are also criminalized under the law. A number of civil society organizations argued that the legislation was dangerously broad, with unclear definitions that could be abused.4 In April 2021, the Counter-Terrorism and Sentencing Act, which stipulates prison sentences of up to 14 years for anyone who “supports a proscribed terrorist organization,” also received royal assent.5

Stringent bans on hate speech are encapsulated in a number of laws, and some rights groups have said they are too vaguely worded. Defining what constitutes an offense has been made more difficult by the development of new communications platforms.

  • Section 5 of the Public Order Act 1986 penalizes “threatening, abusive or insulting words or behavior.” In 2013, the provision was amended to remove the word “insulting.”6 The maximum penalty is an unlimited fine and six months in prison.
  • Section 1 of the Malicious Communications Act 1988 criminalizes targeting individuals with abusive and offensive content online “with the purpose of causing distress or anxiety.”7 In 2015, it was amended to criminalize the sharing of sexual images without the subject’s consent and with the intent to cause harm.8 A violation of the Act is punishable with up to two years in prison.
  • Section 127 of the Communications Act 2003 punishes “grossly offensive” communications sent through the internet.9 The maximum penalty is an unlimited fine and six months in prison.
  • Section 1 of the Terrorism Act 2006 prohibits the publishing of statements likely to encourage the commission, preparation, or instigation of terrorism. On indictment, violators face up to seven years in prison and an unlimited fine. On summary conviction, violators face up to one year in prison and an unlimited fine.

The Crown Prosecution Service (CPS) publishes specific guidelines for the prosecution of crimes “committed by the sending of a communication via social media.”10 Updates in 2014 placed digital harassment offenses committed with the intent to coerce the victims into sexual activity under the Sexual Offences Act 2003, which carries a maximum of 14 years in prison.11 Revised guidelines issued in March 2016 identified four categories of communications that are subject to possible prosecution: credible threats; abusive communications targeting specific individuals; breaches of court orders; and grossly offensive, false, obscene, or indecent communications.12 They also advised prosecutors to consider the age and maturity of the user in question. Some observers said this could restrict the creation of pseudonymous accounts, though only in conjunction with activity that is considered abusive.13 In October 2016, the CPS updated its guidelines again to cover more abusive online behaviors, including organized harassment campaigns or “mobbing,” and doxing, the deliberate and unauthorized publication of personal information online to facilitate harassment.14

The CPS divides online crime into three categories. “Cybercrime” encompasses unauthorized computer access (that is, hacking), malicious software, and denial-of-service attacks. The category of “social media offenses” includes online harassment, “trolling,” threats, disclosure of sexual images without consent, grooming, stalking online, and online mobbing. Finally, “cyber-enabled fraud” covers all crimes wherein victims are tricked into giving away sensitive information to facilitate identity theft and impersonation.15

The Copyright, Designs, and Patents Act 1988 carries a maximum two-year prison sentence for offenses committed online. In 2015, the government held a public consultation regarding a proposal to increase the sentence to 10 years. Of the 1,011 responses, only 21 supported the proposal,16 but a 2016 government consultation paper nevertheless announced plans to introduce an amendment that included the 10-year maximum sentence “at the earliest available legislative opportunity.”17 The penalty was ultimately incorporated into the Digital Economy Act 2017.

The libel laws in England and Wales have historically tended to favor the plaintiff, leading foreign litigants to file suits there that had only a tenuous connection to the UK, a phenomenon known as “libel tourism.” The Defamation Act 2013 was intended to address the problem by requiring claimants to prove that England and Wales are the most appropriate forum for the action; setting a serious-harm threshold for claims; and codifying certain defenses such as truth and honest opinion. Defamation cases filed in London, most of which involve social media posts, increased significantly in 2019,18 reversing a previous trend.19

In March 2021, the Scottish parliament passed the Hate Crime and Public Order (Scotland) Bill, through which lawmakers aimed to extend and modernize existing hate crime laws; it became law in April 2021. The law creates criminal offenses for speech and acts intentionally “stirring up hatred” against groups based on protected characteristics, including age, disability, race, religion, sexual orientation, and transgender identity.20 Violators face up to 12 months’ imprisonment and a fine on summary conviction, and up to seven years’ imprisonment on conviction by jury trial. Civil society groups, including the Open Rights Group, have raised concerns that the law has a wide remit and a low threshold for prosecution,21 particularly noting that the criteria for “insult” are not clearly defined, which could make sharing offensive material online a crime.22

C3 0-6 pts
Are individuals penalized for online activities, particularly those that are protected under international human rights standards? 5 / 6

Police have arrested internet users for promoting terrorism, issuing threats, or engaging in racist abuse, and in some past cases the authorities have been accused of overreaching in their enforcement efforts. The frequency of these cases appears to be declining, and prison sentences for political, social, and cultural speech remain rare.

Guidelines clarifying the scope of offenses involving digital communications may be helping to cut down on the more problematic speech-related prosecutions observed in the past (see C2). The scale of arrests remains a concern, though many investigations are dropped before prosecution. Figures obtained by the Times newspaper showed that in 2016 and 2017, more than 3,000 individuals were detained and questioned for offensive online comments under Section 127 of the Communications Act 2003.1 In Scotland, almost 8,600 people were charged under Section 127 from 2008 to 2018.2 The devolved Scottish government also passed legislation to address “hate crimes,”3 which opposition parties and civil society believe will have a stifling effect on free speech (see C2).4

Local police departments have the discretion to pursue criminal complaints that would be treated as civil cases in many democracies. There is an online portal to facilitate the reporting of hate crimes to the police.5 In May 2020, police in Newcastle arrested three teenagers who posted a Snapchat video mocking the death of George Floyd, who was killed by police officers in the United States earlier that month. The incident was reportedly being investigated as a hate crime;6 no updates were reported as of the end of the coverage period.

In February 2020, the High Court ruled that police officers acted unlawfully when they interviewed Harry Miller, a former police officer in Lincolnshire, in relation to tweets by Miller that mocked transgender people. The officers interviewed Miller at his workplace and informed him that the tweets would be recorded as a non-crime hate incident under the 2014 Hate Crime Operational Guidance, which encourages law enforcement to collect data on incidents motivated by prejudice that do not constitute hate crimes.7 The High Court found that the interview of Miller at his place of work curtailed his freedom of speech. The court did not invalidate the guidance, a part of the ruling that Miller planned to appeal.8

Cases of offensive humor have been prosecuted. For example, in May 2020, an officer of the Devon and Cornwall police force was arrested and charged with “sending an offensive, indecent, obscene or menacing image via a public electronic communications network” for sharing a meme about the death of George Floyd in a private WhatsApp group, contrary to the Communications Act 2003. However, in April 2021, the defendant was cleared after the judge found that the prosecution had not sufficiently shown that the image was shared with malicious intent.9

In another case of offensive humor from November 2018, a group of friends at a party burned an effigy of the Grenfell Tower—a public housing facility in London where a fast-moving fire had killed more than 70 people in June 2017—and posted video of the act to their WhatsApp group. The video was subsequently uploaded to YouTube, where it spread widely and received public condemnation. There was a police investigation,10 and one of the accused, Paul Bussetti, was charged under the Communications Act 2003.11 In August 2019, Bussetti was found not guilty.12

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption? 2 / 4

Users are not required to register to obtain a SIM card, allowing for the anonymous use of mobile devices.1 However, some laws provide authorities with the means to undermine encryption, and security officials have pushed for further powers.

There are several laws that could allow authorities to compel decryption or require a user to disclose passwords, including the Regulation of Investigatory Powers Act 2000 (RIPA), the Terrorism Act 2000, and the Investigatory Powers Act 2016 (see C5 and C6).2 Although such powers are seldom invoked in practice, some users have faced detention for failure to provide passwords.3

In October 2019, Home Secretary Priti Patel and her counterparts in the United States and Australia wrote to Facebook opposing the company’s plans to implement end-to-end encryption across its messaging platforms.4 The letter followed communiques in July and October 2019 from the Five Country Ministerial, a Five Eyes consortium of which the UK is a member, criticizing technology companies that provide encrypted products that preserve anonymity and preclude law enforcement access to content.5 In October 2020, the Five Eyes, the government of India, and the government of Japan issued a statement requesting backdoor access to encrypted messages.6 In April 2021, Patel gave a speech at the National Society for the Prevention of Cruelty to Children in which she urged Facebook and other platforms to consider encryption’s impact on “public safety” and provide mechanisms for law enforcement to access encrypted conversations.7

In late 2018, GCHQ representatives released a proposal, the so-called “Ghost Proposal,” calling for more cooperation mechanisms between communications services and intelligence bodies that would allow the decryption of criminal and terrorist communications in “exceptional” circumstances.8 The proposal would require companies to facilitate the addition of “ghost” users—law enforcement agents—to encrypted conversations without the knowledge of participants. Civil society organizations, service providers, technology platforms, and other experts criticized the idea as a serious infringement on privacy that would undermine cybersecurity.9 As of September 2021, no further developments on the proposal seem to have occurred.

A new law in 2017 requiring age verification for access to online pornography also threatened anonymity, though the government has abandoned implementation of the law because of technical limitations (see B1).

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2 / 6

The UK authorities are known to engage in surveillance of digital communications, including mass surveillance, for intelligence, law enforcement, and counterterrorism purposes. A 2016 law introduced some oversight mechanisms to prevent abuses, but it also authorized bulk collection of communications data and other problematic practices. A 2019 counterterrorism law empowered border officials to search travelers’ devices, undermining the privacy of their online activity.

The Counter-Terrorism and Border Security Act (see C2) gives border agents the ability to search electronic devices at border crossings and ports of entry with the aim of detecting “hostile activity”—a broad category including actions that threaten national security, threaten the economic well-being of the country in a way that touches on security, or are serious crimes. However, border agents do not need to have a “reasonable suspicion” that an individual is engaged in such “hostile activity” in order to initiate a search, giving them broad discretion to stop and search travelers.1 Those stopped are required to provide information when requested by border officers, including the passwords to unlock devices.2

In September 2018, the European Court of Human Rights found that parts of the UK’s bulk surveillance regime under RIPA violated the European Convention on Human Rights, specifically its provisions on privacy and free expression. The court noted that, for example, there were insufficient safeguards to protect confidential journalistic material.3 However, the court controversially ruled that bulk surveillance was not always incompatible with human rights and could fall within a state’s “margin of appreciation in choosing how best to achieve” national security.4 The ruling addressed three petitions filed by UK civil society groups and individuals following former US National Security Agency contractor Edward Snowden’s 2013 revelations about UK surveillance.5 Some of the problems that were raised in the case were addressed in the Investigatory Powers Act 2016 (IP Act).

The IP Act codified law enforcement and intelligence agencies’ surveillance powers in a single omnibus law, whereas they were previously scattered across multiple statutes and authorities.6 It covers interception, equipment interference, and data retention, among other topics.7 In general, the IP Act has been criticized by industry associations, civil rights groups, and the wider public, particularly regarding the range of powers it authorizes and its legalization of bulk data collection.8

The act specifically enables the bulk interception and acquisition of communications data sent or received by individuals outside the UK, as well as bulk equipment interference involving “overseas-related” communications and information. When both the sender and receiver of a communication are in the UK, targeted warrants are required, though several individuals, groups, or organizations may be covered under a single warrant in connection with a single investigation. Moreover, the internet’s distributed architecture means that privacy protections based on an individual’s physical location are highly porous. Communications exchanged within the UK may be routed overseas, a fact that intelligence agencies have exploited in the past to conduct bulk surveillance programs like Tempora (see below).

Part 7 of the IP Act introduced warrant requirements for intelligence agencies to retain or examine “personal data relating to a number of individuals” where “the majority of the individuals are not, and are unlikely to become, of interest to the intelligence service in the exercise of its functions.”9 Datasets may be “acquired using investigatory powers, from other public sector bodies or commercially from the private sector.”10 Under Section 220, an initial examination of bulk datasets must occur within three months “where the set of information was created in the United Kingdom” and within six months otherwise.

The IP Act established a new commissioner appointed by the prime minister to oversee investigatory powers under Section 227.11 The law includes some other safeguards, such as “double-lock” interception warrants. These require approval from both the relevant secretary of state and an independent judge, though the secretary alone can approve urgent warrants. Under Section 32, urgent warrants last five days; others expire after six months unless renewed under the same double-lock procedure. The act allows authorities to prohibit telecommunications providers from disclosing the existence of a warrant. Intercepting authorities that may apply for targeted warrants include police commissioners, intelligence service heads, and revenue and customs commissioners.12 Applications for bulk interception, bulk equipment interference, and bulk personal dataset warrants can only be made to the secretary of state “on behalf of the head of an intelligence service by a person holding office under the Crown” and must be reviewed by a judge.

Bulk surveillance is an especially contentious issue in the UK because intelligence agencies developed secret programs under older laws that bypassed oversight mechanisms and possible means of redress for affected individuals. These programs affected an untold number of people within the UK, even if they were meant to have only foreign targets. Tempora, a secret surveillance project documented in the Snowden leaks, is one example. A number of other legislative measures authorized surveillance,13 including RIPA.14 RIPA was not repealed by the IP Act, though many of its competences were transferred to the newer legislation. A clause within Part I of RIPA allowed the foreign or home secretary to sign off on bulk surveillance of communications data arriving from or departing to foreign soil, providing the legal basis for Tempora.15 Since the UK’s fiber-optic network often routes domestic traffic through international cables, this provision legitimized mass surveillance of UK residents.16 Working with telecommunications companies, GCHQ installed interception probes at the British landing points of undersea fiber-optic cables, giving the agency direct access to data carried by hundreds of cables, including private calls and messages.17

The Investigatory Powers Tribunal was established under RIPA to adjudicate disputes regarding government surveillance. In 2015, it found procedural irregularities in the retention of communications intercepted from Amnesty International and the South Africa–based Legal Resources Center, though it concluded that the interceptions themselves were lawful.18 In early 2016, the tribunal ruled that computer network exploitation carried out by GCHQ was in principle lawful within the limitations in the European Convention on Human Rights.19 The tribunal also noted that network exploitation is legal if the warrant is as specific and narrow as possible.

In July 2016, the Investigatory Powers Tribunal found that bulk data collection by GCHQ and two other intelligence agencies known as MI5 and MI6 was unlawful from March 1998 until the practice was disclosed in November 2015.20 The practice had been authorized under Section 94 of the Telecommunications Act 1984, which the Interception of Communications Commissioner described in June 2016 as lacking “any provision for independent oversight or any requirements for the keeping of records.”21 The tribunal also said that the use of bulk personal datasets by GCHQ and MI5, commencing in 2006, was likewise unlawful until disclosed in March 2015. The datasets contained personal information that could include financial, health, and travel data as well as communications details.22 There were hearings in June and October 2017 on the process and legality of collecting and sharing these datasets.23

In May 2021, the UK High Court ruled that security agencies cannot use “general warrants,” outlined in Section 5 of the Intelligence Services Act 1994, to order the hacking of computers or mobile devices. For example, under a “general warrant,” a security agency could request information from “all mobile phones used by members of a criminal network” to justify the hacking of these devices without having to obtain a specific warrant for each individual in the network. The ruling came after Privacy International, a UK-based NGO, challenged a 2016 decision from the Investigatory Powers Tribunal that held that the government could use these warrants to hack computers or mobile devices.24

UK authorities have been known to monitor social media platforms.25 In London, for example, police reportedly monitored nearly 9,000 activists from across the political spectrum—many of whom had no criminal background—using geolocation tracking and sentiment analysis of data scraped from Facebook, Twitter, and other platforms.26 This information was then compiled in secret dossiers on each campaigner. In another example, the Online Hate Speech Dashboard, a joint project led by the National Online Hate Crime Hub of the National Police Chiefs’ Council and Cardiff University, received £1 million ($1.4 million) in 2018 to use artificial intelligence for real-time monitoring of social media platforms meant to identify hate speech and “preempt hate crimes.”27

Unlike the EU, the UK did not operate a COVID-19 passport required for daily social activities as of the end of the coverage period; however, the UK does maintain a system for digital verification of vaccination status.28

C6 0-6 pts
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy? 3 / 6

Companies are required to capture and retain user data under certain circumstances, though the government issued regulatory changes in 2018 to address flaws in the existing rules. While the government has legal authority to require companies to assist in the decryption of communications, the extent of its use and efficacy in practice remains unclear.

The UK has incorporated the GDPR into domestic law through the Data Protection Act 2018.1 Therefore, even once the post-Brexit arrangements are finalized, the GDPR in its entirety will continue to regulate data protection within the UK.

The government’s response to the COVID-19 pandemic involved subscriber data obtained from telecommunications providers. In March 2020, mobile network O2 confirmed that it was providing anonymized aggregate location data from smartphones belonging to subscribers so that the government could monitor trends in compliance with social distancing guidelines.2

Data retention provisions under the IP Act allow the secretary of state to issue notices requiring telecommunications providers to capture information about user activity, including browser history, and retain it for up to 12 months. The Data Retention and Investigatory Powers Act 2014 (DRIPA), the older law on which the IP Act requirement was modeled, was ruled unlawful in the UK and the EU in 2015.3 In January 2018, the Court of Appeal described DRIPA as being inconsistent with European law, since the data collected and retained were not limited to the purpose of fighting serious crime.4 In April 2018, the High Court ruled that part of the IP Act’s data retention provisions similarly violated EU law, and that the government should amend the legislation by November 2018.5

In response, the government issued the Data Retention and Acquiring Regulations 2018, which entered into force in October 2018. The regulations limited the scope of the government’s collection and retention of data and enhanced the transparency of the process.6 Furthermore, a newly created Office for Communications Data Authorisations would oversee data requests and ensure that official powers are used in accordance with the law.

According to a March 2021 report, the government issued orders under the IP Act to two service providers to install surveillance technology that would record users’ web history, creating an internet connection record (ICR). It is unclear which providers are involved and what data are collected, though one order was reportedly issued in July 2019 and a second order in October 2019.7

Another problematic provision of the IP Act enables the government to order companies to decrypt content, though the extent to which companies would be willing or able to comply remains uncertain (see C4).8 Under Section 253, technical capability notices can be used to impose obligations on telecommunications operators both inside and outside the country “relating to the removal … of electronic protection applied by or on behalf of that operator to any communications or data,” among other requirements. The approval process for issuing a technical capability notice is similar to that of an interception warrant.9 In March 2018, after consultations with the industry and civil society groups,10 the government issued the Investigatory Powers (Technical Capability) Regulations 2018, which governs how the notices are issued and implemented.11 The regulations specify companies’ responsibilities in ensuring that they are able to comply with lawful warrants for communications data.

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities? 4 / 5

There were no reported instances of violence against internet users in reprisal for their online activities during the coverage period, though cyberbullying, particularly harassment of women, is widespread.1 A June 2018 study found that one in three female members of Parliament had experienced online abuse, harassment, or threats.2 Online harassment of Muslims and other minorities is also a significant problem.3

The online harassment environment in the UK worsened during the COVID-19 pandemic, particularly for women and people of Chinese descent. Support services reported a surge in reports of cyberstalking and online harassment.4 Racist incidents involving people of Chinese or other Asian descent were reported throughout the UK, including several cases involving social media.5

A 2017 study found an increase in abusive comments targeting politicians on Twitter, which peaked on the day of the 2016 Brexit referendum.6 News reports indicated that hate crimes against minorities increased after the vote to leave the EU, which was driven in part by campaigns that depicted immigration as a threat to the British way of life. However, a 2016 analysis of cyberbullying in different parts of the UK found that regions with high levels of online hate speech or racial intolerance did not necessarily vote in favor of Brexit, and concluded that other issues were also driving the trend.7

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 2 / 3

Nongovernmental organizations, media outlets, and activists are generally not targeted for technical attacks by government or nonstate actors. Financially motivated fraud and hacking continue to present a challenge to authorities and the private sector. Cyberattacks have increased in recent years, and observers have questioned the security of a trend in which various machines, appliances, and objects are connected to the internet, creating what is known as the Internet of Things.1 In the most recent government survey of cybercrime, conducted in 2020, 46 percent of businesses reported experiencing a cyberattack.2

During the COVID-19 pandemic, the UK saw an increase in cybercrime, particularly phishing and ransomware attempts, with the number of victims up by one third from previous years.3 In an assessment of the origin of the attacks, British intelligence agencies noted that Russian actors played a large role. In July 2020, British intelligence officials asserted that Russian operatives attempted to steal information related to vaccine research.4

In July 2020, a report to Parliament stated that evidence shows that actors associated with the Russian government had hacked into the national infrastructure and launched phishing attacks against various government departments.5 The government responded that while the Russian government’s cyber capabilities represented a threat, there was no evidence of Russian interference in the 2019 election.6

In March 2019, the information systems of the Police Federation of England and Wales were infected with ransomware—malicious software that blocks access to the contents of a computer or network and demands a ransom payment for access to be restored.7 The federation said there was no evidence that any information was leaked, although its data back-ups were deleted and other information was rendered inaccessible.8 The attack was limited to the federation’s headquarters in Surrey and did not spread to its 43 associated offices.

In May 2017, the National Health Service suffered a ransomware attack affecting 40 organizations and effectively barring workers from accessing patient case files.9 The attack had severe consequences, delaying or denying essential services for vulnerable individuals.10

During the 2019 election, the opposition Labour Party was subject to multiple cyberattacks, including a denial-of-service attack and a leak of donor identities.11 The attacks were not attributed to state actors, and the party received ongoing support from the National Cyber Security Centre.
