United Kingdom

Status: Free
Overall Score: 78/100
A Obstacles to Access: 24/25
B Limits on Content: 29/35
C Violations of User Rights: 25/40
Last Year's Score & Status: 79/100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the methodology and report acknowledgements.

Key Developments, June 1, 2023 – May 31, 2024

The internet remained free for users in the United Kingdom (UK) during the coverage period, with widespread access and few major constraints on content. In recent years, the government has taken actions to counter illegal and “harmful” online content, though the full implications of such efforts had yet to be seen. The state’s surveillance practices and other potential privacy violations remained ongoing concerns.

  • The websites of the Russian state media outlets Sputnik News and RT continued to present signs of blocking in the UK (see B1).
  • Lawmakers passed the Online Safety Act 2023 in September 2023, and it became law the following month. The act established new duties for online services to proactively identify and remove illegal and certain “harmful” content from their platforms, imposed sanctions for noncompliance, created new communications offenses, and generated concerns about protections for UK users’ anonymity and access to encryption (see B3, B6, C2, and C4).
  • Concerns about the spread of false online information escalated during the coverage period, with some high-profile incidents linked to foreign actors (see B5).
  • Parliament passed the Investigatory Powers (Amendment) Act 2024 in April 2024, and it received royal assent that month. The law weakened privacy safeguards for the collection of certain bulk datasets and established a requirement for telecommunications companies to notify the government of any changes to services that could impede the companies’ ability to comply with state requests for user data (see C5 and C6).
  • Private businesses and the public sector—particularly large British companies and state entities—remained under threat from cyberattacks (see C8).

Political Overview

The UK—which includes the constituent countries of England, Scotland, and Wales along with the territory of Northern Ireland—is a stable democracy that regularly holds free elections and hosts a vibrant media sector. While the government generally enforces robust protections for political rights and civil liberties, recent years have featured new restrictions on the right to protest as well as rising Islamophobia and anti-immigrant sentiment.

A Obstacles to Access

A1 (0–6 points)
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? Score: 6/6

Access to the internet is considered a key element of societal and democratic participation in the UK. Fixed-line broadband access is almost ubiquitous, and nearly 100 percent of households are within range of asymmetric digital subscriber line (ADSL) connections. The four largest national mobile service providers—EE, O2, Vodafone, and Three—all offer fifth-generation (5G) network technology.

The Digital Economy Act 2017 obliges service providers to offer minimum connection speeds of 10 Mbps (megabits per second).1 In 2023, the proportion of “superfast” home broadband connections, with advertised download speeds of at least 30 Mbps, increased to 93 percent, up from 91 percent the year prior. The proportion of lines with an advertised download speed of 300 Mbps or more increased from 8 percent to 11 percent.2 Fiber-optic coverage was available to 57 percent of UK homes as of December 2023, an increase of 15 percentage points since 2022.3

Mobile telephone penetration is extensive. As of December 2023, the Office of Communications (Ofcom), the primary telecommunications regulator, estimated that outdoor 5G coverage from at least one mobile service provider reached between 85 and 93 percent of premises.4 At that time, 4G coverage was available from at least one mobile service provider across 93 percent of the country’s landmass and outside more than 99 percent of premises.5

The government’s UK Wireless Infrastructure Strategy, published in April 2023, set the goal of providing standalone 5G service—which does not rely on existing 4G long-term evolution (LTE) infrastructure—to all populated areas of the country by 2030.6

In July 2020, the government banned the purchase of 5G technology from the Chinese telecommunications company Huawei beginning in 2021 and ordered existing Huawei equipment to be removed by the end of 2027 due to security concerns (see A4).7 In October 2022, the government issued legal notices to 35 service providers reiterating these mandates, including incremental deadlines to remove all Huawei technology from 5G networks and other infrastructure.8 In January 2024, mobile service provider BT reported that it had failed to fully remove Huawei equipment from its network core by the government’s December 31 deadline. However, the company said that more than 99 percent of its “core traffic” utilized non-Huawei equipment at that time.9

A2 (0–3 points)
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? Score: 3/3

Despite ongoing cost-of-living challenges, the internet remains widely accessible in the UK.

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries. According to analytics company Cable, the average cost of 1 GB (gigabyte) of mobile data in 2023 was £0.50 ($0.63).1 In December 2023, Ofcom reported that the average monthly price for mobile service (excluding the cost of a device) had dropped by 33 percent in real terms since 2018. In the third quarter of 2023, mobile service was less expensive in the UK than in four of the five peer countries that Ofcom analyzed: Germany, Italy, Spain, and the United States—though costs were higher than in France.2

Meanwhile, the average monthly cost of a fixed-line broadband connection in 2024 was £31.00 ($39.20).3 Several fixed-line broadband providers offer low-cost packages, however, including social tariffs that cost between £12 ($15) and £23 ($29) per month.4 Ofcom has warned that eligible customers may not be aware of social tariffs; take-up of the affordable packages remained low, at 8.3 percent of eligible households as of October 2023.5 Ofcom estimated that 28 percent of UK households were struggling to afford a communications service in January 2024, down slightly from 29 percent in April 2023. When services were assessed individually, only 8 percent of households struggled to afford fixed-line broadband, and 9 percent struggled to afford mobile service.6 In December 2023, Ofcom reported that high inflation had contributed to sizable price increases—often over 10 percent—for many users in 2023.7

While internet service is broadly accessible in the UK, 4 percent of the country’s population was estimated to be “offline” by Lloyds Bank’s 2023 Consumer Digital Index.8 In a June 2023 report, the House of Lords’ Communications and Digital Committee expressed concern about the government’s ability to combat digital exclusion, noting that 1.7 million households had no broadband or mobile internet access, and that as many as one million people had reduced or canceled their internet service in the previous year due to affordability issues.9

In October 2021, the government announced the £5 billion ($6.3 billion) Project Gigabit to bring faster and more reliable high-speed service to 570,000 rural premises.10 The government issued a progress update in February 2024, reporting that gigabit-capable broadband—that is, service at speeds of more than 1,000 Mbps—was available at around 80 percent of premises in the UK, up from 6 percent in early 2019, and the country was set to reach 85 percent coverage by 2025.11

Ofcom data published in March 2023 indicated that 92 percent of UK adults used the internet from their homes or other locations.12 There is no gender gap in internet use, according to 2024 data from Oxford University’s Digital Gender Gaps report.13

A3 (0–6 points)
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? Score: 6/6

The government does not exercise control over the internet infrastructure and does not routinely restrict connectivity.

The government does not place limits on the amount of bandwidth that internet service providers (ISPs) can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services, such as peer-to-peer (P2P) file sharing and television streaming. Mobile service providers have previously cut back on unlimited access packages for smartphones, reportedly because of network congestion concerns.

A4 (0–6 points)
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? Score: 5/6

There are few obstacles to the establishment of service providers in the UK, allowing for a competitive market that benefits internet users.

As of 2023, the UK’s major fixed-line ISPs, by percentage of household users, included BT (formerly British Telecom) with 24 percent, Sky with 21 percent, Virgin Media with 17 percent, TalkTalk with 8 percent, and other ISPs constituting the remaining 30 percent.1 In March 2024, Ofcom initiated a review of regulations for the wholesale telecommunications market, with the aim of improving competition and infrastructure investments.2

Service providers are required to obtain a license from Ofcom only for use of the radio spectrum, such as for mobile internet service.3 Providers that do not use the spectrum are not subject to licensing, but they must comply with general conditions set by Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.4

Among mobile service providers, EE, which has been owned by BT since 2016, leads the market with 23 percent of subscribers, followed by O2 with 18 percent, Vodafone with 14 percent, Three with 9 percent, and Tesco with 6 percent.5 Mobile virtual network operators (MVNOs) like Tesco provide service using the infrastructure owned by one of the other companies.

The Telecommunications (Security) Act 2021, which received royal assent that November and amended the Communications Act 2003, places stronger legal obligations on telecommunications service providers to identify and reduce the risk of cybersecurity breaches and prepare for their occurrence.6 The law empowers the government to use secondary legislation to regulate and issue codes of practice for service providers in pursuit of these goals. Providers that do not comply could be ordered to pay fines of up to 10 percent of their global turnover. Following a consultation process, the Electronic Communications (Security Measures) Regulations 2022 came into force in October 2022, and the accompanying Telecommunications Security Code of Practice was issued in December 2022, providing guidance for complying with the regulations.7 The earliest implementation date for the most straightforward and least resource-intensive measures was March 31, 2024, meaning that the full consequences of the regulations were not observed during the coverage period. The government previously extended this deadline by one year in response to concerns from service providers that it would be onerous to implement the security measures on such a short timeframe.8 In December 2023, Ofcom published Draft Resilience Guidance for Communications Providers, meant to provide best practices on infrastructural and network resilience.9

The Telecommunications (Security) Act 2021 also allows the government to issue Designated Vendor Directions (DVDs) regarding high-risk vendors that are deemed threats to national security. The government produced a DVD for Huawei, for instance, when it banned the purchase of Huawei equipment and mandated its eventual removal by service providers (see A1). The act legally enshrines these measures and has been criticized by some legal scholars for potentially limiting market diversity.10

A5 (0–4 points)
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? Score: 4/4

The various entities responsible for regulating internet service and content in the UK generally operate impartially and transparently.

Ofcom is an independent statutory body. It has broadly defined responsibility for the needs of “citizens” and “consumers” regarding “communications matters” under the Communications Act 2003.1 It is overseen by Parliament and also regulates the broadcasting and postal sectors.2 Ofcom has some authority to regulate content with implications for the internet, such as video-on-demand (VOD) content,3 and it is tasked with enforcing the Online Safety Act 2023 (see B3, B6, C2, and C4).4 The appointment of Michael Grade as the Ofcom chair in 2022 sparked controversy.5 Grade, a former British Broadcasting Corporation (BBC) board chairman, was confirmed as Ofcom’s chairman in April 2022 and began his four-year term in May. Politicians and civil society representatives questioned his independence from the government, which was then controlled by the Conservative Party, as well as his expertise.6

Nominet, a nonprofit company operating in the public interest, manages the .uk country-code domain as well as the .wales and .cymru domains.

Other groups regulate services and content through voluntary ethical codes or coregulatory rules under independent oversight. In 2012, major ISPs published a Voluntary Code of Practice in Support of the Open Internet, which commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade competitors’ services.7 Amendments to the code clarify that signatories could deploy content filtering for public Wi-Fi access.8 Ofcom also maintains voluntary codes of practice related to internet speed provision, dispute resolution, and the sale and promotion of internet services.9

Criminal online content is addressed by the Internet Watch Foundation (IWF), an independent self-regulatory body funded by Nominet and industry associations,10 though some of this content was set to be regulated by Ofcom under the Online Safety Act 2023. The Advertising Standards Authority and the Independent Press Standards Organisation regulate newspaper websites.

The Digital Regulation Cooperation Forum (DRCF)—formed in July 2020 by the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), and Ofcom, and later joined by the Financial Conduct Authority—was created to promote greater cooperation between entities on online regulatory matters, including the regulation of artificial intelligence (AI).11

B Limits on Content

B1 (0–6 points)
Does the state block or filter, or compel service providers to block or filter, internet content, particularly material that is protected by international human rights standards? Score: 4/6

Blocking generally does not affect political and journalistic content or other internationally protected forms of online expression. Service providers block and filter some content that falls into one of three categories: copyright infringement, promotion of terrorism, and depiction of child sexual abuse. Optional filtering can be applied to additional content, particularly material that is considered unsuitable for children.

According to measurements conducted by the Open Observatory of Network Interference (OONI), the websites of the Russian state media outlets Sputnik and RT both showed signs of blocking in several European countries, including the UK, beginning around March 2022, shortly after the Russian military launched its full-scale invasion of Ukraine. The sites continued to present signs of blocking in the UK through the end of the current coverage period.1
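
OONI's measurement data is publicly queryable, so findings like these can be checked directly. Below is a minimal sketch using OONI's public v1 measurements API to list web-connectivity results for rt.com from UK probes over the coverage period; the endpoint and parameters follow OONI's documented API, but the specific response fields referenced should be verified against current documentation.

```python
import requests

# Query OONI's public measurement API for web-connectivity tests of rt.com
# run from probes in the United Kingdom (probe_cc=GB) during the report's
# coverage period. Endpoint and parameter names follow OONI's v1 API.
API = "https://api.ooni.io/api/v1/measurements"
params = {
    "probe_cc": "GB",                # two-letter code for the probe's country
    "domain": "rt.com",              # site under test; swap in sputniknews.com as needed
    "test_name": "web_connectivity",
    "since": "2023-06-01",           # start of the coverage period
    "until": "2024-05-31",           # end of the coverage period
    "limit": 50,
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()

for m in resp.json().get("results", []):
    # "anomaly" marks measurements that deviated from control results,
    # a sign (not proof) of network interference.
    print(m.get("measurement_start_time"), m.get("input"),
          "anomaly:", m.get("anomaly"))
```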

ISPs are required to block domains and URLs found to be hosting material that infringes copyright when ordered to do so by the High Court (see B3).2

URLs hosted overseas that contain content reported by police for violating the Terrorism Act 2006, which prohibits the glorification or promotion of terrorism, are included in the optional child filters supplied by many ISPs.3 “Public estates” like schools and libraries also block such URLs.4 The content can still be accessed on private devices.5

ISPs block URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content in accordance with the Internet Service Providers’ Association’s voluntary code of practice.6 Mobile service providers also block URLs identified by the IWF as containing such content.

All mobile service providers and major ISPs that provide home service filter legal content that is considered unsuitable for children.7 Mobile service providers enable these filters by default, requiring customers to prove that they are over age 18 to access the unfiltered internet.8 Categories treated as suitable only for adults include the promotion of illegal drugs, sex education, and discriminatory language.

These optional filters can affect a range of legitimate content pertaining to public health, LGBT+ topics, drug awareness, and even information published by civil society groups and political parties (see B3).9 A 2014 Ofcom report found that ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”10 For instance, the proxy website Anonymouse.org was blocked for certain users during the current coverage period.11

B2 (0–4 points)
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content, particularly material that is protected by international human rights standards? Score: 3/4

Political, social, and cultural content is generally not subject to forced removal, though excessive enforcement of rules against illegal content can affect protected speech (see B1). The government, through the Online Safety Act 2023, continues to develop regulations that would compel platforms to restrict content that is deemed harmful to children, but not necessarily illegal (see B3, B6, C2, and C4).

In March 2022, after the European Union (EU) banned broadcasts, sharing of social media content, and app downloads from RT and Sputnik, then UK culture secretary Nadine Dorries requested that the social media platforms Facebook, Twitter (known as X after mid-2023), and TikTok block access to the outlets’ content in the UK as well (see B1).1 Facebook owner Meta reported that it would restrict access to both outlets across the UK.2 Ofcom revoked RT’s broadcasting license later that month.3

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful to terrorists, or incites people to carry out or support terrorism.

When child sexual abuse images or criminally obscene adult materials are hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have the material taken down. When content is hosted on servers overseas, the IWF coordinates with international hotlines and police to have the offending content taken down in the host country.4 Similar processes exist under the oversight of True Vision, a site that is managed by the National Police Chiefs’ Council (NPCC), for the investigation of online materials that incite hatred.5

The government’s National Security Online Information Team (NSOIT), an entity initially formed to counteract online disinformation (see B5), is known to flag certain content for voluntary removal by social media platforms—particularly if it violates a platform’s terms of service or is considered harmful.6 However, this unit does not issue orders for platforms to remove such content.7 According to statistics shared by the digital rights group Big Brother Watch in June 2024, the government flagged 779 pieces of content in 2020 but had made just 35 flags since January 2023.8

In 2019, the European Court of Justice ruled that search engines do not have to apply the right to be forgotten—the removal of links from search results at the request of individuals if the items in question are deemed to be inadequate or irrelevant—for all global users after receiving an appropriate request to do so in the EU.9 In April 2018, the UK’s High Court had ordered Google to delist search results about a businessman’s past criminal conviction in its first decision on the right to be forgotten. In another case, the court rejected a similar claim made by a businessman who was sentenced for a more serious crime.10

Despite the UK’s official withdrawal from the EU in 2020, the British government and the data protection regulator, the ICO, committed to implementing the EU’s General Data Protection Regulation (GDPR),11 which came into force in May 2018 (see C6). The right to be forgotten, along with other rights enshrined in the GDPR, continues to apply in the UK under the UK GDPR.12

B3 (0–4 points)
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? Score: 3/4

The regulatory framework and procedures for managing online content are largely proportional, transparent, and open to correction. However, the optional filtering systems operated by ISPs and mobile service providers—particularly those meant to block material that is unsuitable for children—have been criticized for a lack of transparency, inconsistency among providers, and excessive application that affects legitimate content.

The Online Safety Act (see B6, C2, and C4), a regulatory framework that compels search engines and online platforms to address and remove illegal and certain harmful content, was adopted by Parliament in September 2023 and received royal assent to become law in October.1 The government had initially published the draft bill in May 2021, defining the statutory duties of care as an obligation “to moderate user-generated content in a way that prevents users from being exposed to illegal and harmful content online.”2

The act, as passed, applies to illegal content and “content that is harmful to children.” Provisions that would have mandated the removal of content that is “legal but harmful” to adults were dropped from the draft bill in November 2022.3 Priority illegal content includes child sexual exploitation and abuse (CSEA), terrorist content, and additional categories specified in Schedule 7 of the act, such as content threatening to kill someone.4 Section 60 of the law broadly defines “content that is harmful to children” as “primary priority content that is harmful to children,” “priority content that is harmful to children,” or content “of a kind which presents a material risk of significant harm to an appreciable number of children in the United Kingdom.”5 The two former categories are defined in Sections 61 and 62 of the act, respectively, and include various types of content, including pornographic content and content that encourages suicide, an act of self-harm, or an eating disorder.6 While the law does not expressly require the use of automated content-moderation tools, Ofcom may order online services to use “accredited technology” to remove content related to terrorism or CSEA, and to “swiftly take down that content.” The act grants Ofcom the power to issue such notices when “necessary and proportionate.”7 These provisions have the potential to undermine end-to-end encryption (see C4).

The law targets search engines and “user-to-user” services, defined as an internet service that hosts user-generated content or facilitates public or private interaction between at least two people.8 After provisions related to “legal but harmful” content were dropped from the bill, certain platforms designated as “Category 1” services, based on their function and number of users, were instead required to provide optional “user empowerment” tools, allowing adult users to filter certain harmful content, including that which promotes self-harm or incites hatred.9 Category 1 services are also required to protect “journalistic content,” which includes content “generated for the purposes of journalism” that is “UK-linked,” meaning it is aimed at UK users or “is or is likely to be of interest” to such individuals.10 Protections for “news publisher content” and “journalistic content” include an expedited content-removal appeals process for journalists and creators of such content.11 Platforms must provide “recognized news publishers” an opportunity to appeal before removing journalistic content, except in cases where the provider could reasonably expect to be held civilly or criminally liable for hosting such content.12

Ofcom is empowered to enforce these duties through several provisions. For noncompliant services, Ofcom may issue notices and fines of up to £18 million ($22.8 million) or 10 percent of global turnover, whichever is higher.13 In addition, Ofcom has the ability to request a court order for an interim or permanent suspension of service for noncompliant platforms.14 If the suspension orders are deemed ineffective, Ofcom is then empowered to petition a court for interim and long-term access-restriction orders.15
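
As a rough illustration of how the "whichever is higher" penalty ceiling works, the following sketch computes the statutory maximum for hypothetical services; the turnover figures are invented for the example and do not reflect actual enforcement actions.

```python
# Illustrative only: the Online Safety Act caps fines at the greater of
# GBP 18 million or 10 percent of a service's global turnover.
# All turnover figures below are hypothetical.
FLOOR_GBP = 18_000_000
TURNOVER_SHARE = 0.10

def max_penalty(global_turnover_gbp: float) -> float:
    """Return the statutory maximum fine for a given global turnover."""
    return max(FLOOR_GBP, TURNOVER_SHARE * global_turnover_gbp)

# Small service (GBP 50 million turnover): the GBP 18 million floor binds.
print(f"{max_penalty(50e6):,.0f}")    # 18,000,000
# Large platform (GBP 100 billion turnover): the 10 percent share dominates.
print(f"{max_penalty(100e9):,.0f}")   # 10,000,000,000
```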

Ofcom is also responsible for drafting relevant codes of practice for compliance with duties established by the law, a process that began during the current coverage period.16 In November 2023, Ofcom started the first phase of the law’s implementation, with draft codes and guidance on companies’ illegal harms duties.17 Ofcom indicated that it would reach final decisions on this guidance in late 2024, before it is transmitted to Parliament.18 In May 2024, Ofcom opened consultations on draft codes of practice related to the protection of children,19 and said it expected to publish draft guidance for the protection of women and girls by the first half of 2025.20 The final stage of implementation was set to focus on additional requirements, including transparency reports and the deployment of user empowerment tools, for certain categorized services. Ofcom planned to publish draft proposals concerning the additional duties of categorized services in early 2025 and issue transparency notices in mid-2025.21

Under the Digital Economy Act 2017, ISPs are legally empowered to use both blocking and filtering methods, if allowed by their terms and conditions of use.22 Civil society groups have criticized the default filters used by ISPs and mobile service providers to review content deemed unsuitable for children, arguing that they lack transparency and affect too much legitimate content, which makes it difficult for consumers to make informed choices and for content owners to appeal wrongful blocking (see B1).

ISPs block URLs using content-filtering technology known as Cleanfeed, which was developed by BT in 2004.23 The process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables blocking of individual URLs rather than entire domains.
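
The distinction between URL-level and domain-level blocking can be made concrete with a short sketch. This is a conceptual illustration only: Cleanfeed's actual two-stage design (routing traffic for suspect IP addresses through a proxy that then matches exact URLs) is proprietary, and the blocklist entries below are placeholders.

```python
from urllib.parse import urlsplit

# Conceptual contrast: domain-level blocking removes access to every page
# on a host, while URL-level filtering (the Cleanfeed approach) blocks
# only specific pages. Entries are placeholders, not real blocklist data.
BLOCKED_DOMAINS = {"blocked-site.example"}
BLOCKED_URLS = {"http://shared-host.example/abuse/page1"}

def domain_blocked(url: str) -> bool:
    """Coarse: True for any page on a listed host."""
    return urlsplit(url).hostname in BLOCKED_DOMAINS

def url_blocked(url: str) -> bool:
    """Granular: True only for exact listed URLs."""
    return url in BLOCKED_URLS

print(domain_blocked("http://blocked-site.example/any/page"))  # True
print(url_blocked("http://shared-host.example/abuse/page1"))   # True
print(url_blocked("http://shared-host.example/legit/home"))    # False: rest of host unaffected
```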

ISPs are notified about websites hosting content that has been determined to violate or potentially violate UK law under at least three different procedures. The IWF compiles and distributes a list of specific URLs containing photographic or computer-generated depictions of CSEA or criminally obscene adult content to ISPs;24 the police’s Counter-Terrorism Internet Referral Unit (CTIRU) compiles an unpublished list of URLs hosted overseas that contain material considered to glorify or incite terrorism under the Terrorism Act 2006, and the listed URLs are filtered on public-sector networks;25 and the High Court can order ISPs to block websites found to be infringing on copyright under the Copyright, Designs, and Patents Act 1988.26 Copyright-related blocking has been criticized for its inefficiency and opacity.27

In some cases, mobile service providers’ filtering activity may be outsourced to third-party contractors, further limiting transparency.28 MVNOs are believed to be capable of using their parent service’s filtering infrastructure.29 The filtering is based on a classification framework for mobile content published by the British Board of Film Certification (BBFC), the designated age-verification regulator.30 The BBFC adjudicates appeals from content owners and publishes the results quarterly.31

Website owners and companies that knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users—an intermediary liability regime that the British government has continued to uphold from EU Directive 2000/31/EC (the E-Commerce Directive).32 Updates to the Defamation Act, effective since 2014, limit companies’ liability for user-generated content that is considered defamatory. However, the Defamation Act only offers protection from private libel suits based on third-party postings if the plaintiff is able to identify the user responsible for the allegedly defamatory content.33 The act does not specify what sort of information the website operator must provide to plaintiffs, but it raised concerns that websites would register users and restrict anonymity in order to avoid civil liability.34

B4 (0–4 points)
Do online journalists, commentators, and ordinary users practice self-censorship? Score: 3/4

Self-censorship, though difficult to assess, is not understood to be a widespread problem in the UK. However, due to factors including the government’s extensive surveillance practices, it appears likely that some users censor themselves when discussing sensitive topics to avoid potential government intervention or other repercussions (see C5).1

While pro-Palestinian activists were mobilizing support online during the coverage period (see B8), civil society organizations—including Amnesty International UK and Article 19—warned that the government’s restrictive stance toward pro-Palestinian protests could have a chilling effect on such demonstrations,2 and there were concerns that the same effect could extend to online expression.

In March 2023, during the previous coverage period, BBC soccer broadcaster Gary Lineker was temporarily suspended from his broadcasting duties after he criticized the government’s asylum policies in a Twitter post.3 In explaining its decision, the BBC cited impartiality guidelines, but some considered the suspension to be an attack on free expression.4

B5 (0–4 points)
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? Score: 3/4

Score Change: The score declined from 4 to 3 due to a deteriorating environment with respect to online content manipulation, including reports that foreign actors were spreading false information.

Reports of online content manipulation have typically increased during general elections and at other politically sensitive moments, with foreign, partisan, and extremist groups allegedly using automated “bot” accounts, fabricated news, and altered images to shape discussions on social media platforms. During the coverage period, such reports escalated ahead of the UK’s July 2024 general elections.

In March 2024, Cardiff University security researchers reported that a Russia-based disinformation network had amplified social media conspiracy theories about the health of Catherine, Princess of Wales. According to the researchers, the network attempted “to fan the online flames of an existing story” with the aim of causing confusion and sowing social distrust,1 particularly toward the media and the royal family.2 In one instance, a manipulated image of Catherine, which was used to question the media’s trustworthiness, was disseminated by a network of 45 accounts on X; the accounts were all potentially linked to a Russian disinformation network called Doppelgänger, which has reportedly targeted other European countries and the United States.3

Media coverage documented potential efforts to inauthentically shape online conversations in the weeks leading up to the July 2024 general elections. In June 2024, the BBC reported that dozens of accounts on Facebook, Instagram, TikTok, and X had posted hundreds of messages in support of the right-wing Reform UK party. While the BBC confirmed that at least some of the accounts belonged to real users, it identified more than 50 accounts that demonstrated signs of inauthentic behavior, such as sharing repetitive content—potentially in an effort to artificially boost support for the party. These developments were reported after the coverage period, and it remained unclear whether the accounts were already active in May 2024,4 when the elections were announced.5

According to Meta, the UK was targeted with the third-highest number of coordinated inauthentic behavior networks in the world, after the United States and Ukraine, between 2017 and 2022.6 In December 2021, Meta reported removing a network of eight Facebook accounts and 126 Instagram accounts, evidently based in Iran, that primarily targeted Scotland and the UK as a whole to promote Scottish independence.7

The government has launched sometimes-controversial initiatives to counter the spread of false and misleading information online. It runs a counter-disinformation campaign called SHARE—previously known as Don’t Feed the Beast—that provides users with a checklist of features to note before sharing posts and media online.8 In 2019, the government established the Counter Disinformation Unit (CDU) under the Department for Digital, Culture, Media, and Sport.9 The CDU, which was formed to combat false information that undermines public health, national security, or public safety, was rebranded as the NSOIT in November 2023 after critics claimed that it had been used to quiet criticism of the government. The unit, which is known to flag certain content for voluntary removal by social media platforms (see B2), allegedly flagged posts by Conservative, Labour, and Green Party politicians that were accurate yet critical of the government; officials rejected these accusations.10 The NSOIT conducts open-source social media monitoring as part of its work (see C5).11

In 2021, the government published the RESIST 2 Toolkit for civil servants and other stakeholders to help protect their audiences and defend their organizations against the threat of mis- and disinformation.12 The Online Safety Act requires Ofcom to create a disinformation and misinformation advisory committee to provide guidance to online services (see B3, B6, C2, and C4).13

In March 2023, during the previous coverage period, leaked emails and WhatsApp messages from 2020–22 appeared to show evidence that the Conservative government had pressured the BBC over the use of the word “lockdown” during the early COVID-19 pandemic, and asked journalists to be more critical of the Labour Party. One source at the BBC alleged that the government had directly influenced headlines published on the BBC’s website “on a very regular basis.”14

B6 (0–3 points)
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? Score: 3/3

Online media outlets face economic constraints that negatively impact their financial sustainability, but these are the result of market forces, not political intervention.

Publishers have struggled to find a profitable model for their digital platforms, though more than half of the population reportedly consumes news online. In 2023, a survey conducted for Ofcom found that 68 percent of adults used the internet to access news, with Facebook being the most popular means to do so among social media platforms.1

Ofcom is responsible for enforcing the EU’s 2015 Open Internet Regulation, which includes an obligation for ISPs to ensure net neutrality—the principle that internet traffic should not be throttled, blocked, or otherwise disadvantaged on the basis of content. This regulation was revised slightly but largely preserved in UK law after the country's exit from the EU was finalized in 2020.2

The Online Safety Act empowers Ofcom to fine online services the greater of £18 million ($22.8 million) or 10 percent of their global turnover if they do not comply with the act’s provisions (see B3, C2, and C4), which could impact their ability to operate in the UK.

In May 2024, the Media Act 2024 received royal assent and became law after it was passed by Parliament earlier that month.3 The law empowered Ofcom to regulate and sanction VOD services, such as Netflix and Disney+, in line with traditional broadcasters for the first time, even if they are not based in the UK.4 Under the law, Ofcom must draft and enforce a new code that creates standards for harmful or offensive content and due accuracy in news for certain regulated VOD services, among other requirements.5 Ofcom planned to publish this code in mid-2025, following consultations with stakeholders.6

B7 (0–4 points)
Does the online information landscape lack diversity and reliability? Score: 4/4

The online information landscape is diverse and lively. Users have access to the online content of virtually all national and international news organizations. While there are a range of sources that present diverse views and appeal to various audiences and communities, the ownership of leading news outlets is relatively concentrated,1 and particular media groups have been accused of political bias.

The publicly funded BBC, which maintains an extensive online presence, has an explicit diversity and inclusion strategy designed to increase the representation of women and LGBT+ people, as well as people from different age ranges and ethnic and religious groups.2 Similar models have been adopted by other national broadcasters.3

In recent years, the CMA has endeavored to boost competition among digital platforms. In June 2022, the CMA vowed to examine the dominance of Apple and Google’s mobile browsers, citing their “effective duopoly” in the mobile environment,4 and it commenced a market investigation reference into the mobile browsers and cloud gaming market in November 2022. After Apple appealed this action, in March 2023 the Competition Appeal Tribunal found that the CMA lacked the standing to make a market investigation reference. The CMA appealed in turn and, in November 2023, the Court of Appeal reinstated the authority’s original decision to launch the investigation. The investigation remained on hold during the coverage period, pending Apple’s potential appeal to the Supreme Court.5

After the coverage period, online information falsely claiming that a Muslim asylum seeker was responsible for the July 2024 murder of three girls in Southport served as a catalyst for several days of riots across much of the country.6 The false claims, which also alleged that the supposed perpetrator had entered the UK illegally, and subsequent violence were fueled by far-right and anti-immigrant groups and individuals.7

B8 (0–6 points)
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? Score: 6/6

Online mobilization tools are freely available and commonly used to organize offline,1 and collective action remains robust in terms of both numbers of participants and variety of campaigns.

Some groups use digital tools to document and combat bigotry, including Tell MAMA (Measuring Anti-Muslim Attacks), which tracks reports of attacks or abuse submitted by British Muslims online.2 Petition and advocacy platforms such as 38 Degrees have emerged, and nongovernmental organizations (NGOs) view online communication as an indispensable part of any campaign strategy. Such tools have been used extensively in recent pro-Palestinian campaigns and protests; for instance, a petition that circulated on Parliament’s official website urged the government “to recognise the state of Palestine immediately” and garnered more than 283,000 signatures before it closed in May 2024, meaning Parliament must consider it for debate.3 However, there are concerns that the government’s response to pro-Palestinian activism, which has included some restrictions on in-person protests, could have a chilling effect on such online mobilization (see B4).4

Other prominent campaigns in recent years have included the “Don’t Scan Me!” campaign by the Open Rights Group (ORG), which opposed provisions of the Online Safety Act that could weaken encryption (see C4),5 and the “Hands Off Our Data” campaign against the Data Protection and Digital Information Bill and other potential threats to privacy safeguards in the UK (see C6).6 Another ORG campaign, called “End Pre-Crime,” warns against law enforcement agencies’ use of advanced technology to preemptively identify individuals who are supposedly likely to commit crimes.7

Meanwhile, Big Brother Watch’s campaign “Stop Facial Recognition” opposes the use of facial recognition cameras by law enforcement agencies and private companies, urging these actors through a digital petition to halt the deployment of such technology.8

C Violations of User Rights

C1 (0–6 points)
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? Score: 5/6

The UK does not have a written constitution or similarly comprehensive legislation that defines the scope of governmental power and its relation to individual rights. Instead, constitutional powers and individual rights are addressed in common law as well as various statutes and conventions. The provisions of the Council of Europe’s European Convention on Human Rights were adopted into law through the Human Rights Act 1998.

In December 2021, the government launched a consultation on reforming the Human Rights Act.1 In June 2022, the government published the Bill of Rights Bill, which would repeal and replace the Human Rights Act.2 It included significant changes to the UK’s human rights framework, requiring claimants to prove that they have suffered “significant disadvantage” and giving Parliament, rather than the courts, primacy in decision-making when competing rights and interests are at stake. The bill maintained that courts must give “great weight” to the importance of freedom of speech, but also established exemptions in some areas, including criminal proceedings and matters relating to immigration, citizenship, and national security.3 In January 2023, Parliament’s Joint Committee on Human Rights recommended that the government make substantial changes to the bill or withdraw it entirely, saying it would significantly weaken the protections offered by the Human Rights Act.4 The government officially scrapped the bill in June 2023, during the current coverage period.5

C2 (0–4 points)
Are there laws that assign criminal penalties or civil liability for online activities, particularly those that are protected under international human rights standards? Score: 2/4

Political expression and other forms of online speech or activity are generally protected, but there are legal restrictions on hate speech, online harassment, and copyright infringement. Some measures—including a 2019 counterterrorism law—could be applied in ways that violate international human rights standards.

The Counter-Terrorism and Border Security Act, which received royal assent in February 2019, includes several provisions related to online activity (see C5).1 Individuals can face up to 15 years in prison for viewing or accessing material that is useful or likely to be useful in preparing or committing a terrorist act, even if there is no demonstrated intent to commit such acts. The law includes exceptions for journalists or academic researchers who access such materials in the course of their work, but it does not address other possible circumstances in which access might be legitimate.2 “Reckless” expressions of support for banned organizations are also criminalized under the law. A number of NGOs argued that the legislation was dangerously broad and that its unclear definitions could be abused.3 Another counterterrorism law, the Counter-Terrorism and Sentencing Act, which received royal assent in April 2021, established prison sentences of up to 14 years for anyone who “supports a proscribed terrorist organization.”4

Stringent bans on hate speech are included in a number of laws, and some rights groups have said they are too vaguely worded.5 Defining what constitutes an offense has been made more difficult by the development of new communications platforms. One ban included in Section 127 of the Communications Act 2003 punishes “grossly offensive” communications sent through the internet.6 The maximum penalty is an unlimited fine and six months in prison.

The Online Safety Act 2023 repealed certain provisions of the Communications Act 2003 and the Malicious Communications Act 1988,7 and outlined several new communications offenses,8 which took effect in January 2024.9 Among these, the law criminalized the intentional dissemination of false and threatening communications; “cyberflashing,” in which an individual sends an unsolicited intimate image to another; and encouraging or aiding in serious self-harm.10 Such offenses can be punished by a fine, imprisonment of up to five years, or both.11 Critics have warned that the vagueness of these portions of the act may affect legitimate speech.12

The Copyright, Designs, and Patents Act 1988 carried a maximum two-year prison sentence for offenses committed online. In 2015, the government held a public consultation regarding a proposal to increase the maximum sentence to 10 years, and the change was ultimately incorporated into the Digital Economy Act 2017.

In March 2021, the Scottish Parliament passed the Hate Crime and Public Order (Scotland) Act, through which lawmakers aimed to extend and modernize existing hate crime legislation; it became law in April 2021 and entered into force in April 2024.13 The law creates criminal offenses for speech and acts aimed at “stirring up hatred” against groups based on protected characteristics, including age, disability, race, religion, sexual orientation, and transgender identity.14 Violators of the law face up to 12 months’ imprisonment and a fine for summary conviction, and up to seven years in prison for a conviction by jury trial. Civil society groups, including the ORG, have raised concerns that the law has a broad scope and a low threshold for prosecution,15 particularly noting that the criteria for “insult” are not clearly defined and could criminalize the sharing of online material that is merely offensive.16 In March 2024, ahead of the law’s implementation, Police Scotland indicated that it would not target performing artists—including actors and comedians—under the law.17

C3 (0–6 points)
Are individuals penalized for online activities, particularly those that are protected under international human rights standards? Score: 5/6

Police have arrested internet users for promoting terrorism, issuing threats, or engaging in racist abuse or other hate speech. In some past cases, the authorities have been accused of overreach in their enforcement efforts.1 Prison sentences for internationally protected political, social, and cultural speech remain rare.2

In February 2023, a man received a suspended 16-week prison sentence after he sent a threatening email to Member of Parliament Jeremy Hunt in October 2022. The email stated that Hunt’s “house will be on fire this winter.”3 In March 2022, Twitter user Joseph Kelly was sentenced to 150 hours of community service and 18 months of supervision under Section 127 of the Communications Act 2003 for a “grossly offensive” post about a British Army officer in February 2021. Kelly’s post, published the day after the officer’s death, said that “the only good Brit soldier is a [dead] one.”4 Local police departments have the discretion to pursue criminal complaints in cases that would be treated as civil offenses in many democracies. The NPCC operates True Vision, an online portal to facilitate the reporting of hate crimes to law enforcement agencies.5

Cases of offensive humor have been prosecuted in recent years. In June 2022, a former officer for the West Mercia Police was convicted on charges of “sending an offensive, indecent, obscene or menacing image via a public electronic communications network” after he posted 10 racist memes in a WhatsApp group chat in 2020; some of the memes mocked the murder of George Floyd, a Black man killed by police in the United States. The man, who was a serving officer at the time the messages were sent, was sentenced to 20 weeks in jail.6

C4 (0–4 points)
Does the government place restrictions on anonymous communication or encryption? Score: 2/4

Users are not required to register to obtain a SIM card, allowing for the anonymous use of mobile devices.1 However, some laws provide authorities with the means to undermine encryption, and provisions of the newly adopted Online Safety Act 2023 could facilitate additional restrictions.

There are several laws that could allow authorities to compel decryption or require a user to disclose passwords, including the Regulation of Investigatory Powers Act 2000 (RIPA), the Terrorism Act 2000, and the Investigatory Powers Act 2016 (IP Act) (see C5 and C6).2 Although such powers are seldom invoked in practice, some users have faced detention for failing to provide passwords.3

The Online Safety Act (see B3, B6, and C2), which requires age verification for access to online pornography, has ignited civil society concerns over its potential to compromise anonymity and encryption.4 Part 5 of the law introduces duties for providers that publish or display pornographic content to ensure that children are not able to encounter pornography on their services. These services must introduce means of “age assurance,” through age verification, age estimation, or a combination of both, that are “highly effective” at determining whether or not someone is a child.5 The duties had not yet been implemented during the coverage period, as guidance from Ofcom was still pending. The draft guidance was published in December 2023,6 and the final guidance was expected to be published in early 2025, after which the duties would enter into force.7

Under the act, Ofcom can mandate that online services employ government-approved software, referred to as “accredited technology,” to find images depicting CSEA, “whether communicated publicly or privately by means of the service.”8 These orders, which can be issued to services that use end-to-end encryption and consequently cannot technically inspect user messages, have been criticized as an attempt to compel companies to abandon or compromise their encryption systems.9 Unsuccessful amendments addressing the matter, including one suggesting the removal of “or privately” from the text, were considered in the House of Lords.10 Amid pushback from leading messaging applications, the government claimed in August 2023 that the accredited technologies could be compatible with encryption.11

C5 (0–6 points)
Does state surveillance of internet activities infringe on users’ right to privacy? Score: 2/6

UK authorities are known to engage in surveillance of digital communications, including mass surveillance, for intelligence, law enforcement, and counterterrorism purposes. A 2016 law, which was reformed during the coverage period, introduced some oversight mechanisms to prevent abuses, but it also authorized bulk collection of communications data and other problematic practices. A 2019 counterterrorism law empowered border officials to search travelers’ devices, undermining the privacy of their online activity.

The Counter-Terrorism and Border Security Act gives border agents the ability to search electronic devices at border crossings and ports of entry with the aim of detecting “hostile activity”—a broad category including actions that threaten national security, threaten the economic well-being of the country in a way that touches on security, or are serious crimes (see C2).1 Those stopped are required to provide information when requested by border officers, including device passwords.2 In April 2023, French book publisher Ernest Moret was arrested by British border officials after he refused to provide the passwords to his phone and computer; Moret had been stopped over his role in antigovernment protests in France.3 In July 2023, an independent review found that British authorities did not have reasonable cause to detain Moret or demand his passwords.4

The IP Act codified law enforcement and intelligence agencies’ surveillance powers, which had previously existed in multiple statutes and authorities, in a single omnibus law.5 It covers interception, equipment interference, and data retention, among other topics.6 The IP Act has been criticized by industry associations, civil rights groups, and the wider public, particularly for the range of powers it authorizes and its legalization of bulk data collection.7

The IP Act specifically enables the bulk interception and acquisition of communications data sent or received by individuals outside the UK, as well as bulk equipment interference involving “overseas-related” communications and information. When both the sender and receiver of a communication are in the UK, targeted warrants are required, though several individuals, groups, or organizations may be covered under a single warrant in connection with a single investigation.8 Part 7 of the IP Act introduced warrant requirements for intelligence agencies to retain or examine “personal data relating to a number of individuals” who are “unlikely to become of interest to the intelligence service in the exercise of its functions.”9

The IP Act established a new commissioner appointed by the prime minister to oversee investigatory powers under Section 227.10 The law includes other safeguards, such as “double-lock” interception warrants. These require approval from both the relevant secretary of state and an independent judge, though the secretary alone can approve urgent warrants.11 The act allows authorities to prohibit telecommunications providers from disclosing the existence of a warrant. Intercepting authorities that may apply for targeted warrants include police commissioners, intelligence service heads, and revenue and customs commissioners.12 Applications for bulk interception, bulk equipment interference, and bulk personal dataset warrants can only be made to the secretary of state “on behalf of the head of an intelligence service by a person holding office under the Crown” and must be reviewed by a judge.

In November 2023, the government introduced a series of amendments to the IP Act, which were swiftly approved by Parliament and enacted in April 2024 as the Investigatory Powers (Amendment) Act 2024.13 These reforms weakened privacy safeguards for the retention and examination of bulk personal datasets in which there is “low or no expectation of privacy.” Intelligence agencies are no longer expressly required to obtain a warrant prior to retaining such data, and the approval of a judicial commissioner (a serving or retired judge) is not required in cases where “there is an urgent need to grant the authorisation.”14 In January 2024, leading civil society organizations jointly expressed concerns about the reforms, arguing that they weakened privacy safeguards for the collection of bulk datasets by intelligence services and permitted the collection of internet connection records “for generalized, massive surveillance.”15

In May 2021, the High Court ruled that security agencies cannot use “general warrants,” outlined in Section 5 of the 1994 Intelligence Services Act, to order the hacking of computers or mobile devices. For example, under a “general warrant,” a security agency could request information from “all mobile phones used by members of a criminal network” to justify the hacking of these devices without having to obtain a specific warrant for each individual in the network. The ruling came after Privacy International, a UK-based NGO, challenged a 2016 decision by the Investigatory Powers Tribunal holding that the government could use these warrants to hack computers or mobile devices.16

UK authorities have been known to monitor social media platforms.17 A September 2023 investigation reported that the Department for Education had tracked the social media activities of at least nine educational experts, at times using the information to determine whether individuals were “unsuitable” to participate in government-sponsored events.18

Separate reporting from October 2021 detailed the recent expansion of the Metropolitan Police Service’s social media monitoring operations. A database used by the service’s Project Alpha Team, which was created in 2019 and employs covert methods to monitor social media platforms, compiled information gathered from both public and private social media accounts; between the team’s creation and 2021, the number of categories of data being gathered more than doubled, from 16 to 34. While authorities claimed that Project Alpha’s goal was to combat online gang-related content, civil society groups warned of potential privacy violations and online racial profiling.19 Project Alpha continued to receive funding from the Home Office during the coverage period.20

A January 2023 investigation by Big Brother Watch documented the existence of government “disinformation units” that had reportedly been used to monitor the social media activities of users in the UK, including those who criticized the government. The investigation raised concerns about the government’s surveillance capabilities and its transparency surrounding such practices.21 The NSOIT, one of these units, has been known to flag certain social media content for voluntary removal by platforms (see B2), and it is forbidden from monitoring content that is not publicly accessible.22

C6 (0–6 points)
Does monitoring and collection of user data by service providers and other technology companies infringe on users’ right to privacy?
Score: 3 / 6

Companies are required to capture and retain user data under certain circumstances, though the government issued regulatory changes in 2018 to address flaws in the existing rules. While the government has legal authority to require companies to assist in the decryption of communications, the extent to which this power is used, and its efficacy in practice, remain unclear.

The UK has incorporated the EU’s GDPR into domestic law through the Data Protection Act 2018.1 This framework was intended to continue regulating data protection in the UK after the country’s exit from the EU. During the coverage period, however, the government considered the Data Protection and Digital Information Bill, which represented a significant departure from the GDPR. Among other changes, the bill would have loosened requirements for companies to complete data protection impact assessments, limiting them to cases of “high risk” processing.2 The bill raised significant concerns among civil society groups, including ORG, which argued that it would weaken data subject rights compared with the GDPR.3 The bill had not passed by the time Parliament was dissolved in May 2024 ahead of the July elections, meaning it was effectively scrapped.4

Data retention provisions under the IP Act allow the secretary of state to issue notices requiring telecommunications providers to capture information about user activity, including browser history, and retain it for up to 12 months.5 In response to a 2018 High Court ruling,6 the government issued the Data Retention and Acquiring Regulations 2018, which entered into force in October 2018. The regulations limited the scope of the government’s collection and retention of data and enhanced the transparency of the process.7 Furthermore, a new Office for Communications Data Authorisations was created to oversee data requests and ensure that official powers are used in accordance with the law.

The 2024 amendments to the IP Act established a new requirement for telecommunications service providers to notify the government of potential changes to products or services,8 such as the introduction of end-to-end encryption, that could affect “the capability of a relevant operator to provide any assistance which the operator may be required to provide in relation to any warrant, authorisation or notice” (see C5).9 In January 2024, civil society organizations warned that these amendments could prevent technology companies from improving privacy and security processes, in essence “transforming private companies into arms of the surveillance state and eroding the security of devices and the internet.”10

Meredith Whittaker, president of the nonprofit foundation behind the messaging application Signal, has warned that the government could “stop developers from patching vulnerabilities in code that the government or their partners would like to exploit.”11 The Investigatory Powers (Amendment) Act 2024 was one of the final pieces of legislation passed by the Conservative government prior to the dissolution of Parliament, and it received royal assent on April 25, 2024.12

C7 (0–5 points)
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in relation to their online activities?
Score: 4 / 5

There were no reported instances of violence against internet users in reprisal for their online activities during the coverage period, though cyberbullying and harassment against women remained widespread.1 According to a UK study conducted in February 2023, more than 10 percent of the 4,000 women and girls surveyed reported that they had experienced online violence—including threats, abusive messages, and the nonconsensual sharing of intimate images.2 Online harassment of Muslims and members of other religious, ethnic, and racial minority groups is also a significant problem.3

Women public officials continue to face harassment and abuse online. Research from September 2021 found that women and minority members of Parliament were at particular risk of receiving social media messages containing stereotypes about their identity or questioning their role as politicians.4 More recently, an analysis published by Internet Matters in March 2024, drawing on a survey of approximately 1,000 families, found that 77 percent of girls aged 13–16 had reported a harmful or potentially harmful online experience, notably higher than the figure for all children (66 percent). Many girls expressed the belief that experiencing harassment and other harms “is an intrinsic component of the digital space.”5

Politicians and elected officials have reported several incidents of online harassment and abuse in recent years.6 In a survey of 430 candidates from the May 2024 local elections in England, the Electoral Commission reported that 43 percent of candidates said they had experienced some form of abuse or intimidation, including 10 percent who described it as a serious problem. Of those who reported being targeted with a form of harassment, 55 percent said it had occurred online.7

C8 (0–3 points)
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack?
Score: 2 / 3

NGOs, media outlets, and activists are generally not targeted for technical attacks by government or nonstate actors, though such attacks sometimes occur. Financially motivated fraud and hacking attempts continue to present a challenge to authorities and the private sector.

In the government’s 2024 cybercrime survey, 50 percent of businesses reported that they had experienced a cyberattack in the past year, including 74 percent of large businesses. According to the report, phishing remained by far the most common form of attack.1

In October 2023, the House of Commons’ Science, Innovation, and Technology Select Committee said that only the United States and Ukraine were targeted by more cyberattacks than the UK.2 In the first six months of 2023, organizations that operate critical information-technology infrastructure services reported 13 cyberattacks that significantly disrupted their operations, up from the eight such incidents reported in the prior two years combined.3

The government’s Cyber Security Strategy 2022–30 recognized the importance of protecting critical national infrastructure from cyberthreats.4 In its 2023 annual review, the National Cyber Security Centre reported that the threat to the UK’s critical infrastructure was “enduring and significant,” partly due to the activities of state-affiliated threat actors, especially from China.5

In August 2023, the Electoral Commission reported that a complex cyberattack had exposed the names and addresses of anyone in Great Britain who was registered to vote between 2014 and 2022, the names and addresses of those registered to vote in Northern Ireland in 2018, and the names of overseas voters registered between 2014 and 2022,6 ultimately compromising the personal information of 40 million people.7 The attack, which the Electoral Commission said began in August 2021, went undetected until October 2022. In response, the commission admitted “that sufficient protections were not in place to prevent this cyber-attack.”8 In March 2024, then deputy prime minister Oliver Dowden accused the Chinese state of responsibility for the Electoral Commission breach and for another hacking campaign targeting members of Parliament, and said that the UK government had imposed sanctions on two individuals and a company linked to the Chinese government.9 That month, the Chinese embassy in the UK dismissed the accusations as “malicious slander.”10

During the previous coverage period, in January 2023, the Guardian reported that it was the victim of a ransomware attack in December 2022. The newspaper said that UK- and US-based employee data were accessed in the attack. Though the Guardian was able to continue publishing online and in print, it had to close its offices for several months.11 Executives indicated that they did not believe the newspaper was intentionally targeted because it is a media outlet.12
