United Kingdom

Status: Free
Overall Score: 77 / 100
A. Obstacles to Access: 22 / 25
B. Limits on Content: 30 / 35
C. Violations of User Rights: 25 / 40
Last Year's Score & Status: 77 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free).

Overview

Users in the United Kingdom (UK) generally enjoy substantial internet freedom, with few major constraints on access or content. However, following the lead of its European counterparts, the government has grown increasingly interested in regulating online platforms and content hosts in an effort to limit disinformation and curb material that is deemed harmful. Such policymaking efforts culminated in the release of two reports during the coverage period, most notably the controversial Online Harms White Paper. In a novel form of restriction on access, police ordered Wi-Fi connections to be suspended in some of London’s Underground mass-transit train stations amid peaceful civil-disobedience protests.

The UK—comprising England, Scotland, Northern Ireland, and Wales—is a long-standing democracy that regularly holds free elections and is home to a vibrant media sector. While the government enforces robust protections for political rights and civil liberties, recent years have featured concerns about increased government surveillance of residents as well as rising Islamophobia and anti-immigrant sentiment. UK referendum voters in 2016 narrowly supported leaving the European Union (EU), a process known colloquially as Brexit, but political disagreement about the details of the withdrawal continued to stall implementation during the coverage period.

Key Developments, June 1, 2018 – May 31, 2019

  • British transport police ordered internet service provider (ISP) Virgin Media to shut off Wi-Fi service in some London Underground stations in April 2019 amid protests and peaceful civil-disobedience actions by the environmentalist group Extinction Rebellion (see A3).
  • Also in April 2019, the government released its Online Harms White Paper, aimed at creating a new regulatory framework to tackle not only illegal online content, but also content that the government views as legal yet harmful (see B3).
  • Parliament’s Digital, Culture, Media, and Sport Committee released its report on disinformation and “fake news” in February 2019, recommending the establishment of new forms of legal liability to address the problem (see B3 and B5).
  • Separately in February 2019, the Counter-Terrorism and Border Security Act was approved, including several provisions that could affect online free expression and privacy (see C2 and C5).
  • In September 2018, the European Court of Human Rights found that parts of the UK’s bulk surveillance regime under the Regulation of Investigatory Powers Act 2000 violated the European Convention on Human Rights (see C5).

A Obstacles to Access

Information and communication technology (ICT) infrastructure is generally strong, and government policies and regulations tend to favor access. The overwhelming majority of UK residents use the internet frequently on a variety of devices, particularly smartphones, and substantial government-backed infrastructure investments have led to better service over time.

A1 0-6 pts
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? 6 / 6

Access to the internet is considered a key element of societal and democratic participation in the UK. Broadband access is almost ubiquitous, and nearly 100 percent of households are within range of ADSL (asymmetric digital subscriber line) connections. All national mobile service providers offer fourth-generation (4G) network technology.

The Digital Economy Act 2017 obliges service providers to offer minimum connection speeds of 10 Mbps.1 Access to “superfast” broadband connections, with an advertised speed of at least 30 Mbps, continues to expand.2 While the geographical coverage of superfast broadband networks has reached more than 90 percent of the UK, the communications regulator—the Office of Communications (Ofcom)—has noted that the country lags behind in fully fiber-optic broadband.3 In March 2018, the government launched a new voucher scheme that provides up to £3,000 ($3,800) toward installation costs for full-fiber broadband connections serving small and medium-sized enterprises.4 In October 2018, £200 million ($260 million) was allocated to enable an “outside-in” approach to providing full-fiber broadband in rural areas.5

Mobile telephone penetration is extensive. In 2017, some 73 percent of surveyed adults used mobile phones to access the internet, up from 36 percent in 2011.6 In 2018, 78 percent of adults viewed their mobile device as their primary means of internet access.7 For those over age 54, laptop computers remained the most popular devices.8

In 2010, the UK government awarded contracts to the Chinese telecommunications company Huawei to provide infrastructure for fixed-line and mobile internet service, including fifth-generation (5G) mobile technology. In order to allay security concerns, national security and intelligence agencies such as Government Communications Headquarters (GCHQ) arranged to monitor for any problems through regular audits and other measures. In July 2018, the fourth annual report on this monitoring raised significant concerns about Huawei-provided infrastructure, especially with respect to third-party equipment suppliers such as ZTE, another Chinese telecommunications firm.9 In 2017, China had passed national intelligence legislation that gave government agencies significant authority to interfere with Chinese manufacturers of telecommunications hardware.10 In April 2019, the UK decided to bar Huawei from providing “core” elements of the country’s developing 5G network.11

A2 0-3 pts
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? 2 / 3

Internet access continues to expand, gradually reducing regional and demographic disparities.

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries. The average monthly price for a typical mobile data package was £18.36 in 2017.1 The most affordable fixed-line broadband packages cost a little over £30 ($38) a month.2 Median gross weekly earnings for full-time workers were £569 ($730).3

According to the UK’s Office for National Statistics, 90 percent of households have access to the internet.4 However, those in the lowest income groups are significantly less likely to have home internet subscriptions, and the gap between socioeconomic groups has remained largely unchanged in recent years. Some 22 percent of people with disabilities have no internet access.5 There is no general gender gap in internet use, though about two-thirds of women over 75 have never used the internet. Women between 65 and 74 years old have experienced the largest rise in internet usage, from 47 percent in 2011 to 82 percent in 2019.

A3 0-6 pts
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? 5 / 6

The government does not exercise control over the internet infrastructure and does not routinely restrict connectivity. On April 17, 2019, however, British transport police ordered the ISP Virgin Media to shut off Wi-Fi service in some London Underground stations.1 The restriction came in response to protests and peaceful civil-disobedience actions by the environmentalist group Extinction Rebellion, which called on the government to reduce carbon emissions and combat climate change more aggressively.2 The group had publicized its plans to peacefully disrupt Underground service. Appendix 1 of the Wi-Fi Operational Agreement between Virgin Media and Transport for London details the process for implementing orders from police or security agencies to temporarily cut or restrict Wi-Fi service.3

The government does not place limits on the amount of bandwidth that ISPs can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services, such as peer-to-peer file sharing and television streaming. Mobile providers have cut back on previously unlimited access packages for smartphones, reportedly because of concerns about network congestion.

A4 0-6 pts
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? 5 / 6

There are few obstacles to the establishment of service providers in the UK, allowing for a competitive market that benefits users.

Major ISPs include BT (formerly British Telecom) with a 37 percent market share, Sky (24 percent), Virgin Media (20 percent), TalkTalk (12 percent), and others (8 percent).1 Ofcom continues to develop regulations to promote the unbundling of services so that incumbent owners of infrastructure continue to invest in their networks while also allowing competitors to make use of them.2

ISPs are not subject to licensing, but they must comply with general conditions set by Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.3

Among mobile service providers, EE (owned by BT since 2016) leads the market with 29 percent of subscribers, followed by O2 (27 percent), Vodafone (19 percent), Three (11 percent), and Tesco (8.5 percent).4 Mobile virtual network operators including Tesco provide service using the infrastructure owned by one of the other companies.

A5 0-4 pts
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? 4 / 4

The various entities responsible for regulating internet service and content in the UK generally operate impartially and transparently.

Ofcom, the primary telecommunications regulator, is an independent statutory body. It has broadly defined responsibility for the needs of “citizens” and “consumers” regarding “communications matters” under the Communications Act 2003.1 It is overseen by Parliament and also regulates the broadcasting and postal sectors.2 Ofcom has some authority over content with implications for the internet, such as regulating video content in keeping with the EU Audiovisual Media Services Directive.3

Nominet, a nonprofit company operating in the public interest, manages access to the .uk, .wales, and .cymru country domains. In 2013, Nominet implemented postregistration domain-name screening to suspend or remove domain names that encourage serious sexual offenses.4

Other groups regulate services and content through voluntary ethical codes or co-regulatory rules under independent oversight. In 2012, major ISPs published a Voluntary Code of Practice in Support of the Open Internet.5 The code commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade the services of a competitor. The code was amended in 2013 to clarify that signatories could deploy content filtering or provide such tools where appropriate for public Wi-Fi access.6 Ofcom also maintains voluntary codes of practice related to internet speed provision, dispute resolution, and the sale and promotion of internet services.7

Criminal online content is addressed by the Internet Watch Foundation (IWF), an independent self-regulatory body funded by the EU and industry associations (see B3).8 The Advertising Standards Authority and the Independent Press Standards Organization regulate newspaper websites. With the exception of child-abuse content, these bodies eschew prepublication censorship and operate postpublication notice and takedown procedures within the E-Commerce Directive liability framework (see B3).

B Limits on Content

Various categories of criminal content—such as depictions of child sexual abuse, promotion of extremism and terrorism, and copyright-infringing materials—are blocked by ISPs. Parental controls on content considered unsuitable for children are enabled by default on mobile networks, requiring adults to opt out to access adult material. These measures can result in excessive blocking, and there is a lack of transparency regarding the processes involved and the kinds of content affected. During the coverage period, the government advanced its plans to regulate online content that it deems harmful but not necessarily illegal.

B1 0-6 pts
Does the state block or filter, or compel service providers to block or filter, internet content? 5 / 6

Blocking generally does not affect political and journalistic content or other internationally protected forms of online expression. Service providers do block and filter some content that falls into one of three categories: copyright-infringing material, the promotion of terrorism, and depictions of child sexual abuse. Optional filtering can be applied to additional content, particularly material that is considered unsuitable for children.

The Digital Economy Act 2017 includes provisions that allow blocking of “extreme” pornographic material, setting standards that critics said were poorly defined and could be unevenly applied.1 The measures were set to come into force in April 2019, but they were postponed to July,2 and in June 2019 they were delayed again for at least six months due to an “administration oversight.”3

ISPs are required to block domains and URLs found to be hosting material that infringes copyright when so ordered by the High Court (see B3).4

Overseas-based URLs hosting content that has been reported by police for violating the Terrorism Act 2006, which prohibits the glorification or promotion of terrorism, are included in the optional child filters supplied by many ISPs.5 “Public estates” like schools and libraries also block such URLs.6 The content can still be accessed on private computers.7

ISPs block URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content in accordance with the Internet Services Providers’ Association’s voluntary code of practice (see A5). Mobile service providers also block URLs identified by the IWF as containing such potentially illegal content.

All mobile service providers and some ISPs that provide home service filter legal content that is considered unsuitable for children. Mobile service providers enable these filters by default, requiring customers to prove that they are over age 18 to access the unfiltered internet. In 2013, the four largest ISPs agreed with the government to present all customers with an “unavoidable choice” about whether to enable parentally controlled filters.8

Mobile UK, an industry group consisting of Vodafone, Three, EE, and O2,9 introduced filtering of content considered unsuitable for children in a code of practice published in 2004 and updated in 2013.10 Content considered suitable for adults only includes “the promotion, glamorization or encouragement of the misuse of illegal drugs”; “sex education and advice which is aimed at adults”; and “discriminatory language or behavior which is frequent and/or aggressive, and/or accompanied by violence and not condemned,” among other categories (see B3).

The four largest ISPs—BT, Sky, Virgin Media, and TalkTalk—offer all customers the choice to activate similar optional filters to protect children. The relevant categories vary by provider, but can include social networking, games, and sex education.11 Website owners can check whether their sites are filtered under one or more category, or report overblocking, by emailing the industry-backed nonprofit group Internet Matters,12 though the process and timeframe for correcting mistakes varies by provider.

These optional filters can affect a range of legitimate content pertaining to public health, LGBT+ topics, drug awareness, and even information published by civil society groups and political parties. In 2012, O2 customers were temporarily unable to access the website of the far-right British National Party.13 Civil society groups have criticized the subjectivity of the filtering criteria. A 2014 magazine article noted that all ISPs had blocked dating sites with the exception of Virgin Media, which operates one.14 An Ofcom report found that ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”15

Blocked, a site operated by the Open Rights Group, allows users to test the accessibility of websites and report excessive blocking and optional filtering by both home broadband and mobile internet providers.16 As of the beginning of October 2019, the number of blocked and filtered sites was reported to be 775,556.17 The total included sites related to advice for abuse victims, addiction counseling, LGBT+ subjects, and school websites.18

B2 0-4 pts
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content? 3 / 4

Political, social, and cultural content is generally not subject to forced removal, though excessive enforcement of rules against content that constitutes a criminal offense can affect protected speech (see B1). The government has recently worked to develop regulations that would compel platforms to restrict content that is deemed harmful but not necessarily illegal (see B3).

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful to terrorists, or incites people to carry out or support terrorism. By early 2018, the police’s Counter-Terrorism Internet Referral Unit (CTIRU), which compiles lists of such content, reported that 304,000 pieces of material had been taken down since 2010.1

In February 2018, the government announced that it had developed software that automatically detects and labels online content associated with the Islamic State (IS) militant group. The technology is aimed at smaller platforms and services that may not have sufficient resources to carry out similar functions.2 Security Minister Ben Wallace has made statements suggesting that the government could impose a punitive tax to push social media platforms and other technology companies to remove terrorist content more expeditiously.3

When child sexual abuse images or criminally obscene adult materials are hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have the material taken down. For content that is hosted on servers overseas, the IWF coordinates with international hotlines and police to have the offending content taken down in the host country.

Similar processes are in place for the investigation of online materials that incite hatred under the oversight of TrueVision, a site that is managed by the police.4 The government has accused social media platforms of not doing enough to combat hate speech.5 The government in 2017 announced plans for a national hub that would monitor online hate speech and help refer certain content to online platforms for their removal.6 As of March 2018, its initial budget was reportedly £200,000 ($260,000).7

In 2014, the European Court of Justice required search engines to remove links from their search results at the request of individuals if the information in question was deemed inadequate, irrelevant, or no longer relevant. This ruling on the “right to be forgotten” has had an impact on the way content is handled in the UK. In 2016, Google expanded the right to be forgotten by removing the designated links from all versions of its search engine.8 In April 2018, in its first decision on the “right to be forgotten,” the High Court ordered Google to delist search results about a businessman’s past criminal conviction. In another case, the court rejected a similar claim made by a businessman who was sentenced for a more serious crime.9

Despite the ongoing Brexit process, the government and the data protection regulator, the Information Commissioner’s Office (ICO), have committed to implementing EU guidance on data protection, the General Data Protection Regulation (GDPR),10 which came into force in May 2018 (see C6). Under the GDPR, the right to be forgotten will continue to apply in the UK.

B3 0-4 pts
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? 3 / 4

The regulatory framework and actual procedures for managing online content are largely proportional, transparent, and open to correction. However, the optional filtering systems operated by ISPs and mobile service providers—particularly those meant to block material that is unsuitable for children—have been criticized for a lack of transparency, inconsistency among different providers, and excessive application that affects legitimate content. Separately, the government has advanced plans to regulate additional content that it considers harmful but not necessarily illegal.

In April 2019, the government released its Online Harms White Paper, aimed at creating a new regulatory framework to tackle problematic content.1 It would apply to both criminal and merely harmful material, encompassing disinformation, trolling, sale of illegal items, nonconsensual pornography, hate speech, harassment, promotion of self-harm, and content uploaded by those who are incarcerated. The proposal would create an independent regulator to enforce companies’ compliance with a new statutory duty of care. Codes of practice would be established to explain how to properly manage harmful content. Companies would also be required to set up appeals processes. The regulator would be given the power to fine noncompliant companies, or impose liability on individuals within senior management. The proposal defines the affected companies as those “that allow users to talk or communicate with each other online,” including social media and messaging platforms, online forums, search engines, and file-hosting sites. At the end of the coverage period, the government was engaged in a consultation process with industry stakeholders and civil society. Civil society groups have criticized the proposal.2

Similarly, in February 2019, Parliament’s Digital, Culture, Media, and Sport Committee released its 2018 report on disinformation and “fake news” (see B5).3 The report recommended holding social media companies and similar broadcasting platforms legally liable for hosting “fake news.” It also suggested imposing other requirements in a new code of conduct for such companies.

Under the Digital Economy Act 2017, ISPs are legally empowered to use both blocking and filtering methods, if allowed by their terms and conditions of use.4 Prior to this amendment, ISPs provided only filtering mechanisms to users; the change allows ISPs to block websites at their own discretion. The Digital Economy Act also imposed a number of requirements on ISPs and content providers, notably Section 14(1), which obliges content providers to verify the age of users accessing online pornography. In February 2018, the British Board of Film Classification (BBFC) was designated as the age-verification regulator, and it launched a public consultation to develop guidance on the means and mechanisms for providers to achieve compliance.5 The age-verification mechanisms were set to be implemented in April 2018,6 but they have been delayed to allow more time to identify appropriate means of verification (see B1).7

Civil society groups have criticized the default filters used by ISPs and mobile service providers to review content deemed unsuitable for children, arguing that they lack transparency and affect too much legitimate content, which makes it difficult for consumers to make informed choices and for content owners to appeal wrongful blocking.

ISPs block URLs using content-filtering technology known as Cleanfeed, which was developed by BT in 2004.8 In 2011, a judge described Cleanfeed as “a hybrid system of IP address blocking and DPI-based URL blocking which operates as a two-stage mechanism to filter specific internet traffic.” While the process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables blocking of individual URLs rather than entire domains, it does not entail “detailed, invasive analysis of the contents of a data packet,” according to the judge’s description. Similar systems adopted by ISPs other than BT are “frequently referred to as Cleanfeed,” the judge wrote.9
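The two-stage design described in the judge's ruling can be illustrated in outline. The sketch below is purely illustrative, not drawn from any ISP's actual implementation (all addresses, URLs, and function names are hypothetical): a coarse first stage diverts only traffic bound for suspect IP addresses, and a second stage checks the full requested URL against a blocklist, so a single offending page can be blocked without blocking every site on a shared host.

```python
# Illustrative sketch of a two-stage, Cleanfeed-style hybrid filter.
# All IP addresses and URLs are hypothetical examples.

SUSPECT_IPS = {"203.0.113.7"}                      # stage 1: coarse IP watchlist
BLOCKED_URLS = {"http://host.example/bad-page"}    # stage 2: exact URL blocklist

def resolve(url: str) -> str:
    """Stand-in for DNS resolution (hypothetical mapping)."""
    return "203.0.113.7" if "host.example" in url else "198.51.100.1"

def filter_request(url: str) -> str:
    # Stage 1: traffic to non-suspect IPs passes without inspection.
    if resolve(url) not in SUSPECT_IPS:
        return "pass"
    # Stage 2: only diverted traffic has its full URL examined, so
    # other pages on the same shared host remain reachable.
    return "block" if url in BLOCKED_URLS else "pass"

print(filter_request("http://host.example/bad-page"))   # block
print(filter_request("http://host.example/ok-page"))    # pass
print(filter_request("http://other.example/anything"))  # pass
```

The point of the two stages is efficiency: only the small fraction of traffic headed for suspect addresses is subjected to URL-level inspection, which is why the judge characterized the system as not involving detailed analysis of packet contents for most users.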

ISPs are notified about websites hosting content that has been determined to violate or potentially violate UK law under at least three different procedures:

  • The IWF compiles a list of specific URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content; the list is distributed to ISPs and other industry stakeholders that support the foundation through membership fees.10 ISPs block those URLs in accordance with a voluntary code of practice set forth by the Internet Services Providers’ Association (see A5). IWF analysts evaluate material for potential violations of a range of UK laws,11 in accordance with a Sexual Offences Definitive Guideline published by the Sentencing Council under the Ministry of Justice.12 The IWF recommends that ISPs notify customers about why a blocked site is inaccessible,13 but some simply display error messages.14 The IWF website allows site owners to appeal their inclusion on the list. Citizens can also report criminal content via a hotline. An independent 2014 judicial review of the human rights implications of the IWF's operations found that the body’s work was consistent with human rights law.15 The IWF appointed a human rights expert in accordance with one of the review’s recommendations, but it deferred action on a recommendation to limit its remit to child sexual abuse.16 The list of sites blocked for hosting child sexual abuse imagery is not public.
  • The CTIRU, created in 2010, compiles a list of URLs hosted overseas that contain material considered to glorify or incite terrorism under the Terrorism Act 2006,17 and these are filtered on public-sector networks. The blacklist is not made public on the grounds that releasing it would facilitate access to unlawful materials.
  • The UK High Court can order ISPs to block websites found to be infringing copyright under the Copyright, Designs, and Patents Act 1988.18 The High Court has held that publishing a link to copyright-infringing material, rather than actually hosting it, does not amount to an infringement;19 this approach was confirmed by the Court of Justice of the European Union.20 A new intellectual property framework adopted in 2014 included exceptions for making personal copies of protected work for private use, as well as for “parody, caricature, and pastiche.”21 Copyright-related blocking has been criticized for its inefficiency and lack of transparency.22 In 2014, after lobbying from the London-based Open Rights Group, BT, Sky, and Virgin Media began informing visitors to sites blocked by court order that the order can be appealed at the High Court.23 While High Court orders are not kept from the public, they can be burdensome to obtain in practice.24

The processes by which mobile service providers block content that the industry group Mobile UK deems unsuitable for children lack transparency, and their effects vary across providers. In some cases the filtering activity may be outsourced to third-party contractors, further limiting transparency.25 Child-protection filters are enabled by default in mobile internet browsers, though users can disable them by verifying that they are over age 18. Mobile virtual network operators are believed to “inherit the parent service's filtering infrastructure, though they can choose whether to make this available to their customers.”26 O2 allows its users to check how a particular site has been classified.27 The filtering is based on a classification framework for mobile content published by the BBFC.28 The BBFC adjudicates appeals from content owners and publishes the results quarterly.29

Website owners and companies that knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users, according to EU Directive 2000/31/EC (the E-Commerce Directive).30 Updates to the Defamation Act effective since 2014 limit companies’ liability for user-generated content that is considered defamatory. However, the Defamation Act offers protection from private libel suits based on third-party postings only if the plaintiff is able to identify the user responsible for the allegedly defamatory content.31 The act does not specify what sort of information the website operator must provide to plaintiffs, but it raised concerns that websites would register users and restrict anonymity in order to avoid civil liability.32

B4 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship? 3 / 4

Self-censorship, though difficult to assess, is not understood to be a widespread problem in the UK. However, due to factors including the government’s extensive surveillance practices, it appears likely that some users censor themselves when discussing sensitive topics to avoid potential government intervention or other repercussions (see C5).1

B5 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? 3 / 4

Concerns about content manipulation have increased in recent years, with foreign, partisan, and extremist groups allegedly using automated “bot” accounts, fabricated news, and altered images to shape discussions on social networks.

There is no evidence of widespread government manipulation of online content. However, there have been a number of allegations that the information environment was manipulated surrounding the 2016 Brexit referendum and the June 2017 elections, adding to the polarization of online political discourse. In the lead-up to the referendum, targeted online ads distributed by Leave campaign groups on Facebook included misleading statistics and wild claims, with one even accusing the EU of seeking to ban teakettles.1 In May 2017, Facebook reported that it had removed tens of thousands of fake accounts to limit the impact of deliberately misleading information disguised to look like news reports, which spread online prior to the elections.2 It was not clear whether those circulating the fake reports had a coherent agenda or how significant their influence was. One group accused Facebook and Twitter of failing to curb disinformation that depicted Muslims and migrants in a negative light.3 A recent study by the group Hope Not Hate separately examined anti-Muslim activists’ exploitation of terrorist attacks in the UK to spread their views on social media.4

There have been a number of reports about the influence of foreign states, especially Russia, on the Brexit referendum. Platforms such as Facebook and Twitter initially denied that there was substantial interference.5 However, these denials were met with skepticism, and there are continuing inquiries into the alleged meddling. After accusing Russia of waging a disinformation campaign to sow discord in democratic countries,6 Prime Minister Theresa May in January 2018 announced plans to establish a national security communications unit tasked with combating disinformation by state actors and others.7

A parliamentary inquiry into social media interference intensified in March 2018, when Christopher Wylie, a former employee at the data analytics company Cambridge Analytica, claimed that the firm had illegally obtained information on millions of accounts from Facebook and developed techniques for categorizing and influencing potential voters.8

The report on disinformation and “fake news” published in February 2019 by Parliament’s Digital, Culture, Media, and Sport Committee specifically addressed Facebook’s purported role in facilitating the spread of disinformation.9 The report called for electoral law reforms as well as the establishment of intermediary liability to help address the problem (see B3).

B6 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? 3 / 3

Online media outlets face economic constraints that negatively impact their financial sustainability, but these are the result of market forces, not political intervention.

Publishers have struggled to find a profitable model for their digital platforms, even though more than half of the population reportedly consumes news online. A 2018 survey found that 64 percent of adults used the internet to access news, with social media being the most popular online source.1

Ofcom is responsible for enforcing the EU’s 2015 Open Internet Regulation, which includes an obligation for ISPs to ensure net neutrality—the principle that internet traffic should not be throttled, blocked, or otherwise disadvantaged based on its content. It remains to be seen whether the Brexit process will lead the UK government to change its policy on net neutrality or maintain its current approach.

B7 0-4 pts
Does the online information landscape lack diversity? 4 / 4

The online information landscape is diverse and lively. Users have access to the online content of virtually all national and international news organizations. While there are a range of sources that present diverse views and appeal to various audiences and communities, the ownership of leading news outlets is relatively concentrated,1 and particular media groups have been accused of political bias.

The publicly funded British Broadcasting Corporation (BBC), which maintains an extensive online presence, has an explicit diversity and inclusion strategy, aiming to increase the representation of different sexualities, age ranges, and ethnic and religious groups, as well as addressing gender.2 Similar models have been adopted by other national broadcasters.3

B8 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? 6 / 6

Online mobilization tools are freely available, and collective action continues to grow in terms of both numbers of participants and numbers of campaigns. Some groups use digital tools to document and combat bigotry, including Tell MAMA (Measuring Anti-Muslim Attacks), which tracks reports of attacks or abuse submitted by British Muslims online.1 Petition and advocacy platforms such as 38 Degrees and AVAAZ have emerged, and civil society organizations view online communication as an indispensable part of any campaign strategy, though the efficacy of online mobilization per se remains subject to debate.

C Violations of User Rights

The government has placed significant emphasis on stopping the dissemination of terrorist content and hate speech online, and on protecting individuals from targeted harassment on social media. While users are generally free from arrest, prosecution, or extralegal violence in response to their online activity, user rights are undermined by extensive surveillance for law enforcement and foreign intelligence purposes. A new counterterrorism law was enacted in February 2019, raising concerns that its broad provisions could be used to improperly limit speech and access to information.

C1 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? 5 / 6

The UK does not have a written constitution or similarly comprehensive legislation that defines the scope of governmental power and its relation to individual rights. Instead, constitutional powers and individual rights are addressed in various statutes, common law, and conventions. The provisions of the European Convention on Human Rights were adopted into law via the Human Rights Act 1998. In 2014, Conservative Party officials announced their intention to repeal the Human Rights Act in favor of a UK Bill of Rights in order to give British courts more control over the application of human rights principles.1 During the 2017 election campaign, Prime Minister Theresa May initially scaled back those ambitions.2 However, in June 2017 she reopened the possibility of significantly amending human rights legislation to allow more aggressive measures against terrorism in light of high-profile attacks in Manchester and London.3 No such legal changes were enacted during the coverage period.

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities? 2 / 4

Political expression and other forms of online speech or activity are generally protected, but there are legal restrictions on hate speech, online harassment, and copyright infringement, and some measures—including a new counterterrorism law—could be applied in ways that violate international human rights standards.

The Counter-Terrorism and Border Security Act, which received royal assent in February 2019, included several provisions related to online activity (see C5).1 The legislation, intended to update the Terrorism Act 2000, came in response to attacks in London and Manchester in 2017, among other events.2 The new provisions make it an offense to view terrorist material (as defined in the act) over the internet. Individuals can face up to 15 years in prison for viewing or accessing material that is useful or likely to be useful in preparing or committing a terrorist act, even if there is no demonstrated intent to commit such acts. The law includes exceptions for journalists or academic researchers who access such materials in the course of their work, but it does not address other possible circumstances in which access might be legitimate.3 “Reckless” expressions of support for banned organizations are also criminalized under the law. A number of civil society organizations argued that the legislation was dangerously broad, with unclear definitions that could be abused.4

Stringent bans on hate speech are set out in a number of laws (see Table 1), some of which rights groups have criticized as too vaguely worded. Defining what constitutes an offense has been complicated by the development of new communications platforms, and prosecutions are becoming more common (see C3).

Table 1: List of Legislation Regarding Offensive Speech

| Statute | Details | Maximum penalty |
| --- | --- | --- |
| Public Order Act 1986 | Section 5 penalizes “threatening, abusive or insulting words or behavior.” In 2013, it was amended to remove the word “insulting.”5 | Unlimited fine and six months in prison |
| Malicious Communications Act 1988 | Section 1 criminalizes targeting individuals with abusive and offensive content online “with the purpose of causing distress or anxiety.”6 In 2015, it was amended to include “revenge porn,” the sharing of sexual images without the subject’s consent and with the intent to cause harm.7 | Two years in prison |
| Communications Act 2003 | Section 127 punishes “grossly offensive” communications sent through the internet.8 | Unlimited fine and six months in prison |
| Terrorism Act 2006 | Section 1 prohibits the publishing of statements likely to encourage the commission, preparation, or instigation of terrorism. | Seven years in prison and an unlimited fine on indictment; one year in prison and an unlimited fine on summary conviction |

The Crown Prosecution Service (CPS) publishes specific guidelines for the prosecution of crimes “committed by the sending of a communication via social media.”9 Updates in 2014 placed digital harassment offenses committed with the intent to coerce the victims into sexual activity under the Sexual Offences Act 2003, which carries a maximum of 14 years in prison.10 Revised guidelines issued in March 2016 identified four categories of communications that are subject to possible prosecution: credible threats; abusive communications targeting specific individuals; breaches of court orders; and grossly offensive, false, obscene, or indecent communications.11 They also advised prosecutors to consider the age and maturity of the user in question. Some observers said this could restrict the creation of pseudonymous accounts, though only in conjunction with activity that is considered abusive.12 In October 2016, the CPS updated its guidelines again to cover more abusive online behaviors, including organized harassment campaigns or “mobbing,” and doxing, the deliberate and unauthorized publication of personal information online to facilitate harassment.13

The Copyright, Designs, and Patents Act 1988 carries a maximum two-year prison sentence for offenses committed online. In 2015, the government held a public consultation regarding a proposal to increase the sentence to 10 years. Of the 1,011 responses, only 21 supported the proposal,14 but a 2016 government consultation paper announced plans to introduce an amendment that included the 10-year maximum sentence “at the earliest available legislative opportunity.”15 The penalty was ultimately incorporated into the Digital Economy Act 2017.

The libel laws in England and Wales have historically tended to favor the plaintiff, leading foreign litigants to file suits that had only a tenuous connection to the UK, a phenomenon known as “libel tourism.” The Defamation Act 2013 was intended to address the problem by requiring claimants to prove that England and Wales are the most appropriate forum for the action, setting a serious-harm threshold for claims, and codifying certain defenses such as truth and honest opinion. In recent years, the overall number of defamation cases in the UK has fallen.16

C3 0-6 pts
Are individuals penalized for online activities? 5 / 6

Police have arrested internet users for promoting terrorism, issuing threats, or engaging in racist abuse, and in some past cases the authorities have been accused of overreaching in their enforcement efforts. However, prison sentences for political, social, and cultural speech remain rare.

Guidelines clarifying the scope of offenses involving digital communications may be helping to cut down on the more problematic speech-related prosecutions observed in the past (see C2). The scale of arrests remains a concern, though many investigations are dropped before prosecution. Figures obtained by the Times newspaper showed that in 2016 and 2017, more than 3,000 individuals had been detained and questioned for offensive online comments under Section 127 of the Communications Act.1

Local police departments have the discretion to pursue criminal complaints that would be treated as civil cases in many democracies. Some cases of offensive humor have been prosecuted. In early 2016, for example, police in Scotland detained 28-year-old Mark Meechan after he uploaded a YouTube video of himself teaching his girlfriend’s dog to perform a Nazi salute as a prank.2 Meechan was convicted of breaching Section 127 of the Communications Act 2003,3 and in April 2018 he was ordered to pay a fine of £800 ($1,000).4 He refused, arguing that the penalty set a dangerous precedent, and held a fundraiser for an appeal.5 The appeal was taken to the High Court of Justiciary in Scotland with the aim of securing a hearing before the Supreme Court of the United Kingdom, but in January 2019 the Scottish judges rejected the move.6

In another case of offensive humor from November 2018, a group of friends at a party burned an effigy of the Grenfell Tower—a public housing facility in London where a fast-moving fire had killed more than 70 people in June 2017—and posted video of the act to their WhatsApp group. The video was subsequently uploaded to YouTube, where it spread widely and received public condemnation. There was a police investigation,7 and one of the accused, Paul Bussetti, was charged under the Communications Act 2003.8 In August 2019, after the end of the coverage period, Bussetti was found not guilty.9

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption? 2 / 4

Users are not required to register to obtain a SIM card, allowing for the anonymous use of mobile devices.1 However, some laws provide authorities with the means to undermine encryption, and security officials have pushed for further powers.

There are several legal provisions that could allow authorities to compel decryption or require a user to disclose passwords, including the Regulation of Investigatory Powers Act 2000, the Terrorism Act 2000, and the Investigatory Powers Act 2016 (see C5 and C6).2 Although such powers are seldom invoked in practice, some users have faced detention for failure to provide passwords.3

In late 2018, representatives of the Government Communications Headquarters (GCHQ), the UK’s signals intelligence agency, released the so-called “Ghost Proposal,” calling for cooperation mechanisms between communications services and intelligence bodies that would allow the decryption of criminal and terrorist communications in “exceptional” circumstances.4 The proposal would require companies to facilitate the addition of “ghost” users—law enforcement agents—to encrypted conversations without the knowledge of the other participants. Civil society organizations, service providers, technology platforms, and other experts criticized the idea as a serious infringement on privacy that would undermine cybersecurity.5

Recent legal changes requiring age verification for access to online pornography have also threatened anonymity (see B3). Enforcement of the rules has been beset by delays,6 however, with no specified implementation date as of October 2019.7

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2 / 6

The UK authorities are known to engage in surveillance of digital communications, including mass surveillance, for intelligence, law enforcement, and counterterrorism purposes. A 2016 law introduced some oversight mechanisms to prevent abuses, but it also authorized bulk collection of communications data and other problematic practices. A 2019 counterterrorism law empowered border officials to search travelers’ devices, undermining the privacy of their online activity.

The Counter-Terrorism and Border Security Act (see C2) gives border agents the ability to search electronic devices at border crossings and ports of entry with the aim of detecting “hostile activity”—a broad category including actions that threaten national security, threaten the economic well-being of the country in a way that touches on security, or are serious crimes. However, border agents do not need to have a “reasonable suspicion” that an individual is engaged in “hostile activity,” giving them broad discretion to stop and search travelers.1 Those detained are required to provide information when requested by border officers, including the passwords to unlock devices.2

In September 2018, the European Court of Human Rights found that parts of the UK’s bulk surveillance regime under the Regulation of Investigatory Powers Act 2000 (RIPA) violated the European Convention on Human Rights, specifically its provisions on privacy and free expression, noting that, for example, there were insufficient safeguards to protect confidential journalistic material.3  However, the court controversially ruled that bulk surveillance was not always incompatible with human rights and could fall within a state’s “margin of appreciation in choosing how best to achieve” national security.4 The ruling addressed three petitions filed by UK civil society groups and individuals following former US National Security Agency contractor Edward Snowden’s 2013 revelations about UK surveillance.5 Some of the problems that were raised in the case were addressed in the Investigatory Powers Act 2016 (IP Act).

The IP Act codified law enforcement and intelligence agencies’ surveillance powers in a single omnibus law, whereas they were previously scattered across multiple statutes and authorities.6 It covers interception, equipment interference, and data retention, among other topics.7 In general, the IP Act has been criticized by industry associations, civil rights groups, and the wider public, particularly regarding the range of powers it authorizes and its legalization of bulk data collection.8

The act specifically enables the bulk interception and acquisition of communications data sent or received by individuals outside the UK, as well as bulk equipment interference involving “overseas-related” communications and information. When both the sender and receiver of a communication are in the UK, targeted warrants are required, though several individuals, groups, or organizations may be covered under a single warrant in connection with a single investigation. Moreover, the internet’s distributed architecture means that privacy protections based on an individual’s physical location are highly porous. Communications exchanged within the UK may be routed overseas, a fact that intelligence agencies have exploited in the past to conduct bulk surveillance programs like Tempora (see below).

Part 7 of the IP Act introduced warrant requirements for intelligence agencies to retain or examine “personal data relating to a number of individuals” where “the majority of the individuals are not, and are unlikely to become, of interest to the intelligence service in the exercise of its functions.”9 Datasets may be “acquired using investigatory powers, from other public sector bodies or commercially from the private sector.”10 An initial examination of bulk datasets must occur within three months “where the set of information was created in the United Kingdom” and within six months otherwise (Section 220).

The IP Act established a new commissioner appointed by the prime minister to oversee investigatory powers under Section 227. Adrian Fulford, an appeals court judge, was appointed to the role in March 2017.11 The law includes some other safeguards, such as “double-lock” interception warrants. These require approval from both the relevant secretary of state and an independent judge, though the secretary alone can approve urgent warrants. Under Section 32, urgent warrants last five days; others expire after six months unless renewed under the same double-lock procedure. The act allows authorities to prohibit telecommunications providers from disclosing the existence of a warrant. Intercepting authorities that may apply for targeted warrants include police commissioners, intelligence service heads, and revenue and customs commissioners.12 Applications for bulk interception, bulk equipment interference, and bulk personal dataset warrants can only be made to the secretary of state “on behalf of the head of an intelligence service by a person holding office under the Crown” and must be reviewed by a judge.

Bulk surveillance is an especially contentious issue in the UK because intelligence agencies developed secret programs under older laws that bypassed oversight mechanisms and any means of redress for affected individuals. These programs affected an untold number of people within the UK, even if they were meant to have only foreign targets. Tempora, a secret surveillance project documented in the Snowden leaks, is one example. A number of other legislative measures authorized surveillance,13 including RIPA.14 RIPA was not repealed by the IP Act, though many of its competences were transferred to the newer legislation. A clause within Part I of RIPA allowed the foreign or home secretary to sign off on bulk surveillance of communications data arriving from or departing to foreign soil, providing the legal basis for Tempora.15 Since the UK’s fiber-optic network often routes domestic traffic through international cables, this provision legitimized mass surveillance of UK residents.16 Working with telecommunications companies, GCHQ installed interception probes at the British landing points of undersea fiber-optic cables, giving the agency direct access to data carried by hundreds of cables, including private calls and messages.17

The Investigatory Powers Tribunal was established under RIPA to adjudicate disputes regarding government surveillance. It found procedural irregularities in the retention of communications intercepted from Amnesty International and the South Africa–based Legal Resources Center, though it concluded that the interception itself was lawful.18 In early 2016, the tribunal ruled that computer network exploitation carried out by GCHQ was in principle lawful within the limitations in the European Convention on Human Rights.19 The tribunal also noted that network exploitation is legal if the warrant is as specific and narrow as possible.

In July 2016, the Investigatory Powers Tribunal found that bulk data collection by GCHQ and two other intelligence agencies, known as MI5 and MI6, was unlawful from March 1998 until the practice was disclosed in November 2015.20 The practice had been authorized under Section 94 of the Telecommunications Act 1984, which the Interception of Communications Commissioner described in June 2016 as lacking “any provision for independent oversight or any requirements for the keeping of records.”21 The tribunal also said that the use of bulk personal datasets by GCHQ and MI5, commencing in 2006, was likewise unlawful until disclosed in March 2015. The datasets contained personal information that could include financial, health, and travel data as well as communications details.22 There were hearings in June and October 2017 on the process and legality of collecting and sharing these datasets.23

UK authorities have been known to monitor social media platforms.24 In London, for example, police reportedly monitored nearly 9,000 activists from across the political spectrum—many of whom had no criminal background—using geolocation tracking and sentiment analysis of data scraped from Facebook, Twitter, and other platforms.25 This information was then compiled into secret dossiers on each campaigner. In another example, the Online Hate Speech Dashboard, a joint project led by the National Online Hate Crime Hub of the National Police Chiefs’ Council and Cardiff University, received £1 million ($1.3 million) in 2018 to use artificial intelligence for real-time monitoring of social media platforms meant to identify hate speech and “preempt hate crimes.”26

C6 0-6 pts
Are service providers and other technology companies required to aid the government in monitoring the communications of their users? 3 / 6

Companies are required to capture and retain user data under certain circumstances, though the government issued regulatory changes in 2018 to address flaws in the existing rules. While the government has legal authority to require companies to assist in the decryption of communications, the extent of its use and efficacy in practice remains unclear.

The UK has incorporated the EU’s General Data Protection Regulation (GDPR) into domestic law through the Data Protection Act 2018.1 Therefore, even if the Brexit process is completed and the country leaves the EU, the GDPR in its entirety will continue to regulate data protection within the UK.

Data retention provisions under the IP Act allow the secretary of state to issue notices requiring telecommunications providers to capture information about user activity, including browser history, and retain it for up to 12 months. The Data Retention and Investigatory Powers Act 2014 (DRIPA), the older law this requirement was modeled on, was ruled unlawful in the UK and the EU in 2015.2 In January 2018, the Court of Appeal described DRIPA as being inconsistent with European law, since the data collected and retained were not limited to the purpose of fighting serious crime.3 In April 2018, the High Court ruled that part of the IP Act’s data retention provisions similarly violated EU law, and that the government should amend the legislation by November 2018.4

In response, the government issued the Data Retention and Acquiring Regulations 2018, which entered into force in October 2018. The regulations limited the scope of the government’s collection and retention of data, and enhanced the transparency of the process.5 Furthermore, a newly created Office for Communications Data Authorisations would oversee data requests and ensure that official powers are used in accordance with the law.

Another problematic provision of the IP Act enables the government to order companies to decrypt content, though the extent to which companies would be willing or able to comply remains uncertain (see C4).6 Under Section 253, technical capability notices can be used to impose obligations on telecommunications operators both inside and outside the country “relating to the removal … of electronic protection applied by or on behalf of that operator to any communications or data,” among other requirements. The approval process for issuing a technical capability notice is similar to that of an interception warrant.7 After consultations with the industry and civil society groups,8 the government issued the Investigatory Powers (Technical Capability) Regulations 2018 in March 2018, which will govern how the notices are issued and implemented.9 The regulations specify companies’ responsibilities in ensuring that they are able to comply with lawful warrants for communications data.

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in retribution for their online activities? 4 / 5

There were no reported instances of violence against internet users in reprisal for their online activities during the coverage period, though cyberbullying, particularly harassment of women, is widespread.1 A recent study found that one in three women members of Parliament had experienced online abuse, harassment, or threats.2 Online harassment of Muslims and other minorities is also a significant problem.3

A 2017 study found an increase in abusive comments targeting politicians on Twitter, which peaked on the day of the 2016 Brexit referendum.4 News reports indicated that hate crimes against minorities increased after the vote to leave the EU, which was driven in part by campaigns that depicted immigration as a threat to the British way of life. However, a 2016 analysis of cyberbullying in different parts of the UK found that regions with high levels of online hate speech or racial intolerance did not necessarily vote in favor of Brexit, and concluded that other issues were also driving the trend.5

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 2 / 3

Nongovernmental organizations, media outlets, and activists are generally not targeted for technical attacks by government or nonstate actors. Financially motivated fraud and hacking continue to present a challenge to authorities and the private sector. Cyberattacks have increased in recent years, and observers have raised security concerns about the Internet of Things, the growing array of machines, appliances, and other objects that are connected to the internet.1 During 2018, nearly 70 percent of companies and other commercial entities in the UK were reportedly affected by cyberattacks.2

In March 2019, the information systems of the Police Federation of England and Wales were infected with ransomware—malicious software that blocks access to the contents of a computer or network and demands a ransom payment for access to be restored.3 The federation said there was no evidence that any information was leaked, although its data back-ups were deleted and other information was rendered inaccessible.4 The attack was limited to the federation’s headquarters in Surrey and did not spread to its 43 associated offices.

In May 2017, the National Health Service suffered a ransomware attack affecting 40 organizations, effectively barring workers from patient case files.5 The attack had severe consequences, delaying or denying essential services for vulnerable individuals.6
