United Kingdom

Status: Free
Overall score: 77/100
A. Obstacles to Access: 23/25
B. Limits on Content: 29/35
C. Violations of User Rights: 25/40
Last year's score and status: 76/100, Free
Scores are based on a scale of 0 (least free) to 100 (most free)


Key Developments, June 1, 2017 - May 31, 2018

  • A parliamentary inquiry continued its extensive investigation into the impact of disinformation, foreign interference, and targeted online advertising during elections, and suggested expanding tech companies’ liability for harmful content they fail to act against (see Media, Diversity, and Content Manipulation).
  • In the first “right to be forgotten” judgement in the UK, the High Court ordered Google to delist search results referring to a spent criminal conviction of a businessman. Another similar claim made by a businessman who was convicted for a more serious crime was rejected (see Content Removal).
  • In April 2018, the High Court ruled that part of the Investigatory Powers Act regarding access to retained communications data was incompatible with EU law due to lack of sufficient safeguards. The government must amend the legislation by November 2018 (see Surveillance, Privacy, and Anonymity).

Introduction

Internet freedom in the UK improved slightly in 2018, as the country recovered from the crippling impact of the WannaCry ransomware attack, and sentences for speech protected under international human rights norms remained rare. However, there were heightened concerns about the use of online disinformation and manipulation tactics.

The UK has consistently been an early adopter of new information and communication technologies (ICTs). Internet coverage is almost universal, with competitive prices and generally fast speeds. Mobile devices, especially smartphones, have become the most prevalent means of internet access. However, concerns about hate speech and disinformation online have increased, and recent studies have documented the spread of anti-Muslim propaganda through social media bots and image manipulation.1 An extensive parliamentary inquiry looking into the impact of disinformation and data misuse has joined calls for stricter regulation of social media companies.2 The inquiry into social media interference intensified in March 2018, when Christopher Wylie, a former employee of the data analytics company Cambridge Analytica, claimed that "cheating" implicating the Canadian data firm Aggregate IQ may have influenced the Brexit vote. As part of the investigation, Facebook released a number of misleading ads used by the Vote Leave group and created by Aggregate IQ.

Despite ongoing findings about UK surveillance overreach, the Investigatory Powers Act came into force in December 2016. The law was devised to clarify, update, and define powers of surveillance available to intelligence, police, and security services. Though it introduced some new oversight mechanisms, it was a step back for privacy in several respects. The law authorized bulk surveillance measures that the United Nations special rapporteur on privacy called “disproportionate” and “privacy-intrusive.” It also increased requirements for internet companies to cooperate with investigations, including potentially “removing electronic protection” from encrypted communications or data where possible. In April 2018, the High Court ruled that parts of the act on data retention were incompatible with fundamental rights in EU law, noting that access to retained data should be limited to combating “serious crimes” and requires prior review by a court or independent body.

A Obstacles to Access

ICT infrastructure is generally strong, and policies and regulation tend to favor access. The overwhelming majority of UK citizens use the internet frequently on a variety of devices, particularly smartphones, and substantial government-led investment has improved levels of service.

Availability and Ease of Access

Access to the internet is considered to be a key element for societal and democratic participation in the UK. Broadband access is almost ubiquitous, and nearly 100 percent of all households are within range of ADSL connections. All national mobile network operators offer 4G mobile communication technology, with subscriptions totaling 52.4 million in 2016, representing nearly two-thirds of all mobile subscriptions.1

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries. A sample mobile data plan cost around GBP 10 (US $12) a month in 2017; the most affordable fixed-line broadband packages were available for a little over GBP 30 (US $37) a month.2 Median gross weekly earnings for full-time workers were GBP 550 (approximately US $700).3

The Digital Economy Act 2017 provides that a minimum of 10 Mbps broadband access is effectively a legal right.4 Progress continues towards the expansion of “superfast” broadband that has an advertised speed of at least 30 Mbps.5 While superfast broadband is accessible to more than 90 percent of all UK premises, the communications regulator Ofcom has noted that it lags behind in full-fibre broadband.6 In March 2018, the government launched a new voucher scheme providing up to GBP 3,000 (approximately US $4,000) towards installation costs of full-fibre broadband for small and medium enterprises.7

Mobile telephone penetration is extensive. In 2017, some 73 percent of adults surveyed used mobile phones to access the internet, up from 36 percent in 2011.8 In 2017, two-thirds of people under 35 viewed their mobile phone as their primary internet access device, as did 44 percent of those aged 35 to 54. For those over 54, the laptop remained the most popular device.9

People in the lowest income groups are significantly less likely to have home internet subscriptions, with the gap between socioeconomic groups remaining the same for the past few years. There is no general gender gap in internet use, though some two-thirds of women over 75 have never used the internet.10

Restrictions on Connectivity

The government does not place limits on the amount of bandwidth internet service providers (ISPs) can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services (such as peer-to-peer file sharing and television streaming). Mobile providers have cut back on previously unlimited access packages for smartphones, reportedly because of concerns about network congestion.

ICT Market

The major ISPs are British Telecom (BT) with a 37 percent market share (having gained five percentage points after its acquisition of EE in 2016), Sky (24 percent), Virgin Media (20 percent), TalkTalk (12 percent), and others (8 percent).11 Regulator Ofcom continues to develop regulation to promote unbundling of services, such that the incumbent owners of infrastructure continue to invest in said infrastructure, whilst also allowing competitors to make use of it.12

ISPs are not subject to licensing but must comply with general conditions set by Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.13

Among mobile operators, EE leads the market with 29 percent of subscribers, followed by O2 (27 percent), Vodafone (19 percent), Three (11 percent), and Tesco (8.5 percent).14 Mobile Virtual Network Operators, including Tesco, provide service using the infrastructure of one of the other four.

Regulatory Bodies

Ofcom, an independent statutory body, is the primary telecommunications regulator under broad definitions of responsibility for “citizens,” “consumers,” and “communications matters” granted to it under the Communications Act 2003.15 It is responsible to Parliament and also regulates the broadcasting and postal sectors.16 Ofcom has some content regulatory functions with implications for the internet, such as regulating video content in keeping with the European Union (EU) Audiovisual Media Services Directive.17

Nominet, a nonprofit company operating in the public interest, manages access to the .uk, .wales, and .cymru domains. In 2013, Nominet implemented a post-registration domain name screening to suspend or remove domain names that encourage serious sexual offenses.18

Other groups regulate services and content through voluntary ethical codes or co-regulatory rules under independent oversight. In 2012, major ISPs published a “Voluntary Code of Practice in Support of the Open Internet.”19 The code commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade the services of a competitor. The code was amended in 2013 to clarify that signatories could deploy content filtering or provide such tools where appropriate for public Wi-Fi access.20

Criminal online content is managed by the Internet Watch Foundation (IWF), an independent self-regulatory body funded by the EU and industry bodies (see “Blocking and Filtering”).21 The Advertising Standards Authority and the Independent Press Standards Organization regulate newspaper websites. With the exception of child abuse content, these bodies eschew pre-publication censorship and operate post-publication notice and takedown procedures within the E-Commerce Directive liability framework (see “Content Removal”).

B Limits on Content

Various categories of criminal content such as depictions of child sexual abuse, promotion of extremism and terrorism, and copyright infringing materials are blocked by UK ISPs. Parental controls over content considered unsuitable for children are enabled by default on mobile networks, requiring adults to opt out to access adult material. These measures can result in overblocking, and a lack of transparency persists regarding the processes involved and the kind of content affected. Allegations of online content manipulation were made during the reporting period.

Blocking and Filtering

The Digital Economy Act 2017, passed in April 2017, generated controversy by including provisions to allow blocking of “extreme” pornographic material under standards which critics said were poorly defined and unevenly applied.1 The legislation introduced a number of requirements on ISPs and content providers, notably Section 14(1), which requires content providers to verify the age of users accessing online pornography. In February 2018, the British Board of Film Classification (BBFC) was designated as the age-verification regulator, and launched a public consultation to develop guidance on the means and mechanisms for providers to achieve compliance.2 The age verification mechanisms were set to be implemented in April 2018,3 but have been delayed as officials cited the need for more time to identify appropriate means of verification.4

Service providers already block and filter some illegal and some legal content in the UK, with varying degrees of transparency. Illegal content falls into three categories. First, ISPs block illegal content depicting child sexual abuse. Second, overseas-based URLs hosting content that has been reported by police for violating the Terrorism Act 2006—which prohibits the glorification or promotion of terrorism—are included in the child filters supplied by many ISPs, and are inaccessible in schools, libraries, and other facilities considered part of the “public estate.” The list of sites in these two categories is kept from the public to prevent access to unlawful materials. Finally, ISPs are also required to block domains and URLs found to be hosting material that infringes copyright when ordered by the High Court. Those orders are not kept from the public, but can be hard to obtain.5

Separately, all mobile service providers and some ISPs providing home service filter legal content considered unsuitable for children. Mobile service providers enable these filters by default, requiring customers to prove they are over 18 to access the unfiltered internet. In 2013, the four largest ISPs agreed with the government to present all customers with an “unavoidable choice” about whether to enable parentally controlled filters.6 Civil society groups say those filters lack transparency and affect too much legitimate content, making it hard for consumers to make informed choices and for content owners to appeal wrongful blocking.

ISPs block URLs using content filtering technology known as Cleanfeed, which was developed by BT in 2004.7 In 2011, a judge described Cleanfeed as “a hybrid system of IP address blocking and DPI-based URL blocking which operates as a two-stage mechanism to filter specific internet traffic.” While the process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables ISPs to block individual URLs rather than entire domains, it does not enable “detailed, invasive analysis of the contents of a data packet,” according to the judge’s description. Other, similar systems adopted by ISPs besides BT are also “frequently referred to as Cleanfeed,” the judge wrote.8
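The two-stage mechanism the judge described can be illustrated with a minimal sketch. This is not BT's actual implementation; the IPs, URLs, and resolver below are hypothetical, and real systems work on live packet flows rather than URL strings. The point is the design: a cheap first stage diverts only traffic bound for suspect IP addresses into a second stage that checks the specific URL, so individual pages can be blocked without taking down an entire domain.

```python
# Hypothetical two-stage hybrid filter in the style attributed to Cleanfeed:
# stage 1 does coarse IP matching, stage 2 does URL-level matching.

SUSPECT_IPS = {"203.0.113.10"}                   # hypothetical: hosts serving any listed URL
BLOCKED_URLS = {"http://example.com/bad-page"}   # hypothetical IWF-style URL blocklist

def resolve(url: str) -> str:
    """Hypothetical DNS lookup mapping a URL's host to an IP address."""
    return "203.0.113.10" if "example.com" in url else "198.51.100.1"

def is_blocked(url: str) -> bool:
    # Stage 1: cheap IP check; the vast majority of traffic passes
    # through without any further inspection.
    if resolve(url) not in SUSPECT_IPS:
        return False
    # Stage 2: URL-level check, so other pages on the same host
    # remain reachable even though the host's IP is on the suspect list.
    return url in BLOCKED_URLS

print(is_blocked("http://example.com/bad-page"))    # True: listed URL
print(is_blocked("http://example.com/other-page"))  # False: same host, different URL
print(is_blocked("http://other.org/page"))          # False: stage 1 short-circuits
```

The two-stage split is what distinguishes this design from crude IP blocking: precision comes only at the second stage, and only for the small fraction of traffic the first stage flags.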

ISPs are notified about websites hosting content that has been determined to break or potentially break UK law under different procedures:

  • The Internet Watch Foundation (IWF) compiles a list of specific URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content to distribute to ISPs and other industry stakeholders who support the foundation through membership fees.9 ISPs block those URLs in accordance with a voluntary code of practice set forth by the Internet Services Providers’ Association (see “Regulatory Bodies”). IWF analysts evaluate sites hosting material that potentially violates a range of UK laws,10 in accordance with a Sexual Offences Definitive Guideline published by the Sentencing Council under the Ministry of Justice.11 The IWF recommends that ISPs notify customers why a site is inaccessible,12 but some have returned error messages instead.13 The IWF website allows site owners to appeal their inclusion on the list. Citizens can also report criminal content via a hotline. For example, in 2008, the IWF blacklisted a Wikipedia page displaying an album cover depicting a naked girl. Other Wikipedia users reported that the block affected their ability to edit the site’s user-generated content,14 and the IWF subsequently removed the page from the list.15 An independent review of the human rights implications of the IWF's operations conducted in 2014 said the body’s work was consistent with human rights law.16 The IWF appointed a human rights expert in accordance with one of the review’s recommendations, but deferred action on another recommendation to restrict its remit to child sexual abuse.17
  • The police Counter Terrorism Internet Referral Unit compiles a list of URLs hosted overseas containing material considered to glorify or incite terrorism under the Terrorism Act 2006,18 which are filtered on networks of the public estate, such as schools and libraries; they can still be accessed on private computers.19 In 2014, the four largest ISPs, BT, Virgin, Sky, and TalkTalk, said they would also include this content in parental filters.20
  • The UK High Court can order ISPs to block websites found to be infringing copyright under the Copyright, Designs, and Patents Act 1988.21 The High Court has held that publishing a link to copyright infringing material, rather than actually hosting it, does not amount to an infringement;22 this approach was confirmed by the Court of Justice of the European Union.23 In 2014, a new intellectual property framework included exceptions for making personal copies of protected work for private use, as well as for “parody, caricature and pastiche.”24 Copyright-related blocking has been criticized for its inefficiency and lack of transparency.25 In 2014, after lobbying from the London-based Open Rights Group, BT, Sky, and Virgin Media began informing visitors to sites blocked by court order that the order can be appealed at the High Court.26

Mobile service providers also block URLs identified by the IWF as containing potentially illegal content. However, Mobile UK, an industry group which consists of Vodafone, Three, EE, and O2,27 introduced additional filtering of content considered unsuitable for children in a code of practice published in 2004 and updated in 2013.28 These child filters are enabled by default in mobile internet browsers, though users can disable them by verifying they are over 18. Mobile Virtual Network Operators are believed to “inherit the parent service's filtering infrastructure, though they can choose whether to make this available to their customers.”29 Transparency about what content is affected depends on the provider. O2 allows its users to check how a particular site has been classified.30

The filtering is based on a classification framework for mobile content published by the BBFC.31 Definitions of content the BBFC considers suitable for adults only include “the promotion, glamorization or encouragement of the misuse of illegal drugs;” “sex education and advice which is aimed at adults;” and “discriminatory language or behavior which is frequent and/or aggressive, and/or accompanied by violence and not condemned,” among others. The BBFC adjudicates appeals from content owners about overblocking and publishes the results quarterly.32

The four largest ISPs, BT, Sky, Virgin Media and TalkTalk, offer all customers the choice to activate similar filters to protect children under categories that vary by provider, but can include social networking, games, and sex education.33 Website owners can check whether their site is filtered under one or more category, or report overblocking, by emailing the industry-backed nonprofit group Internet Matters,34 though the process and timeframe for correcting mistakes varies by provider.

These optional filters can affect a range of legitimate content about public health, homosexuality, drug awareness, and even information published by civil society groups and political parties. In 2012, O2 customers were temporarily unable to access the website of the right-wing nationalist British National Party.35 Civil society groups also have criticized the subjectivity of the content selected for filtering. A 2014 magazine article noted that all ISPs had blocked dating sites with the exception of Virgin Media, which operates one.36 An Ofcom report said that ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”37 Transparency about the process remains lacking. In August 2015, when a watchmaking business complained to BT that their company website was blocked by its Parental Control software, the provider responded that the process had been outsourced to “an expert third party,” and that BT was “not involved.”38

Blocked!, a site operated by the Open Rights Group, allows users to test the accessibility of websites and report overblocking of content by both home broadband and mobile internet providers.39 In mid-2016, the website listed 11,715 sites blocked by default filters, meaning a user would have to proactively disable the filter in order to view the content affected, and a further 21,239 sites blocked by filters which users enable by choice. By early 2018, these figures had changed to 10,558 sites blocked by default filters and 20,390 by strict filters, and the total number of blocked sites was reported to be 615,427.

Content Removal

Some content, including extremism and hate speech, may be subject to removal in the UK, though authorities often struggle to enforce the relevant laws.

Different regulations affect content removal. Material blacklisted by the IWF because it constitutes a criminal offense (see “Blocking and Filtering”) can also be subject to removal. When the content in question is hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have it taken down. For content that is hosted on servers overseas, the IWF coordinates with international hotlines and police to get the offending content taken down in the host country.

Similar processes are in place for the investigation of online materials inciting hatred under the oversight of TrueVision, a site that is managed by the police.40 On the other hand, the government has accused social media platforms of not doing enough to combat hate speech.41 The government announced in October 2017 its plans for a national hub that would monitor online hate speech and help refer certain content to online platforms for their removal.42 As of March 2018, its initial budget was reportedly GBP 200,000 (approximately US $265,000).43

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful for terrorists, or incites people to carry out or support terrorism. A Counter Terrorism Internet Referral Unit (CTIRU) was set up in 2010 to investigate internet materials and take down instances of “jihadist propaganda.”44 The CTIRU compiles lists of URLs hosting such material outside its jurisdiction, which are then passed on to service providers for voluntary filtering (see “Blocking and Filtering”). By early 2018, the CTIRU reported that 304,000 pieces of material had been taken down since 2010.45 In February 2018, the government announced that it had developed software that automatically detected and labelled Islamic State content online. The technology is aimed at smaller platforms and services that may not have sufficient resources.46 Security minister Ben Wallace has suggested that social media platforms and other technology companies could face a tax to push them to remove terrorist content faster.47

Website owners and companies who knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users, according to EU Directive 2000/31/EC (the E-Commerce Directive).48 Subsequent updates to the Defamation Act effective since 2014 limited companies’ liability for user-generated content that is considered defamatory. However, the Defamation Act offers protection to website operators from private libel suits based on third-party postings only if the victim alleging defamation can find the user responsible.49 The act does not specify what sort of information the website operator must provide to plaintiffs, but raised concerns that unauthenticated ID or anonymous internet use may prevent the operator from benefiting from the act’s liability protections, thus encouraging website operators to register users to avoid civil liability.50

In May 2014, the European Court of Justice ruled that search engines must remove links from their search results at the request of individuals if the stories in question are deemed inadequate or irrelevant. The so-called “right to be forgotten” ruling has had an impact on the way content is handled in the UK. In 2016, Google expanded the right to be forgotten by removing links from all versions of its search engine.51 In April 2018, in its first decision on the “right to be forgotten,” the High Court ordered Google to delist search results about a spent criminal conviction of a businessman. In another case, a similar claim made by another businessman who was convicted of a more serious crime was rejected.52

Despite the UK ending its EU membership, the government and the data protection regulator, the Information Commissioner’s Office (ICO), have committed to implementing EU guidance on data protection, the General Data Protection Regulation,53 which comes into force in May 2018. Under this guidance, the right to be forgotten will continue to apply in the UK.

Media, Diversity, and Content Manipulation

Concerns about content manipulation have increased in the UK, as hate groups have used bots, fake news, and image manipulation to spread propaganda on social networks. A UK parliamentary committee has in turn launched an extensive investigation into the impact of online disinformation on political campaigning. Its preliminary report, published in July 2018, highlighted concerns surrounding Russian attempts to influence elections via social media, the role of private companies, and online campaign practices in the UK. Much attention during the past year also focused on new revelations about the misuse of personal data to target voters on social media.

There is no evidence of widespread government manipulation of online content. However, there have been a number of allegations that the quality of media made available on social networks was manipulated around the 2016 referendum and the June 2017 election, adding to the polarization of political discourse online. In the lead-up to the referendum, targeted online ads used by “Leave” campaign groups on Facebook included misleading statistics and wild claims, one even accusing the EU of wanting to ban tea kettles.54 In May 2017, Facebook reported it had removed tens of thousands of fake accounts to limit the impact of deliberately misleading information disguised to look like news reports, which spread online prior to the June election.55 It was not clear whether the actors circulating these fake reports had a coherent agenda or how large their influence was. One group accused Facebook and Twitter of failing to curb disinformation depicting Muslims and migrants in a bad light.56 A more recent study by the group “Hope not Hate” also looked at how anti-Muslim activists exploited recent terror attacks in the UK to spread extremist viewpoints on social media.57

There have also been a number of reports about the influence of foreign states, especially Russia, on the Brexit referendum. Platforms such as Facebook and Twitter initially denied that there was substantial interference.58 However, these denials were met with skepticism and there are continuing inquiries into their alleged effects. After accusing Russia of waging a fake news war to sow discord in the West,59 Prime Minister Theresa May announced the establishment of a national security communications unit charged with combating disinformation by state actors and others, though further details about this initiative are still forthcoming.60

The inquiry into social media interference intensified in March 2018, when Christopher Wylie, a former employee at data analytics company Cambridge Analytica, claimed that the company had illegally obtained millions of profiles from Facebook and developed techniques which categorized and targeted potential voters. The UK Parliament continued with its inquiry, pushing for evidence to be provided by Mark Zuckerberg, CEO and founder of Facebook.61

Online media outlets face economic constraints that negatively impact their financial sustainability, but these are due to market forces, not political intervention. Publications have struggled to find a profitable system for digital platforms, though more than half the population report consuming news online. Diverse views are present online, but may not be widely read. In 2018, a survey found that 64 percent of adults used the internet for news, social media being the most popular online source.62

The UK lacks explicit protections for net neutrality, the principle that ISPs should not throttle, block, or otherwise discriminate against internet traffic based on content. Ofcom called for a self-regulatory approach to the issue in 2011,63 describing the blocking of services and sites by ISPs as “highly undesirable” but subject to self-correction based on market forces.64 Developments at the EU level could have an impact on net neutrality provisions in the UK, after agreement was reached to ban paid prioritization—content owners paying ISPs to push their content first—across the EU as part of the Digital Single Market policy package, which seeks to strengthen the digital economy through increased support and access.65 As the United Kingdom is ending its membership of the EU, it remains to be seen whether the government will adopt another position or maintain its current approach.

Self-censorship is difficult to measure in the UK, but not a grave concern. Due to the UK’s extensive surveillance practices, it is possible that certain online groups self-censor to avoid potential government interference. Media and civil society groups filed legal challenges after former National Security Agency (NSA) employee Edward Snowden made public the surveillance practices of the Government Communications Headquarters (GCHQ), indicating heightened concern about the privacy of their communications. In September 2018, the European Court of Human Rights found that parts of the UK’s bulk surveillance regime under the Regulation of Investigatory Powers Act 2000 violated the European Convention on Human Rights, noting that there were insufficient safeguards to protect confidential journalistic material.66

Digital Activism

Online political mobilization continues to grow both in terms of numbers of participants and numbers of campaigns. Some groups use digital tools to document and combat racist abuse, including TellMAMA (Measuring Anti-Muslim Attacks), a group which tracked reports of attacks or abuse submitted by British Muslims online.67 Petition and advocacy platforms such as 38 Degrees and AVAAZ have emerged, and civil society organizations view online communication as an indispensable part of a wider campaign strategy, though the efficacy of online mobilization remains subject to debate.

C Violations of User Rights

The government has placed significant emphasis on stopping the dissemination of terrorist content and hate speech online and on protecting individuals from targeted harassment on social media. User rights are undermined by extensive surveillance measures used to monitor the flow of information for law enforcement and foreign intelligence purposes. These were expanded upon in the 2016 Investigatory Powers Act. In April 2018, the High Court ordered the government to amend the legislation because some parts regarding access to retained data were not compatible with EU law.

Legal Environment

The UK does not have a written constitution or other omnibus legislation detailing the scope of governmental power and individual rights. Instead, these constitutional powers and individual rights are encapsulated in various statutes, common law, and conventions. The provisions of the European Convention on Human Rights (ECHR) were adopted into law via the Human Rights Act 1998. In 2014, Conservative Party officials announced intentions to repeal the Human Rights Act in favor of a UK Bill of Rights in order to give British courts more control over the application of human rights principles.1 During the 2017 election campaign, Prime Minister Theresa May had initially scaled back those ambitions.2 However, in June 2017, she reopened the possibility of significantly amending human rights legislation in order to more aggressively target terrorism in light of high profile attacks in Manchester and London.3 There has been no further comment from the government about this since the last report.

The UK has stringent hate speech offenses encapsulated in a number of laws (see Table 1). Some rights groups say they are too broadly worded. Defining what constitutes an offense has been made more difficult by the development of communications platforms, and prosecutions are becoming more common (see “Prosecutions and Detentions for Online Activities”).

Table 1: List of Legislation Regarding Offensive Speech


The Crown Prosecution Service (CPS) publishes specific guidelines for the prosecution of crimes “committed by the sending of a communication via social media.”8 Updates in 2014 put digital harassment offenses committed with the intent to coerce the victims into sexual activity under the Sexual Offences Act 2003, which carries a maximum of 14 years in prison.9 Revised guidelines issued in March 2016 identified four categories of communications subject to possible prosecution: credible threats; communications targeting specific individuals; breach of court orders; and grossly offensive, false, obscene, or indecent communications.10 They also advised prosecutors to consider the age and maturity of the poster. Some observers said this could criminalize the creation of pseudonymous accounts, although only in conjunction with activity considered abusive.11 In October 2016, the CPS updated its guidelines to cover more abusive online behaviors, including organized harassment campaigns or “mobbing,” and doxxing, the deliberate publication of personal information online without permission to facilitate harassment.12

The Copyright, Designs, and Patents Act 1988 carries a maximum two-year prison sentence for offenses committed online. In July 2015, the government held a public consultation regarding a proposal to increase the sentence to 10 years. Of the 1,011 responses, only 21 supported the proposal,13 but a 2016 government consultation paper announced plans to submit an amendment to include the 10-year maximum sentence to parliament “at the earliest available legislative opportunity,”14 and it was incorporated into law with the passage of the Digital Economy Act 2017.

Libel laws that tended to favor the plaintiff had previously led to a large number of libel suits with only a tenuous connection to the UK being brought in its courts, a phenomenon known as “libel tourism.” This had a chilling effect on free speech in the UK, which the Defamation Act 2013 was intended to reduce. Sections that took effect in January 2014 require claimants to prove that England and Wales is the most appropriate forum for the action, set a serious harm threshold for claims, and codify certain defenses such as truth and honest opinion. The overall number of defamation cases in the UK subsequently fell.15 By 2017, it had reached a nine-year low of 58 cases, although cases involving social media increased.16

Prosecutions and Detentions for Online Activities

Police frequently arrest individuals for posts promoting terrorism, issuing threats, or containing racist abuse, and have been accused of overreaching in the past. However, prison sentences for speech protected under international human rights norms remain rare.

Guidelines clarifying the scope of offenses involving digital communications may be helping to cut down on the more egregious speech-related prosecutions observed in the past (see “Legal Environment”). The scale of arrests remains a concern, although many investigations are dropped before prosecution. Figures obtained by The Times showed that in 2016 and 2017, more than 3,000 individuals were detained and questioned for offensive comments made online under Section 127 of the Communications Act.17

There remains scope for local police departments to pursue complaints that many democracies would view as civil cases. In early 2016, for example, police in Scotland detained 28-year-old Markus Meechan, known as Count Dankula on YouTube, overnight after he uploaded a video of himself teaching his girlfriend’s dog to perform a Nazi salute as a prank.18 The trial heard arguments at the end of 2017; Meechan was convicted of breaching Section 127 of the Communications Act 200319 and ordered to pay a fine of GBP 800.20 Meechan refused to pay the fine, arguing that his conviction set a dangerous precedent, and held a fundraiser for an appeal.21

Surveillance, Privacy, and Anonymity

The Investigatory Powers Act 2016 (IP Act) consolidates law enforcement and intelligence agencies’ surveillance powers in a single omnibus act.22 Surveillance became a major point of contention in the UK following revelations about the mass surveillance activities of GCHQ and its international counterparts in documents leaked by Edward Snowden and published in the Guardian and other outlets beginning in June 2013. Bulk surveillance poses a particular challenge to civil rights because it affects individuals who are of no interest to the intelligence and security services, without those individuals being informed. Subsequent independent reviews of law enforcement and intelligence agencies’ investigatory powers found surveillance regulation in need of reform.

The Investigatory Powers Act, which passed on November 29, 2016, covers interception, equipment interference, and data retention, among other areas.23 In general, the IP Act has been subject to criticism from industry, civil rights groups, and the wider public, particularly regarding the range of powers authorized and the legalization of bulk surveillance.24 Guardian journalist Ewan MacAskill, who helped publish the Snowden leaks, said the Investigatory Powers Act had introduced “the most sweeping surveillance powers in the western world.”25

Data retention provisions under the IP Act allow the Secretary of State to issue notices requiring telecommunications providers to capture information about user activity, including browser history, and retain it for up to 12 months. DRIPA, the law this requirement is modelled on, was ruled unlawful in the UK and the EU in 2015.26 In January 2018, the Court of Appeal found DRIPA inconsistent with European law because the data collected and retained was not limited to the purpose of fighting serious crime.27 Since the IP Act was already in force by the time of that judgment, the challenge shifted to the legislation that carried over from DRIPA, particularly the data retention provisions. In April 2018, the High Court ruled that part of the act’s data retention provisions did not comply with EU law and that the government must amend the legislation by November 2018.28

The act specifically enables the bulk interception and bulk acquisition of communications data sent or received by individuals outside the British Isles, as well as bulk equipment interference involving “overseas-related” communications, information, and equipment data. Communications where both the sender and the receiver are in the UK are subject to targeted warrants, though several individuals, groups, or organizations may be covered under a single warrant in connection with a single investigation. However, the internet’s distributed architecture renders privacy protections based on the physical location of the subject of interception highly porous. Communications exchanged within the UK may be rerouted overseas, a fact that intelligence agencies have exploited in secret to conduct bulk surveillance programs like Tempora (see below).

Part 7 of the IP Act introduces warrant requirements for intelligence agencies to retain or examine “personal data relating to a number of individuals” where “the majority of the individuals are not, and are unlikely to become, of interest to the intelligence service in the exercise of its functions.”29 Datasets may be “acquired using investigatory powers, from other public sector bodies or commercially from the private sector.”30 An initial examination of bulk datasets must be within three months “where the set of information was created in the United Kingdom” and within six months otherwise (Section 220).

The IP Act establishes a new commissioner appointed by the prime minister to oversee investigatory powers under Section 227. Lord Justice Fulford, an appeal court judge, was appointed to the role in March 2017.31 The law also includes other safeguards such as “double-lock” interception warrants. These require approval from the Secretary of State (meaning the Home Secretary in security and terrorism investigations) or the Scottish Ministers in Scottish cases. The warrants must then be independently approved by a judge, although the Secretary alone approves urgent warrants. Under Section 32, urgent warrants last five days; others expire after six months unless renewed under the same double-lock procedure. The act allows authorities to prohibit telecommunications providers from disclosing the existence of a warrant. Intercepting authorities authorized to apply for targeted warrants include police commissioners, intelligence service heads, and revenue and customs commissioners.32 Applications for bulk interception, bulk equipment interference, and bulk personal dataset warrants can only be made to the Secretary of State “on behalf of the head of an intelligence service by a person holding office under the Crown” and must be reviewed by a judge.

In one problematic provision, the IP Act enables the government to order companies to decrypt content, though how far companies will be willing or able to comply remains unclear.33 Under Section 253, technical capability notices could be used to impose obligations on telecommunications operators both inside and outside the country “relating to the removal…of electronic protection applied by or on behalf of that operator to any communications or data,” among other requirements. The approval process for issuing a technical capability notice is similar to that of an interception warrant.34 Further regulations governing the notices were under consultation in mid-2017.35

Bulk surveillance is a particular issue in the UK context because intelligence agencies developed secret bulk programs under other laws that bypassed oversight mechanisms and means of redress for affected individuals. These programs have affected an untold number of people within the UK, even if they were meant to have only foreign targets. Tempora, a secret surveillance project documented in the Snowden leaks, is one example. A number of other legislative measures authorize surveillance,36 including the Regulation of Investigatory Powers Act 2000 (RIPA).37 (RIPA was not repealed by the IP Act, though many of RIPA’s competences are now transferred to the newer legislation.) A clause within Part I allowing the foreign or home secretary to sign off on bulk surveillance of communications data arriving from or departing to foreign soil provided the legal basis for Tempora.38 Since the UK’s fiber-optic network often routes domestic traffic through international cables, this provision legitimized widespread surveillance over UK citizens.39 Working with telecom companies, GCHQ installed intercept probes at the British landing points of undersea fiber-optic cables, giving the agency direct access to data carried by hundreds of cables, including private calls and messages.40

The Investigatory Powers Tribunal was established under RIPA to adjudicate complaints regarding government surveillance. It found procedural irregularities in the retention of communications intercepted from Amnesty International and the South Africa-based Legal Resources Center, though it found that the interception itself was lawful.41 In early 2016, the Tribunal ruled that computer network exploitation carried out by GCHQ was in principle lawful within the limitations of the European Convention on Human Rights.42 The tribunal also noted that network exploitation is legal only if the warrant is as specific and narrow as possible.

Other issues relating to bulk surveillance were still being adjudicated during the reporting period. In July 2016, the Investigatory Powers Tribunal gave a partial ruling that bulk data collection by Britain’s three intelligence agencies, GCHQ, MI5, and MI6, was unlawful from March 1998 until the practice was avowed in November 2015.43 That practice had been authorized under Section 94 of the Telecommunications Act 1984, which the Interception of Communications Commissioner described in June 2016 as lacking “any provision for independent oversight or any requirements for the keeping of records.”44 The Tribunal also said that the use of bulk personal datasets by GCHQ and MI5, commencing from 2006, was likewise unlawful until avowed in March 2015. The datasets contained personal information that could include financial, health, and travel information as well as communications details.45 There were hearings in June and October 2017 on the process and legality of collecting and sharing these datasets, with the oversight body unaware of the nature and extent of the practices.46

Intimidation and Violence

There were no reported incidents of violence against internet users for online activities over the coverage period, though cyberbullying, particularly targeting women, is widespread.47 A recent study found that one in three women MPs had experienced online abuse, harassment, or threats.48

One study reported an increase in abusive comments targeting politicians on Twitter, peaking on the day of the EU referendum.49 News reports said hate crime against minorities increased after the vote to leave the EU, driven in part by campaigns that depicted immigration as a threat to the British way of life. One analysis of cyberbullying in different parts of the UK found that regions with high levels of online hate speech or racial intolerance did not necessarily vote for the Leave campaign, and said other issues were also driving the trend.50

Technical Attacks

Nongovernmental organizations, media outlets, and activists are not generally targeted for technical attacks by government or nonstate actors. Financially motivated fraud and hacking continue to present a challenge to authorities and the private sector. Incidents of cyberattacks have increased in recent years. Observers also question the security of devices connected to the network, known as the Internet of Things.51

During the previous year, a technical attack on public infrastructure had a significant impact on UK residents. In May 2017, the WannaCry ransomware attack hit 40 National Health Service organizations, effectively locking workers out of patient case files.52 The attack had severe consequences, causing delays and disruption to NHS services and denying essential care to vulnerable individuals.53
