United Kingdom

Status: Free
Internet Freedom Score: 76/100

A Obstacles to Access: 23/25
B Limits on Content: 30/35
C Violations of User Rights: 23/40

Last Year’s Score & Status: 77/100, Free

Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Key Developments, June 1, 2016 - May 31, 2017

  • In November 2016, the controversial Investigatory Powers Act 2016 reformed the legal framework governing the surveillance powers available to law enforcement and intelligence agencies, significantly undermining privacy (see Surveillance, Privacy, and Anonymity).
  • The WannaCry attack was one of the first major instances of a cyberattack affecting UK public-facing health service infrastructure (see Technical Attacks).

Introduction

Internet freedom declined in 2017 as the Investigatory Powers Act (IP Act) authorized a range of surveillance powers, including some bulk surveillance of individuals who are not the targets of criminal or national security investigations. The WannaCry ransomware attack also exploited vulnerabilities in national public health infrastructure to impede care for patients.

The UK has consistently been an early adopter of new information and communication technologies (ICTs). Internet coverage is almost universal, with competitive prices and generally fast speeds. Mobile devices, especially smartphones, have become the most prevalent means of internet access. However, strategies to combat extremist as well as offensive speech online periodically threaten to curb legitimate expression.

The IP Act was given royal assent on November 29, 2016. The law was devised to clarify, update, and define powers of surveillance available to intelligence, police, and security services. Though it introduced some new oversight mechanisms, it was a step back for privacy in several respects. The law authorized bulk surveillance measures that the United Nations special rapporteur on privacy called “disproportionate” and “privacy-intrusive.” It also increased requirements for internet companies to cooperate with investigations, including potentially “removing electronic protection” from encrypted communications or data where possible.

The law was passed despite ongoing findings about UK surveillance overreach. The Investigatory Powers Tribunal ruled that UK intelligence agencies had unlawfully conducted bulk data collection for 17 years before the activity was publicly disclosed in 2015. Separately, an EU court found that data retention requirements for companies operating in the UK “cannot be considered to be justified” in a democratic society. But the IP Act codified similar practices, and others that were even more concerning.

A Obstacles to Access

ICT infrastructure is generally strong and policies and regulation tend to favor access. The overwhelming majority of UK citizens use the internet frequently on a variety of devices, particularly smartphones, and substantial investments led by the government have led to better levels of service.

Availability and Ease of Access

Access to the internet is considered to be a key element determining societal and democratic participation in the UK. Broadband access is almost ubiquitous, and nearly 100 percent of all households are within range of ADSL connections. All national mobile network operators offer 4G mobile communication technology, with outdoor 4G coverage from at least one network accessible in over 89 percent of UK premises.1

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries. Prices remain competitive as the scope of services increases. A sample mobile data plan cost around GBP 10 (US$ 12) a month in 2017; the most affordable fixed-line broadband packages were available for a little over GBP 30 (US$ 37) a month.2 The average monthly income was GBP 2,774 (US$ 3,480) in 2016.3

The Digital Economy Act 2017 provides that a minimum of 10 Mbps broadband access is effectively a legal right.4 Progress continues towards the expansion of “superfast” broadband that has an advertised speed of at least 30 Mbps.5 In 2015, 30 percent of all broadband connections were superfast, compared to 0.2 percent in 2009,6 and more than 80 percent of all UK premises have superfast broadband access availability.7 A voucher scheme covering up to GBP 3,000 (US$ 4,440) of installation costs for small and medium enterprises has been in place in 50 British cities since 2015.8

Mobile telephone penetration is extensive. In 2016, 66 percent of adults reported a smartphone was their primary device for accessing the internet,9 and reported valuing their smartphone over any other communication or media device;10 the smartphone was identified as the primary device for access in five out of nine online activities.11

People in the lowest income groups are significantly less likely to have home internet subscriptions, with the gap between socioeconomic groups remaining the same for the past few years. However, in 2016 it was found that internet use in the 65 to 74 age group had increased by nearly 70 percent since 2011.12 Of the 15 percent of adults without household internet access, 12 percent reported having no intention to obtain it.13 There is no general gender gap in internet use, though two-thirds of women over 75 have never used the internet.14

Restrictions on Connectivity

The government does not place limits on the amount of bandwidth ISPs can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services (such as peer-to-peer file sharing and television streaming). Mobile providers have cut back on previously unlimited access packages for smartphones, reportedly because of concerns about network congestion.
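To make the practice concrete, the sketch below shows one common way traffic shaping is implemented: a token-bucket rate limiter that caps the bandwidth of a classified flow. This is a minimal illustration, not a description of any UK provider’s actual system; the traffic classes and rates are hypothetical.

```python
import time

class TokenBucket:
    """Classic token-bucket rate limiter: tokens accrue at `rate` bytes/sec
    up to `capacity`; a packet is forwarded only if enough tokens remain."""

    def __init__(self, rate_bytes_per_sec, capacity_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last_refill = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True   # forward immediately
        return False      # queue or drop: this flow is being shaped

# Hypothetical policy: peer-to-peer flows capped at 50 KB/s,
# video streaming at 200 KB/s; everything else passes unshaped.
shapers = {"p2p": TokenBucket(50_000, 100_000),
           "streaming": TokenBucket(200_000, 400_000)}

def handle_packet(traffic_class, size):
    shaper = shapers.get(traffic_class)
    return shaper.allow(size) if shaper else True
```

A token bucket permits short bursts up to its capacity while holding the long-run rate at the refill speed, which is why shaped services slow down rather than stop working entirely.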

ICT Market

The five major internet service providers (ISPs) are British Telecom (BT) with a 32 percent market share, Sky (23 percent), Virgin Media (19 percent), TalkTalk (13 percent), and EE (4 percent); other providers account for the remaining 8 percent.15 Through local loop unbundling—where communications providers offer services to households using infrastructure provided mainly by BT and Virgin—a wider number of companies provide internet access. Ninety-five percent of homes could receive unbundled telecommunications services by 2015.16

ISPs are not subject to licensing but must comply with general conditions set by Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.17

Among mobile operators, EE leads the market with 29 percent of subscribers, followed by O2 (27 percent), Vodafone (19 percent), Three (11 percent), and Tesco (8.5 percent).18 Mobile Virtual Network Operators, including Tesco, provide service using the infrastructure of one of the other four.

Regulatory Bodies

Ofcom, an independent statutory body, is the primary telecommunications regulator under broad definitions of responsibility for “citizens,” “consumers,” and “communications matters” granted to it under the Communications Act 2003.19 It is responsible to Parliament and also regulates the broadcasting and the postal sectors.20 Ofcom has some content regulatory functions with implications for the internet, such as regulating video content in keeping with the European Union (EU) AudioVisual Media Services Directive.21

Nominet, a nonprofit company operating in the public interest, manages access to the .uk, .wales, and .cymru domains. In 2013, Nominet implemented a post-registration domain name screening to suspend or remove domain names that encourage serious sexual offenses.22

Other groups regulate services and content through voluntary ethical codes or co-regulatory rules under independent oversight. In 2012, major ISPs published a “Voluntary Code of Practice in Support of the Open Internet.”23 The code commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade the services of a competitor. The code was amended in 2013 to clarify that signatories could deploy content filtering or provide such tools where appropriate for public Wi-Fi access.24

Criminal online content is managed by the Internet Watch Foundation (IWF), an independent self-regulatory body funded by the EU and industry bodies (see “Blocking and Filtering”).25 The Advertising Standards Authority and the Independent Press Standards Organization regulate newspaper websites. With the exception of child abuse content, these bodies eschew pre-publication censorship and operate post-publication notice and takedown procedures within the E-Commerce Directive liability framework (see “Content Removal”).

B Limits on Content

Various categories of criminal content such as depictions of child sexual abuse, promotion of extremism and terrorism, and copyright infringing materials are blocked by UK ISPs. Parental controls over content considered unsuitable for children are enabled by default on mobile networks, requiring adults to opt out to access adult material. These measures can result in overblocking, and a lack of transparency persists regarding the processes involved and the kind of content affected. Allegations of online content manipulation were made during the reporting period.

Blocking and Filtering

The Digital Economy Act 2017, passed in April, introduces a number of requirements on ISPs and content providers. Section 14(1) requires content providers to verify the age of users accessing online pornography. The legislation envisions a regulator which will provide guidance on the means and mechanisms for providers to achieve compliance. The regulator may issue fines of up to GBP 250,000 (US$ 330,000) or 5 percent of the provider’s turnover, whichever is greater, for noncompliance.1 In mid-2017, the British Board of Film Classification (BBFC) signed letters of understanding with the government to take up the role.2 The legislation also generated controversy by including provisions to allow blocking of “extreme” pornographic material under standards which critics said were poorly defined and unevenly applied.3

Service providers already block and filter some illegal and some legal content in the UK, with varying degrees of transparency. Illegal content falls into three categories. First, ISPs block illegal content depicting child sexual abuse. Second, overseas-based URLs hosting content that has been reported by police for violating the Terrorism Act 2006—which prohibits the glorification or promotion of terrorism—are included in the child filters supplied by many ISPs, and are inaccessible in schools, libraries, and other facilities considered part of the “public estate.” The list of sites in these two categories is kept from the public to prevent access to unlawful materials. Finally, when ordered by the High Court, ISPs are also required to block domains and URLs found to be hosting material that infringes copyright. Those orders are not kept from the public, but can be hard to obtain.4

Separately, all mobile service providers and some ISPs providing home service filter legal content considered unsuitable for children. Mobile service providers enable these filters by default, requiring customers to prove they are over 18 to access the unfiltered internet. In 2013, the four largest ISPs agreed with the government to present all customers with an “unavoidable choice” about whether to enable parentally controlled filters.5

Civil society groups say those filters lack transparency and affect too much legitimate content, making it hard for consumers to make informed choices, and for content owners to appeal wrongful blocking.

ISPs block URLs using content filtering technology known as Cleanfeed, which was developed by BT in 2004.6 In 2011, a judge described Cleanfeed as “a hybrid system of IP address blocking and DPI-based URL blocking which operates as a two-stage mechanism to filter specific internet traffic.” While the process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables ISPs to block individual URLs rather than entire domains, it does not enable “detailed, invasive analysis of the contents of a data packet,” according to the judge’s description. Other, similar systems adopted by ISPs besides BT are also “frequently referred to as Cleanfeed,” the judge wrote.7
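A minimal sketch of the two-stage mechanism described in the judgment appears below, using placeholder data rather than any real blocklist. Stage one is a cheap check on the destination IP address applied to all traffic; only traffic that passes it is subjected to the more expensive URL-level match, which is what allows individual pages to be blocked without blocking entire domains.

```python
# A minimal sketch of the two-stage hybrid filtering the judgment describes.
# All data here are hypothetical placeholders, not the real IWF list.

SUSPECT_IPS = {"198.51.100.7"}                    # IPs hosting any listed URL
BLOCKED_URLS = {"http://198.51.100.7/bad/page"}   # the precise URLs to block

def stage_one(dest_ip: str) -> bool:
    """Cheap routing check applied to all traffic."""
    return dest_ip in SUSPECT_IPS

def stage_two(url: str) -> bool:
    """URL-level match, applied only to the diverted subset of traffic."""
    return url in BLOCKED_URLS

def filter_request(dest_ip: str, url: str) -> str:
    if not stage_one(dest_ip):
        return "pass"            # most traffic never reaches stage two
    return "block" if stage_two(url) else "pass"

assert filter_request("203.0.113.9", "http://203.0.113.9/ok") == "pass"
assert filter_request("198.51.100.7", "http://198.51.100.7/other") == "pass"
assert filter_request("198.51.100.7", "http://198.51.100.7/bad/page") == "block"
```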

ISPs are notified about websites hosting content that has been determined to break, or potentially break UK law under different procedures:

  • The Internet Watch Foundation (IWF) compiles a list of specific URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content to distribute to ISPs and other industry stakeholders who support the foundation through membership fees.8 ISPs block those URLs in accordance with a voluntary code of practice set forth by the Internet Services Providers’ Association (see “Regulatory Bodies”). IWF analysts evaluate sites hosting material that potentially violates a range of UK laws,9 in accordance with the Sexual Offences Definitive Guideline published by the Sentencing Council under the Ministry of Justice.10 The IWF recommends that ISPs notify customers why a site is inaccessible,11 but some have returned error messages instead.12 The IWF website allows site owners to appeal their inclusion on the list, and citizens can report criminal content via a hotline. In 2008, acting on a complaint, the IWF blacklisted a Wikipedia page displaying an album cover depicting a naked girl. Other Wikipedia users reported that the block affected their ability to edit the site’s user-generated content,13 and the IWF subsequently removed the page from the list.14 An independent judicial review of the human rights implications of the IWF’s operations, conducted in 2014, found the body’s work consistent with human rights law.15 The IWF appointed a human rights expert in accordance with one of the review’s recommendations, but deferred action on another, which proposed restricting its remit to child sexual abuse content.16
  • The police Counter Terrorism Internet Referral Unit compiles a list of URLs hosted overseas containing material considered to glorify or incite terrorism under the Terrorism Act 2006,17 which are filtered on networks of the public estate, such as schools and libraries; they can still be accessed on private computers.18 In 2014, the four largest ISPs, BT, Virgin, Sky, and TalkTalk, said they would also include this content in parental filters.19
  • The UK High Court can order ISPs to block websites found to be infringing copyright under the Copyright, Designs, and Patents Act 1988.20 The High Court has held that publishing a link to copyright infringing material, rather than actually hosting it, does not amount to an infringement;21 this approach was confirmed by the Court of Justice of the European Union.22 In 2014, a new intellectual property framework included exceptions for making personal copies of protected work for private use, as well as for “parody, caricature and pastiche.”23 Copyright-related blocking has been criticized for its inefficiency and lack of transparency.24 In 2014, after lobbying from the London-based Open Rights Group, BT, Sky, and Virgin Media began informing visitors to sites blocked by court order that the order can be appealed at the High Court.25

Mobile service providers also block URLs identified by the IWF as containing potentially illegal content. However, Mobile UK, an industry group which consists of Vodafone, Three, EE, and O2,26 introduced additional filtering of content considered unsuitable for children in a code of practice published in 2004 and updated in 2013.27

These child filters are enabled by default in mobile internet browsers, though users can disable them by verifying they are over 18. Mobile Virtual Network Operators are believed to “inherit the parent service's filtering infrastructure, though they can choose whether to make this available to their customers.”28 Transparency about what content is affected depends on the provider. O2 allows its users to check how a particular site has been classified.29

The filtering is based on a classification framework for mobile content published by the BBFC.30 Definitions of content the BBFC considers suitable for adults only include “the promotion, glamorization or encouragement of the misuse of illegal drugs;” “sex education and advice which is aimed at adults;” and “discriminatory language or behavior which is frequent and/or aggressive, and/or accompanied by violence and not condemned,” among others. The BBFC adjudicates appeals from content owners about overblocking and publishes the results quarterly.31
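The default-on filtering described above reduces to a simple decision rule: content classified as adults-only under the BBFC framework is withheld unless the subscriber has verified their age. The sketch below illustrates that rule with hypothetical category names; real providers’ classification schemes are more elaborate.

```python
# Illustrative decision rule for default-on mobile filtering, assuming a
# BBFC-style adults-only classification and a per-subscriber age-verification
# flag. Category names are hypothetical placeholders.

ADULT_CATEGORIES = {"pornography", "drugs-promotion", "discriminatory-content"}

def may_serve(site_category: str, subscriber_age_verified: bool) -> bool:
    if site_category not in ADULT_CATEGORIES:
        return True                      # content suitable for all ages
    return subscriber_age_verified       # adults who opted out see everything

# Filters are on by default: a new subscriber starts unverified.
assert may_serve("news", subscriber_age_verified=False)
assert not may_serve("pornography", subscriber_age_verified=False)
assert may_serve("pornography", subscriber_age_verified=True)
```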

The four largest ISPs, BT, Sky, Virgin Media and TalkTalk, offer all customers the choice to activate similar filters to protect children under categories that vary by provider, but can include social networking, games, and sex education.32 Website owners can check whether their site is filtered under one or more category, or report overblocking, by emailing the industry-backed nonprofit group Internet Matters,33 though the process and timeframe for correcting mistakes varies by provider.

These optional filters can affect a range of legitimate content about public health, homosexuality, drug awareness, and even information published by civil society groups and political parties. In 2012, O2 customers were temporarily unable to access the website of the right-wing nationalist British National Party.34 Civil society groups also have criticized the subjectivity of the content selected for filtering. A 2014 magazine article noted that all ISPs had blocked dating sites with the exception of Virgin Media, which operates one.35 During the coverage period of this report, an Ofcom report said that ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”36 Transparency about the process remains lacking. In August 2015, when a watchmaking business complained to BT that their company website was blocked by its Parental Control software, the provider responded that the process had been outsourced to “an expert third party,” and that BT was “not involved.”37

Blocked!, a site operated by the Open Rights Group, allows users to test the accessibility of websites and report overblocking of content by both home broadband and mobile internet providers.38 In mid-2016, the website listed 11,715 sites blocked by default filters, meaning a user would have to proactively disable the filter in order to view the content affected. A further 21,239 sites were blocked by filters which users enable by choice. By early 2017, these figures had changed to 20,390 sites blocked by strict filters and 10,558 by default filters.
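A tool like Blocked! essentially probes URLs from within each provider’s network and compares the results. The sketch below shows the core of such a probe under simplifying assumptions; the Open Rights Group’s actual implementation may differ, and reliably distinguishing filtering from ordinary outages requires comparing against an unfiltered control connection.

```python
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL from the current network and classify the outcome.
    A real probe would run on volunteer devices inside each ISP's network
    and compare results against an unfiltered control connection."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"ok ({resp.status})"
    except urllib.error.HTTPError as e:
        # Some filters return 403 or redirect to a block page.
        return f"http-error ({e.code}) - possible block page"
    except (urllib.error.URLError, TimeoutError) as e:
        # DNS failure, connection reset, or timeout can all indicate filtering.
        return f"unreachable ({e}) - possible network-level block"

print(probe("https://example.com"))
```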

Content Removal

Content in different categories, including extremism and hate speech, may be subject to removal in the UK, though authorities often struggle to enforce the relevant laws.

During the coverage period, the government accused social media platforms of not doing enough to combat hate speech.39 In one example revealed during a trial for terrorism offenses committed online, police said their attempts to remove content published by the defendant from Twitter and YouTube had repeatedly failed, as the authorities had no powers to compel these platforms to remove the content.40 The Home Office published guidance for tackling hate speech, which included more efforts to monitor and regulate online activities.41 This should be viewed within the context of the IP Act (see “Surveillance, Privacy, and Anonymity”), which empowers law enforcement agencies to require telecommunication service providers to retain communications data about users.

Different regulations affect content removal. Material blacklisted by the IWF because it constitutes a criminal offense (see “Blocking and Filtering”) can also be subject to removal. When the content in question is hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have it taken down. For content that is hosted on servers overseas, the IWF coordinates with international hotlines and police to get the offending content taken down in the host country. Similar processes are in place for the investigation of online materials inciting hatred under the oversight of TrueVision, a site that is managed by the police.42

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful for terrorists, or incites people to carry out or support terrorism. A Counter Terrorism Internet Referral Unit (CTIRU) was set up in 2010 to investigate internet materials and take down instances of “jihadist propaganda.”43 The CTIRU compiles lists of URLs hosting such material outside its jurisdictions, which are then passed on to service providers for voluntary filtering (see “Blocking and Filtering”). In June 2015, then-Home Secretary Theresa May said the unit was taking down “about 1,000 pieces of terrorist-related material per week.”44

Website owners and companies who knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users, according to EU Directive 2000/31/EC (the E-Commerce Directive).45 Subsequent updates to the Defamation Act, effective since 2014, limited companies’ liability for user-generated content that is considered defamatory. However, the Defamation Act protects website operators from private libel suits based on third-party postings only if the victim alleging defamation can find the user responsible.46 The act does not specify what sort of information the website operator must provide to plaintiffs, raising concerns that unauthenticated or anonymous internet use may prevent the operator from benefiting from the act’s liability protections, thus encouraging website operators to register users to avoid civil liability.47

In May 2014, the European Court of Justice gave search engines the task of removing links from their search results at the request of individuals if the stories in question were deemed to be inadequate or irrelevant. The so-called “right to be forgotten” ruling has had an impact on the way content is handled in the UK. By July 2017, Google reported receiving 70,498 requests involving the UK for the removal of 272,570 URLs from its search results, and had complied in 39 percent of cases.48 The BBC publishes regular lists of its news stories which have been delisted by search engines.49 In 2016, Google expanded the right to be forgotten by removing links from all versions of its search engine.50 Despite the UK ending its EU membership, the government and the data protection regulator, the Information Commissioner’s Office (ICO), have committed to implementing EU guidance on data protection, the General Data Protection Regulation,51 which comes into force in May 2018. Under this guidance, the right to be forgotten will continue to apply in the UK.

Media, Diversity and Content Manipulation

Self-censorship is difficult to measure in the UK, but not a grave concern. After the January 2015 attack on the French publication Charlie Hebdo, some news outlets refrained from publishing the magazine’s controversial cartoons of the prophet Muhammad,52 but the decision was neither influenced nor mandated by the government.

Due to the UK’s extensive surveillance practices (see “Surveillance, Privacy and Anonymity”), it is possible that certain online groups self-censor to avoid potential government interference. Media and civil society groups filed legal challenges after former National Security Agency (NSA) contractor Edward Snowden made public the surveillance practices of the Government Communications Headquarters (GCHQ), indicating heightened concern about the privacy of their communications. In September 2014, the London-based Bureau of Investigative Journalism filed an application with the European Court of Human Rights to rule on whether UK legislation properly protects journalists’ sources and communications from government scrutiny and mass surveillance.53 In January 2015, the European Court of Human Rights prioritized the case,54 but in mid-2017 it remained pending.

There is no evidence of widespread government manipulation of online content, though a secretive unit of GCHQ, the Joint Threat Research Intelligence Group, is reported to have pseudonymously created content and social media accounts as part of an online propaganda strategy designed to “discredit, promote distrust, dissuade, deter, delay, or disrupt” targets, among other goals. The unit’s operations were publicized in a 2011 document leaked by Snowden, and apparently targeted individuals or specific organizations “who pose criminal, security, and defense threats” rather than a general readership. It’s not clear if the unit remains active.55

There were allegations that media content made available on social networks was manipulated around the 2016 referendum and the 2017 election, adding to the polarization of political discourse online. The main beneficiary of such activities was not immediately clear.

In the lead-up to the June 2016 referendum on UK membership of the European Union, the political discourse was largely conducted online. Both sides of the referendum had their messages artificially amplified by social media bots, or automated accounts.56 But hashtags associated with the leave side dominated on Twitter, with research demonstrating that bots played a “small but strategic” role.57 Quantitative analysis of other social media sites found more posts sympathetic to the leave campaign;58 the same was found in independent research on Instagram users.59 Racially motivated online abuse was also documented around the Brexit vote (see “Digital Activism” and “Intimidation and Violence”).

In May 2017, Facebook reported it had removed tens of thousands of fake accounts to limit the impact of deliberately misleading information disguised to look like news reports, which spread online prior to the June election.60 It was not clear whether the actors circulating these fake reports had a coherent agenda, or how large their influence was. One group accused Facebook and Twitter of failing to curb disinformation depicting Muslims and migrants in a bad light.61 The government initiated a parliamentary inquiry on fake news during the reporting period, but the inquiry lapsed when parliament was dissolved for the general election, before it had published any findings.62

Online media outlets face economic constraints that negatively impact their financial sustainability, but these are due to market forces, not political intervention. Publications have struggled to find a profitable system for digital platforms, though more than half the population report consuming news online. Diverse views are present online, but may not be widely read. In 2014, 59 percent of people said they obtain news from the BBC website or app, 18 percent through Google, and 17 percent on Facebook.63

The UK lacks explicit protections for net neutrality, the principle that ISPs should not throttle, block, or otherwise discriminate against internet traffic based on content. Ofcom called for a self-regulatory approach to the issue in 2011,64 describing the blocking of services and sites by ISPs as “highly undesirable” but subject to self-correction based on market forces.65 Developments at the EU level could have an impact on net neutrality provisions in the UK, after agreement was reached to ban paid prioritization—content owners paying ISPs to deliver their content first—across the EU as part of the Digital Single Market policy package, which seeks to strengthen the digital economy through increased support and access.66 As the United Kingdom is ending its membership of the European Union, it remains to be seen whether the government will adopt another position or maintain its current approach.

Digital Activism

Online political mobilization continues to grow both in terms of numbers of participants and numbers of campaigns, and some groups used digital tools to document and combat racist abuse during the reporting period, including TellMAMA (Measuring Anti-Muslim Attacks), a group which tracked reports of attacks or abuse submitted by British Muslims online.67

Petition and advocacy platforms such as 38 Degrees and AVAAZ continue to grow, and civil society organizations view online communication as an indispensable part of a wider campaign strategy, though the efficacy of online mobilization remains subject to debate, and it is generally impossible to attribute a campaign’s success to online activity alone.

C Violations of User Rights

The government has placed significant emphasis on stopping the dissemination of terrorist and hate speech online and on protecting individuals from targeted harassment on social media. User rights are undermined by extensive surveillance measures used by the government to monitor the flow of information for law enforcement and foreign intelligence purposes. These were expanded upon in the Investigatory Powers Act that passed during the reporting period. Technical attacks also exposed vulnerabilities in public infrastructure.

Legal Environment

The UK does not have a written constitution or other omnibus legislation detailing the scope of governmental power and individual rights. Instead, these constitutional powers and individual rights are encapsulated in various statutes and common law. The provisions of the European Convention on Human Rights (ECHR) were adopted into law via the Human Rights Act 1998. In 2014, Conservative Party officials announced intentions to repeal the Human Rights Act in favor of a UK Bill of Rights, in order to give British courts more control over the application of human rights principles.1 During the 2017 election campaign, Prime Minister Theresa May initially scaled back those ambitions.2 However, in June 2017, she reopened the possibility of significantly amending human rights legislation in order to more aggressively target terrorists in light of high-profile attacks in Manchester and London.3

The UK has stringent hate speech offenses encapsulated in a number of laws (see Table 1). Some rights groups say they are too broadly worded. Defining what constitutes an offense has been made more difficult by the development of communications platforms, and prosecutions are becoming more common (see “Prosecutions and Detentions for Online Activities”).

Table 1: List of Legislation Regarding Offensive Speech


The Crown Prosecution Service (CPS) publishes specific guidelines for the prosecution of crimes “committed by the sending of a communication via social media.”8 Updates in 2014 put digital harassment offenses committed with the intent to coerce the victims into sexual activity under the Sexual Offences Act 2003, which carries a maximum of 14 years in prison.9 Revised guidelines issued in March 2016 identified four categories of communications subject to possible prosecution: credible threats; communications targeting specific individuals; breach of court orders; and grossly offensive, false, obscene, or indecent communications.10 They also advised prosecutors to consider the age and maturity of the poster. Some observers said this could criminalize the creation of pseudonymous accounts, although only in conjunction with activity considered abusive.11 In October 2016, the CPS updated its guidelines to cover more abusive online behaviors, including organized harassment campaigns or “mobbing,” and doxxing, the deliberate publication of personal information online without permission to facilitate harassment.12

The Copyright, Designs, and Patents Act 1988 carries a maximum two-year prison sentence for offenses committed online. In July 2015, the government held a public consultation regarding a proposal to increase the sentence to 10 years. Of the 1,011 responses, only 21 supported the proposal,13 but a 2016 government consultation paper announced plans to submit an amendment to include the 10-year maximum sentence to parliament “at the earliest available legislative opportunity,”14 and it was incorporated into law with the passage of the Digital Economy Act 2017.

Libel laws that tended to favor the plaintiff had previously led to a large number of libel suits with only tenuous connection to the UK being brought in its courts, a phenomenon known as “libel tourism.” This has had a chilling effect on free speech in the UK, which the Defamation Act 2013 intended to reduce. Sections which took effect in January 2014 require claimants to prove that England and Wales is the most appropriate forum for the action, set a serious harm threshold for claims, and codify certain defenses such as truth and honest opinion. The overall number of defamation cases in the UK had fallen by 40 percent in 2015, according to the latest available data.15

Prosecutions and Detentions for Online Activities

Police frequently arrest individuals for posts promoting terrorism, issuing threats, or containing racist abuse, and have been accused of overreaching in the past. However, jail sentences for speech that is protected under international human rights norms remain rare. Criminal charges publicized in the past year involved violent threats; some are included here for reference, not because free speech advocates in the UK have challenged the prosecutions.

Guidelines clarifying the scope of offenses involving digital communications may be helping to cut down on the more egregious speech-related prosecutions observed in the past (see “Legal Environment”). But the scale of prosecutions remains a concern. According to a Freedom of Information (FOI) request in October 2014,16 12,000 people were prosecuted for offensive speech on social media between 2008 and 2013. Another FOI request made to the Metropolitan Police in London revealed 3,669 arrests for online communications were made in the city between 2010 and 2015.17 There remains scope for local police departments to pursue complaints that many democracies would view as civil cases. In early 2016, for example, police in Scotland detained 28-year-old Markus Meechan overnight after he uploaded to YouTube, as a prank, a video of himself teaching his girlfriend’s dog to perform a Nazi salute.18 The trial was set for the end of 2017.

Other cases involve terrorism offenses. In September 2016, for example, an extremist Muslim cleric was sentenced to five and a half years out of a maximum ten years in prison for urging support of the Islamic State militant group in YouTube videos and other social media posts. Addressing the defendant, the judge said “you knowingly crossed the line between the legitimate expression of your own views and the criminal act of inviting support for an organisation which was at the time engaged in appalling acts of terrorism,” according to news reports.19

Other criminal cases publicized in the past year involved threats of violence:

  • In December 2016, police in East London arrested a man in relation to a post on Twitter asking someone to “Jo Cox” Anna Soubry MP. Cox, also a member of Parliament, was murdered in June 2016, during the Brexit campaign.20 The man was given a suspended jail sentence of 10 weeks under section 127 of the Communications Act in June 2017.21
  • In March 2017, police charged 50-year-old Rhodri Philipps on three counts of sending racially aggravated malicious communications. He had been arrested in January and released on bail. News reports said he was accused of publishing “menacing” Facebook posts, including one offering anyone GBP 5,000 (US$ 6,560) to run over Gina Miller, the principal claimant in a legal case which led to a Supreme Court ruling that the government must seek approval from parliament before starting the process for the UK to leave the EU. He pleaded not guilty in May, and the charges were pending at the end of the coverage period.22 Another man arrested in a separate case involving online threats against Miller was released without charge in December 2016. Police also issued eight cease and desist orders warning other individuals to stop threatening Miller or face police action.23 In the wake of the Supreme Court’s ruling, some social media users called for Miller to be hung, hunted, and shot.24

Personal slurs, on the other hand, were punished with community orders:

  • A father in Essex was required to complete unpaid community service under a one-year community order for calling staff of his son’s school “child abusers” on Facebook; he was barred from posting on the school’s Facebook page under a restraining order in June 2016.25
  • In September 2016, a man in Sunderland was sentenced to an 18-month community order, including hours of rehabilitation activities and supervision, for two offenses—posting a “gross racial slur” on Facebook about a nurse who was treating him in hospital, and an unrelated assault.26

Other arrests for online content are periodically reported:

  • In June 2017, just outside the coverage period of this report, a man and a woman were arrested on suspicion of inciting racial hatred in connection with online videos depicting a man burning a Koran.27
  • Also in June, police in London arrested a man on suspicion of sending malicious communications and obstructing a coroner. News reports said he had posted pictures of the body of one of the victims of a fatal fire in a west London tower block on social media.28

A high-profile libel verdict was issued during the reporting period in what was the first prominent case of defamation on Twitter in the UK, and the media personalities involved attracted intense scrutiny. Jack Monroe, a food blogger and social activist, successfully sued columnist Katie Hopkins for the latter’s insinuation on Twitter that Monroe had defaced or condoned the defacement of a World War II memorial during May 2015 protests against the Conservative Party general election victory. Hopkins had confused Monroe with another commentator who defended the protesters’ actions,29 and Monroe filed suit after Hopkins refused her offer to accept a public apology along with a GBP 5,000 (US$ 6,600) donation to charity. The judge held that two tweets by Hopkins contained “meanings with a defamatory tendency which were published to thousands,” causing Monroe “substantial distress, but also harm to her reputation which was serious, albeit not ‘very serious’ or ‘grave.’” He ordered Hopkins to pay GBP 24,000 (US$ 31,700) in damages and an estimated GBP 107,000 (US$ 141,400) in costs.30

Surveillance, Privacy, and Anonymity

The Investigatory Powers Act 2016 (IP Act) was signed into law during the reporting period, even while provisions it was based on were ruled unlawful. It attracted criticism from a wide range of political perspectives.

The new law authorizes law enforcement and intelligence agencies’ surveillance powers in a single omnibus act.31 Surveillance became a major point of contention in the UK following revelations about the mass, or bulk, surveillance activities of GCHQ and its international counterparts in documents leaked by Edward Snowden and published in the Guardian and other outlets since June 2013. Bulk surveillance poses a challenge to civil rights because it affects individuals who are not considered of interest to intelligence and security services, without those individuals being informed about it. Subsequent independent reviews of law enforcement and intelligence agencies’ investigatory powers found surveillance regulation in need of reform. But Guardian journalist Ewen MacAskill, who helped publish the Snowden leaks, said the IP Act had introduced “the most sweeping surveillance powers in the western world.”32

The Investigatory Powers Act, which passed on November 29, 2016, covers interception, equipment interference, and data retention, among other areas.33 Equipment interference ranges “from remote access to computers, to downloading covertly the contents of a mobile phone during a search.”34

The act distinguishes between domestic and overseas targets. It specifically enables the bulk interception and bulk acquisition of communications data sent or received by individuals outside the British Isles, as well as bulk equipment interference involving “overseas-related” communications, information, and equipment data defined under Section 176. Communications where both the sender and the receiver are in the United Kingdom are subject to targeted warrants, though several individuals, groups, or organizations may be covered under a single warrant in connection with a single investigation.

However, the internet’s distributed architecture renders privacy protections based on the physical location of the subject of interception highly porous. Communications exchanged within the UK may be rerouted overseas, a fact that intelligence agencies have exploited in secret to conduct bulk surveillance programs like Tempora (see below).

Part 7 of the IP Act introduces warrant requirements for intelligence agencies to retain or examine “personal data relating to a number of individuals” where “the majority of the individuals are not, and are unlikely to become, of interest to the intelligence service in the exercise of its functions.”35 Datasets may be “acquired using investigatory powers, from other public sector bodies or commercially from the private sector.”36 Time limits for the initial examination of bulk datasets are set at three months “where the set of information was created in the United Kingdom” and six months otherwise (Section 220).

The IP Act establishes a new commissioner appointed by the prime minister to oversee investigatory powers under Section 227. Lord Justice Fulford, an appeal court judge, was appointed to the role in March 2017.37 The law also includes some other safeguards, like “double-lock” interception warrants. These require approval from the Secretary of State (meaning the Home Secretary in security and terrorism investigations) or the Scottish Ministers in Scottish cases. The warrants must then be independently approved by a judge, although the Secretary alone approves urgent warrants. Under Section 32, urgent warrants last five days; others expire after six months unless renewed under the same double-lock procedure. The act allows authorities to prohibit telecommunications providers from disclosing the existence of a warrant.

Intercepting authorities authorized to apply for targeted warrants include police commissioners, intelligence service heads, and revenue and customs commissioners.38 Applications for bulk interception, bulk equipment interference, and bulk personal dataset warrants can only be made to the Secretary of State “on behalf of the head of an intelligence service by a person holding office under the Crown” and must be reviewed by a judge.

Provisions under Part 3 of the act allow the Secretary of State to issue data retention notices requiring telecommunications providers to capture information about user activity, including browser history, and retain it for up to 12 months. Providers of front-end communications platforms and cybercafe operators could also be required to comply. DRIPA, the law this requirement is modelled on, has been ruled unlawful in the UK and the EU (see below). The law defines the telecommunications operators who comply with warrants and requests as anyone who “offers or provides a telecommunications service to persons in the United Kingdom.”

These records, or metadata, can reveal anything about a communication except its actual content,39 and a range of bodies can access them. Any “relevant public authorities” may request communications data with the approval of a designated senior official of a relevant public authority. This appears to cover practically any public body, though Section 62 attaches conditions to requests for internet-specific connection records, like limiting them to investigations of crimes punishable by more than one year in prison.40 Local authorities must also obtain a magistrate’s approval. Applications to view data “for the purpose of identifying or confirming a source of journalistic information” are explicitly allowed, with judicial review, under Section 77.
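The distinction between communications data and content can be made concrete with an illustrative record. The field names below are hypothetical, not a schema defined by the IP Act; the point is that a retained record captures who communicated with which service, when, and how much, while the body of the communication is excluded.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConnectionRecord:
    """Illustrative shape of a retained internet connection record: who
    connected to which service, when, and from where. Field names are
    hypothetical, not a schema defined by the IP Act."""
    subscriber_id: str
    source_ip: str
    destination_host: str     # e.g. "example.com" - the service visited
    destination_port: int
    started_at: datetime
    bytes_transferred: int
    # Deliberately absent: page paths, search terms, message bodies -
    # i.e. the *content* of the communication, which is not metadata.

record = ConnectionRecord(
    subscriber_id="subscriber-0001",
    source_ip="203.0.113.20",
    destination_host="example.com",
    destination_port=443,
    started_at=datetime(2017, 3, 1, 9, 30),
    bytes_transferred=52_113,
)
```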

Public authorities already access communications data with some frequency. The 2012 Protection of Freedoms Act also requires local authorities to obtain the approval of a magistrate to access communications data.41 In 2015, 761,702 items of communications data were acquired by public authorities, according to the Interception of Communications Commissioner, who acts as a reviewer and ombudsman for surveillance and data collection. Of these data, about 94 percent were acquired by police and a little over 5 percent by intelligence services. The remaining 0.5 percent of requests were made by other public bodies such as local authorities.42 About half the data requested was subscriber information.

But Sections 67-69 of the IP Act added a “request filter” maintained by the Home Office to the existing process for accessing communications data, which the government characterized as a safeguard to minimize access to “irrelevant data.”43 The Open Rights Group said the filter would automate cross-referencing of complex data sets, pointing out that Parliament had described a similar provision in an earlier bill as “essentially a federated database of all UK citizens’ communications data.”44

In one problematic provision, the IP Act enables the government to order companies to decrypt content, though how far companies will be willing or able to comply remains unclear.45 Under Section 253, technical capability notices could be used to impose obligations on telecommunications operators both inside and outside the country “relating to the removal…of electronic protection applied by or on behalf of that operator to any communications or data,” among other requirements. The approval process for issuing a technical capability notice is similar to that of an interception warrant.46 Further regulations governing the notices were under consultation in mid-2017.47

David Anderson, an independent expert appointed by the home secretary to evaluate the operation of counterterrorism law,48 said the IP Act had “world-leading” oversight features, though he characterized the double-lock procedure as cumbersome, and recommended that the government publish its own interpretation of technical concepts within the act.49

In general, however, the IP Act has been subject to criticism from industry, civil rights groups, and the wider public. Multiple stakeholders had taken issue when it was still a draft bill. Apple argued that weakening encryption would weaken individual security.50 More than 200 lawyers called the bill “not fit for purpose.”51 The United Nations’ Special Rapporteur for Privacy, Joseph Cannataci, recommended that “disproportionate, privacy-intrusive measures such as bulk surveillance and bulk hacking as contemplated in the Investigatory Powers Bill be outlawed.”52 Criticisms continued to be reported in the media after the bill became law, particularly regarding the range of powers authorized and the legalization of bulk surveillance.53

Bulk surveillance is a particular issue in the UK context because intelligence agencies developed secret bulk programs under other laws that bypassed oversight mechanisms and means of redress for affected individuals. These programs have affected an untold number of people within the UK, even if they were meant to have only foreign targets.

Tempora, a secret surveillance project documented in the Snowden leaks, is one example. A number of other legislative measures authorize surveillance,54 including the Regulation of Investigatory Powers Act 2000 (RIPA).55 (RIPA was not repealed by the IP Act, though many of RIPA’s competences are now transferred to the newer legislation.) A clause within Part I allowing the foreign or home secretary to sign off on bulk surveillance of communications data arriving from or departing to foreign soil provided the legal basis for Tempora.56 Since the UK’s fiber-optic network often routes domestic traffic through international cables, this provision legitimized widespread surveillance over UK citizens.57 Working with telecom companies, GCHQ installed intercept probes at the British landing points of undersea fiber-optic cables, giving the agency direct access to data carried by hundreds of cables, including private calls and messages.58 The arrangement allowed GCHQ to pass on information to its US counterparts in the NSA regarding U.S. citizens, thereby bypassing American restrictions on domestic surveillance.59 Documents leaked by Snowden and published by The Intercept in 2015 revealed that systems set up to process that information in the UK included an operation designed to record the website browsing habits of “every visible user on the internet.”60

A government tribunal has ruled that sharing of information intercepted from internet communications between GCHQ and the NSA was lawful after some of the procedures were publicly disclosed, but that the activity violated European human rights standards prior to that public disclosure, between 2007 and 2014.61 The Investigatory Powers Tribunal was established under RIPA to adjudicate issues regarding government surveillance. It also found procedural irregularities in the retention of communications intercepted from Amnesty International and the South Africa-based Legal Resources Center, though it found that the interception itself was lawful.62 In early 2016, the Tribunal ruled that computer network exploitation carried out by GCHQ was in principle lawful within the limitations in the European Convention of Human Rights.63 The tribunal also noted that network exploitation is legal if the warrant is as specific and narrow as possible.

Other issues relating to bulk surveillance were still being adjudicated during the reporting period. In July 2016, the Investigatory Powers Tribunal gave a partial ruling that bulk data collection by Britain’s three intelligence agencies, GCHQ, MI5, and MI6, was unlawful from March 1998 until the practice was avowed in November 2015.64 That practice had been authorized under Section 94 of the Telecommunications Act 1984, which the Interception of Communications Commissioner described in June 2016 as lacking “any provision for independent oversight or any requirements for the keeping of records.”65 The Tribunal also said that the use of bulk personal datasets by GCHQ and MI5, commencing from 2006, was likewise unlawful until avowed in March 2015. The datasets contained personal information that could include financial, health, and travel information as well as communications details.66 A hearing on the legality of foreign access to such information was still pending in mid-2017.

In December 2016, a European court separately ruled against one of the laws which preceded the IP Act. The government passed the temporary UK Data Retention and Investigatory Powers Act (DRIPA) in July 2014, requiring telecommunication companies to retain users’ metadata for up to 12 months.67 The legislation was a hurried response to a ruling by the Court of Justice of the European Union (CJEU) which struck down a European data retention directive68 requiring providers to retain user metadata for 18 months.69 DRIPA’s scheduled expiration at the end of 2016 was part of the impetus for the swift passage of the IP Act, which mirrors many of the powers encapsulated by DRIPA and in certain instances goes further than its predecessor.

Those powers had already been ruled overreaching by the UK courts.70 In 2015, the High Court found that Sections 1 and 2 of DRIPA were unlawful, as they failed to provide clear rules ensuring that data would be accessed only for the purpose of investigating serious offenses, and that access would be authorized by a court or other independent body.71 The government appealed the ruling, and the Court of Appeal referred to the CJEU for clarification.72

On December 21, 2016,73 the CJEU held that DRIPA “exceeds the limit of what is strictly necessary and cannot be considered to be justified, within a democratic society.”74 The court stated unequivocally that indiscriminate mass surveillance contravenes EU law, especially the European Charter of Fundamental Rights.75 It remains to be seen how the judiciary will use the judgment of the CJEU as the UK negotiates its exit from the EU.

Intimidation and Violence

There were no reported incidents of violence against internet users for online activities over the coverage period, though cyberbullying, particularly targeting women, is widespread.76 Many threats were reported in the past year, including some with political implications; some perpetrators were prosecuted (see “Prosecutions and Detentions for Online Activities”).

One study reported an increase in abusive comments targeting politicians on Twitter, peaking on the day of the EU referendum.77 News reports said hate crime against minorities increased after the vote to leave, an increase driven in part by campaigns that depicted immigration as a threat to the British way of life. One analysis of cyberbullying in different parts of the UK found that regions with high levels of online hate speech or racial intolerance did not necessarily vote with the Leave campaign, and said other issues were also driving the trend.78

Technical Attacks

Nongovernmental organizations, media outlets, and activists are not generally targeted for technical attacks by government or nonstate actors. Financially motivated fraud and hacking continue to present a challenge to authorities and the private sector. Incidents of cyberattacks have increased in recent years. Observers also question the security of devices connected to the network, known as the Internet of Things.79

One technical attack affecting public infrastructure had a significant impact on citizens. In May 2017, the National Health Service (NHS) suffered a ransomware attack affecting 40 of its organizations, effectively barring workers from patient case files.80 The ransomware encrypts a device, making any files that are not backed up unavailable, and demands payment to restore access. In this case, the attackers demanded GBP 233 (US$ 300) per infected machine, with the price doubling after three days and all files being lost after seven.81 The ransomware did not specifically target the NHS, but exploited a vulnerability in Microsoft’s implementation of the Server Message Block (SMB) protocol, which manages communications within a network. The attack had severe consequences, with delays and disruption to NHS services denying essential services to vulnerable individuals.82
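WannaCry spread between machines whose SMB service was reachable and unpatched. As a hedged illustration of the exposure involved, the sketch below simply checks whether a host accepts connections on TCP port 445, the SMB port; it says nothing about whether the specific flaw was present or patched, and the addresses are placeholders.

```python
import socket

def smb_port_open(host: str, timeout: float = 3.0) -> bool:
    """Return True if TCP port 445 (SMB) accepts connections on `host`.
    This only shows the service is reachable - it does not test for the
    specific vulnerability WannaCry exploited."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.0.2.10", "192.0.2.11"]:   # placeholder addresses
    status = "exposed" if smb_port_open(host) else "closed/filtered"
    print(f"{host}: port 445 {status}")
```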
