United Kingdom

A Obstacles to Access: 23 / 25
B Limits on Content: 30 / 35
C Violations of User Rights: 24 / 40
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Key Developments, June 2015–May 2016

  • The Investigatory Powers Bill was introduced in March 2016 to consolidate and reform government surveillance laws, but critics said it lacked adequate privacy safeguards; it was still being debated in parliament in mid-2016 (see Surveillance, Privacy, and Anonymity).
  • In March 2016, the Crown Prosecution Service in England and Wales issued guidelines for offenses related to social media, particularly online harassment (see Legal Environment).
  • On February 16, 2016, Police Scotland arrested a man who made controversial Facebook posts about Syrian refugees (see Prosecutions and Detentions for Online Activities).

Editor’s Note

On June 23, 2016, outside the coverage period of this report, citizens of the United Kingdom voted to leave the European Union in a closely contested popular referendum. Prime Minister David Cameron resigned as leader of the ruling Conservative Party. He was replaced by Theresa May, who was previously the home secretary, in July.

Introduction

Internet freedom improved slightly, with few reports of political and social websites blocked by mistake, though transparency about content controls remains lacking. Online harassment, extremist speech, and privacy remained priority issues in the United Kingdom’s internet policy in 2015 and 2016.

The UK has consistently been an early adopter of new information and communication technologies (ICTs). Internet access is rapidly approaching universal, with competitive prices and generally fast speeds. Mobile devices, especially smartphones, have become the most prevalent means of internet access.

Strategies to combat extremist as well as offensive speech online periodically threaten to curb legitimate expression. At least two people were briefly detained following derogatory—though nonviolent—social media posts during the coverage period of this report. In February 2016, police were called to a school in Southampton by staff who reported a 15-year-old pupil for accessing the website of the populist right-wing United Kingdom Independence Party, concerned about the site’s views on immigration and other matters.

The past year saw fierce debate regarding surveillance powers. In June 2015, the Investigatory Powers Tribunal identified irregularities in the Government Communications Headquarters (GCHQ) intelligence agency’s handling of communications data intercepted from two civil society groups, Amnesty International and the South Africa-based Legal Resources Centre. The tribunal ruled that those irregularities violated human rights standards, though the interception itself was lawful. In February 2016, in a separate case, the tribunal ruled that GCHQ computer network exploitation or hacking activities were also lawful.

However, an independent report commissioned by the government and released in June 2015 called the existing legislative framework on surveillance “undemocratic, unnecessary and—in the long run—intolerable.” On March 1, 2016, the government introduced the Investigatory Powers Bill to consolidate and reform surveillance laws. The polarizing piece of legislation was criticized for authorizing overreaching surveillance and undermining privacy. In mid-2016, it was still being debated in parliament.

A Obstacles to Access

Access to the internet is considered a key element of societal and democratic participation in the UK.1 ICT infrastructure is generally strong, allowing high levels of access. The overwhelming majority of UK citizens use the internet frequently on a widening variety of devices, particularly smartphones.2 In recent years, substantial government-led investment has improved levels of service for many citizens and businesses. For reasons of cost and digital literacy, however, many people over the age of 75 and in the lowest socioeconomic groups still lack access.3 Policies and regulation in the country tend to favor access, although continuing revelations regarding extensive government surveillance practices may affect how citizens choose to access the internet.

Availability and Ease of Access

Internet penetration was reported at 87 percent, with the share of homes with fixed and mobile broadband at 80 percent.4 At the beginning of 2016, there were 24.4 million fixed broadband connections, representing a 4 percent increase over the previous year.5 The average broadband speed in 2014 was 22.8 megabits per second (Mbps) according to an August 2015 report,6 continuing a trend of rising speeds and growing satisfaction among consumers served by faster fiber-optic based services. Nearly 100 percent of all households are within range of ADSL connections.

While broadband access is effectively ubiquitous, steady progress continues toward the expansion of “superfast” broadband, which has an advertised speed of at least 30 Mbps.7 In 2015, 30 percent of all broadband connections were superfast, compared to 0.2 percent in 2009.8 Funding for a government superfast broadband program, aimed at improving broadband speed and access, expanded to GBP 1.7 billion (US$ 2.62 billion).9 In early 2015, an additional 2,411,395 premises gained access to superfast broadband through the scheme, meaning 80 percent of all UK premises had superfast broadband available, in line with a target of 95 percent by 2017.10 A voucher scheme covering up to GBP 3,000 (US$ 4,440) of installation costs for small and medium enterprises has been in place in 50 British cities since April 2015.11

Mobile telephone penetration is extensive, with a reported penetration rate of 125 percent at the end of 2015.12 The introduction of faster fourth-generation (4G) services in 2012 encouraged video streaming and access to other data services. All national mobile network operators offered 4G mobile communication technology, with outdoor 4G coverage from at least one network accessible in over 89 percent of UK premises.13 In 2016, 66 percent of adults reported a smartphone was their primary device for accessing the internet,14 and reported valuing their smartphone over any other communication or media device;15 indeed the smartphone was identified as the primary device for access in five out of nine online activities.16

The UK provides a competitive market for internet access, and prices for communications services compare favorably with those in other countries, with the scope of services increasing while prices continue to fall.17 The average British household spent GBP 81.30 (US$ 125) per month on telecommunication services in 2014, a decrease of 0.1 percent from 2013.18 The price difference between superfast and standard services in 2014 was between GBP 5 (US$ 7.66) and GBP 10 (US$ 15.31) per month.19 While 4G services were initially more expensive than non-4G services, the difference is shrinking, and in some cases disappearing. The price basket of mobile services fell by 0.4 percent in 2014.20

People in the lowest income groups are significantly less likely to have home internet subscriptions, with the gap between socioeconomic groups remaining unchanged for the past few years. In 2014, only 63 percent of individuals over the age of 65 used the internet, and among those in the lowest socioeconomic group, including unskilled laborers and long-term state dependents, only 64 percent self-described as internet users.21 However, in 2016, internet use in the 65 to 74 age group was found to have increased by nearly 70 percent since 2011.22 Of the 15 percent of adults without household internet access, 12 percent reported having no intention of getting connected.23 There is no general gender gap in internet use, though two-thirds of women over 75 have never used the internet.24

Restrictions on Connectivity

The government does not place limits on the amount of bandwidth ISPs can supply, and the use of internet infrastructure is not subject to direct government control. ISPs regularly engage in traffic shaping or slowdowns of certain services (such as peer-to-peer file sharing and television streaming). Mobile providers have cut back on previously unlimited access packages for smartphones, reportedly because of concerns about network congestion.

ICT Market

The five major internet service providers (ISPs) are British Telecom (BT) with a 32 percent market share, Sky (22 percent), Virgin Media (20 percent), TalkTalk (14 percent), and EE (4 percent).25 Through local loop unbundling, in which communications providers offer services to households using infrastructure provided mainly by BT and Virgin, a wider range of companies provide internet access. Unbundled fixed lines reached 9.6 million homes in 2015, a 0.2 percent increase over the previous year.26 At the time of this report, 95 percent of homes were able to receive unbundled telecommunications services.27

ISPs are not subject to licensing but must comply with general conditions set by the communications regulator, Ofcom, such as having a recognized code of practice and being a member of a recognized alternative dispute-resolution scheme.28

The telecommunications provider EE leads the mobile operator market with some 33 percent of the market, followed by O2 (21 percent), Vodafone (18 percent), Three (10 percent), and Tesco (8.5 percent), according to information from Statista as of June 2015.29 Mobile Virtual Network Operators, including Tesco, provide service using the infrastructure of one of the four network operators.

Regulatory Bodies

Ofcom is the primary regulator, by virtue of the broad definitions of responsibility for “citizens,” “consumers,” and “communications matters” granted to it under the Communications Act 2003.30

In 2012, major ISPs published a “Voluntary Code of Practice in Support of the Open Internet.”31 The code commits ISPs to transparency and confirms that traffic management practices will not be used to target and degrade the services of a competitor. The code was amended in 2013 to clarify that signatories could deploy content filtering or provide such tools where appropriate for public Wi-Fi access.32

In 2013, the domain registrar Nominet reviewed the extent to which the “.uk” domain registration policy should restrict offensive or otherwise inappropriate words or expressions in domain name registrations.33 The Nominet Board agreed to all the recommended changes,34 which included a post-registration domain name screening to suspend or remove domain names that encourage serious sexual offenses.35

Other groups regulate content through voluntary ethical codes and co-regulatory rules under independent oversight. The Internet Watch Foundation (IWF), an independent self-regulatory body funded by the European Union (EU) and industry bodies, manages criminal online content (see Blocking and Filtering).36 The Video On Demand Association, a private self-regulatory body, had previously regulated video content in keeping with the EU Audiovisual Media Services Directive; this function has been taken over by Ofcom.37 The Advertising Standards Authority and the Independent Press Standards Organization regulate newspaper websites. With the exception of child abuse content, these bodies eschew pre-publication censorship and operate post-publication notice and takedown procedures within the E-Commerce Directive liability framework (see Content Removal).

B Limits on Content

Various categories of criminal content such as depictions of child sexual abuse, promotion of extremism and terrorism, and copyright infringing materials are blocked by UK ISPs. Parental controls over content considered unsuitable for children are enabled by default on mobile networks, requiring adults to opt out to access adult material. These measures can result in overblocking, and a lack of transparency persists regarding the processes involved and the kind of content affected.

Blocking and Filtering

Service providers block and filter some illegal and some legal content in the UK, with varying degrees of transparency. Illegal content falls into three categories. First, ISPs block potentially illegal content depicting child sexual abuse. Second, overseas-based URLs hosting content that police report as glorifying or promoting terrorism in violation of the Terrorism Act 2006 are included in the child filters supplied by many ISPs, and are inaccessible in schools, libraries, and other facilities considered part of the “public estate.” The list of sites in these two categories is kept from the public to prevent access to unlawful materials. Finally, ISPs are also required to block domains and URLs found to be hosting material that infringes copyright when ordered to do so by the High Court. Those orders are not kept from the public, but can be hard to obtain.1

Separately, all mobile service providers and some ISPs providing home service filter legal content considered unsuitable for children. Mobile service providers enable these filters by default, requiring customers to prove they are over 18 to access the unfiltered internet. In 2013, the four largest ISPs agreed with the government to present all customers with an “unavoidable choice” about whether to enable parentally controlled filters.2

Civil society groups say those filters lack transparency and affect too much legitimate content, making it hard for consumers to make informed choices and for content owners to appeal.

ISPs block URLs using content filtering technology known as Cleanfeed, which was developed by BT in 2004.3 In 2011, a judge described Cleanfeed as “a hybrid system of IP address blocking and DPI-based URL blocking which operates as a two-stage mechanism to filter specific internet traffic.” While the process involves deep packet inspection (DPI), a granular method of monitoring traffic that enables ISPs to block individual URLs rather than entire domains, it does not enable “detailed, invasive analysis of the contents of a data packet,” according to the judge’s description. Other, similar systems adopted by ISPs besides BT are also “frequently referred to as Cleanfeed,” the judge wrote.4

ISPs are notified about websites hosting content that has been determined to break, or potentially break, UK law under different procedures:

  • The Internet Watch Foundation (IWF) compiles a list of specific URLs containing photographic or computer-generated depictions of child sexual abuse or criminally obscene adult content to distribute to ISPs and other industry stakeholders who support the foundation through membership fees.5 ISPs block those URLs in accordance with a voluntary code of practice set forth by the Internet Services Providers’ Association (see Regulatory Bodies). IWF analysts evaluate sites hosting material that potentially violates a range of UK laws,6 in accordance with a Sexual Offences Definitive Guideline published by the Sentencing Council under the Ministry of Justice.7 The IWF recommends that ISPs notify customers why a site is inaccessible,8 but some have returned error messages instead.9 The IWF website allows site owners to appeal their inclusion on the list. Citizens can also report criminal content via a hotline. In 2008, the IWF blacklisted a Wikipedia page displaying an album cover depicting a naked girl based on a complaint submitted by a reader. Other Wikipedia users reported that the block affected their ability to edit the site’s user-generated content,10 and the IWF subsequently removed the page from the list.11 An independent judicial review of the human rights implications of IWF's operations conducted in 2014 said the body’s work was consistent with human rights law.12 The review recommended some improvements, such as restricting its remit to child sexual abuse, and appointing a human rights expert.
  • The police Counter Terrorism Internet Referral Unit compiles a list of URLs hosted overseas containing material considered to glorify or incite terrorism under the Terrorism Act 2006,13 which are filtered on networks of the public estate, such as schools and libraries; they can still be accessed on private computers.14 In 2014, the four largest ISPs, BT, Virgin, Sky, and TalkTalk, said they would also filter this content from children and young internet users.15
  • The UK High Court can order ISPs to block websites found to be infringing copyright under the Copyright, Designs, and Patents Act 1988. The High Court has held that publishing a link to copyright infringing material, rather than actually hosting it, does not amount to an infringement;16 this approach was confirmed by the Court of Justice of the European Union.17 In October 2014, a new intellectual property framework included exceptions for making personal copies of protected work for private use, as well as for “parody, caricature and pastiche.”18 Sections 17 and 18 of the Digital Economy Act (DEA) of 2010 separately allowed the courts to order websites containing “substantial” violations of copyright to be blocked. In August 2011, the government announced that the DEA’s blocking provisions would be dropped, in part because blocking was already authorized under other legislation.19 Copyright-related blocking has been criticized for its inefficiency and lack of transparency. In May 2010, an Ofcom review determined that the practice is unlikely to be effective unless used in conjunction with other measures.20 During the coverage period, the High Court ordered six ISPs to block dozens of sites that copied or mirrored the content available on sites that had been blocked in the past.21 After lobbying from the London-based Open Rights Group, in December 2014 BT, Sky, and Virgin Media began informing visitors to sites blocked by court order that the order can be appealed with the High Court.22

Mobile service providers also block URLs identified by the IWF as containing potentially illegal content. However, Mobile UK (formerly the Mobile Broadband Group), an industry group which consists of Vodafone, Three, EE, and O2,23 introduced additional filtering of content considered unsuitable for children in a code of practice published in 2004 and last updated in July 2013.24 These child filters are enabled by default in mobile internet browsers, though users can disable them by verifying they are over 18. Mobile Virtual Network Operators are believed to “inherit the parent service's filtering infrastructure, though they can choose whether to make this available to their customers.”25 Transparency about what content is affected depends on the provider. O2 allows its users to check how a particular site has been classified.26

The filtering is based on a classification framework for mobile content published by the British Board of Film Classification (BBFC).27 Definitions of content the BBFC considers suitable for adults only include “the promotion, glamorization or encouragement of the misuse of illegal drugs;” “sex education and advice which is aimed at adults;” and “discriminatory language or behavior which is frequent and/or aggressive, and/or accompanied by violence and not condemned,” among others. The BBFC adjudicates appeals from content owners about overblocking and publishes the results quarterly.28

The four largest ISPs, BT, Sky, Virgin Media and TalkTalk, offer all customers the choice to activate similar filters to protect children under categories that vary by provider, but can include social networking, games, and sexual education.29 Website owners can check whether their site is filtered under one or more category, or report overblocking, by emailing the industry-backed nonprofit group Internet Matters,30 though the process and timeframe for correcting mistakes varies by provider.

These optional filters can affect a range of legitimate content, including pages on public health, homosexuality, and drug awareness, as well as sites run by civil society groups and political parties. In 2012, O2 customers were temporarily unable to access the website of the right-wing nationalist British National Party.31 Civil society groups have also criticized the subjectivity of the content selected for filtering. A 2014 magazine article noted that all the ISPs had blocked dating sites with the exception of Virgin Media, which operates one.32 During the coverage period of this report, an Ofcom report said that the ISPs include “proxy sites, whose primary purpose is to bypass filters or increase user anonymity, as part of their standard blocking lists.”33 Transparency about the process remains lacking. In August 2015, when a watchmaking business complained to BT that its website was blocked by the provider’s Parental Control software, BT responded that the process had been outsourced to “an expert third party,” and that BT was “not involved.”34

Blocked!, a site operated by the Open Rights Group, allows users to test the accessibility of websites and report overblocking of content by both home broadband and mobile internet providers.35 In mid-2016, the website listed 11,715 sites blocked by default filters, meaning a user would have to proactively disable the filter in order to view the content affected. A further 21,239 sites were blocked by filters which users enable by choice.

Content Removal

Material blacklisted by the IWF because it constitutes a criminal offense (see Blocking and Filtering) can also be subject to removal. When the content in question is hosted on servers in the UK, the IWF coordinates with police and local hosting companies to have it taken down. For content that is hosted on servers overseas, the IWF coordinates with international hotlines and police authorities to get the offending content taken down in the host country. Similar processes are in place for the investigation of online materials inciting hatred under the oversight of TrueVision, a site that is managed by the police.36

The Terrorism Act calls for the removal of online material hosted in the UK if it “glorifies or praises” terrorism, could be useful in conducting terrorism, or incites people to carry out or support terrorism. A Counter Terrorism Internet Referral Unit (CTIRU) was set up in 2010 to investigate internet materials and take down instances of “jihadist propaganda.”37 The CTIRU compiles lists of URLs hosting such material outside its jurisdiction, which are then passed on to service providers for voluntary filtering (see Blocking and Filtering). In June 2015, Home Secretary Theresa May said the unit was taking down “about 1,000 pieces of terrorist-related material per week.”38

According to EU Directive 2000/31/EC (the E-Commerce Directive), website owners and companies who knowingly host illicit material and fail to remove it may be held liable, even if the content was created by users.39 While that directive applies to libelous content, updates to the Defamation Act, effective since January 1, 2014, provide greater protections for companies by limiting their liability for user-generated content that is considered defamatory.

However, the Defamation Act protects website operators from private libel suits based on third-party postings only if the person alleging defamation can identify the user responsible.40 While the act does not specify what sort of information the website operator must provide to plaintiffs, unauthenticated identity information may be falsified by users, preventing the operator from benefiting from the act’s liability protections; operators are thus effectively pushed to require authenticated identity information or risk civil liability.41

In May 2014, the European Court of Justice ruled that search engines must remove links from their search results at the request of individuals if the content in question is deemed inadequate or irrelevant. The so-called “right to be forgotten” ruling has had an impact on the way content is handled in the UK. By July 2016, Google reported receiving 93,968 requests involving the UK, seeking the removal of 215,066 URLs from its search results, and complied in 39 percent of cases.42 The BBC publishes a regular list of its news stories that have been delisted by search engines.43 In May 2015, news reports said that the UK’s data protection authority, the Information Commissioner’s Office, was in talks with Google over 48 cases that it believed the search engine had not resolved effectively.44

In 2016, Google announced that beginning mid-February, it would expand the right to be forgotten by removing links from all versions of its search engine.45 It had previously removed them only on the local version in the country where the request originated, such as Google.co.uk, leaving them accessible to UK-based users searching international versions like Google.com. The change applies only to users with IP addresses indicating they are located within the jurisdiction of the removal request. The links remain available in searches conducted outside that jurisdiction.

Media, Diversity, and Content Manipulation

Self-censorship is difficult to measure in the United Kingdom, but not a grave concern. After the January 2015 attack on the French publication Charlie Hebdo, some news outlets refrained from publishing the magazine’s controversial cartoons of the prophet Muhammad,46 but the decision was not influenced or mandated by the government.

Due to the UK’s extensive surveillance practices (see Surveillance, Privacy and Anonymity), it is possible that certain online groups self-censor to avoid potential government interference. Media and civil society groups filed legal challenges after Edward Snowden made GCHQ surveillance practices public, indicating heightened concern about the privacy of their communications. In September 2014, the London-based Bureau of Investigative Journalism filed an application with the European Court of Human Rights to rule on whether UK legislation properly protects journalists’ sources and communications from government scrutiny and mass surveillance.47 In January 2015, the European Court of Human Rights prioritized the case,48 but in mid-2016 it remained pending.

There is no evidence documenting government manipulation of online content. Online media outlets face economic constraints that negatively impact their financial sustainability, but these are due to market forces, not political intervention. Publications have struggled to find a profitable system for their online news platforms.

The UK lacks explicit protections for net neutrality, the principle that ISPs should not throttle, block, or otherwise discriminate against internet traffic based on content. Ofcom called for a self-regulatory approach to the issue in 2011,49 describing the blocking of services and sites by ISPs as “highly undesirable” but subject to self-correction based on market forces.50 Developments at the EU level could affect net neutrality provisions in the UK: agreement has been reached to ban paid prioritization, under which content owners pay ISPs to deliver their content first, across the EU as part of the Digital Single Market policy package, which seeks to strengthen the digital economy through increased support and access.51

There are a wide variety of digital news platforms available, with 60 percent of people reporting that they consume news online, and 44 percent reporting that they consume news through apps. Blogs and social media also act as sources of news. Diverse views are present online, but may not be widely read, as 59 percent of people said they obtain news from the BBC website or app, 18 percent through Google, and 17 percent on Facebook.52

Digital Activism

Online political mobilization continues to grow in both the number of participants and the number of campaigns, though the efficacy of online mobilization remains subject to debate, and success cannot be explained by reference to online campaigns alone. Petition and advocacy platforms such as 38 Degrees and Avaaz continued to grow, with Avaaz claiming around 1.6 million registered users in the UK in 2015. Civil society organizations, charities, and political parties now view online communication as an indispensable part of a wider campaign strategy.

In the lead-up to the June 2016 referendum on the UK’s membership of the European Union, political discourse was largely conducted online, in keeping with recent elections. Analysis of various social media sites found that, quantitatively, posts sympathetic to the Leave campaign outnumbered those supporting Remain.53 Independent research on Instagram users reached a similar finding.54

Privacy advocates have also used digital tools to promote transparency about surveillance. In February 2015, the Investigatory Powers Tribunal said that aspects of the way UK and U.S. intelligence agencies shared information intercepted from internet communications between 2007 and 2014 breached human rights law (see Surveillance, Privacy and Anonymity).55 The tribunal is obligated to respond to any individual complaints and reveal if an individual was illegally monitored during that period. If so, the individual can ask that the data be deleted. To facilitate such complaints, Privacy International provided a form on its website which it submits to the Tribunal on behalf of individuals. More than 6,000 people signed up in the first 24 hours after the form was launched in early 2015.56

C Violations of User Rights

The government has placed significant emphasis on stopping the dissemination of terrorist and hate speech online and on protecting individuals from targeted harassment on social media. User rights are undermined by extensive surveillance measures used by the government to monitor the flow of information for law enforcement and foreign intelligence purposes. There were several notable legal changes over the past year in these areas.

Legal Environment

The UK does not have a written constitution or other omnibus legislation detailing the scope of governmental power and individual rights. Instead, these constitutional powers and individual rights are encapsulated in various statutes and common law. The provisions of the European Convention on Human Rights (ECHR) were adopted into law via the Human Rights Act 1998. In 2014, Conservative Party officials, including the prime minister, announced their intention to repeal the Human Rights Act in favor of a UK Bill of Rights in order to give British courts more control over the application of human rights principles.1 As of mid-2016, no such bill had been introduced in Parliament.2

Prosecutions for statements and messages posted online fall under various laws, including Section 5 of the Public Order Act 1986, which penalizes “threatening, abusive or insulting words or behavior.” In 2013, it was amended to remove insults.3 Section 127 of the Communications Act 2003 punishes “grossly offensive” communications sent through the internet.4

In February 2015, the Criminal Justice and Courts Act 2015 amended Section 1 of the Malicious Communications Act 1988.5 The act already criminalized targeting individuals with abusive and offensive content online “with the purpose of causing distress or anxiety.” The amendment additionally criminalized “revenge porn,” the unwanted sharing of an individual’s own private, sexual media for the purposes of embarrassment and humiliation,6 and increased the maximum penalty from six months to two years in prison. These offenses were previously confined to the magistrates’ courts, but the new law, effective in England and Wales as of April 13, 2015, allows the crown court to hear the more serious offenses, since it can issue higher prison sentences.7 The changes also extended the time limit for bringing charges for these offenses to three years from the date of the offense.8

The Crown Prosecution Service (CPS) publishes specific guidelines for the prosecution of crimes “committed by the sending of a communication via social media.”9 Updates in 2014 put digital harassment offenses committed with the intent to coerce the victims into sexual activity under the Sexual Offences Act 2003, which carries a maximum of 14 years in prison.10 Revised guidelines were issued in March 2016.11 The guidelines identify four categories of communications subject to possible prosecution under UK law: credible threats; communications targeting specific individuals; breach of court orders; and grossly offensive, false, obscene, or indecent communications. They also advise prosecutors to consider the age and maturity of the poster before pursuing charges. Some observers said the guidelines could criminalize the creation of pseudonymous accounts, although only in conjunction with activity considered abusive.12

Some changes to the legal framework were debated during the coverage period. The Copyright, Designs, and Patents Act 1988 carries a maximum two-year prison sentence for offenses committed online. In July 2015, the government held a public consultation on a proposal to increase the maximum sentence to 10 years. Of the 1,011 responses, only 21 supported the proposal,13 but in April 2016, a government consultation paper announced plans to submit an amendment introducing the 10-year maximum sentence to parliament “at the earliest available legislative opportunity.”14

In September 2015, the home secretary outlined a proposal for “extremism disruption orders.”15 The orders would allow judicial review of individuals and groups who “spread hate but do not break laws,” barring them from posting messages to social media without first gaining government approval.16 The proposal, supported by the prime minister, also included plans to grant Ofcom powers to prevent the broadcast of “extremist messages,” requiring pre-transmission monitoring of content.17 However, it met vocal opposition even from within the Conservative Party.18

In 2014, a House of Lords committee recommended that websites allowing individuals to post content anonymously or under a pseudonym should be required to establish their actual identity.19 Critics argue that such a measure would chill speech by removing the protections of anonymity from those afraid of repercussions.20 In mid-2016, no action had been taken on the report’s recommendation.

Libel laws that tended to favor the plaintiff had previously led to a large number of libel suits with only tenuous connections to the UK being brought in its courts, a phenomenon known as “libel tourism.” This had a chilling effect on free speech in the UK, which the Defamation Act 2013 was intended to reduce. Sections that came into force in January 2014 require claimants to prove that England and Wales is the most appropriate forum for the action, set a serious harm threshold for claims, and codify certain defenses such as truth and honest opinion. The overall number of defamation cases in the UK fell by 40 percent in the 2016 reporting period.21

Prosecutions and Detentions for Online Activities

Prosecutions involving interactions on social media increased in recent years, although jail sentences for political, social, or religious speech protected under human rights norms remain rare. According to a Freedom of Information request in October 2014,22 about 12,000 people were prosecuted for offensive speech made via social media between 2008 and 2013.

Prosecutors have targeted Islamic extremism online. In April 2016, a court in London jailed Mohammed Moshin Ameen for five years for posting 8,000 Islamic State propaganda messages aimed at young men in the UK via 42 Twitter accounts.23 He pleaded guilty to five counts of encouraging terrorist acts on social media, disseminating a terrorist video, and inviting support for a proscribed organization.

Other detentions involved comments about Muslims, though no prosecutions were subsequently reported. On February 16, 2016, Police Scotland arrested a man for posting a series of offensive messages on Facebook about the resettlement of Syrian refugees on the Isle of Bute, approximately 45 miles east of Glasgow.24 Police Scotland said that they would not “tolerate any form of activity which could incite hatred and provoke offensive comments on social media.”25 In a separate, widely publicized case, police in south London arrested a man on March 23, 2016 for a Twitter post in which he described “confronting” a Muslim woman and asking her to “explain” a series of bombings carried out by Islamic State in Brussels on March 22.26 A charge against him under the Public Order Act was dropped on March 25.

Some critics fear the drive against online extremism may affect individuals expressing or accessing political opinion. In February 2016, staff at a school in Southampton reported a 15-year-old pupil to police for accessing the United Kingdom Independence Party website in class, citing concern about the right-wing site’s “extremist views.”27 The pupil said he was conducting research, and police took no further action.28

Changes in the law provided a means for redress for those affected by revenge porn (see Legal Environment). At least 175 cases were reported to police between April and October 2015, according to the Guardian.29 That figure, obtained from a freedom of information request, covered “just over a third of police forces in England and Wales.”

Some of these cases were prosecuted during the reporting period. On September 1, 2015, Paige Mitchell pleaded guilty to assault and posting four sexually explicit pictures of her girlfriend on Facebook after an argument.30 A court in Stevenage, Hertfordshire, sentenced her to six weeks in prison, suspended for 18 months, and mandatory counselling. In a separate case in October 2015, a court in Newport, Wales, sentenced Jesse Hawthorne to 16 weeks in prison, suspended for 12 months, for posting an explicit image of his ex-girlfriend on Facebook. He was barred from communicating with his ex-girlfriend for two years, including on social media.31

Surveillance, Privacy, and Anonymity

Surveillance became a major point of contention in the UK following revelations about the activities of GCHQ and its international counterparts by former National Security Agency (NSA) contractor Edward Snowden, published by the Guardian beginning in June 2013. Overhauling the investigatory powers of law enforcement and intelligence agencies is a priority of the current government. Over the past two years, independent reviews of those powers have consistently found that surveillance regulation needs reform, particularly with respect to defining its scope, establishing credible oversight, and providing appropriate safeguards for individual liberty. On March 1, 2016, the government introduced the Investigatory Powers Bill to consolidate and reform surveillance laws.32 Critics say it lacks adequate safeguards and would oblige the technology industry to provide backdoors to government agencies. In mid-2016, it was still being debated in parliament.

There are a number of legislative measures authorizing surveillance,33 including the Regulation of Investigatory Powers Act 2000 (RIPA).34 RIPA includes provisions related to the interception of communications, the acquisition of communications data, intrusive surveillance, secret surveillance in the course of specific operations, and access to encrypted data. Under current rules, RIPA allows national agencies and over 400 local bodies to access communication records for a variety of reasons, ranging from national security to tax collection. RIPA established the Investigatory Powers Tribunal to adjudicate issues regarding government surveillance, including by Britain’s three intelligence agencies—GCHQ, MI5, and MI6. The 2012 Protection of Freedoms Act required local authorities to obtain the approval of a magistrate to access communications data.35

A clause within Part I of RIPA allows the foreign or home secretary to sign off on bulk surveillance if communications are arriving from or being sent to foreign soil.36 This clause provided the legal basis for Tempora, a secret surveillance project documented in material leaked by Edward Snowden. Since the UK’s fiber-optic network often routes domestic traffic through international cables, this provision legitimized widespread surveillance over most, if not all, UK citizens.37 Working with telecom companies, GCHQ installed intercept probes at the British landing points of undersea fiber-optic cables, giving the agency access to some 200 cables by 2012, each carrying up to 10 Gbps of data. Intelligence agents can process data collected by the probes, including phone calls, emails, social networking posts, private messages, and more. Content collected is stored for three days, and metadata (information such as mobile phone locations and email logs) for 30 days.38 The arrangement allowed GCHQ to pass information to its US counterparts at the NSA regarding U.S. citizens, thereby bypassing American restrictions on domestic surveillance. In 2013, documents revealed that the U.S. government had provided at least GBP 100 million (US$155 million) in funding to GCHQ since 2010, leading observers to argue that the U.S. government was paying to use information obtained by the UK government.39
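To put the reported intercept figures in perspective, the following back-of-the-envelope sketch estimates the aggregate intake and the size of the three-day content buffer. It is a rough upper bound, assuming every one of the 200 reported probes runs continuously at the stated 10 Gbps ceiling:

```python
# Back-of-the-envelope scale of the Tempora intercept figures reported above.
# Assumption: every probe runs at the stated 10 Gbps ceiling, so this is an
# upper bound, not a measurement of actual traffic.

CABLES = 200                 # intercepted cables reported by 2012
GBPS_PER_CABLE = 10          # stated maximum throughput per cable
CONTENT_BUFFER_DAYS = 3      # content retention window reported above

aggregate_gbps = CABLES * GBPS_PER_CABLE               # 2,000 Gbps = 2 Tbps
bytes_per_day = aggregate_gbps * 1e9 / 8 * 86_400      # bits -> bytes -> per day
content_buffer_pb = bytes_per_day * CONTENT_BUFFER_DAYS / 1e15  # in petabytes

print(f"Aggregate intake (upper bound): {aggregate_gbps:,} Gbps")
print(f"Three-day content buffer (upper bound): {content_buffer_pb:.0f} PB")
```

Even as an upper bound, roughly 2 Tbps of intake and tens of petabytes buffered over three days illustrates why the content and metadata retention windows attracted scrutiny.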

Ten civil society organizations separately filed suit against GCHQ with the Investigatory Powers Tribunal in 2013, on grounds that surveillance impeded their work and contravened international human rights law. The complaints were consolidated into a single case, Liberty vs GCHQ.

The tribunal issued judgments in December 2014 and February 2015, and a related decision in June 2015.40 The December 2014 judgment said that the sharing of information intercepted from internet communications between GCHQ and the NSA was lawful now that some of the procedures had been publicly disclosed. The February 2015 judgment said that prior to that public disclosure, between 2007 and 2014, the activity violated European human rights standards.41 That decision marked the first time the tribunal had ruled against any of the intelligence agencies it is entrusted to oversee.42 The June 2015 decision found procedural irregularities in the retention of communications intercepted from two of the groups, Amnesty International and the South Africa-based Legal Resources Center, though it found that the interception itself was lawful.43 The tribunal made “no determination” on the claims brought by the other eight NGOs, meaning either that no surveillance took place, or that it was considered lawful.

Three independent reviews of mass surveillance and the underlying legal framework have called more clearly for reform:

  • In December 2014, a parliamentary Home Affairs Committee inquiry concluded that RIPA was not fit for purpose and that the legislation governing communications data is in need of complete overhaul.44
  • In March 2015, the parliamentary Intelligence and Security Committee published the results of an inquiry into the extent and scale of mass surveillance.45 The report found that bulk interception does not equate to blanket or indiscriminate surveillance, and that the country’s intelligence agencies do not seek to circumvent the law. However, a new, single act of parliament should be introduced to address the complicated nature of the legal framework and the lack of transparency surrounding it, the report said.
  • In June 2015, David Anderson, the Independent Reviewer of Terrorism Legislation appointed by the home secretary to evaluate the operation of counter-terrorism law, called for a clean slate for government surveillance activities, lamenting the fragmentation and obscurity of current laws. A new law should be both comprehensive in scope and comprehensible in nature, the report said.46

Other laws besides RIPA have been subject to criticism, particularly in respect to the length of time companies are obliged to store data about their users’ activities. Regulations to implement the 2006 EU Data Retention Directive were adopted in the UK in 2009,47 requiring providers to retain user metadata for 18 months, though not the content of their communications.48 In April 2014, the Court of Justice of the European Union (CJEU) struck down the EU directive as a breach of fundamental privacy rights,49 sparking fears that companies would begin to delete data on UK users and undermine counterterrorism investigations. The government passed the temporary UK Data Retention and Investigatory Powers Act (DRIPA) in July 2014, requiring telecommunication companies to retain users’ metadata for up to 12 months.50 It will expire at the end of 2016.

During the coverage period of this report, the legitimacy of DRIPA was debated in the courts. Academics, journalists, and privacy advocates criticized the legislation for reintroducing data retention requirements that had been struck down by the European court.51 Two members of parliament represented by the human rights group Liberty challenged the act in court on grounds that it is incompatible with the UK Human Rights Act and the EU Charter of Fundamental Rights.52 In July 2015, the High Court found in their favor, ruling that sections 1 and 2 of the act are unlawful because they fail to provide clear and precise rules ensuring that data is accessed only for the purpose of investigating serious offenses, and that access is authorized by a court or other independent body.53

The government appealed the ruling, and on November 20, 2015, the Court of Appeal referred the case to the CJEU for clarification.54 The High Court’s DRIPA judgment relied on an earlier CJEU judgment that declared the EU Data Retention Directive invalid.55 The Court of Appeal asked the CJEU whether it had intended that judgment to serve as a mandatory requirement for EU member states to follow in national legislation, and whether the judgment expanded the interpretation of certain articles of the EU Charter of Fundamental Rights. The CJEU expedited the case in February 2016,56 but had not issued a response in mid-2016. In July 2016, outside the coverage period of this report, the court’s preliminary ruling said data retention was only legitimate during the investigation of serious crimes.57

With a final judgment on DRIPA still pending, the government introduced the Investigatory Powers Bill (IP Bill) on March 1, 2016.58 Besides replacing DRIPA, the bill is meant to consolidate and reform disparate legal provisions into a single, accessible piece of legislation, replacing the current regime, including large parts of RIPA. (Other relevant legislation includes the Wireless Telegraphy Act 2006, the Telecommunications Act 1984, the Police Act 1997, the Intelligence Services Act 1994, and the Human Rights Act 1998.) However, critics said the bill lacked appropriate safeguards. A draft Code of Practice published at the same time as the IP Bill included a requirement for communications service providers to “provide a technical capability to give effect to interception, equipment interference, bulk acquisition warrants or communications data acquisition authorizations.”59

Requirements for technology companies to provide “backdoors” to government agencies—mechanisms to enter a program or service without the user’s permission—drew particular scrutiny in the context of the government’s attitude toward encryption. Prime Minister David Cameron called for a ban on encryption in messaging apps in January 2015,60 and in June reaffirmed his commitment to making sure that terrorists were not able to communicate safely via new digital technologies.61 There are no legal restrictions on the use of encryption technologies in the UK, though under Part 3 of RIPA it is a crime not to disclose an encryption key upon an order from a senior police officer or a High Court judge.62 In 2008, the Court of Appeal held that such disclosure would not necessarily violate the privilege against self-incrimination.63 The provision has been used to obtain court orders to force disclosure of keys.

Major technology companies such as Apple submitted statements criticizing the backdoor requirement to the IP Bill committee, which collects and analyzes evidence from stakeholders during the drafting of legislation. In December 2015, Apple argued that weakening encryption or using backdoors would weaken individual security.64 Robert Hannigan, the director of GCHQ, defended the bill in March 2016, arguing that neither GCHQ nor the IP Bill advocates weakening encryption, but rather works to make security stronger and the law clearer.65 However, more than 200 lawyers called the bill “not fit for purpose” in a letter to the Guardian published the same month.66 On March 8, the United Nations’ special rapporteur for privacy, Joseph Cannataci, highlighted the bill in his first report, which recommended that “disproportionate, privacy-intrusive measures such as bulk surveillance and bulk hacking as contemplated in the Investigatory Powers Bill be outlawed.”67 In mid-2016, the bill was in the committee stage of the legislative process.68

Earlier attempts to change the legal framework supporting surveillance were similarly criticized for expanding access for intelligence agencies without suitably strengthening privacy protections. In 2012, the government introduced the Communications Data Bill to replace elements of RIPA. The media dubbed the bill “the Snooper’s Charter,” as it would have recorded details of messages sent over social media platforms, phone call records, and internet browsing activity including each website a user had visited (although not the pages within that site).69 The Liberal Democrats, a coalition partner with the Conservatives, withdrew their support for the bill in 2013.70

According to the latest available data, 517,236 requests for communications data were submitted by public authorities in 2014, compared to 514,608 in 2013; 2,795 lawful intercept warrants were issued, a slight increase from 2,760 in 2013.71

Intimidation and Violence

There were no reported incidents of violence against users for their online activities over the coverage period, though cyberbullying, particularly targeting women, is widespread.72 Some online abuse is subject to prosecution under UK law (see Legal Environment and Prosecutions and Detentions for Online Activities).

Technical Attacks

Nongovernmental organizations, media outlets, and activists are not generally targeted for technical attacks by government or nonstate actors, although the government and GCHQ have avowed the use of computer network exploitation techniques. On February 12, 2016, the Investigatory Powers Tribunal ruled in Privacy International v. Secretary of State for the Foreign and Commonwealth Office et al that computer network exploitation carried out by GCHQ was in principle lawful.73 The government’s defense of these activities rested on the powers falling within the limitations of the European Convention on Human Rights. The tribunal also noted that network exploitation is lawful if the warrant is as specific and narrow as possible. There are no public figures or further information on where such exploitation takes place or in what circumstances.

In wider cybercrime, financially motivated fraud and hacking continue to present a challenge to authorities and the private sector. Incidents of cyberattacks have increased in recent years. Observers also question the security of devices connected to the network through the Internet of Things.74
