A Obstacles to Access: 23 / 25
B Limits on Content: 30 / 35
C Violations of User Rights: 24 / 40
Last Year's Score & Status: 76 / 100, Free
Scores are based on a scale of 0 (least free) to 100 (most free). See the research methodology and report acknowledgements.

Overview

France registered a slight improvement in internet freedom during the coverage period following successful efforts to diversify the telecommunications industry. The internet remains accessible for most of the population. Website blocks and content removals are typically subject to careful judicial or administrative oversight. However, electronic surveillance has increased in both scope and frequency. The coverage period also saw an attempt to regulate online hate speech by requiring companies to remove content within strict time limits under threat of heavy penalties, though most of the law was ruled unconstitutional for violating freedom of expression.

The French political system features vibrant democratic processes and generally strong protections for civil liberties and political rights. However, due to a number of deadly terrorist attacks in recent years, successive governments have been willing to curtail constitutional protections and empower law enforcement to act in ways that impinge on personal freedoms.

Key Developments, June 1, 2019 - May 31, 2020

  • The Regulatory Authority for Electronic Communications and Post (ARCEP), the telecommunications regulator, reported that the market for high-speed fiber-optic services has diversified, though concerns about market leader Orange’s dominance persist (see A4).
  • A new law on hate speech, passed in May 2020, required online platforms to remove user-reported hate speech content within 24 hours and to remove content reported by law enforcement as terrorist or child exploitation material within one hour. The law was drastically limited by the Constitutional Council in June 2020, which held that many of its provisions violated freedom of expression (see B2).
  • Misinformation about the COVID-19 pandemic spread online during the coverage period, sometimes propagated by far-right political figures (see B5).
  • The government released StopCovid, a contact tracing app that relies on centralized data storage, raising concerns about user privacy (see C5).
  • Orange agreed to provide the government with its subscribers’ geolocation data, in aggregate and anonymized, to facilitate monitoring of the COVID-19 pandemic (see C6).

A Obstacles to Access

The internet penetration rate continued to increase, although regional disparities persist. The current information and communication technologies (ICT) market is open, highly competitive, and has benefited from the privatization of the state-owned company France Telecom (now Orange). ARCEP, the telecommunications regulator, acknowledged the growth of a more competitive market for high-speed fiber-optic services, though concerns about Orange’s market dominance persist.

A1: 0-6 pts
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? Score: 6.00 / 6.00

Infrastructural limitations generally do not restrict access to the internet. According to Organisation for Economic Co-operation and Development (OECD) data from 2019, France has a fixed broadband internet penetration rate of 44 percent and a mobile penetration rate of 91 percent.1 Internet penetration stood at 90 percent,2 with 58 million internet users in January 2020.3 Despite increased reliance on internet infrastructure due to the COVID-19 crisis, there were no major issues with network capacity during the coverage period.4

Committed to providing widespread access to high-speed broadband with connection speeds of at least 30 Mbps, the government has been implementing an ambitious national plan to deploy fiber-optic, very high-speed digital subscriber line (VDSL), terrestrial, and satellite networks throughout the country by 2022, mobilizing public and private investments totaling €20 billion ($22 billion) over ten years.5 In 2020, very high-speed broadband coverage accounted for 60 percent of high-speed broadband connections (29.9 million out of 49.8 million), according to a March 2020 report by the Regulatory Authority for Electronic Communications and Post (ARCEP), the telecommunications regulator.6

Reforms approved in 2015, known as the “Loi Macron,” sought to improve mobile broadband coverage by requiring mobile service providers to deploy second-generation (2G) technology for mobile networks in underserved municipalities by 2016 and ensure coverage with third- and fourth-generation (3G and 4G) technology networks by 2017.7 In January 2018, the ARCEP and the government conceived a mobile “New Deal,” enacted in July 2018, to develop 4G networks by 2022. According to a December 2019 ARCEP report, between 85 and 87 percent of rural areas had 4G coverage, with a goal of 90 percent coverage by January 2022.8 However, according to current ARCEP data, the 4G networks of three major mobile service providers cover more than 99 percent of the metropolitan French population, while the fourth major provider’s 4G coverage extends to 93 percent of the same population.9 Networks with fifth-generation (5G) technology may be deployed as early as the end of 2020, with bidding for licenses scheduled for September 2020.10

France is ranked among the top countries in the world for fixed broadband connection speed. According to March 2020 data from Ookla, the average download speed on a fixed broadband connection was 136.4 Mbps, while the average mobile download speed was 43 Mbps.11

A2: 0-3 pts
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? Score: 3.00 / 3.00

Internet connections are relatively affordable. In 2020, the Economist Intelligence Unit ranked France second of 100 countries for affordability of internet connections.1 According to 2019 International Telecommunication Union (ITU) data, a monthly entry-level fixed broadband subscription cost 0.8 percent of gross national income (GNI) per capita, while a monthly mobile data plan cost 0.3 percent of GNI per capita.2 Both prices were lower than those in neighboring Germany and the United Kingdom.

There are a number of Internet Exchange Points (IXPs) in France,3 which contribute to improved access and lower consumer prices.4

However, demographic disparities in internet usage persist. A map produced by ARCEP illustrates some of the regional disparities in mobile penetration, showing patchy 4G coverage in rural areas and overseas territories.5 Most at-home users have access to broadband connections, while the remaining households, usually in rural areas, must rely on dial-up or satellite services.6 The aforementioned mobile “New Deal” aims to reduce these disparities. Between July 2018 and April 2020, ARCEP deployed 1,374 4G antennas in targeted areas out of a planned 5,000 antennas.7 There are no significant digital divides in terms of gender or income.

A3: 0-6 pts
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? Score: 6.00 / 6.00

There were no restrictions on connectivity reported during the coverage period. There is no central internet backbone, and ISPs are not required to lease bandwidth from a monopoly holder, as is the case in other countries. Instead, the backbone consists of several interconnected networks run by ISPs and shared through peering or transit agreements. The government does not have the legal authority to restrict the domestic internet during emergencies.

A4: 0-6 pts
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? Score: 4.00 / 6.00

Score Change: The score improved from 3 to 4 because of successful efforts to diversify the telecommunications industry, though the market leader Orange remains dominant.

There are no significant business hurdles to providing access to digital technologies in France. Service providers do not need to obtain operating licenses.1 However, the use of frequencies (for mobile networks) is subject to strict licensing by ARCEP.2 Only four providers are licensed in this regard: Orange, Free, Bouygues Telecom, and SFR.3 Others, such as NRJ Mobile, make use of these providers’ networks, reselling internet and mobile services.4

Orange, Free, Bouygues Telecom, and SFR dominate both the fixed and mobile markets. Competition between these four providers is fierce, but there is little room for other players to compete.

In 2017, ARCEP announced that it would impose certain constraints on market leader Orange in an effort to open up competition for high-speed fiber-optic services among small- and medium-sized companies.5 In February 2020, ARCEP reported a 40 percent rise in investment in the broadband market over four years and welcomed the healthier competition for high-speed fiber services and traditional broadband services.6 In particular, ARCEP focused on reinforcing competition in the wholesale market.7 In July 2020, after the coverage period, Sébastien Soriano, the president of ARCEP, expressed dissatisfaction with the state of competition in the business-to-business telecommunications market, denouncing Orange’s dominant position.8

A5: 0-4 pts
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? Score: 4.00 / 4.00

The telecommunications industry is regulated by ARCEP,1 while competition is regulated by the Competition Authority and, more broadly, the European Commission (EC).2 ARCEP remains an independent and impartial body, and regulatory decisions are usually seen as fair.

ARCEP is governed by a seven-member panel. Three members are appointed by the president, while the National Assembly and Senate appoint two each.3 All serve six-year terms. As a member state of the European Union (EU), France must ensure the independence of its telecommunications regulator. Given that the government is the main shareholder in Orange, the leading telecommunications company, the EC stated in 2011 that it would closely monitor the situation in France to ensure that European regulations were met.4

The Digital Republic Act enacted in 2016 broadened ARCEP’s mandate, granting the body investigatory and sanctioning powers to ensure compliance with the principle of net neutrality introduced by the law.5 In July 2019, ARCEP reiterated its commitment to promote net neutrality, digital transformation, and technological innovation in France.6

B Limits on Content

Parliament passed a law aimed at curbing online hate speech that placed stringent requirements on technology companies to remove user-reported content; the Constitutional Council found that the law violated freedom of expression in June 2020, voiding its problematic provisions. The yellow vest protests shifted online during the COVID-19 pandemic. Misinformation about the pandemic spread on social media, sometimes propagated by far-right political figures.

B1: 0-6 pts
Does the state block or filter, or compel service providers to block or filter, internet content? Score: 5.00 / 6.00

The government does not generally block web content in a politically motivated manner. All major social media platforms are available.

However, France is one of the few countries to have blocked Sci-Hub and LibGen, two well-known piracy websites that offer free access to millions of paywalled academic books, journals, and papers. Following a complaint from academic publishers Elsevier and Springer Nature, a court ordered the four major ISPs to block the two websites in April 2019.1

Since the 2015 terrorist attacks in Paris, terrorist-related content and incitements to hatred have been subject to censorship. In November 2018, a Paris court ordered nine French ISPs to block Participatory Democracy, a racist, antisemitic, and anti-LGBT+ French-language website hosted in Japan that was found to be inciting hatred. The website is affiliated with French far-right and extremist communities.2 As of April 2020, the website was accessible at a different URL.3

A decree issued in 2015 outlined administrative measures to block websites containing materials that incite or condone terrorism, as well as sites that display child abuse.4 Shortly after the decree was promulgated, five websites containing terrorism-related information were blocked with no judicial or public oversight.5 In the ensuing years, many more websites have been blocked in France. According to the National Commission on Informatics and Liberty (CNIL), France’s data protection agency, the Central Office for the Fight against Crime Related to Information and Communication Technology (OCLCTIC) issued 420 blocking orders to ISPs between February 2019 and December 2019, compared to 763 during the period from February 2018 to February 2019. Of these orders, 15 targeted sites hosting terrorism-related information; the remaining 405 targeted sites displaying child abuse content.6 The CNIL does not offer details on the content of blocked websites, but it does disclose the OCLCTIC censorship decisions that have been disputed in the past. During the coverage period, the CNIL disputed none of the OCLCTIC’s decisions.

CNIL’s May 2020 report suggests that the agency may no longer oversee the blocking process, indicating that the task may be done by another independent administrative authority from January 2021 under the Avia Law (see B2).7

B2: 0-4 pts
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content? Score: 2.00 / 4.00

The French government continues to actively legislate the online environment.

In May 2020, Parliament adopted the law against hateful content on the internet, known as the Avia Law,1 with the Senate and the National Assembly both modifying the July 2019 bill.2 The Avia Law requires major online platforms to remove content reported by users as “illegal” within 24 hours. The law also requires all websites to remove terrorist or child abuse content within one hour of notification by law enforcement. In cases of noncompliance, platforms could be fined heavily: up to €20 million ($22 million), or a maximum of 4 percent of global turnover in special cases.3 The Avia Law is modeled, at least in part, on Germany’s Network Enforcement Act, or NetzDG. The EU Directive on Copyright in the Digital Single Market is also on the legislative horizon and must be implemented at the national level in all EU member states, including France, following the measure’s final approval in Brussels in April 2019.

The Avia Law was drastically reduced in scope by the Constitutional Council in June 2020, after the coverage period, following an appeal from a group of senators. The Council found that most of the law’s provisions, particularly the timed removal obligations, violated freedom of expression.4 The remaining provisions simplify systems for the notification of disputed content, strengthen the prosecution of online hate speech, and create an “online hate observatory.”5

The government sometimes orders online platforms to delete or deindex content. For example, in December 2018, after a year of heated debate, a French court ordered Google to deindex search engine results related to seven illegal streaming websites for a year.6 In June 2019, the French government asked Google to delete a Google+ picture depicting two French officials as dictators; Google did not comply with this request. According to Google’s transparency report, the government issued 469 requests to remove content in the first half of 2019, invoking national security or privacy and security in a majority (66 percent) of cases. Google acceded to 82 percent of these requests.7

Between July 2019 and December 2019, Facebook restricted access to 104 items reported as Holocaust denial and 125 items reported as defamation.8 Facebook did not disclose how many content removal requests it received. In the first half of 2019, Twitter received 352 removal requests, including five court orders, and acceded to about 9 percent of them.9 Finally, from July to December 2019, Microsoft received 67 content removal requests from the government and acceded to 62 percent of them.10

A government decree issued in 2015 allows for the deletion or deindexing of online content related to child abuse and terrorism using an administrative procedure supervised by CNIL.11 According to CNIL, between March 2018 and February 2019, the OCLCTIC issued 11,874 removal requests (a decrease from the previous year’s 18,014) targeting such content as well as 5,883 deindexing requests (compared to 6,581 the previous year).12 Content was deleted in response to 8,105 removal requests (68 percent of the total number issued), 5,479 of which related to child abuse and 2,626 of which related to terrorism. The CNIL did not dispute any decisions from the OCLCTIC this year.13

The right to be forgotten (RTBF) was introduced in France in 1995 and institutionalized throughout Europe with the implementation of the General Data Protection Regulation (GDPR) in May 2018.14 Between June 1, 2019 and May 31, 2020, Google deindexed some 55,000 URLs in France under the RTBF.15 Between July and December 2019, Microsoft deindexed just 857 URLs under the RTBF.16 Both companies deindexed only about half the URLs requested by users and other entities in France.

Technology companies also proactively removed content during the coverage period. Facebook withdrew over 10,000 ads in early 2019 because they violated its new political advertising policy. Some of the deleted ads were EU-sponsored posts encouraging participation in the upcoming EU parliamentary elections, posts by nongovernmental organizations (NGOs) including Greenpeace, Médecins du Monde, and UNICEF, as well as posts promoting media outlets (including Le Figaro).17 Similarly, Twitter’s guidelines on political content led the company to block a government-sponsored voter registration ad campaign in April 2019.18 These guidelines were adopted in reaction to France’s new law against election-related false news (see B3).

B3: 0-4 pts
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? Score: 3.00 / 4.00

Authorities are fairly transparent about what content is prohibited and the reasons behind specific content removal requests. Incitement of hatred, racism, Holocaust denial, child abuse and pornography, copyright infringement, and defamation1 are illegal and may be grounds for blockings or takedowns. Article R645-1 of the criminal code outlaws the display of the emblems, uniforms, or badges of criminal organizations under penalty of a fine and can justify blockings or takedowns of such symbols when they appear online.2

Notably, in December 2018, Parliament passed a law first proposed by President Macron that aims to combat disinformation around elections by empowering judges to order the removal of “fake news” within three months of an election.3 The proposal was rejected twice by the Senate before it was passed. The law places a significant strain on judges, who have 48 hours to decide whether a website is spreading false news following a referral by a public prosecutor, political party, or interested individual. Under the law, social media platforms are also required to disclose who is paying for sponsored ads during electoral campaigns.4 Commentators have expressed concern that the law could be used as a political tool.5

A set of decrees issued in 2015 outlined administrative measures to block websites containing materials that incite or condone terrorism, as well as sites that display child pornography (see B1). The decrees implemented Article 6-1 of the Law on Confidence in the Digital Economy (LCEN), passed in 2004, as well as Article 12 of a new antiterrorism law passed in 2014.6

The OCLCTIC is responsible for maintaining a denylist of sites that contain prohibited content, and must review the list every four months to ensure that such sites continue to contravene French law. The OCLCTIC can ask editors or hosts to remove the offending content, and after a 24-hour period, it can order ISPs to block sites that do not comply.7 Users attempting to access sites on a denylist are redirected to a website from the Ministry of Interior providing avenues for appeal. The decree also allows for the deletion or deindexing of online content from search results using an administrative procedure supervised by CNIL (see B2). Under this decree, the OCLCTIC submits requests to search engines, which then have 24 hours to comply.8 The OCLCTIC is responsible for reevaluating deindexed websites every four months and requesting the reindexing of websites when the incriminating content has been removed.

The lack of judicial oversight in the blocking of websites that allegedly incite or condone terrorism remains a concern. The procedures outlined above are supervised by the CNIL. As an administrative authority, the CNIL can also refer requests to the administrative court system should it object to any action taken by the OCLCTIC, thus disputing the OCLCTIC’s orders. In May 2019, a CNIL official asserted that the body lacks the technical means and human resources to efficiently supervise the OCLCTIC.9 Some commentators have lamented that, while the CNIL was founded to protect internet freedom, it now oversees restrictions of the online space.10 In March 2020, the CNIL reaffirmed its intent to fight for confidentiality, refocusing on health data misuses, geolocation abuses, and noncompliance with obligations on the use of cookies and tracers.11

Legal debates over the RTBF have also escalated in recent years. The CNIL has been battling with Google to enforce the RTBF ruling across all of the sites that can be accessed within the country. Google raised concerns that the move would set a dangerous precedent for authoritarian governments, which could also request that Google apply national laws extraterritorially.13 In 2016, the CNIL fined Google $112,000 for not complying with demands to remove results across its global domains.14 Google appealed to France’s Council of State, which in 2017 decided to refer the matter to the Court of Justice of the European Union (CJEU).15 The Council of State cancelled the 2016 penalty in March 2020, following a September 2019 CJEU judgment ruling that Google was not required to scrub search results worldwide.16

A ruling in 2016 by a Paris court established that Facebook could be sued in France for removing the account of a French user who had posted an image of a Gustave Courbet painting of a naked woman. Facebook had argued that cases concerning its terms and conditions could only be heard by a court in the United States. The case was finally judged in March 2018, when a French court dismissed the user’s suit. The user appealed this first decision in April 2018 and withdrew the appeal in August 2019 after a settlement with Facebook.17

B4: 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship? Score: 4.00 / 4.00

Online self-censorship is minimal. However, a law aimed at countering online hate speech might lead to increased government oversight of internet users, raising concerns that it could encourage greater self-censorship (see B2). In January 2019, President Macron said, “We should move progressively toward the end of anonymity” online.1 The proposal was outlined in further detail in February 2019 by then Secretary of State for Digital Affairs Mounir Mahjoubi and Secretary of State for Equality Marlène Schiappa; it would, among other things, pressure social media platforms and other websites to provide identifying information about users.2 The full proposal was made public in July 2019 and passed in May 2020 as a law requiring online platforms to remove illegal content within strict time limits (see B2).3

B5: 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? Score: 3.00 / 4.00

There were no reports of the government proactively manipulating content online during the coverage period. However, there were strong concerns about disinformation and misinformation campaigns in the run-up to European parliamentary elections held in May 2019. Several online tools were employed to fight against such manipulation, including the EU’s EUvsDisinfo initiative1 and the nonprofit FactCheckEU.2 According to the EC, Facebook “acted specifically against 1,574 non-EU–based and 168 EU-based pages, groups, and accounts engaged in inauthentic behavior targeting EU member states” between January and May 2019.3 Pursuant to the EU’s voluntary Code of Practice on Disinformation, major social media platforms restricted misleading ads ahead of the elections.4

The yellow vest movement, active in France since late 2018, has decried the spread of misinformation within and about the movement’s protests. Traditional media outlets highlighted the spread of false news within the movement, such as images of violence against protesters in other countries that were wrongly attributed to France.5

Content manipulation remains a problem outside of politics. During the coronavirus pandemic, false reports and misinformation about the virus spread online,6 as did conspiracy theories propagated by radical far-right and extremist political parties.7 In March 2019, false reports accusing members of the Romany community of child abductions spread on social networks, notably Facebook and Snapchat, triggering real-world violence against Roma living in the suburbs of Paris.8

B6: 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? Score: 3.00 / 3.00

France has a long history of antipiracy laws and regulatory constraints on online content publication. However, users face few obstacles to publishing online.

An antipiracy law administered by the High Authority for the Dissemination of Works and the Protection of Rights on the Internet (HADOPI) was originally passed in 20091 and supplemented by a second law also passed that year.2 HADOPI responds to copyright infringers with a graduated response, starting with an email warning for the first offense, followed by a registered letter if a second offense occurs within six months. If a third offense occurs within a year of the registered letter, the case can be referred to a court, and the offender may receive a fine.3 In 2019, HADOPI filed 1,748 referrals to prosecutors (compared to 1,045 in 2018). Most fines ranged from €50 to €1,500 ($55 to $1,650).4

A new copyright proposal may increase HADOPI’s power by implementing the newly passed EU Copyright Directive (see B3).5 The proposal is currently in draft form. A first report was drafted on November 20, 2019 to delineate the boundaries of the proposal, which were expected to be debated in 2020.6 In its present form, the proposal would, inter alia, ban websites that host pirated content and promote the use of measures similar to YouTube’s Content ID on other social media platforms to automatically detect and remove copyright violations.7 It has been criticized by freedom of speech activists who fear that measures like Content ID will limit the ability of content creators to benefit from the fair use of copyrighted materials.8

The principle of net neutrality is enshrined in the law. In November 2018, a joint study published by ARCEP and Northeastern University indicated that net neutrality was better respected in France than in the rest of the EU.9

B7: 0-4 pts
Does the online information landscape lack diversity? Score: 4.00 / 4.00

France is home to a highly diverse online media environment. There are no restrictions on access to independent online media. There is no censorship of platforms providing content produced by different ethnic, religious, or social groups, including LGBT+ people. However, commentators have observed increased online harassment against LGBT+ users (see C7).1

B8: 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? Score: 6.00 / 6.00

There are no restrictions on digital mobilization in France. The state and other actors do not block online organizing tools and collaboration websites.

A number of digital rights and advocacy groups, such as La Quadrature du Net (LQDN), are active and play a significant role in protesting the government’s recent moves to expand censorship and surveillance measures without judicial oversight.1

The yellow vest movement was rooted in digital mobilization and social media platforms. The first protests were planned using Facebook, Twitter, and YouTube in May 2018.2 The movement organized its first mass protest in Paris in November 2018 through Facebook, and protests are still organized online every weekend by the yellow vests. When the number of followers of yellow-vest-aligned Facebook groups dropped precipitously in January 2019, some suspected that Facebook was censoring the movement,3 but Facebook asserted that the reduction was due to a change in the platform’s policies for counting followers, seeking to dispel misinformation about the issue.4 Several yellow vest leaders were arrested for organizing illicit demonstrations on Bastille Day in July 2019.5 Protests continued through the end of 2019, including a national strike in December.6 While the strike movement was diminished by the national confinement related to COVID-19 starting in March 2020, it pursued its activism on social media and through signs displayed in windows.7

C Violations of User Rights

Laws to address threats to national security have bolstered the state’s surveillance powers and introduced stricter measures to tackle terrorist propaganda online. A new amendment to the Military Planning Law increased the state’s surveillance capabilities. The COVID-19 pandemic and corresponding national lockdown raised the specter of monitoring confined and sick people without their consent. The telecommunications provider Orange supplied anonymized, aggregated subscriber data to the government, which also opted to build a centralized COVID-19 contact-tracing app, gaining greater control at the cost of citizens’ privacy.

C1: 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? Score: 4.00 / 6.00

The French constitution protects press freedom and access to information, and guarantees freedom of speech and the protection of journalists.1

However, the government’s response to the 2015 terrorist attacks has curtailed human rights online in practice. The European Convention on Human Rights, to which France is a signatory, provides for freedom of expression, subject to certain restrictions considered “necessary in a democratic society.”2 Since the Charlie Hebdo attack and the November 2015 terrorist attacks in Paris, the government has suggested on a number of occasions that limiting fundamental rights would serve public safety.3

Broad new powers under the state of emergency proclaimed in 2015 raised concerns among human rights and digital rights activists.4 While then prime minister Manuel Valls declared that it was a “short term response,”5 the state of emergency was subsequently extended six times until November 2017.6 The new counterterrorism law that came into effect in 2017 has also raised concerns among civil rights campaigners for giving prefects and security forces wide-ranging powers with limited judicial oversight. It also introduced a new legal framework for surveillance of wireless communications (see C5).7

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities? 2/4

There are a number of laws that assign criminal or civil penalties for potentially legitimate online activities. In particular, the myriad counterterrorism laws threaten to punish users for such activities. Measures to address terrorism were already in place prior to the 2015–17 state of emergency. The counterterrorism law passed in 2014 penalizes online speech deemed to sympathize with terrorist groups or acts with up to seven years in prison and a €100,000 ($110,000) fine. Speech that incites terrorism is also penalized. Penalties for online offenses are harsher than those for offline offenses, which are punishable by up to five years in prison and a €75,000 ($83,000) fine.1

Another counterterrorism and organized crime law, enacted in 2016, imposed up to two years in prison or a €30,000 ($33,000) fine for frequently visiting sites that glorify or incite terrorist acts, unless the visits were made in “good faith,” such as for research purposes.2 The Constitutional Council struck down this law in 2017, arguing that the notion of “good faith” was unclear and that the law was not “necessary, appropriate, and proportionate.”3 An amended version was reintroduced as part of a public security law—imposing prison sentences on users who also “manifest adherence” to the ideology expressed on the visited sites4—but was once again struck down by the Constitutional Council in December 2017.5 While at least one member of Parliament contemplated reintroducing the law during the coverage period, the government has opposed this effort.6

Defamation can be a criminal offense in France, punishable by fines or, in circumstances such as “defamation directed against a class of people based on their race, ethnicity, religion, sex, sexual orientation or handicap,” prison time.7

C3 0-6 pts
Are individuals penalized for online activities? 5/6

While no citizens faced politically motivated arrests or prosecutions in retaliation for online activities, users have been convicted of inciting or sympathizing with terrorism online. The broad terms “inciting” and “glorifying” terrorism risk targeting speech that has tenuous connections to terrorist acts.

In February 2020, a court convicted an elected member of the Brittany regional legislature of sympathizing with terrorist acts. The official, who had previously been expelled from the far-right National Front party, had posted an Islamophobic message on Twitter following the far-right terrorist attack on two mosques in Christchurch, New Zealand. She received a one-year suspended prison sentence and was barred from contesting elections for three years.1

In June 2019, a 21-year-old woman was handed a six-month suspended prison sentence for possessing, but not sharing, videos and pictures glorifying terrorism. Following an electronic search, the police found 82 incriminating videos, along with 735 pictures. She was also accused of being in contact with the Islamic State (IS) militant group through social networks.2

In June 2019, Marine Le Pen, leader of the far-right National Rally party, was ordered to stand trial before a correctional court for sharing on Twitter videos of IS terrorists beheading a journalist. The trial was postponed to February 2021.3

Penalties for threatening state officials are applied to online activities. In May 2019, a man was fined €500 ($550) for sending President Macron a death threat on Facebook.4

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption? 2/4

Users are not prohibited from using encryption services to protect their communications, although mobile users must provide identification when purchasing a SIM card, potentially reducing anonymity for mobile communications.1 There are no laws requiring providers of encryption services to install backdoors, but providers are required to turn over decryption keys to the government.2 In June 2019, a drug dealer who used encryption services refused to unlock his phone during his arrest and was additionally charged for this refusal. A court later ruled that the suspect was not required to unlock his phone in the absence of a court order, setting a legal precedent.3

Anonymous communication using tools such as Tor is not prohibited.

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2/6

Surveillance has escalated in recent years, including through the enactment of a new surveillance law in 2015, which was passed in the wake of the attack on Charlie Hebdo that year.

The 2015 Intelligence Law allows intelligence agencies to conduct electronic surveillance without a court order.1 An amendment passed in 2016 authorized real-time collection of metadata not only from individuals “identified as a terrorist threat,” but also those “likely to be related” to a terrorist threat and those who belong to the “entourage” of the individuals concerned.2

The Constitutional Council declared three of the law’s provisions unconstitutional in 2015, including one that would have allowed the interception of all international electronic communications. However, an amendment enabling surveillance of electronic communications sent to or received from abroad was adopted later in 2015, shortly after the Paris attacks, for the purposes of “defending and promoting the fundamental interests of the country.”3 In 2016, the Constitutional Council struck down part of the Intelligence Law related to the monitoring of wireless (hertzian) communications, ruling it “disproportionate.”4 Article 15 of the new counterterrorism law of 2017 reintroduced a legal regime for monitoring wireless communications, but limits surveillance to certain devices such as walkie-talkies and does not encompass Wi-Fi networks.5

The COVID-19 pandemic and the ensuing national lockdown raised the specter of the monitoring of confined and sick people without their consent. In March 2020, Orange shared statistics on mobile users’ travel out of the Paris region in response to a government request, and the telecommunications industry invited legislation to regulate such data sharing (see C6).6

In April 2020, the government announced the development of a Bluetooth contact-tracing app that deploys pseudonymized identifiers and relies on centralized data storage.7 The release of the app, named StopCovid, was originally intended to coincide with the deconfinement measures of May 11, but it was released on June 2, after only one month of development.8 The CNIL released opinions on the principles of the app on April 269 and May 26,10 ultimately noting that its concerns had been addressed and approving the release of the app. On May 27, the National Assembly and Senate voted to approve the deployment of the app.11 Critics in civil society and Parliament raised concerns about anonymity, the tool’s effectiveness, and its potential discriminatory effects, as well as basic interoperability issues; as of June 2020, the French app lacked any form of interoperability with neighboring countries’ apps.12 As of June 2020, only 2.8 percent of French citizens had downloaded the app.

The state of emergency imposed from 2015 to 2017 included provisions on electronic searches13 and empowered the minister of the interior to take “any measure to ensure the interruption of any online public communication service that incites the commission of terrorist acts or glorifies them.”14

In 2019, an amendment passed as part of a routine military spending bill (the Military Planning Law, or LPM) extended the state’s surveillance capabilities. To be implemented from 2019 to 2025, the amendment expands access to data collected outside France’s borders by providing domestic antiterrorism investigators with information obtained by the General Directorate for External Security, France’s foreign intelligence agency.15 According to Article 37 of the new LPM, it will be possible to “perform within the intercepted connection data spot checks for the sole purpose of detecting a threat to the fundamental interests of the nation, linked to subscription numbers or technical identifiers attributable to French territory and geographical areas.”16 Digital rights groups have criticized this expansion of surveillance, which previously affected only French citizens living abroad.17

The LPM covering 2014 to 2019 extended administrative access to user data by enabling designated officials to request such data from ISPs for “national security” reasons, to protect France’s “scientific and economic potential,” and to prevent “terrorism” or “criminality.”18 The office of the prime minister authorizes surveillance, and the National Commission for Security Interception (CNCIS, later renamed the National Intelligence Control Commission, or CNCTR) must be informed within 48 hours in order to approve it.19 Early critics pointed out that the CNCIS lacked appropriate control mechanisms and independence from potential political interference, given that the body comprised only three politicians in 2014.20 While the government argued that the law provided an improved legal framework for practices that had been in place for years,21 it responded to these criticisms at the end of 2015 by enlarging the commission from three members to nine, making room for judges.22

A law related to the fight against organized crime and terrorism, enacted in 2016, also elicited strong reactions from the public.23 The law notably expanded the range of special investigation methods available to prosecutors and investigating judges, which were previously reserved for intelligence services. These include bugging private locations, using phone eavesdropping devices such as international mobile subscriber identity (IMSI) catchers, and conducting nighttime searches.24 Relatedly, Article 23 of the Law on Guidelines and Programming for the Performance of Internal Security (LOPPSI 2), adopted in 2011, granted the police the authority to install malware—such as keystroke logging software and Trojan horses—on suspects’ computers in the course of counterterrorism investigations, although a court order must first be obtained.25

The Digital Republic Act, adopted in 2016, seeks to enhance individuals’ rights to control the use of their personal data. Companies face hefty fines if they fail to comply; with the GDPR in force since 2018, the CNIL can fine a company up to 4 percent of its total worldwide annual turnover for data protection violations.26

C6 0-6 pts
Are service providers and other technology companies required to aid the government in monitoring the communications of their users? 3/6

Service providers are required to aid the government in monitoring their users’ communications under certain circumstances. For instance, they must retain user metadata for use in criminal investigations.1 The 2015 Intelligence Law requires ISPs to install so-called “black boxes,” algorithms that analyze users’ metadata for “suspicious” behavior in real time.2 The first black box was deployed in 2017.3 Intelligence services released data on the use of three black boxes in 2018, and two additional black boxes were added during the coverage period.4 Related to this increase in surveillance capabilities, 10,562 “security interceptions” were undertaken in 2018, an increase of 20 percent from 2017. Real-time geolocation tracking in the context of individual surveillance for national security purposes increased by 38.4 percent (from 3,751 to 5,191). The number of individuals subject to this surveillance increased only slightly (from 21,386 to 22,038).5

In March 2020, Orange shared aggregated statistics on mobile users’ travel out of the Paris region in response to a government request, to aid contact-tracing efforts for people with COVID-19 symptoms.6 The telecommunications industry then invited the government to adopt legislation in case more advanced measures were needed. The government created a consultation committee in March 2020 to assess the use of geolocation data in monitoring the spread of the COVID-19 pandemic,7 raising concerns among activists that every patient or confined person could be mapped without their consent.8

In June 2019, the Ministry of the Interior proposed a new intelligence law in order to extend the use of black boxes, with the aim of improving automation, prolonging data collection, and taking into account new technologies such as 5G networks.9

Despite these surveillance efforts, the data protections enshrined in the GDPR are strongly enforced in France. In January 2019, the CNIL fined Google a record €50 million ($55 million) for violating the regulation.10

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in retribution for their online activities? 4/5

There were no reported physical attacks against journalists or ordinary users during the coverage period. However, there were several high-profile cases of online harassment.

In February 2020, Benjamin Griveaux, a candidate running for mayor of Paris, stepped down from the race after a video of him engaging in a sex act, self-recorded for a lover who was not his spouse, was leaked online.1

In February 2019, a group of mostly male journalists were accused of online harassment against women, obese people, and LGBT+ people. Though they carried out harassment campaigns primarily on Twitter, they coordinated their activities in a private Facebook group called the “League of LOL.”2

In April 2019, journalists from the investigative online outlet Disclose were summoned to the General Directorate for Internal Security (DGSI), France’s domestic intelligence agency, after publishing confidential documents about the export of weapons later used by Saudi Arabia and the United Arab Emirates (UAE) in the war in Yemen.3

Online harassment of LGBT+ people increased during the coverage period. The NGO SOS Homophobie highlighted in its 2020 report an increase in anti-LGBT+ content on social networks, from 383 cases reported in 2018 to 596 in 2019.4 In January 2019, two associations defending LGBT+ rights filed 213 complaints related to insults, incitements to hatred, and calls to murder LGBT+ users on social networks.5 Also in January 2019, YouTuber and LGBT+ advocate Bilal Hassani filed a lawsuit asserting that he was the victim of a large-scale cyberbullying campaign.6

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 2/3

Several government-affiliated websites experienced cyberattacks during the coverage period, and businesses routinely experience hacking attempts.

During the COVID-19 crisis in March 2020, the Assistance Publique-Hôpitaux de Paris, which manages 39 hospitals in Paris and the surrounding region, experienced a distributed denial-of-service (DDoS) attack, leading the hospital network to temporarily shut off its internet access for a day.1

In June 2019, the government’s tax collection website went down on the last day for filing fiscal declarations. The National Cybersecurity Agency (ANSSI) is investigating the case and suspects that the attack originated from abroad.2 It was also reported that 2,000 fiscal declarations were altered by hackers.3

In June 2020, after the coverage period, the public broadcaster France Télévisions experienced a malware attack, though it had no effect on broadcasting.4

According to the Global State of Information Security Survey 2018, French business losses related to cyberattacks grew by 50 percent in 2017, with companies losing an average of €2 million ($2.2 million). More than 4,550 cybersecurity incidents were recorded by French companies in one year.5 Companies and institutions also frequently experience ransomware attacks, sometimes targeted operations in which cybercriminals manually intrude into the network and encrypt data; the petroleum company Picoty SA suffered such an attack in May 2019.6 There are also automated ransomware viruses from the black market, injected via phishing schemes; a public hospital’s network was affected in this manner in May 2019.7

During the 2017 presidential campaign, Macron’s campaign team announced that they were the “victim of a massive and coordinated hacking attack” after thousands of leaked emails and documents were dumped on the internet in a last-minute effort to destabilize the race.8 Macron had previously confirmed being the target of phishing operations by a group of hackers and denounced the “interference.”9 Later, an investigation by Le Monde indicated that these cyberattacks were directed by a US-based neo-Nazi group.10 Observers noted that there was no real police investigation into the leaks;11 indeed, after Macron was elected, the government did not follow up on the investigation into the origins of the cyberattack.
