Policy Recommendations
Palestinian journalists in the Gaza Strip attempt to connect to the internet. Armed conflicts around the world were made even more dangerous by restrictions on connectivity. (Photo credit: Said Khatib/AFP via Getty Images)
Policymakers, the tech industry, and civil society should work together to address the global decline in internet freedom.
The following recommendations lay out strategies that policymakers, regulators, donor institutions, and private companies can adopt to advance human rights online and prevent or mitigate the internet’s contribution to broader societal harms. While reversing the global decline in internet freedom will require the participation of a range of stakeholders, governments and companies should actively partner with civil society, which has always been at the forefront of raising awareness of key problems and identifying remedies to address them.
1. Promote Freedom of Expression and Access to Information
Freedom of expression online is increasingly under attack as governments shut off internet connectivity, block social media platforms, or restrict access to websites that host political, social, and religious speech. Protecting freedom of expression will require strong legal and regulatory safeguards for digital communications and access to information.
Governments
Governments should maintain access to internet services, digital platforms, and anticensorship technology, particularly during elections, protests, and periods of unrest or conflict. Imposing outright or arbitrary bans on social media and messaging platforms unduly restricts free expression and access to information. Governments should address any legitimate risks posed by these platforms through existing democratic mechanisms, such as regulatory action, security audits, parliamentary scrutiny, and legislation passed in consultation with civil society. Other ways to address legitimate security concerns include strengthening legal requirements for platform transparency, data privacy, and cybersecurity, and mandating human rights due diligence and risk assessments. Any legal restrictions on online content should adhere to international human rights standards of legality, necessity, and proportionality, and should include robust oversight, transparency, and consultation with civil society and the private sector.
Legal frameworks addressing online content should uphold internationally recognized human rights and establish special obligations for companies that are tailored to their size and services, incentivize platforms to improve their own standards, and require human rights due diligence and reporting. Such obligations should prioritize transparency across core products and practices, including content moderation, recommendation and algorithmic systems, collection and use of data, and political and targeted advertising. Laws should ensure that vetted researchers are able to access platform data in a privacy-protecting way, allowing them to provide insights for policy development and civil society’s broader analysis and advocacy efforts.
Safe-harbor protections for intermediaries should remain in place for most of the user-generated and third-party content appearing on platforms, so as not to encourage these companies to impose excessive restrictions that inhibit free expression. Laws should also reserve final decisions on the legality and removal of content for the judiciary. Independent regulators with sufficient resources and expertise should be empowered to oversee the implementation of laws, conduct audits, and ensure compliance. Provisions in the European Union’s Digital Services Act—notably its transparency requirements, data accessibility for researchers, a coregulatory form of enforcement, and algorithmic accountability—offer a promising model for content-related laws.
Companies
Companies should commit to respecting the rights of people who use their platforms or services, and to addressing any adverse impact that their products might have on human rights. The Global Network Initiative’s Principles provide concrete recommendations on how to do so.
Companies should support the accessibility of anticensorship technologies, including by making them more affordable, and resist government orders to shut down internet connectivity or ban digital services. Service providers should use all available legal channels to challenge content removal requests—whether official or informal—that would violate international human rights standards, particularly when they relate to the accounts of human rights defenders, civil society activists, journalists, or other at-risk individuals.
If companies cannot resist such demands in full, they should ensure that any restrictions or disruptions are as limited as possible in duration, geographic scope, and type of content affected. Companies should thoroughly document government demands internally and notify people who use their platforms as to why connectivity or content may be restricted, especially in countries where government actions lack transparency. When faced with a choice between a ban of their services and complying with censorship orders, companies should bring strategic legal cases that challenge government overreach, in consultation or partnership with civil society.
2. Defend Information Integrity
The potential consequences of false, misleading, and incendiary content are especially grave during election periods, underscoring the need to protect information integrity. Efforts to address the problem should start well before campaigning begins and continue long after the last vote is cast.
Governments
Governments should adopt a whole-of-society approach to fostering a high-quality, diverse, and trustworthy information space. The Global Declaration on Information Integrity Online identifies best practices for safeguarding the information ecosystem, to which governments should adhere. For example, the declaration highlights the need to protect freedom of expression and address false or misleading information that targets and affects women, LGBT+ people, people with disabilities, and Indigenous people. It also underscores the importance of working with other initiatives designed to enhance information integrity, such as the Forum on Information and Democracy.
Laws aimed at increasing platform responsibility as described above—such as those that boost transparency, provide platform data to vetted researchers, and safeguard free expression—are pivotal to countering threats to information integrity. Governments should also support independent online media and empower ordinary people with the tools they need to identify false or misleading information and to navigate complex media environments. They should proactively and directly engage with their constituencies to disseminate credible information and build trust. In addition, election management bodies and/or government officials should seek out trusted community messengers from specific populations who can share reliable information. Governments should support the work of independent civil society organizations that conduct fact-checking efforts, civic education initiatives, and digital literacy training, as well as those that focus on human rights and democracy work more broadly.
Governments should set strong rules on how generative artificial intelligence (AI) can be used in political campaigns. Policymakers should require the labeling of campaign advertisements featuring AI-generated images, audio, or video. Policymakers should also evaluate how to prohibit the use of AI-generated media for manipulative or deceptive purposes in online campaigning, for example to fabricate statements by a political opponent. In the United States, Congress should direct the Federal Election Commission to pursue rulemaking to this effect, in line with the Federal Communications Commission’s pending rulemaking on AI use in campaign advertisements that appear in broadcast media.
Companies
The private sector has a responsibility to ensure that its products contribute to, and do not undermine, a diverse and reliable information space. Companies should invest in staff tasked with work related to public policy, information integrity, trust and safety, and human rights, including teams of regional and country specialists. These teams should collaborate closely with civil society groups around the world to understand the local impact of their companies’ products. Without such expertise, the private sector is ill-equipped to address harassment, abuse, and false and misleading information that can have serious offline consequences. Social media firms should also develop mechanisms for and expand researchers’ access to platform data, allowing for independent analysis of harassment, disinformation campaigns, and other trends online.
Companies should continue to develop effective methods, such as cryptographic signatures, to watermark AI-generated content. Watermarking is not a silver-bullet solution, but it could be useful when combined with clear labeling that alerts individuals to AI-generated media, as well as coordination with civil society, academia, and technical experts on industry standards for documenting the provenance of specific content. When assessing how to enhance content provenance, companies should consider the privacy risks for human rights defenders and other vulnerable users.
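For illustration, the sketch below shows one way a cryptographic signature could be attached to a provenance record for an AI-generated file. It is a simplified Python example with hypothetical field names and workflow; it does not describe any particular industry standard or company’s implementation.

```python
# Illustrative sketch only: signs and verifies a simple provenance manifest
# for an AI-generated file. Field names and workflow are hypothetical and do
# not reflect any specific industry standard or company implementation.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def build_manifest(media_bytes: bytes, generator: str) -> bytes:
    """Describe the media file: a hash of its contents plus basic metadata."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g., the AI system that produced the file
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # Serialize deterministically so the same manifest always yields the same bytes.
    return json.dumps(manifest, sort_keys=True).encode("utf-8")


# The generating service would hold the private key; anyone with the matching
# public key (e.g., a platform or fact-checker) can verify that the manifest
# came from that service and has not been altered.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = b"...bytes of an AI-generated image..."
manifest = build_manifest(media, generator="example-image-model")
signature = private_key.sign(manifest)

try:
    public_key.verify(signature, manifest)
    print("Provenance manifest verified: content matches its signed record.")
except InvalidSignature:
    print("Verification failed: the manifest or media may have been altered.")
```

In practice, the signed record would need to travel with or be embedded in the media file, and verification keys would have to be published and trusted, which is where industry-wide standards and coordination with technical experts become essential.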
As more government agencies, such as technical regulators and election management bodies, seek to engage with technology firms, companies should tailor their engagement based on an assessment of whether the bodies operate independently and without political interference, in consultation with in-country civil society. Companies should specifically adopt processes and procedures to ensure that engagement does not undermine free expression, access to information, due process, and other fundamental rights. For example, formal and informal government demands for content removal should be thoroughly documented and evaluated to ensure that complying with them would not undermine human rights.
To combat political violence and support free and fair elections more broadly, technology platforms should develop standards for threat assessment and crisis planning. This includes addressing threats against election workers and responding to false election-related claims by promoting accurate information and meaningfully engaging with civil society, fact-checkers, and, as appropriate, election management bodies and government officials. Companies should dedicate adequate resources to both preelection and postelection activities, and ensure the smooth operation of escalation channels.
3. Combat Disproportionate Government Surveillance
Governments worldwide have passed disproportionate surveillance laws and can access a booming commercial market for surveillance tools, giving them the capacity to monitor the private communications of individuals inside and beyond their borders in violation of international human rights standards. The lack of data privacy safeguards in the United States and around the world exacerbates the harms of excessive government surveillance.
Governments
Government surveillance programs should adhere to the International Principles on the Application of Human Rights to Communications Surveillance, a framework agreed upon by a broad consortium of civil society groups, industry leaders, and scholars. The principles, which state that all communications surveillance must be legal, necessary, and proportionate, should also be applied to AI-driven and biometric surveillance technologies, targeted surveillance tools like commercial spyware and extraction software, and open-source intelligence methods such as social media monitoring.
In the United States, lawmakers should reform or repeal existing surveillance laws and practices, including Section 702 of the Foreign Intelligence Surveillance Act and Executive Order 12333, to better align them with these standards. Broad powers under Section 702 and Executive Order 12333 have allowed US government agencies to collect and access Americans’ personal data without meaningful transparency or oversight. Congress should also close a legal loophole that allows US government agencies to purchase personal data from commercial brokers rather than obtaining a warrant.
Policymakers should refrain from mandating the introduction of “back doors” to digital devices and services, requiring that messages be traceable, or reducing intermediary liability protections for providers of end-to-end encryption. Weakening encryption would endanger the lives of activists, journalists, members of marginalized communities, and ordinary people around the world.
Governments should restrict the export of surveillance technologies of concern, including commercial spyware, and should solicit input from civil society when considering how to strengthen export controls to protect human rights. The US Commerce Department’s Bureau of Industry and Security has taken several important steps to this effect, including adding commercial spyware firms to its Entity List—which subjects them to specific export restrictions—and initiating regular civil society consultations. The US Congress should pass legislation to codify provisions of Executive Order 14093 that prohibit the operational use of commercial spyware products by federal agencies.
The US government should continue to lead the international community in its efforts to combat the abuse of commercial spyware by encouraging signatories to the Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware to follow through on their commitments. Like-minded democracies, in Europe and elsewhere, should follow suit, including through the Pall Mall Process led by the United Kingdom and France, among other forums. Bold action from these democracies would be an important step toward curbing the irresponsible global trade in commercial spyware.
Companies
Companies should mainstream end-to-end encryption in their products, support anonymity software, and uphold other robust security protocols, including by notifying victims of surveillance abuses and resisting government requests to provide special decryption access. Companies should also resist government data requests that contravene international human rights standards or lack a valid judicial warrant. Digital platforms should use all available legal channels to challenge such problematic requests from state agencies, whether they are official or informal, especially when they relate to the accounts of human rights defenders, civil society activists, journalists, or other at-risk individuals.
Businesses exporting surveillance and censorship technologies that could be used to commit human rights abuses should report publicly and annually on the human rights–related due diligence they are conducting before making sales, the due diligence obligations they are requiring from their resellers and distributors, and their efforts to identify requests from customers that suggest the technologies may be used for repressive purposes. The reports should include a list of countries to which they have sold such technologies. These businesses should also adhere to obligations and responsibilities outlined in the UN Guiding Principles on Business and Human Rights.
4. Safeguard Personal Data
Comprehensive regulations and industry policies on data protection are essential for upholding privacy and other human rights online, but they require careful crafting to ensure that they do not contribute to internet fragmentation—the siloing of the global internet into nation-based segments—and cannot be used by governments to undermine privacy and other fundamental freedoms.
Governments
Democracies should collaborate to create interoperable privacy regimes that comprehensively safeguard user information, while also allowing data to flow across borders to and from jurisdictions with similar levels of protection. Individuals should be given control over their information, including the right to access it, delete it, and easily transfer it to providers of their choosing. Laws should include guardrails that limit the ways in which private companies can use personal data for AI development and in their AI systems, including algorithmic recommendations. Governments should ensure that independent regulators and oversight mechanisms have the ability, resources, and expertise to ensure foreign and domestic companies’ compliance with updated privacy, nondiscrimination, and consumer-protection laws.
The US Congress should urgently pass a comprehensive federal law on data privacy that includes data minimization, the principle that personal information should only be collected and stored to the extent necessary for a specific purpose, and purpose limitation, the principle that personal data gathered for one purpose should not later be used for another. This is especially relevant for discussions around generative AI and other technologies that depend on harvesting information online without people’s consent.
In the absence of congressional action, the US Federal Trade Commission (FTC) has been working to develop new regulations on commercial surveillance and data security. While an Advance Notice of Proposed Rulemaking was announced over two years ago, the process is still ongoing. The commission should continue to pursue enforcement of existing rules to hold companies accountable, and Congress should ensure that the FTC has sufficient resources to finalize and enforce meaningful new regulations related to data protection.
Companies
Companies should minimize the collection of personal information, such as health, biometric, and location data, and limit how third parties can access and use it. Companies should also clearly explain to people who use their services what data are being collected and for what purpose, including what information may be collected from user prompts to generative AI services. Finally, companies should ensure that people who use their services have control over their own information, including the right to access it, delete it, and prevent it from affecting an algorithm’s behavior.
5. Protect a Free and Open Internet
A successful defense of the free, open, and interoperable internet will depend on international cooperation and a shared vision for global internet freedom. If democracies live up to their own values at home, they will serve as more credible advocates for internet freedom abroad.
Governments
Governments should ensure that digital diplomacy is coordinated among fellow democracies and promotes the protection of internationally recognized human rights. They should identify and utilize regional multilateral forums that are strategically placed to advance the principles of a free and open internet. Democracies should also facilitate dialogue among national policymakers and regulators, allowing them to share best practices and strengthen joint engagement at international standards-setting bodies.
The multistakeholder model of internet governance, which is essential for the functioning of the global internet and helps constrain the influence of authoritarian regimes on internet freedom, should be protected at multilateral forums and initiatives, including through the United Nations’ Global Digital Compact. Governments should renew the mandate of the Internet Governance Forum and its regional iterations during the forthcoming World Summit on the Information Society+20 (WSIS+20) Review in 2025 and help ensure that civil society can meaningfully participate in these discussions.
The Freedom Online Coalition (FOC) should improve its name recognition and its ability to drive diplomatic coordination and global action. The body should more proactively articulate the benefits of a free and open internet to other governments and be more publicly and privately vocal about threats and opportunities for human rights online. The FOC should also create an internal mechanism by which member states’ laws, policies, and activities can be evaluated to ensure that they align with the coalition’s principles. Finally, the FOC should continue to diversify and expand its advisory network.
Governments should establish internet freedom programming as a vital component of their democracy assistance strategies, incorporating funding for cybersecurity and digital hygiene into their projects. Program beneficiaries should receive support for open-source and user-friendly technologies that will help them circumvent government censorship, protect themselves against surveillance, and overcome restrictions on connectivity. When new and emerging technologies, such as generative AI, are harnessed for programming, they should be deployed in a rights-respecting way.
Democracies should coordinate to ensure that perpetrators who direct or engage in reprisals against people for their online speech face meaningful accountability. This could include imposing targeted sanctions or denying or revoking visas. Sanctions against state entities should be crafted to minimize their impact on ordinary citizens, and when broad-based sanctions are imposed, democratic governments should carve out exemptions for internet services when relevant.
Governments should advocate for the immediate, unconditional release of those imprisoned or detained for online expression that is protected under international human rights standards. Governments should incorporate these cases, in addition to broader internet freedom concerns, into bilateral and multilateral engagement with perpetrator states.
Companies
Companies should engage in continuous dialogue with civil society to understand the effects of their policies and products. They should seek out local expertise on the political and cultural context in markets where they have a presence or where their products are widely used, especially in repressive settings that present unique human rights challenges. Consultations with civil society groups should inform companies’ decisions to operate in a particular country, their approach to local content moderation, and their development of policies and practices—particularly during elections or crisis events, when managing government requests, and when working to counter online harms.
Prior to launching new internet-related or AI services or expanding them into a new market, companies should conduct and publish human rights impact assessments that fully illuminate how their products and actions might affect rights such as freedom of expression, freedom from discrimination, and privacy.
Finally, when complying with sanctions and anti–money laundering regulations, companies should coordinate with democratic governments to ensure that their risk mitigation efforts do not needlessly harm civilians who have not themselves been sanctioned.
Acknowledgements
Freedom on the Net is a collaborative effort between Freedom House staff and a network of more than 80 researchers covering 70 countries, who come from civil society organizations, academia, journalism, and other backgrounds.