
Avoiding the Pitfalls of Social Media Regulation


by Alex Rochefort, Google Public Policy Fellow


Special Assistant to the President for Innovation Policy and Initiatives Matt Lira speaks with House Minority Leader Kevin McCarthy, R-Calif., July 11, 2019, during the Presidential Social Media Summit in the East Room of the White House. Image Credit: Official White House Photo by Andrea Hanks via Flickr

Policymakers should proceed with caution and welcome good advice.

Earlier this month, the White House convened a “social media summit” to discuss the “opportunities and challenges of today’s online environment.” Under review was the allegation that major social media companies suppress conservative viewpoints. In the lead-up to the event, one outspoken activist in attendance had penned an op-ed in the Washington Post decrying Section 230 of the Communications Decency Act (CDA), the legal provision that shapes how social media platforms moderate user-generated content.

As policymakers react to the controversy surrounding the social media industry, multiple agendas are colliding. In many cases, new government rules are being advanced without an assessment of their possible long-term effects. Nowhere is this more evident than with regard to the debate over Section 230.

Broadly speaking, Section 230 is a federal law that protects online service providers from most civil liability and state criminal prosecution for content generated by their users. In other words, companies like Facebook and Twitter generally cannot be sued for user posts that appear on their platforms. The provision also encourages proactive oversight by shielding providers when they limit offensive, harmful, or otherwise objectionable content, regardless of whether that content falls within First Amendment protections. Promoting responsible content moderation was, in fact, a key goal of the law. Importantly, the immunity does not extend to federal criminal law, such as prohibitions on child sexual abuse imagery, or to other statutory carve-outs, including the sex-trafficking provisions added by FOSTA, intellectual property (copyright) law, and electronic communications privacy law.

Responding to a digital ecosystem that is increasingly plagued by fake news, hate speech, false advertising, and other problems, officials in various countries have updated their own “intermediary liability” arrangements. Singapore has gone so far as to criminalize fake news, which the government has broad discretion to define. Technology companies that don’t comply with government content-removal requests risk incurring significant fines. This approach offers little in the way of judicial oversight and has been criticized as overreach and a threat to freedom of expression. Germany’s new content law, known as NetzDG, requires social media providers to follow detailed government guidelines in blocking access to hate speech, defamation, and other illegal content within the country. It too has been condemned for encouraging censorship while imposing standards that are not necessarily recognized in other societies.

Now, some in the United States are seeking to reduce service providers’ protections under Section 230. One senator has proposed removing automatic immunities unless major platforms can prove that their moderation practices are politically neutral. The idea, though presumably well-intentioned, is fraught with ambiguity and contradicts a central purpose of Section 230, which was to encourage moderation without specific content obligations. Moreover, asking government officials to adjudicate the “neutrality” of content policies could invite political gamesmanship. Others taking aim at Section 230 have suggested treating platforms as publishers, making them directly liable for the content they host. Courts have already found some platforms liable for content they effectively developed or created. Yet experts have noted that the publisher liability approach is problematic from a practical implementation standpoint and was not what Congress originally intended.

On the morning of the social media summit, a group of academics and civil society organizations released a document entitled “Liability for User-Generated Content Online: Principles for Lawmakers.” The document urges a judicious approach that would allow authorities to take action against the rising tide of online abuses while avoiding an overbroad or poorly crafted regulatory intervention. The following points of advice on the future of Section 230 are based, in whole or in part, on the perspectives of this group:

1. Regulations should err on the side of free speech.

As it stands, Section 230 prioritizes freedom of expression. This is as it should be. The concept of internet freedom cannot be taken for granted at a time when new forms of corporate and governmental intrusion are surfacing.

2. Incremental changes are wiser than sweeping overhauls.

The balance that Section 230 strikes between potentially conflicting values—such as free speech and user protection, corporate problem-solving and government control, or innovation and standardization—is a delicate one. Legislative overhauls that would impose new regulatory regimes on such a complex and evolving sector should be approached with caution. Some legal scholars favor modest proposals for dealing with harmful online content through “conditioned” immunity, under which social media companies must demonstrate that they are curbing the worst abuses of their platforms in order to retain their legal protections. Such an approach would not preclude certain state-level exceptions to immunity. One expert has mapped the competing legal, technical, organizational, and social considerations involved in framing intermediary liability statutes, precisely so that lawmakers are encouraged to fine-tune their regulatory approach in this area.

3. Greater transparency and civil society participation are necessary.

Do social media platforms suppress the speech of specific ideological groups? Do they apply and enforce content rules consistently? How systematic and thorough are content policy updates? Without better data, it is impossible to draw informed conclusions about matters like these. Policymakers should continue to prioritize expanded information-sharing among technology companies, civil society advocates, and researchers. Notably, France is currently exploring the regulatory possibilities of this kind of collaboration. Transparency is not an end in itself, but rather the crucial means for achieving greater accountability and evidence-based public policy in the social media domain.

 

Finally, in the face of differing social norms and laws around the world, a human rights perspective could provide a body of principles for guiding content-moderation policies internationally. As stated in a UN special rapporteur’s 2018 report on regulating user-generated content, “the authoritative global standard for ensuring freedom of expression on [social media] platforms is human rights law, not the varying laws of States or [the platforms’] own private interests.”

Analyses and recommendations offered by the authors do not necessarily reflect those of Freedom House.
