Perspectives | July 20, 2022
How Social Media Companies Can Protect Kenyan Users Ahead of the ‘Mother of All Elections’
With the help of experts and local civil society, social media companies can do more to protect their users’ right to free expression while combating disinformation that hinders democratic participation.

The President of Kenya, Uhuru Kenyatta, addresses journalists at the conclusion of the historic 28th Extra-Ordinary IGAD Heads of State Summit in Mogadishu, Somalia on September 13, 2016. AMISOM Photo / Omar Abdisalan
On August 9, Kenyan voters will participate in a general election that some are calling the “mother of all elections.” This year’s contest promises to be bitterly fought, with personal relationships between the candidates especially fraught. Outgoing president Uhuru Kenyatta, who is term-limited, has thrown his support behind former rival and opposition leader Raila Odinga. Meanwhile, Deputy President William Ruto, who was instrumental in Kenyatta’s prior electoral victories, has launched a populist campaign that appeals to economic grievances like the rising cost of living while targeting the state machinery that has benefited him as deputy president.
While Kenyan elections are considered among the most competitive in East Africa, the specter of electoral violence and malfeasance is never far from voters’ minds. In 2007, electoral authorities declared that then-incumbent Mwai Kibaki had defeated Odinga after initial reporting suggested Kibaki was trailing by a significant margin. The head of the electoral commission later admitted that both sides had pressured him to declare a winner quickly; communal clashes followed, killing over 1,000 people and displacing more than 350,000. In 2017, Kenyatta won his second and final term in a disputed election, with the Supreme Court annulling the first-round results after determining that the Independent Electoral and Boundaries Commission’s (IEBC) vote-counting procedures were flawed.
The IEBC is facing accusations of corruption and unpreparedness ahead of the August vote. Observers fear that the uncertainty surrounding the IEBC, along with growing disillusionment over the perceived mismanagement of the country’s economy—especially among younger citizens—could lead to violence. The risk is particularly high if the vote count swings wildly toward one candidate at the eleventh hour, as it did in 2007, and opponents quickly seize on narratives alleging fraud. Amid this crucial vote, social media companies must ensure that Kenyan voters have access to reliable information and are protected from the online harms that threaten democratic participation.
Internet harms are threats to electoral integrity
The country’s recent history is not the only reason to believe that candidates could use disinformation to foment violence. Kenya-based researchers like Odanga Madung have raised the alarm about the prevalence of harmful “political disinformation,” some of which is meant to stoke ethnic tension, on platforms like TikTok ahead of the August polls.
Indeed, when Freedom House prepared a preelection assessment on Kenya as part of our Election Watch for the Digital Age project, we noted that state and nonstate actors manipulate the country’s online information landscape to spread disinformation, increasing the risk of communal violence. Our assessment also noted the booming disinformation-for-hire market, where high-profile influencers are paid to spread distorted information on politically sensitive subjects with coordinated hashtags.
TikTok is a particularly concerning vector of harmful disinformation this year. Video content is notoriously difficult to moderate, especially when it contains speech and text in a variety of languages. In Kenya, this is notably true for content that is not produced in English or Kiswahili, the country’s two official languages. Madung has documented potent examples of disinformation on TikTok: in one case, a falsified video purports to show television coverage of an opinion poll. He has also documented out-of-context images of previous bouts of electoral violence presented as though the violence is occurring in the present, paired with threatening language about the upcoming polls. Kenyans who encounter these distorted reminders may fear a recurrence of violence and stay away from the ballot box.
Electoral periods present distinct risks in that social media users may encounter damaging content specific to those periods, like vote-buying solicitations or posts designed to suppress voter turnout. But elections also act as accelerants for existing harms like disinformation, censorship, and hate speech. Social media companies would do well to recognize that “everyday” internet harms are often of a piece with those that threaten electoral integrity in Kenya and elsewhere.
Paths forward for social media companies
How can social media companies better protect electoral integrity in places like Kenya while creating online spaces where their users can safely participate in the activities—debate, campaigning, organizing—that are essential to democratic success?
For one, social media companies should heed the call of civil society and invest more in experts who understand the languages and sociopolitical contexts of their markets. For Kenya, this would mean coverage of content produced in commonly spoken languages like Kikuyu and Dholuo in addition to English and Kiswahili. These experts can bolster content moderation efforts that shield users from culturally coded threats and harmful language while preserving voters’ right to speak with openness and nuance during the polls.
This is a difficult task, as effective moderation at scale is challenging. Social media companies must also navigate the tension between safety and free expression that is at the heart of their election-integrity work. But companies can strike that balance by collaborating with local nongovernmental organizations (NGOs). With their involvement, firms can better understand the cultural and sociopolitical context of the threats their users face as well as the implications of their own election-time policies.
At the same time, companies should resist the temptation to “outsource” their moderation efforts to underresourced and overworked NGOs. Instead, they should integrate the lessons of their local partners when developing products and election-related policies that are designed to safeguard fundamental freedoms. These products and policies should also move beyond the binary take-down-or-leave-up approach to moderation, offering more granular options, such as adding friction to slow the spread of viral disinformation, while remaining sensitive to local context.
Finally, technology companies should be more transparent about their election-related policies and decisions. This includes releasing reports on takedowns and other content decisions during electoral periods, whether those decisions are initiated by governments or undertaken by the companies themselves. Social media companies can also highlight the state affiliations of media outlets by adding content labels, a practice TikTok has only recently (and selectively) committed to following the Russian invasion of Ukraine.
There is a glaring gap between the complex, sometimes divisive role that ethnic and linguistic identities play in Kenyan electoral politics and social media companies’ understanding of the on-the-ground consequences for their users. Closing that gap should be a top priority for all stakeholders committed to supporting democratic elections.