Perspectives May 2, 2019
Asia’s Elections Are Plagued by Online Disinformation
Parties and candidates across the region have turned to content manipulation as a preferred campaign tactic.

An image from the BJP Cyber Army page on Facebook (@bjpcyberarmy)
The manipulation of online content is an increasingly common feature of political campaigns around the world, particularly in Asia, where several major countries have elections this year. Candidates have employed the technique for a variety of purposes, such as feigning grassroots support, smearing opponents and journalists, and warping online discussion to advance their positions and downplay unfavorable topics.
Governments seeking to influence elections continue to use cruder forms of censorship, including website blocking and internet shutdowns, but content manipulation offers a number of distinct advantages. It is more difficult to detect and counteract, and it requires relatively few resources, meaning it is available to challengers and incumbents alike. Indeed, the dispersed nature and sheer volume of manipulated content threaten to undermine the ability of voters everywhere to choose their leaders on the basis of accurate news and informed debate.
Direct, indirect, and automated meddling
The most direct way for candidates and parties to manipulate content is to spread deliberately falsified or misleading reports themselves. For example, in India’s general election, which stretches from April 11 to May 19, party officials have shared propaganda over WhatsApp. Amit Malviya, the national head of the ruling Bharatiya Janata Party’s information technology unit, is an administrator of the BJP Cyber Army 400+ WhatsApp group, which describes itself as a league of “Hindu warriors working to save nation from break India forces led politically by congress, communist and religiously by Islam and Christianity [sic].” In Thailand, which held its elections in March, a doctored audio file was circulated, purporting to show that Thanathorn Juangroongruangkit—leader of the popular opposition Future Forward Party—had conspired with ousted prime minister Thaksin Shinawatra. Reports indicated that many of the outlets promoting the recording had links to the News Network Corporation, whose chairman at the time of the election was a member of Thailand’s incumbent military junta, the National Council for Peace and Order.
Some political actors prefer to outsource their content falsification to hired guns. In Indonesia, where 192 million registered voters headed to the polls on April 17, online campaign strategists for both incumbent president Joko Widodo, known as Jokowi, and his main challenger, former general Prabowo Subianto, allegedly deployed paid commentators—known as “buzzers”—to spread political propaganda and disinformation. One buzzer reportedly operated 250 accounts across Facebook, WhatsApp, YouTube, and Instagram. Depending on the reach of their posts, buzzers were said to earn between $70 and $350 per project.
The use of bots, or automated accounts, is also proving popular. They can be created easily and marshaled in large numbers to shape what is discussed online or to harass opponents, journalists, and voters. The technique was on display during a recent campaign visit by Indian prime minister Narendra Modi to the southern state of Tamil Nadu. The Atlantic Council’s Digital Forensic Research Lab (DFRLab) found that the first 49,000 tweets using the hashtag #TNwelcomesModi were heavily manipulated: about 66 percent of the tweets that caused the hashtag to trend came from just 50 accounts. One of them, @SasiMaha6, posted 1,803 tweets at roughly 15-second intervals—a strong indicator of automation. The opposition hashtag #GoBackModi, which trended during the same period, also received a heavy boost from Twitter bots: @PhillyTdp, for example, tweeted the hashtag every 5.3 seconds on average, for a total of 2,179 posts. Similar patterns have been spotted in other countries. In Indonesia, DFRLab found that over a one-month period, 25 percent of posts using the hashtag #JokowiLagi, signifying support for the incumbent president, were automated.
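The timing and concentration signals described above lend themselves to simple automated checks. The following Python sketch is a hypothetical illustration of that kind of analysis, not DFRLab’s actual methodology; the data format, function names, and threshold values are assumptions chosen for clarity.

```python
from collections import defaultdict
from statistics import median

# Hypothetical input: a list of (account_handle, unix_timestamp) pairs.
# All names and thresholds here are illustrative assumptions, not
# DFRLab's actual criteria.
MIN_TWEETS = 100        # ignore low-volume accounts
MAX_MEDIAN_GAP = 20.0   # seconds; @SasiMaha6 posted at ~15-second intervals

def flag_likely_bots(tweets):
    """Flag accounts whose median gap between consecutive posts
    is implausibly short for a human user."""
    by_account = defaultdict(list)
    for handle, ts in tweets:
        by_account[handle].append(ts)

    flagged = {}
    for handle, stamps in by_account.items():
        if len(stamps) < MIN_TWEETS:
            continue
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        med = median(gaps)
        if med <= MAX_MEDIAN_GAP:
            flagged[handle] = (len(stamps), med)
    return flagged

def top_account_share(tweets, top_n=50):
    """Fraction of all tweets produced by the top_n most active
    accounts -- the kind of concentration behind the finding that
    66 percent of #TNwelcomesModi traffic came from 50 accounts."""
    counts = defaultdict(int)
    for handle, _ in tweets:
        counts[handle] += 1
    top = sorted(counts.values(), reverse=True)[:top_n]
    return sum(top) / sum(counts.values())
```

Under these assumed thresholds, an account like @PhillyTdp, posting every 5.3 seconds across 2,179 tweets, would be flagged immediately; human users rarely sustain median gaps of under a minute across thousands of posts.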
Long-term damage
In many Asian electoral environments, traditional media outlets cannot operate independently, and voters rely on social media and messaging apps as their primary source of news. Content manipulation makes the line between real news and propaganda difficult to discern, particularly in hyperpartisan settings in which each side regularly accuses the other of disseminating falsehoods. This in turn significantly reduces voters’ capacity to make clear-eyed political decisions.
But the damage done by partisan content manipulation is not limited to the elections themselves. Parties and candidates that engage in such activities often exploit existing cleavages between groups in society, and the divisions they inflame persist after the vote. Supporters who have bought into fraudulent smears against rival groups may also expect their elected candidates to act accordingly once in power. Government policies, not just voter decisions, could come to be based on outright falsehoods.
Indeed, politicians who attribute their electoral success to manipulation techniques are more likely to use them in governing. In the Philippines in 2016, for example, then presidential candidate Rodrigo Duterte employed a “keyboard army” to assist his campaign. For $10 a day, the hired commentators posted in support of Duterte and against his opponents via fraudulent social media accounts. These efforts have reportedly continued during his administration to promote his policies, including the extrajudicial killings associated with his so-called war on drugs.
There is no easy fix for content manipulation. Collaborative fact-checking efforts are needed to monitor disinformation in real time. In India and the Philippines, news outlets and civil society organizations have teamed up to verify legitimate news, debunk fraudulent content, and monitor candidates’ use of social media. Platforms should also be required to clearly label political advertising and other paid commentary. Longer-term efforts will have to prioritize digital literacy education and support for press freedom more broadly.
For now, however, it seems clear that the problem has far outpaced the resources devoted to combating it. Until this imbalance is addressed, democracy in Asia and around the world will continue to suffer.