Beijing’s impact on AI-generated content, COVID-19 deaths, Alibaba “golden shares” (February 2023)
In this issue: How Beijing’s censors could shape the future of AI-generated content, Chinese officials’ undercounting of COVID-19 deaths, and more government stakes purchased in top tech firms. Also, the trial of 47 Hong Kong democrats begins and incidents of media influence from China occur in Ethiopia, Brazil, Australia, and Taiwan.
Image of the month: Canceled Tea Party
Chinese artist Badiucao, now living in Australia, published this cartoon on February 11 after the Guardian reported that the British Foreign and Commonwealth Office intended to meet with Erkin Tuniyaz, chairman of the Xinjiang Uyghur Autonomous Region, in mid-February. Tuniyaz has been sanctioned by the US government for his involvement in rights violations against Uyghurs and other ethnic minority groups in Xinjiang. Badiucao's cartoon, which circulated widely on Twitter, depicts a British official taking tea with Tuniyaz, symbolizing the downplaying of the Uyghur rights crisis. Following an outcry from British MPs and rights groups, Tuniyaz canceled the planned visit (credit: Badiucao).
- Analysis: Beijing’s Censors Could Shape the Future of AI-Generated Content
- In the News:
- Censorship updates: COVID-19 deaths, Alibaba “golden shares,” Apple pulls social media app
- State media vs. netizen narratives: Spy balloon, Li Wenliang memorials, student’s death
- Detentions: Chinese police target pandemic protesters, Xinjiang detainee database published, bookseller’s wife barred from traveling abroad
- Hong Kong: Trial of democracy activists, sedition updates, broadcasters to promote security law, foreign tech censorship
- Beyond China: Ethiopian Twitter campaign, Taiwanese data leaks, global spread of China-based apps, surveillance-export concerns
- Featured Pushback: Viral tweets shed light on plight of political prisoners
- What to Watch For
A critical need for transparency
A defining feature of China’s censorship system is its opacity. Much of what is known about the day-to-day functioning of the apparatus comes from leaked censorship directives, testimony from former employees, anonymous comments to the media by current staff, and the kinds of outside research and investigations referenced above. Notably, while many international tech firms fall short on transparency, their Chinese counterparts are generally even less open about how their products and services function and moderate content, including their generative AI applications. For example, Baidu’s ERNIE-ViLG text-to-image generator does not publish an explanation of its moderation policies, unlike the international alternatives DALL-E and Stable Diffusion.
Given the clear potential for abuse, any pressure applied to Chinese technology firms for greater transparency would benefit users. International competitors should integrate strong human rights principles into the development and deployment of new generative AI tools and set a high global standard for transparency and public accountability. Meanwhile, independent investigations and rigorous testing to detect and understand pro-CCP content manipulation will remain critical to informing users and creating better safeguards for free expression and access to diverse information.
It is perhaps a sign of the times that these constructive endeavors will also likely be assisted by AI technology.
Sarah Cook is a senior advisor for China, Hong Kong, and Taiwan at Freedom House. This article was also published in The Diplomat on February 21, 2023.