Michael Caster is a human rights advocate and researcher and the co-founder of Safeguard Defenders, an NGO specializing in human rights in China.
At the start of April, the U.N. and the Chinese technology company Tencent Holdings announced a partnership to host thousands of online conversations using Tencent's videoconferencing software, in part to ask what the world should look like in 25 years.
This sounds modern and optimistic, but the deal risks involving the U.N. in the controversial Chinese tech sector, where government intrusion through electronic surveillance is rife and censorship reigns.
These U.N. conversations are bound to touch on explosive topics: freedom of speech, freedom of religion, the benefits and downsides of different forms of government. But on Tencent's social media app WeChat (as opposed to VooV, the videoconferencing platform being used for the U.N. conversations), these discussions might not be possible.
As pointed out in a 2018 report by David Kaye, the U.N.'s Special Rapporteur on freedom of expression, Tencent requires "anyone using [WeChat] within China and Chinese citizens using the platform 'anywhere in the world' to comply with content restrictions that mirror Chinese law or policy."
These content restrictions prohibit topics deemed sensitive in China: not just the political system and the surveillance and mass internment of more than one million members of ethnic and religious minorities in Xinjiang, but now the coronavirus as well.
As the Human Rights Council has stressed, public-private partnerships should be fully transparent and not kept confidential. Yet despite these potential concerns, very little about the deal has been made public.
Considering that one of the partnership's goals is to discuss the future we want to see, what happens when people raise these sensitive topics on a Tencent platform? Will they be censored? And what happens to people who say such things if they are in China?
Tencent did not reply to questions about these concerns.
We know WeChat does not allow end-to-end encryption. What is more, China's cryptography law effectively grants the Chinese government access to all data on Chinese platforms.
The U.N. partnership involves Tencent's AI-powered simultaneous interpretation software, and the Chinese government's broad use of AI, quite apart from Tencent specifically, has also been a source of concern.
AI development needs data to train algorithms. The more data, the faster they learn. But as human rights organizations' reports have documented, China has fed a massive pool of biometric data, forcibly collected, into its sophisticated network of surveillance and detention, most alarmingly in Xinjiang. China is exporting such abusive technology to other countries with poor human rights records.
According to a Freedom House report, WeChat's real-time censorship goes beyond the removal of offending text: it relies on AI to identify and delete images as well. Chinese netizens have been using pictures to express ideas otherwise scrubbed by text-based censorship algorithms, and recently some have tried speaking or typing in other languages to evade the censors.
David Kaye, in a separate 2018 report, emphasized the central role human rights law must play in AI development; instead, China's tech sector has honed AI to perpetuate human rights abuses.
China has a poor record even in global governance forums. For five years, digital systems that China itself had donated to the new African Union headquarters secretly transferred confidential data out of the building. China denied any wrongdoing.
Under the U.N. Guiding Principles on Business and Human Rights, companies should assess any actual or potential adverse human rights impacts associated with their relationships.
The U.N. should be guided by its own principles and halt this partnership until an independent and impartial impact assessment has addressed these and related concerns.
Given what we already know about Tencent's domestic activity, the U.N. should be cautious.