China and Russia are reportedly using AI chatbots to control information and push government propaganda, posing a new threat to online freedom. Woke Silicon Valley giants that have already built leftist bias into their own offerings like ChatGPT are likely watching carefully to see how hostile foreign powers wield AI as an information weapon before the 2024 presidential election.
Wired reports that AI chatbots like ChatGPT are typically seen as friendly, helpful tools, guiding users through the vastness of data available online. However, a darker use has emerged in nations like China and Russia, where AI chatbots don’t merely share the leftist bias of their Western cousins but are being transformed into outright tools for censorship and vehicles for state propaganda.
China is leading the way with authoritarian AI. Chatbots like “Ernie,” developed by tech giant Baidu, are programmed to withhold certain information from users. A query about a globally significant event, such as the Tiananmen Square massacre of 1989, is met with a blank “relevant information not available.” This isn’t a glitch or oversight. It’s a deliberate move, designed to align with the government’s strict narrative and censorship guidelines. Western tech giants are not immune to this pressure either. Earlier this year, ChatGPT reportedly gave noticeably more censored answers about Tiananmen Square when queried in Chinese than in English.
In 2023, the Chinese government took a further step, introducing rules that required AI tools, including chatbots, to adhere to censorship guidelines and actively promote “core socialist values.” In practical terms, this means chatbots are prohibited from discussing or sharing information on sensitive topics, such as the ongoing persecution of Uyghurs and other minority groups in the country.
Russia, too, is navigating a similar path, albeit with its own strategies. Russian chatbots like Alice, developed by Yandex, are notably reluctant to delve into sensitive or politically charged topics, such as Russia’s invasion of Ukraine in 2022. Whether this is due to a lack of relevant data, a policy of self-censorship, or a direct government order is unclear. The end result, however, is the same: a clear restriction of information and a curtailment of unbiased knowledge sharing.
The initial hope that chatbots might serve as a tool to bypass traditional censorship and provide unfiltered information to those in repressive environments has been shot down in flames. Instead, these AI tools are being turned into mechanisms that reinforce state narratives and suppress dissenting voices.
Read more at Wired here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.