A complaint about poverty in rural China. A news report about a corrupt Communist Party member. A cry for help about corrupt cops shaking down entrepreneurs.
These are just a few of the 133,000 examples fed into a sophisticated large language model that is designed to automatically flag any piece of content considered sensitive by the Chinese government.
A leaked database seen by TechCrunch reveals that China has developed an AI system that supercharges its already formidable censorship machine, extending far beyond traditional taboos like the Tiananmen Square massacre.
The system appears primarily geared toward censoring Chinese citizens online, but it could be used for other purposes, such as further improving the already extensive censorship of Chinese AI models.
Xiao Qiang, a researcher at UC Berkeley who studies Chinese censorship and who also examined the dataset, told TechCrunch that it was “clear evidence” that the Chinese government or its affiliates want to use LLMs to improve repression.
“Unlike traditional censorship mechanisms, which rely on human labor for keyword-based filtering and manual review, an LLM trained on such instructions would significantly improve the efficiency and granularity of state-led information control,” Qiang said.
It adds to growing evidence that authoritarian regimes are quickly adopting the latest AI technology. In February, for example, OpenAI said it caught multiple Chinese entities using LLMs to track anti-government posts and smear Chinese dissidents.
The Chinese Embassy in Washington, D.C., told TechCrunch in a statement that it opposes “groundless attacks and slanders against China” and that China attaches great importance to developing ethical AI.
Data found in plain sight
The dataset was discovered by security researcher NetAskari, who shared a sample with TechCrunch after finding it stored in an unsecured Elasticsearch database hosted on a Baidu server.
That doesn’t indicate any involvement from either company; all kinds of organizations store their data with such providers.
There is no indication of exactly who built the dataset, but records show that the data is recent, with its latest entries dating from December 2024.
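TechCrunch has not published the technical details of the discovery, but for context, an unsecured Elasticsearch instance of this kind can usually be read with plain, unauthenticated HTTP requests. The sketch below is purely illustrative; the host address, index name, and document fields are invented, not details from the leak.

```python
import requests  # third-party HTTP library

# Hypothetical endpoint: the real host, port, and index name are not public.
BASE_URL = "http://203.0.113.10:9200"  # documentation/test IP, not the real server
INDEX = "content_review"               # assumed index name

# An open cluster lists its indices to anyone who asks (no credentials needed).
print(requests.get(f"{BASE_URL}/_cat/indices?v", timeout=10).text)

# Pull a small sample of documents from one index.
resp = requests.get(
    f"{BASE_URL}/{INDEX}/_search",
    json={"query": {"match_all": {}}, "size": 5},
    timeout=10,
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```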
An LLM for detecting dissent
In language reminiscent of how people prompt ChatGPT, the system’s creator tasks an unnamed LLM with figuring out whether a piece of content has anything to do with sensitive topics related to politics, social life, and the military. Such content is deemed “highest priority” and must be flagged immediately.
Top-priority topics include pollution and food safety scandals, financial fraud, and labor disputes, which are hot-button issues in China that sometimes lead to public protests, for example, the Shifang anti-pollution protests of 2012.
Any form of “political satire” is explicitly targeted. For example, if someone uses historical analogies to make a point about “current political figures,” that must be flagged instantly, and so must anything related to “Taiwan politics.” Military matters are extensively targeted, including reports of military movements, exercises, and weaponry.
A snippet of the dataset can be seen below. The code inside it references prompt tokens and LLMs, confirming the system uses an AI model to do its bidding:


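The actual prompt and code have not been published, so as a rough illustration of the technique described here, prompting an LLM to flag “highest priority” content, the sketch below shows what such a setup typically looks like. The client, model name, labels, and wording are all assumptions made for illustration, not the leaked system.

```python
from openai import OpenAI  # any chat-completion-style client would work similarly

client = OpenAI()  # assumes an API key in the environment; model choice is a placeholder

# Hypothetical system prompt mirroring the behavior the article describes:
# flag content touching politics, social issues, or the military as "highest priority."
SYSTEM_PROMPT = (
    "You are a content reviewer. Read the post and decide whether it relates to "
    "politics, social issues, or military affairs. If it does, label it "
    "'highest priority'; otherwise label it 'normal'. Reply with the label only."
)

def classify(post: str) -> str:
    """Return a priority label for one piece of content (illustrative only)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the leaked system's model is unknown
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post},
        ],
    )
    return resp.choices[0].message.content.strip()

print(classify("A business owner says local police keep demanding payments."))
```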
Inside the training data
From this huge collection of 133,000 examples that the LLM must evaluate for censorship, TechCrunch gathered 10 representative pieces of content.
Topics likely to stir up social unrest are a recurring theme. One snippet, for example, is a post by a business owner complaining about corrupt local police officers shaking down entrepreneurs, a rising problem in China as its economy struggles.
Another piece of content laments rural poverty in China, describing run-down towns left with only elderly people and children. There is also a news report about the Chinese Communist Party (CCP) accusing a local official of severe corruption and of believing in “superstitions” instead of Marxism.
There is extensive material related to Taiwan and military matters, such as commentary about Taiwan’s military capabilities and details about a new Chinese jet fighter. The Chinese word for Taiwan (台湾) appears more than 15,000 times in the data, a search by TechCrunch shows.
Subtle dissent appears to be targeted as well. One snippet included in the database is an anecdote about the fleeting nature of power that uses the popular Chinese idiom “when the tree falls, the monkeys scatter.”
Power transitions are a particularly sensitive issue in China thanks to its authoritarian political system.
Built for “public opinion work”
The dataset does not include any information about its creators. But it does say that it is intended for “public opinion work,” which offers a strong clue that it is meant to serve Chinese government goals, one expert told TechCrunch.
Michael Caster, the Asia program manager at rights organization Article 19, explained that “public opinion work” is overseen by a powerful Chinese government regulator, the Cyberspace Administration of China (CAC), and typically refers to censorship and propaganda efforts.
The end goal is to ensure that Chinese government narratives are protected online, while alternative views are purged. Chinese President Xi Jinping has himself described the internet as the “frontline” of the CCP’s “public opinion work.”
Repression becomes smarter
The dataset examined by TechCrunch is the latest evidence that authoritarian governments are seeking to leverage AI for repressive purposes.
OpenAI released a report last month revealing that an unidentified actor, likely operating from China, used generative AI to monitor social media conversations, particularly those advocating for human rights protests against China, and forward them to the Chinese government.
Contact us
If you know more about how AI is used in state oppression, you can contact Charles Rollet securely on Signal at charlesrollet.12. You can also contact TechCrunch via SecureDrop.
OpenAI also found the technology being used to generate comments highly critical of a prominent Chinese dissident, Cai Xia.
Traditionally, China’s censorship methods rely on more basic algorithms that automatically block content mentioning blacklisted terms, such as “Tiananmen massacre” or “Xi Jinping,” as many users experienced when first trying DeepSeek.
But newer AI technology, such as LLMs, can make censorship more efficient by finding even subtle criticism at vast scale. Some AI systems can also keep improving as they ingest more and more data.
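To illustrate the gap described here: a traditional keyword blacklist only catches exact string matches, so a paraphrased post slips through, while an LLM classifier like the sketch earlier in this article can be asked about meaning rather than strings. The terms and sample post below are invented for illustration.

```python
import re

# A traditional keyword filter: block posts containing exact blacklisted strings.
BLACKLIST = ["Tiananmen massacre", "习近平"]  # illustrative entries only

def keyword_blocked(post: str) -> bool:
    """True if the post contains any blacklisted term verbatim."""
    return any(re.search(re.escape(term), post, re.IGNORECASE) for term in BLACKLIST)

post = "Remember what happened in Beijing in June 1989."
print(keyword_blocked(post))  # False: no blacklisted string, so the filter misses it
# An LLM classifier, by contrast, can be prompted to judge the post's meaning
# and could flag this as a reference to a banned historical event.
```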
“I think it’s crucial to highlight how AI-driven censorship is evolving, making state control over public discourse even more sophisticated, especially at a time when Chinese AI models such as DeepSeek are making headwaves,” said Xiao, the Berkeley researcher.
