When DeepSeek, Alibaba, and other Chinese companies released their AI models, Western researchers quickly noticed that the models sidestepped questions critical of the Chinese Communist Party. US officials later confirmed that these tools are designed to reflect Beijing's talking points, raising concerns about censorship and bias.
American AI leaders like OpenAI have pointed to this as a reason to advance their technology quickly, without too much regulation or oversight. As OpenAI's chief global affairs officer Chris Lehane wrote in a LinkedIn post last month, there is a contest between "democratic AI led by the United States and China's authoritarian AI."
An executive order signed on Wednesday by President Donald Trump that bans "woke AI" models and AI that isn't "ideologically neutral" from government contracts could disrupt that balance.
The order calls out diversity, equity, and inclusion (DEI), labeling it a "pervasive and destructive" ideology that can "distort the quality and accuracy of the output." Specifically, the order refers to information about race or sex, manipulation of racial or sexual representation, critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.
Experts warn it could create a chilling effect on developers, who may feel pressure to align model outputs and datasets with White House rhetoric in order to secure federal dollars for their cash-burning businesses.
The order comes on the same day the White House published Trump's "AI Action Plan," which shifts national priorities away from societal risk and focuses instead on building out AI infrastructure, cutting red tape for tech companies, national security, and competition with China.
The order directs the director of the Office of Management and Budget, along with the administrator for Federal Procurement Policy, the administrator of General Services, and the director of the Office of Science and Technology Policy, to issue guidance to other agencies on how to comply.
"Once and for all, we are getting rid of woke," Trump said on Wednesday during an AI event hosted by the All-In Podcast and the Hill & Valley Forum. "I will be signing an order banning the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas, such as critical race theory, which is ridiculous. And from now on, the US government will deal only with it."
Determining what is impartial or objective is one of the order's many challenges.
Philip Seargeant, a senior lecturer in applied linguistics at The Open University, told TechCrunch that nothing can ever be truly objective.
"One of the fundamental tenets of sociolinguistics is that language is never neutral," Seargeant said. "So the idea that you can ever get pure objectivity is a fantasy."
On top of that, the Trump administration's ideology does not reflect the beliefs and values of all Americans. Trump has repeatedly sought to eliminate funding for grants related to climate, education, public broadcasting, research, social services, community and agricultural support programs, and gender-affirming care, often framing these initiatives as "woke" or ideologically driven.
As Rumman Chowdhury, a data scientist, CEO of the tech nonprofit Humane Intelligence, and former US science envoy for AI, put it: "Anything [the Trump administration doesn't] like is immediately tossed into this pejorative pile of woke."
The definitions of "truth-seeking" and "ideological neutrality" in the order published Wednesday are vague in some ways and specific in others. While "truth-seeking" is defined as LLMs that "prioritize historical accuracy, scientific inquiry, and objectivity," "ideological neutrality" is defined as LLMs that are "neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas."
Those definitions leave room for broad interpretation, as well as potential pressure. AI companies have pushed for fewer restrictions on how they operate. And while an executive order doesn't carry the force of legislation, frontier AI companies could still find themselves subject to the shifting priorities of the administration's political agenda.
Last week, OpenAI, Anthropic, Google, and xAI signed contracts with the Department of Defense to receive up to $200 million each to develop AI workflows that address critical national security challenges.
It's unclear which of these companies is best positioned to gain from the woke AI ban, or whether they will comply.
TechCrunch has reached out to each of them and will update this article if we hear back.
Despite its own biases, xAI may be the most aligned with the order – at least at this early stage. Elon Musk has positioned Grok, xAI's chatbot, as the ultimate anti-woke, "less biased" truth-seeker. Grok's system prompts have directed it to avoid deferring to mainstream authorities and media, to seek contrarian information even if it's politically incorrect, and even to consult Musk's own views on controversial issues. In recent months, Grok has even spouted antisemitic comments and praised Hitler on X, among other hateful, racist, and misogynistic outputs.
Mark Lemley, a law professor at Stanford University, told TechCrunch the executive order is "clearly intended to engage in viewpoint discrimination, since [the government] just signed a contract with Grok, a.k.a. MechaHitler."
Alongside xAI's DOD funding, the company announced that "Grok for Government" had been added to the General Services Administration schedule, meaning xAI products are now available for purchase by every government office and agency.
"The right question is this: would they ban Grok, the AI they just signed a large contract with, because it has been deliberately engineered to give politically charged answers?" Lemley said in an email interview. "If not, the ban is clearly designed to discriminate against a particular viewpoint."
As Grok's system prompts have shown, a model's outputs can reflect both the people building the technology and the data the AI is trained on. In some cases, an overabundance of caution among developers, combined with AI trained on internet content that promotes values like inclusivity, has led to distorted model outputs. Google, for example, came under fire last year after its Gemini chatbot depicted a Black George Washington and racially diverse Nazis – which Trump's order calls out as an example of ideologically infused AI.
Chowdhury says her biggest fear with this executive order is that AI companies will actively rework training data to toe the party line. She pointed to statements from Musk a few weeks before Grok 4 launched, saying that xAI would use the new model and its advanced reasoning capabilities to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors."
That would ostensibly put Musk in the position of judging what is true, which could have huge downstream effects on how people access information.
Of course, companies have been making judgment calls about what information is seen and not seen since the dawn of the internet.
Conservative David Sacks – the entrepreneur and investor whom Trump appointed as AI czar – has been vocal about his concerns around "woke AI" on the All-In Podcast, which co-hosted Trump's day of AI announcements. Sacks has accused the creators of prominent AI products of infusing them with left-wing values, framing his arguments as a defense of free speech and a warning against a trend toward centralized ideological control on digital platforms.
The problem, experts say, is that there is no one truth. Achieving unbiased or neutral results is impossible, especially in today's world, where even facts are politicized.
"If the results an AI produces say that climate science is correct, is that left-wing bias?" Seargeant said. "Some people say you need to give both sides of the argument to be objective, even if one side of the argument has no status to it."
Do you have a sensitive tip or confidential documents? We're reporting on the inner workings of the AI industry – from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.
