OpenAI is changing how it trains AI models to explicitly embrace "intellectual freedom … no matter how challenging or controversial a topic may be," the company says in a new policy.
As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about.
The changes might be part of OpenAI's effort to land in the good graces of the new Trump administration, but they also appear to be part of a broader shift in Silicon Valley over what counts as "AI safety."
On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document laying out how the company trains AI models to behave. In it, OpenAI unveiled a new guiding principle: do not lie, either by making untrue statements or by omitting important context.
In a new section called "Seek the truth together," OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral.
For example, the company says ChatGPT should assert that "Black lives matter," but also that "all lives matter." Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its "love for humanity" generally, then offer context about each movement.
"This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive," OpenAI says in the spec. "However, the goal of an AI assistant is to assist humanity, not to shape it."
These changes could be seen as a response to conservative criticism of ChatGPT's safeguards, which have often seemed to skew center-left. However, an OpenAI spokesperson rejects the idea that the company made the changes to appease the Trump administration.
Instead, the company says its embrace of intellectual freedom reflects OpenAI's "long-held belief in giving users more control."
But not everyone sees it that way.
Conservatives allege AI censorship
Trump's closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump's crew was setting the stage for AI censorship to become the next culture war issue within Silicon Valley.
Of course, OpenAI doesn't say it engaged in "censorship," as Trump's advisers claim. Rather, the company's CEO, Sam Altman, previously claimed in a post on X that ChatGPT's bias was an unfortunate "shortcoming" that the company was working to fix, though he noted it would take some time.
Altman made that comment shortly after a tweet went viral in which ChatGPT refused to write a poem praising Trump, though it would do so for Joe Biden. Many conservatives pointed to this as an example of AI censorship.
While it's impossible to say whether OpenAI was truly suppressing certain viewpoints, it's simply a fact that AI chatbots lean left across the board.
Even Elon Musk admits xAI's chatbot is often more politically correct than he'd like. It's not because Grok was "programmed to be woke," but more likely a consequence of training AI on the open internet.
Nevertheless, OpenAI now says it's doubling down on free speech. This week, the company even removed warnings from ChatGPT that told users when they had violated its policies. OpenAI told TechCrunch this was purely a cosmetic change, with no alteration to the model's outputs.
The company said it wanted ChatGPT to "feel" less censored for users.
It wouldn't be surprising if OpenAI was also trying to impress the new Trump administration with this policy update, former OpenAI policy leader Miles Brundage notes in a post on X.
Trump has previously targeted Silicon Valley companies such as Twitter and Meta for having active content moderation teams that tend to shut out conservative voices.
OpenAI may be trying to get out ahead of that. But there's also a larger shift underway in Silicon Valley and the AI world around the role of content moderation.
Generating answers to please everyone
Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and engaging.
Now, AI chatbot providers are in the same information-delivery business, but arguably with the hardest version of the problem yet: How do you automatically generate answers to any question?
Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don't like to admit it. Those stances are bound to upset someone, miss some group's perspective, or give too much airtime to some political party.
For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial subjects, including conspiracy theories, racist or antisemitic movements, and geopolitical conflicts, that is itself an editorial stance.
Some, including OpenAI co-founder John Schulman, argue that it's the right stance for ChatGPT. The alternative, running a cost-benefit analysis to determine whether an AI chatbot should answer a user's question, could "give the platform too much moral authority," Schulman notes in a post on X.
Schulman isn't alone. "I think OpenAI is right to push in the direction of more speech," said Dean Ball, a researcher at George Mason University's Mercatus Center, in an interview with TechCrunch. "As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important."
In previous years, AI model providers have tried to stop their chatbots from answering questions that might lead to "unsafe" answers. Almost every AI company blocked its chatbot from answering questions about the 2024 U.S. presidential election. This was widely considered a safe and responsible decision at the time.
But OpenAI's changes to its Model Spec suggest we may be entering a new era of what "AI safety" really means, one in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.
Ball says this is partly because AI models are simply better now. OpenAI has made significant progress on AI model alignment; its recent reasoning models consider the company's safety policy before answering. This lets the models give better answers to sensitive questions.
Of course, Elon Musk was the first to build "free speech" into xAI's Grok chatbot, perhaps before the company was really ready to handle sensitive questions. It may still be too soon for leading AI models, but now others are embracing the same idea.
Shifting values for Silicon Valley
Mark Zuckerberg made waves last month by reorienting Meta's businesses around First Amendment principles. He praised Elon Musk in the process, saying the owner of X took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.
In practice, both X and Meta ended up dismantling longstanding trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.
The changes at X have hurt its relationships with advertisers, but that may have more to do with Musk, who has taken the unusual step of suing some of them for boycotting the platform. Early signs indicate that Meta's advertisers were unfazed by Zuckerberg's free speech pivot.
Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley in recent decades. Google, Amazon, and Intel have eliminated or scaled back diversity initiatives over the last year.
OpenAI may be reversing course, too. The ChatGPT maker appears to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.
As OpenAI embarks on one of the largest American infrastructure projects ever with Stargate, a $500 billion AI datacenter effort, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.
Coming up with the right answers, or at least appearing to, may prove key to both.