AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the "reckless" and "completely irresponsible" safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.
The criticisms follow weeks of scandals at xAI that have overshadowed the company's technological advances.
Last week, the company's AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself "MechaHitler." Shortly after xAI took the chatbot offline to address the problem, it launched an even more capable frontier AI model, Grok 4, which TechCrunch and others found consulted Elon Musk's personal politics to help answer hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.
Friendly joshing among employees of competing AI labs is fairly normal, but these researchers seem to be calling for increased attention to xAI's safety practices, which they claim are at odds with industry norms.
"I didn't want to post on Grok safety since I work at a competitor, but it's not about competition," said Boaz Barak, a computer science professor currently on leave from Harvard to work on safety research at OpenAI, in a Tuesday post on X. "I appreciate the scientists and engineers at @xai, but the way safety was handled is completely irresponsible."
Barak particularly takes issue with xAI's decision not to publish system cards, the industry-standard reports that detail training methods and safety evaluations in a good-faith effort to share information with the research community. As a result, Barak says, it's unclear what safety training was done on Grok 4.
OpenAI and Google have spotty reputations of their own when it comes to promptly sharing system cards while unveiling new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model. Meanwhile, Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. However, these companies have historically published safety reports for all frontier AI models before they enter full production.
Barak also notes that Grok's AI companions "take the worst issues we currently have for emotional dependencies and try to amplify them." In recent years, we have seen countless stories of unstable people developing concerning relationships with chatbots, and of how AI's overly agreeable answers can tip them over the edge of sanity.
Samuel Marks, an AI safety researcher at Anthropic, also took issue with xAI's decision not to publish a safety report, calling the move "reckless."
"OpenAI and Google's release practices have problems," Marks wrote in a post on X. "But at least they do something, anything, to assess safety before deployment and document their findings. xAI does not."
The reality is that we don't really know what xAI did to test Grok 4. In a widely shared post on the online forum LessWrong, an anonymous researcher claims that Grok 4 has no meaningful safety guardrails, based on their testing.
Whether that's true or not, the world seems to be discovering Grok's shortcomings in real time. Several of xAI's safety issues have since gone viral, and the company claims to have addressed them with changes to Grok's system prompt.
OpenAI, Anthropic, and xAI did not respond to TechCrunch's request for comment.
Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company did "dangerous capability evaluations" on Grok 4. However, the results of those evaluations have not been publicly shared.
"It concerns me when standard safety practices aren't upheld across the AI industry, like publishing the results of dangerous capability evaluations," said Steven Adler, an independent AI researcher who previously led safety teams at OpenAI, in a statement to TechCrunch. "Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they're building."
What's interesting about xAI's questionable safety practices is that Musk has long been one of the industry's most notable advocates for AI safety. The billionaire leader of xAI, Tesla, and SpaceX has often warned about the potential of advanced AI systems to cause catastrophic outcomes for humans, and he has praised a more open approach to developing AI models.
And yet, AI researchers at competing labs claim that xAI is veering from industry norms around safely releasing AI models. In doing so, Musk's startup may inadvertently be making a strong case for state and federal lawmakers to set rules around the publication of AI safety reports.
There are several attempts at the state level to do just that. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs, likely including xAI, to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway, but evidently, not all of them do so consistently.
AI models today have yet to exhibit real-world scenarios in which they cause truly catastrophic harms, such as the deaths of people or billions of dollars in damages. However, many AI researchers say this could become a problem in the near future, given the rapid progress of AI models and the billions of dollars Silicon Valley is investing to improve them further.
But even for skeptics of such catastrophic scenarios, there's a strong case that Grok's misbehavior makes the products it powers today significantly worse.
Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up "white genocide" in conversations with users. Musk has indicated that Grok will be more deeply integrated into Tesla vehicles, and xAI is trying to sell its AI models to the Pentagon and other enterprises. It's hard to imagine that people driving Musk's cars, federal workers protecting the U.S., or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.
Several researchers argue that AI safety and alignment testing not only ensures that the worst outcomes don't happen, but also protects against near-term behavioral issues.
At the very least, Grok's incidents tend to overshadow xAI's rapid progress in developing frontier AI models that rival the best technology from OpenAI and Google, just a couple of years after the startup was founded.
