xAI has blamed an “unauthorized modification” for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to “white genocide in South Africa” when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated topics. The odd replies came from Grok’s X account, which responds to users with AI-generated posts whenever someone tags “@grok.”
According to a post on Thursday from xAI’s official X account, a change was made Wednesday morning to the Grok bot’s system prompt, the high-level instructions that guide the bot’s behavior, directing Grok to provide a “specific response” on a “political topic.” xAI says the change “violated [its] internal policies and core values,” and that the company has “conducted a thorough investigation.”
It is the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, the billionaire founder of xAI and owner of X. Igor Babuschkin, an xAI engineering lead, said Grok had been instructed by a rogue employee to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.
xAI said Thursday that it will make several changes to prevent similar incidents in the future.
Starting today, xAI will publish Grok’s system prompts on GitHub along with a changelog. The company says it will “implement additional checks and measures” to ensure that xAI employees cannot modify the system prompt without review, and that it will create a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems.”
Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AI like Google’s Gemini and ChatGPT, cursing without much restraint.
A study by SaferAI, a nonprofit aimed at improving the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
