Elon Musk’s AI chatbot Grok appeared to experience a bug on Wednesday that caused it to reply to dozens of posts on X with information about “white genocide” in South Africa, even when users asked nothing about the subject.
The odd replies stem from the X account for Grok, which responds to users with AI-generated posts whenever a user tags @grok. When asked about unrelated topics, Grok repeatedly told users about a “white genocide,” as well as the anti-apartheid chant “Kill the Boer.”
Grok’s strange, off-topic answers are a reminder that AI chatbots are still a nascent technology, and may not always be a reliable source of information. In recent months, AI model providers have struggled to moderate the responses of their AI chatbots, which has led to odd behaviors.
OpenAI was recently forced to roll back an update to ChatGPT that caused the AI chatbot to be overly sycophantic. Meanwhile, Google has faced problems with its Gemini chatbot refusing to answer questions about political topics, or giving misinformation about them.
In one example of Grok’s errant behavior, a user asked Grok about a professional baseball player, and Grok replied that “The claim of ‘white genocide’ in South Africa is highly debated.”
Several users posted on X about their confusing, strange interactions with the Grok AI chatbot on Wednesday.
It’s unclear at this time what caused Grok’s strange answers, but xAI’s chatbots have been manipulated in the past.
In February, Grok 3 appeared to briefly censor unflattering mentions of Elon Musk and Donald Trump. At the time, xAI engineering lead Igor Babuschkin confirmed that Grok had been briefly instructed to do so, though the company quickly reversed course after the incident drew wider attention.
Whatever the cause of the bug, Grok appears to be responding to users more normally now. An xAI spokesperson did not immediately respond to TechCrunch’s request for comment.
