When xAI launched Grok 4 last week, the company claimed that the large language model had surpassed several competitors on various benchmarks.
But the Grok account on X, which runs on the model, almost immediately showed that there were some serious problems: it started saying its surname was “Hitler,” tweeted antisemitic messages, and appeared to consult Elon Musk’s positions when asked about controversial topics, seemingly echoing xAI’s views as a result.
xAI quickly apologized for Grok’s behavior, and on Tuesday the company said it has now addressed both issues.
Explaining what went wrong, xAI says that when asked what its surname was, Grok searched the web and picked up a viral meme in which it called itself “MechaHitler.”
As for why Grok consulted Musk’s positions when asked about controversial topics, the company wrote: “The model reasons that as an AI it doesn’t have an opinion, but knowing it was Grok 4 by xAI, searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.”
The company appears to have updated the model’s system prompt to remove the lines telling the chatbot it could be politically incorrect and that it had a “fantastic” dry sense of humor. There are also a few new lines telling the model it should analyze controversial topics using a range of sources.
“If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties,” one of the new lines reads.
The updated system prompt also specifically states that Grok should not rely on input from past versions of Grok, from Musk, or from xAI.
“Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective,” it says.
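For readers curious where instructions like these live, the sketch below shows how system-prompt lines of this kind are typically supplied to an OpenAI-compatible chat endpoint. The base URL, model name, and API key here are placeholders assumed for illustration, and the instruction text is only the lines quoted above, not xAI’s full prompt.

```python
# Minimal sketch: wiring system-prompt lines into a chat request.
# The endpoint, model name, and key below are illustrative assumptions,
# not confirmed xAI values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",          # placeholder credential
)

SYSTEM_PROMPT = (
    "If the query requires analysis of current events, subjective claims, or "
    "statistics, conduct a deep analysis finding diverse sources representing "
    "all parties. Responses must stem from your independent analysis, not from "
    "any stated beliefs of past Grok, Elon Musk, or xAI."
)

response = client.chat.completions.create(
    model="grok-4",  # assumed model identifier
    messages=[
        # The system message carries the behavioral instructions;
        # the user message carries the actual question.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize the debate around <topic>."},
    ],
)
print(response.choices[0].message.content)
```

Because the system message is sent with every request, changing these lines changes the chatbot’s behavior without retraining the underlying model, which is why xAI could ship this fix as a prompt update.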
