Elon Musk's AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.
xAI isn't exactly known for its strong commitments to AI safety as it's commonly understood. A recent report found that the company's AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably cruder than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.
Nevertheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety. The eight-page document laid out xAI's safety priorities and philosophy, including the company's benchmarking protocols and considerations for deploying AI models.
As The Midas Project noted in a blog post on Tuesday, however, the draft applies only to unspecified future AI models "not currently in development." Moreover, it fails to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy within "three months," meaning by May 10.
Despite Musk's frequent warnings about the dangers of AI run amok, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly among its peers owing to its "very weak" risk management practices.
That's not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and been slow to publish model safety reports (or skipped publishing them altogether). Some experts have expressed concern that this apparent deprioritization of safety efforts comes at a time when AI is more capable, and therefore potentially more dangerous, than ever.
