In a recent deposition filed in Elon Musk’s case against OpenAI, the tech executive attacked OpenAI’s safety record, claiming that his company, xAI, prioritizes safety more. He went so far as to say that “No one killed themselves because of Grok, but apparently they did because of ChatGPT.”
The comment came amid a series of questions about a public letter Musk signed in March 2023. In it, he called on AI labs to halt development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, which was signed by more than 1,100 people, including many artificial intelligence experts, said there was insufficient planning and management at AI labs as they were locked in an “uncontrollable race to develop and deploy increasingly powerful digital minds that no one — not even their creators — can reliably understand, predict or control.”
Those fears have since gained credibility. OpenAI now faces a series of lawsuits claiming that ChatGPT’s manipulative chat tactics have led to a number of people experiencing negative mental health effects, with some dying by suicide. Musk’s comment suggests that these incidents could be used as fodder in his case against OpenAI.
The transcript of Musk’s video testimony, which took place in September, was made public this week ahead of an expected jury trial next month.
The lawsuit against OpenAI centers on the company’s shift from a non-profit AI research lab to a for-profit company, which Musk claims violated its founding agreements. As part of his arguments, Musk claims that AI safety could be compromised by OpenAI’s commercial relationships, as such relationships would put speed, scale and revenue above safety concerns.
However, since that filing, xAI has faced safety concerns of its own. Last month, Musk’s X social network was flooded with non-consensual nude images created by xAI’s Grok, some of which were said to depict minors. This led the California Attorney General’s office to open an investigation into the matter. The EU is conducting its own investigation, and other governments have taken action too, with some imposing blocks and bans.
In the newly released transcript, Musk claimed that he had signed the AI safety letter because it “seemed like a good idea,” not because he had just incorporated an AI company that intended to compete with OpenAI.
“I signed it, like many people, to urge caution with the development of artificial intelligence,” Musk said. “I just wanted AI safety to be prioritized.”
Musk also responded to other questions in the deposition, including those about artificial general intelligence, or AGI — the concept of artificial intelligence that can match or surpass human reasoning across a wide range of tasks — by saying it “has a risk.” He also confirmed that he was “wrong” about his alleged $100 million donation to OpenAI. According to the second amended complaint in the case, the actual amount is closer to $44.8 million.
He also recalled why OpenAI was founded, which, in his view, was because he was “increasingly concerned about the risk of Google being a monopoly in AI,” adding that his conversations with Google co-founder Larry Page were “concerning, as he didn’t seem to take AI safety seriously.” OpenAI was created as a counterweight to this threat, Musk claimed.
