OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20 percent of the company’s compute resources, according to a person from that team. But requests for a fraction of that compute were often denied, blocking the team from doing its work.
That issue, among others, prompted several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who, while at OpenAI, was involved in the development of ChatGPT, GPT-4 and ChatGPT’s predecessor, InstructGPT.
Leike went public with some of the reasons for his resignation on Friday morning. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote in a series of posts on X. “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
OpenAI did not immediately return a request for comment on the resources promised and made available to that group.
OpenAI formed the Superalignment team last July; it was led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned from the company this week. The team had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within the next four years. Staffed with scientists and engineers from OpenAI’s previous alignment division, as well as researchers from other organizations across the company, the team was to contribute research informing the safety of both in-house and non-OpenAI models and, through initiatives including a research grant program, solicit work from and share work with the wider AI industry.
The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But as product launches began to take up an increasing amount of OpenAI leadership’s bandwidth, the Superalignment team found itself fighting for more upfront investment, investment it believed was critical to the company’s stated mission of developing superintelligent AI for the benefit of all humanity.
“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike continued. “But over the past years, safety culture and processes have taken a back seat to shiny products.”
Sutskever’s battle with OpenAI CEO Sam Altman served as an important additional distraction.
Sutskever, along with OpenAI’s former board, abruptly ousted Altman late last year over concerns that Altman had not been “consistently candid” with board members. Under pressure from OpenAI’s investors, including Microsoft, and many of the company’s employees, Altman was eventually reinstated, much of the board resigned, and Sutskever reportedly never returned to work.
According to the source, Sutskever was instrumental to the Superalignment team — not only contributing to research, but acting as a bridge to other departments within OpenAI. He would also serve as an ambassador of sorts, impressing upon key OpenAI decision makers the importance of the team’s work.
After the departures of Leike and Sutskever, John Schulman, another OpenAI co-founder, has moved to head up the type of work the Superalignment team was doing. But there will no longer be a dedicated team; instead, it will be a loosely knit group of researchers embedded in divisions across the company. An OpenAI spokesperson described it as “integrating [the team] more deeply.”
The fear is that, as a result, OpenAI’s AI development won’t be as safety-focused as it could have been.