While investors prepared to go nuclear following Sam Altman’s unceremonious ouster from OpenAI, and Altman plotted his return to the company, the members of OpenAI’s Superalignment team were busily plugging away at the problem of how to control AI that’s smarter than humans.
Or at least, that’s the impression they’d like to give.
This week, I got on the phone with three of the Superalignment team members — Collin Burns, Pavel Izmailov and Leopold Aschenbrenner — who were in New Orleans at NeurIPS, the annual machine learning conference, to present OpenAI’s newest project to ensure that AI systems behave as intended.
OpenAI formed the Superalignment team in July to develop ways to guide, regulate and govern “superintelligent” AI systems — that is, theoretical systems with intelligence that far exceeds that of humans.
“Today, we can basically align models that are dumber than us, or maybe around human level at most,” Burns said. “Aligning a model that is actually smarter than us is much, much less obvious — how can we do that?”
The Superalignment effort is led by OpenAI co-founder and chief scientist Ilya Sutskever, which didn’t raise eyebrows in July but certainly does now, in light of the fact that Sutskever was among those who initially pushed for Altman’s firing. While some reporting suggests that Sutskever is in limbo following Altman’s return, OpenAI’s PR tells me that Sutskever is indeed, as of today at least, still heading up the Superalignment team.
Superalignment is a bit of a touchy subject within the AI research community. Some argue the subfield is premature; others imply it’s a red herring.
While Altman has invited comparisons between OpenAI and the Manhattan Project, going so far as to assemble a team to probe AI models in order to protect against “catastrophic risks,” including chemical and nuclear threats, some experts say there is little evidence to suggest that the startup’s technology will acquire vast, world-beating capabilities anytime soon, or ever. Claims of impending superintelligence, these experts add, serve only to deliberately draw attention away from the pressing AI regulatory issues of the day, such as algorithmic bias and AI’s propensity for toxicity.
For what it’s worth, Sutskever seems to believe fervently that AI (not OpenAI per se, but some incarnation of it) could one day become an existential threat. He reportedly went so far as to commission and burn a wooden effigy at a company offsite to demonstrate his commitment to preventing AI from harming humanity, and he commands a meaningful amount of OpenAI’s compute, 20% of its existing computer chips, for the Superalignment team’s research.
“The progress of artificial intelligence recently has been extremely fast, and I can assure you that it is not slowing down,” Aschenbrenner said. “I think we’ll get to human-level systems very soon, but it won’t stop there — we’ll go straight to superhuman systems… So how do we align superhuman AI systems and make them safe? It really is a problem for all of humanity — perhaps the most important unsolved technical problem of our time.”
The Superalignment team is currently trying to build governance and control frameworks that could apply well to future powerful AI systems. That’s no simple task, given that the definition of “superintelligence” (and whether a particular AI system has achieved it) is hotly debated. But the approach the team has settled on for now involves using a weaker, less sophisticated AI model (e.g., GPT-2) to guide a more advanced, sophisticated model (GPT-4) in desirable directions, and away from undesirable ones.
An image illustrating the Superalignment team’s AI-based analogy for aligning superintelligent systems. Image Credits: OpenAI
“A lot of what we’re trying to do is tell a model what to do and make sure it does it,” Burns said. “How do we get a model to follow instructions, and how do we get a model to only help with things that are true and not things that are made up? How do we get a model to tell us whether the code it generated is safe or egregious behavior? These are the types of tasks we want to be able to achieve with our research.”
But wait, you might say: what does AI that guides AI have to do with preventing AI that threatens humanity? Well, it’s an analogy: the weak model is meant to be a stand-in for human supervisors, while the strong model represents superintelligent AI. Much as humans may not be able to understand a superintelligent AI system, the weak model can’t “get” all the complexities and nuances of the strong model, which makes the setup useful for proving out superalignment hypotheses, the Superalignment team says.
“You can think of a sixth grader trying to supervise a college student,” Izmailov explained. “Suppose the sixth grader is trying to instruct the college student in a task that he sort of knows how to solve… Even though the sixth grader’s supervision may have errors in the details, there is hope that the college student would understand the substance and would be able to do the task better than the supervisor.”
In the Superalignment team’s setup, a weak model fine-tuned on a particular task generates labels that are used to “communicate” the broad strokes of that task to the strong model. Given these labels, the strong model can generalize more or less correctly according to the weak model’s intent, even when the weak model’s labels contain errors and biases, the team found.
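To make that setup concrete, here is a minimal sketch of a weak-to-strong pipeline in Python, with small scikit-learn models on toy data standing in for the language models involved: a logistic regression plays the weak supervisor (GPT-2 in the team’s analogy) and a gradient-boosted classifier plays the strong student (GPT-4). The dataset, model choices and split sizes are illustrative assumptions on my part, not OpenAI’s actual training recipe.

```python
# A minimal sketch of weak-to-strong supervision on toy data (assumed stand-ins,
# not OpenAI's recipe): a weak model is fit on the task, labels a pool of
# unlabeled examples, and a strong model is then trained only on those labels.
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A toy task with a curved decision boundary that the weak (linear) model can
# only approximate, so its labels will carry systematic errors.
X, y = make_moons(n_samples=6000, noise=0.2, random_state=0)
X_sup, X_rest, y_sup, y_rest = train_test_split(X, y, test_size=0.7, random_state=0)
X_unlab, X_test, y_unlab_true, y_test = train_test_split(
    X_rest, y_rest, test_size=0.3, random_state=0)

# 1) Fit the weak supervisor on ground-truth labels for the task.
weak_supervisor = LogisticRegression().fit(X_sup, y_sup)

# 2) The weak model "communicates" the task by labeling the unlabeled pool.
#    y_unlab_true is never used for training; only the weak labels are.
weak_labels = weak_supervisor.predict(X_unlab)

# 3) Train the strong student solely on the weak model's imperfect labels.
strong_student = GradientBoostingClassifier(random_state=0).fit(X_unlab, weak_labels)

# 4) Compare both against held-out ground truth to see how much of the true
#    task the student recovered from imperfect supervision.
print("weak supervisor accuracy:", accuracy_score(y_test, weak_supervisor.predict(X_test)))
print("strong student accuracy: ", accuracy_score(y_test, strong_student.predict(X_test)))
```

In this toy version the student will mostly just imitate its supervisor’s mistakes, since it has no pretraining of its own to lean on; how far a genuinely capable, pretrained strong model can generalize past those mistakes is exactly the question the team is studying.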
The weak-strong model approach may even lead to breakthroughs in the field of hallucinations, the team claims.
“Hallucinations are actually quite interesting, because internally, the model actually knows whether what it’s saying is fact or fiction,” Aschenbrenner said. “But the way these models are trained today, human supervisors reward them with a ‘thumbs up’ or ‘thumbs down’ for saying things. So sometimes, inadvertently, humans reward the model for saying things that are either false or that the model doesn’t actually know about, and so on. If we’re successful in our research, we should develop techniques where we can basically summon the model’s knowledge, and we could apply that to whether something is fact or fiction and use it to reduce hallucinations.”
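As a rough illustration of the underlying idea, that a model’s own internals carry some signal about what it does and doesn’t know, the sketch below scores statements by the average log-probability GPT-2 assigns to them, using Hugging Face Transformers. This is a crude, commonly used proxy rather than the Superalignment team’s technique, and the example claims are mine.

```python
# Score statements by the mean log-probability GPT-2 assigns to their tokens.
# A low score can flag a claim the model finds implausible; this is a noisy
# stand-in for "summoning the model's knowledge", not OpenAI's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_logprob(text: str) -> float:
    # Passing labels=input_ids makes the model return the average
    # cross-entropy over the sequence; negating gives mean log-probability.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Hypothetical example claims, one true and one false.
for claim in ["Paris is the capital of France.",
              "Paris is the capital of Australia."]:
    print(f"{mean_token_logprob(claim):7.3f}  {claim}")
```

The false claim will typically score lower, but the gap is small and unreliable for anything subtle, which is part of why the team wants techniques that surface what the model knows more directly.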
But the analogy is not perfect. So OpenAI wants to gather ideas.
To that end, OpenAI is launching a $10 million grant program to support technical research on superintelligent alignment, portions of which will go to academic labs, nonprofits, individual researchers and graduate students. OpenAI also plans to host an academic conference on superalignment in early 2025, where it will share and promote the work of the superalignment prize finalists.
Interestingly, part of the funding for the grant program will come from former Google CEO and chairman Eric Schmidt. Schmidt, a staunch supporter of Altman, is fast becoming a poster child for AI doomerism, arguing that the arrival of dangerous AI systems is imminent and that regulators aren’t doing enough to prepare. It isn’t necessarily out of a sense of altruism: reporting from Protocol and Wired notes that Schmidt, an active AI investor, stands to benefit enormously, commercially, if the US government implements his proposed plan to boost AI research.
Viewed through a cynical lens, the donation can be seen as virtue signaling. Schmidt’s personal fortune is estimated at $24 billion, and he has poured hundreds of millions into other, arguably less ethics-focused AI ventures and funds, including his own.
Schmidt denies this is happening, of course.
“Artificial intelligence and other emerging technologies are reshaping our economy and society,” he said in an emailed statement. “Ensuring that they are aligned with human values is critical and I am proud to support OpenAI’s new [grants] to develop and control artificial intelligence responsibly for public benefit”.
Indeed, the involvement of a figure with such transparent commercial motives raises the question: will OpenAI’s superalignment research, as well as the research it encourages the community to submit to its future conference, be made available for anyone to use as they see fit?
The Superalignment team assured me that, yes, both OpenAI’s research, including code, and the work of others who receive grants and prizes from OpenAI for superalignment-related work will be shared publicly. We’ll hold the company to that.
“Contributing not only to the safety of our models, but also to the safety of other labs’ models and advanced artificial intelligence in general is part of our mission,” Aschenbrenner said. “It’s really the core of our mission to build [AI] for the benefit of all mankind, safely. And we believe that doing this research is absolutely necessary to make it beneficial and safe.”
