The OpenAI power struggle that captivated the tech world after co-founder Sam Altman was fired has come to an end, at least for now. But what to make of it?
It almost feels like some eulogizing is in order, as though the old OpenAI is dead and a new, but not necessarily improved, startup stands in its place. Former Y Combinator president Altman is back at the helm, but is his return justified? OpenAI’s new board is off to a less diverse start (for now, it’s all white and male), and the company’s founding philanthropic aims are at risk of being co-opted by more capitalist interests.
This is not to say that the old OpenAI was perfect.
As of Friday morning, OpenAI had a six-person board of directors: Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI President Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo and Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology. The board was technically tied to a nonprofit that held a controlling stake in OpenAI’s for-profit side, with full decision-making authority over the for-profit OpenAI’s activities, investments and overall direction.
OpenAI’s unusual structure was created by the company’s co-founders, including Altman, with the best of intentions. The nonprofit’s ultra-brief (500-word) charter directs the board to make decisions ensuring “that AI benefits all of humanity,” leaving it up to the board’s members to decide how best to interpret that. Neither “profit” nor “revenue” is mentioned in this North Star document. Toner reportedly once told Altman’s executive team that triggering OpenAI’s collapse “would actually be consistent with the [nonprofit’s] mission.”
Maybe the setup would have worked in some parallel universe; for years, it seemed to work well enough at OpenAI. But once investors and powerful partners got involved, things became… more difficult.
Altman’s firing unites Microsoft, OpenAI employees
After the board abruptly fired Altman on Friday, with almost no notice to anyone, including most of OpenAI’s 770-person workforce, the startup’s backers began expressing their displeasure both privately and publicly.
Satya Nadella, the CEO of Microsoft, a major OpenAI partner, was reportedly “furious” to learn of Altman’s departure. Vinod Khosla, the founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, the aforementioned Khosla Ventures, Tiger Global Management and Sequoia Capital were said to be considering legal action against the board if weekend negotiations to reinstate Altman fell through.
Now, OpenAI’s employees weren’t obviously acting in concert with these investors, at least from outward appearances. Instead, nearly all of them, including Sutskever in an apparent change of heart, signed a letter threatening the board with mass resignation if it chose not to reverse course. But consider that these employees had a lot to lose should OpenAI crumble, job offers from Microsoft and Salesforce aside.
OpenAI had been in talks, led by Thrive, over a possible sale of employee stock that would have boosted the company’s valuation from $29 billion to somewhere between $80 billion and $90 billion. Altman’s sudden departure, and OpenAI’s rotating cast of questionable interim CEOs, gave Thrive cold feet, putting the sale in jeopardy.
Altman won the five-day battle, but at what cost?
But now, after several breathless, hair-raising days, some form of resolution has been reached. Altman, along with Brockman, who resigned Friday in protest of the board’s decision, is back, albeit subject to an internal investigation into the concerns that led to his ouster. OpenAI has a new transitional board, satisfying one of Altman’s demands. And OpenAI will reportedly maintain its structure, with investors’ profits capped and the board free to make decisions that aren’t driven by revenue.
Salesforce CEO Marc Benioff posted on X that “the good guys” won. But it may be too early to tell.
Sure enough, Altman “won,” besting a board that accused him of “not [being] consistently candid” with its members and, according to some reports, of putting growth ahead of the mission. In one example of this alleged lack of candor, Altman reportedly criticized Toner over a paper she co-wrote that cast OpenAI’s approach to safety in a critical light, to the point where he tried to push her off the board. In another, Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference.
The board didn’t explain itself even when given repeated opportunities, citing possible legal challenges. And it’s safe to say that it fired Altman in an unnecessarily dramatic fashion. But it can’t be denied that the directors may have had valid reasons for letting Altman go, at least depending on how they interpreted their humanistic directive.
The new board seems likely to interpret this directive differently.
OpenAI’s board currently consists of former Salesforce co-CEO Bret Taylor, D’Angelo (the only holdover from the original board) and Larry Summers, the economist and former Harvard president. Taylor is an entrepreneur’s entrepreneur, having co-founded several companies, including FriendFeed (acquired by Facebook) and Quip (whose acquisition brought him to Salesforce). Summers, meanwhile, has deep business and government connections, an asset for OpenAI at a time when regulatory scrutiny of artificial intelligence is intensifying, or so the thinking behind his selection likely went.
The directors don’t seem like an outright “win” to this reporter, though, not if the intent was differing viewpoints. While six seats have yet to be filled, the initial picks set a rather homogenous tone; such a board would in fact be illegal in Europe, which mandates that companies reserve at least 40% of their board seats for women candidates.
Why some AI experts are worried about OpenAI’s new board of directors
I’m not the only one who’s disappointed. Several AI academics took to X to voice their frustrations earlier today.
Noah Giansiracusa, a mathematics professor at Bentley University and author of a book about social media recommendation algorithms, takes issue with both the all-male makeup of the board and the nomination of Summers, who he notes has a history of making unflattering remarks about women.
“Whatever one makes of these incidents, the visuals are not good, to say the least — especially for a company leading the way in developing artificial intelligence and reshaping the world we live in,” Giansiracusa said via text. “What I find particularly disturbing is that OpenAI’s main goal is to develop artificial general intelligence that ‘benefits all of humanity.’ Given that half of humanity is female, recent events don’t give me much confidence in that. Toner more directly represents the safety side of AI, and that’s so often been the position women have been placed in, throughout history, but especially in technology: protecting society from great harm while men take the credit for innovation and running the world.”
Christopher Manning, the director of Stanford’s AI Lab, is slightly more charitable than, but in agreement with, Giansiracusa in his assessment:
“The newly formed OpenAI board is probably still incomplete,” he told TechCrunch. “Still, the current board membership, with no one deeply knowledgeable about the responsible use of AI in human society and consisting of only white men, is not a promising start for such an important and influential AI company.”
Inequity plagues the AI industry, from those who annotate the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI’s. Summers, to be fair, has expressed concern about the potentially harmful consequences of artificial intelligence, at least as they relate to livelihoods. But the critics I spoke with find it hard to believe that a board like OpenAI’s current one will consistently prioritize these challenges, at least not in the way a more diverse board would.
It raises the question: why didn’t OpenAI try to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the initial board? Were they not available? Did they decline? Or did OpenAI not make the effort in the first place? Perhaps we’ll never know.
OpenAI has a chance to prove itself wiser and more worldly in picking the five remaining board seats, or three, if Altman and a Microsoft executive each take one (as has been rumored). If it doesn’t choose a more diverse path, what Daniel Colson, the director of the think tank the AI Policy Institute, said on X may well ring true: a few people or a single lab can’t be trusted to ensure that artificial intelligence is developed responsibly.