A high-profile former OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.
Earlier this week, OpenAI published a document describing its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.
“In a discontinuous world […] safety lessons come from treating today’s systems with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.”
But Brundage contends that GPT-2, in fact, warranted abundant caution at the time of its release, and that this was “100% consistent” with OpenAI’s iterative deployment strategy today.
“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote in a post on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”
Brundage, who joined OpenAI as a researcher in 2018, served as the company’s head of policy research for several years. On OpenAI’s “AGI readiness” team, he had a particular focus on the responsible deployment of language generation systems, such as OpenAI’s AI chatbot platform ChatGPT.
GPT-2, announced by OpenAI in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text at a level sometimes indistinguishable from that of humans.
While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially declined to release GPT-2’s source code, opting instead to give selected news outlets limited access to a demo.
The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there was no evidence the model could be abused in the ways OpenAI described. The AI-focused publication The Gradient went so far as to publish an open letter asking OpenAI to release the model, arguing that it was too technologically important to hold back.
OpenAI eventually released a partial version of GPT-2 six months after the model was unveiled, followed by the full system several months after that. Brundage believes this was the right approach.
“What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it,” he said in a post on X. “What’s the evidence this caution was ‘disproportionate’ ex ante? Ex post, it prob. would have been OK, but that doesn’t mean it was responsible to YOLO [sic] it given the information at the time.”
Brundage fears that OpenAI’s aim with the document is to set up a burden of proof where “concerns are alarmist” and “you need overwhelming evidence of imminent dangers to act on them.” This, he argues, is a “very dangerous” mentality for advanced AI systems.
“If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by downplaying caution in this way,” Brundage added.
OpenAI has historically been accused of prioritizing “shiny products” at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a number of AI safety and policy researchers have left the company for rivals.
Competitive pressures have only increased. Chinese AI lab DeepSeek captured the world’s attention with its openly available R1 model, which matches OpenAI’s o1 “reasoning” model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has narrowed OpenAI’s technological lead, and said that OpenAI would “pull up some releases” to compete better.
There’s a lot of money on the line. OpenAI loses billions of dollars a year, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI’s bottom line in the short term, but possibly at the expense of safety in the long run. Experts like Brundage question whether the trade-off is worth it.