A discrepancy between first- and third-party benchmark results for OpenAI's o3 AI model is raising questions about the company's transparency and model testing practices.
When OpenAI unveiled o3 in December, the company claimed that the model could answer just over a quarter of the questions on FrontierMath, a challenging set of math problems. That score blew away the competition: the next-best model managed to answer only about 2% of FrontierMath problems correctly.
"Today, all offerings out there have less than 2% [on FrontierMath]," Mark Chen, chief research officer at OpenAI, said during a livestream. "We're seeing [internally], with o3 in aggressive test-time compute settings, we're able to get over 25%."
As it turns out, that figure was likely an upper bound, achieved by a version of o3 with more computing power behind it than the model OpenAI publicly launched last week.
Epoch AI, the research institute behind FrontierMath, released the results of its independent benchmark tests of o3 on Friday. Epoch found that o3 scored around 10%, well below OpenAI's highest claimed score.
OpenAI has released o3, their long-awaited reasoning model, along with o4-mini, a smaller and cheaper model that succeeds o3-mini.

We evaluated the new models on our suite of math and science benchmarks. Results in the thread! pic.twitter.com/5gbtzkey1b

– Epoch AI (@EpochAIResearch) April 18, 2025
This doesn't mean OpenAI lied, per se. The benchmark results the company published in December show a lower-bound score that matches the score Epoch observed. Epoch also noted that its testing setup likely differs from OpenAI's, and that it used an updated release of FrontierMath for its evaluations.
"The difference between our results and OpenAI's might be due to OpenAI evaluating with a more powerful internal scaffold, using more test-time [computing], or because those results were run on a different subset of FrontierMath (the 180 problems in frontiermath-2024-11-26 versus the 290 problems in frontiermath-2025-02-28-private)," Epoch wrote.
According to a post on X from the ARC Prize Foundation, an organization that tested a pre-release version of o3, the public o3 model "is a different model […] tuned for chat/product use," corroborating Epoch's report.
"All released o3 compute tiers are smaller than the version we [benchmarked]," ARC Prize wrote. Generally speaking, bigger compute tiers can be expected to achieve better benchmark scores.
The o3 we tested on ARC-AGI-1 in December was a preview version. Because today's release is a materially different system, we are relabeling our past results as "preview":

o3-preview (low): 75.7%, $200/task
o3-preview (high): 87.5%, $34.4k/task

Above uses o1 pro pricing…

– Mike Knoop (@mikeknoop) April 16, 2025
Granted, OpenAI's own Wenda Zhou, a member of the technical staff, said during a livestream last week that the production version of o3 is "more optimized for real-world use cases" and speed compared with the version of o3 demoed in December. As a result, it may exhibit benchmark "disparities," he added.
"[W]e've done [optimizations] to make the [model] more cost-efficient [and] more useful in general," Zhou said. "We still hope that, we still believe that, this is a much better model […] you won't have to wait as long when you're asking for an answer, which is a real thing with these [types of] models."
Of course, the fact that o3's public release falls short of OpenAI's testing promises is a bit of a moot point, since the company's o3-mini-high and o4-mini models outperform o3 on FrontierMath, and OpenAI plans to debut a more powerful o3 variant, o3-pro, in the coming weeks.
Still, it's another reminder that AI benchmarks are best not taken at face value, particularly when the source is a company with services to sell.
Benchmarking "controversies" are becoming a common occurrence in the AI industry as vendors race to capture headlines and mindshare with new models.
In January, Epoch was criticized for waiting until after o3 was announced to disclose funding it received from OpenAI. Many academics who contributed to FrontierMath weren't informed of OpenAI's involvement until it was made public.
More recently, Elon Musk's xAI was accused of publishing misleading benchmark charts for its latest AI model, Grok 3. Just this month, Meta admitted to touting benchmark scores for a version of a model that differed from the one it made available to developers.
Updated 4:21 p.m. Pacific: Added comments from Wenda Zhou, a member of OpenAI's technical staff, from a livestream last week.
