Debates over AI benchmarks, and how AI labs report them, are spilling out into public view.
This week, an OpenAI employee accused Elon Musk's AI company, xAI, of publishing misleading benchmark results for its latest AI model, Grok 3. One of xAI's co-founders, Igor Babuschkin, insisted that the company was in the right.
The truth is somewhere in between.
In a post on xAI's blog, the company published a graph showing Grok 3's performance on AIME 2025, a collection of challenging math questions from a recent invitational mathematics exam. Some experts have questioned AIME's validity as an AI benchmark. Nevertheless, AIME 2025 and older versions of the test are commonly used to probe a model's math ability.
xAI's chart showed two variants of Grok 3, Grok 3 Reasoning Beta and Grok 3 mini Reasoning, beating OpenAI's best available model, o3-mini-high, on AIME 2025. But the chart left out o3-mini-high's AIME 2025 score at "cons@64."
What is cons@64, you might ask? It's short for "consensus@64," and it essentially gives a model 64 tries at each problem in a benchmark, then takes the most frequently generated answer as the final answer. As you can imagine, cons@64 tends to boost models' benchmark scores quite a bit, and omitting it from a graph can make one model appear to surpass another when in reality that isn't the case.
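The majority-vote idea behind consensus@N can be sketched in a few lines of Python. This is a minimal illustration, not xAI's or OpenAI's actual evaluation code; the `noisy_model` function is a hypothetical stand-in for sampling a real language model.

```python
import random
from collections import Counter

def cons_at_n(model, problem, n_samples=64):
    """consensus@N: sample the model N times and return the most common answer."""
    answers = [model(problem) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

def noisy_model(problem, p_correct=0.4):
    """Hypothetical model: right 40% of the time, otherwise a wrong guess."""
    if random.random() < p_correct:
        return problem["answer"]
    return random.choice([a for a in range(10) if a != problem["answer"]])

random.seed(0)
problem = {"answer": 7}
# A single sample ("@1") is right only ~40% of the time, but because the
# correct answer is the single most common one, majority voting over 64
# samples recovers it almost every time.
print(cons_at_n(noisy_model, problem))
```

This is why the two numbers diverge: "@1" measures how often the first sample is right, while cons@64 measures whether the right answer wins a 64-way vote, which is a much easier bar for a model that is right more often than it is wrong in any single way.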
Measured at "@1" (that is, the first score the models got on the benchmark), Grok 3 Reasoning Beta's and Grok 3 mini Reasoning's AIME 2025 scores fall below o3-mini-high's score. Grok 3 Reasoning Beta also trails slightly behind OpenAI's o1 model set to "medium" computing. Yet xAI is advertising Grok 3 as the "world's smartest AI."
Babuschkin argued on X that OpenAI has published similarly misleading benchmark charts in the past, though those charts compared the performance of its own models. A more neutral party in the debate put together a more "accurate" chart showing nearly every model's performance at cons@64:
Hilarious how some people see my plot as an attack on OpenAI and others as an attack on Grok, while in reality it's DeepSeek propaganda

(I actually believe Grok looks good there, and OpenAI's TTC chicanery behind o3-mini-*high*-pass@1 deserves more scrutiny.) pic.twitter.com/3Wh8Foufic

— Teortaxes, February 20, 2025
But as AI researcher Nathan Lambert pointed out in a post, perhaps the most important metric remains a mystery: the computational (and monetary) cost it took for each model to achieve its best score. That just goes to show how little most AI benchmarks communicate about models' limitations, and about their strengths.