On Tuesday, startup Anthropic released a family of AI models that it claims achieve best-in-class performance. Just days later, rival Inflection AI unveiled a model that it claims comes close to matching some of the most capable models out there, including OpenAI’s GPT-4, in quality.
Anthropic and Inflection are by no means the first AI companies to claim that their models have matched or beaten the competition by some objective measure. Google claimed the same of its Gemini models at launch, and OpenAI said the same of GPT-4 and its predecessors, GPT-3, GPT-2 and GPT-1. The list goes on.
But what metrics are these companies talking about? When a vendor says a model achieves state-of-the-art performance or quality, what exactly does that mean? Perhaps more to the point: Will a model that technically “performs” better than some other model actually feel improved in a tangible way?
On that last question, not likely.
The reason – or rather the problem – lies in the benchmarks AI companies use to quantify a model’s strengths and weaknesses.
Internal measures
Today’s most commonly used benchmarks for AI models — specifically the models powering chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude — do a poor job of capturing how the average person interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (“A Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of PhD-level biology, physics and chemistry questions — yet most people use chatbots for tasks like answering emails, writing cover letters and talking about their feelings.
Jesse Dodge, a scientist at the Allen Institute for AI, the nonprofit AI research organization, says the industry has reached a “crisis of evaluation.”
“Benchmarks are typically static and narrowly focused on evaluating a single capability, like a model’s factuality in a single domain or its ability to solve multiple-choice mathematical reasoning questions,” Dodge told TechCrunch in an interview. “Many benchmarks used for evaluation are more than three years old, from when AI systems were mostly used for research and didn’t have many real users. In addition, people use generative AI in many ways — they’re very creative.”
Wrong measurements
It’s not that the most commonly used benchmarks are completely useless. Someone, somewhere, is no doubt asking ChatGPT PhD-level math questions. But as generative AI models are increasingly positioned as mass-market, do-it-all systems, the old benchmarks are becoming less applicable.
David Widder, a postdoctoral researcher at Cornell who studies artificial intelligence and ethics, notes that many of the skills common benchmarks test — from solving grade-school-level math problems to determining whether a sentence contains an anachronism — will never be relevant to the majority of users.
“Earlier AI systems were often built to solve a specific problem in a specific context (e.g. medical AI expert systems), making a deep understanding of what constitutes good performance in that particular context more possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘general purpose,’ this is less possible, so we’re increasingly seeing a focus on testing models across a variety of benchmarks in different fields.”
Errors and other defects
Beyond being misaligned with everyday use cases, there are questions as to whether some benchmarks even properly measure what they purport to measure.
One analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of the test questions contained typos and “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark pointed to by vendors including Google, OpenAI and Anthropic as evidence that their models can reason through logic problems, asks questions that can be solved through rote memorization.
“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant] article quickly enough and answer the question, but that doesn’t mean I understand the causal mechanism or that I could use my understanding of that causal mechanism to actually reason through and solve new and complex problems in unpredictable contexts. A model can’t, either.”
Fixing what’s broken
So benchmarks are broken. But can they be fixed?
Dodge believes so – with more human involvement.
“The right path forward here is a combination of evaluation benchmarks with human evaluation,” he said, “prompting a model with a real user query and then hiring a person to evaluate how good the response is.”
As for Widder, he’s less optimistic that benchmarks today — even with corrections for the most obvious mistakes, like typos — can be improved to the point where they would be informative to the vast majority of AI model users. Instead, he believes that tests of models should focus on the downstream effects of those models and whether the effects, good or bad, are seen as desirable by those affected.
“I would ask what specific goals we want AI models to be able to be used for and assess whether they would be – or are – successful in such contexts,” he said. “And hopefully that process also includes evaluating whether we should be using AI in such contexts at all.”