TechTost
AI

Why most AI benchmarks tell us so little

By techtost.com · 8 March 2024 · 5 Mins Read

On Tuesday, startup Anthropic released a family of AI models that it claims achieve best-in-class performance. Just days later, rival Inflection AI unveiled a model that it claims comes close to matching some of the most capable models out there, including OpenAI’s GPT-4, in quality.

Anthropic and Inflection are by no means the first AI companies to claim that their models have matched or beaten the competition by some objective measure. Google made the same claim for its Gemini models at launch, and OpenAI said the same of GPT-4 and its predecessors, GPT-3, GPT-2, and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says a model achieves top performance or quality, what exactly does that mean? Perhaps more to the point: will a model that technically “performs” better than some other model actually feel improved in a tangible way?

On that last question, not likely.

The reason – or rather the problem – lies in the benchmarks that AI companies use to quantify a model’s strengths and weaknesses.

Internal measures

Today’s most commonly used benchmarks for AI models — specifically the models powering chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude — do a poor job of capturing how the average person interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (“A Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of PhD-level biology, physics, and chemistry questions — yet most people use chatbots for tasks like answering emails, writing cover letters, and talking about their feelings.
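Mechanically, a static benchmark like GPQA boils down to a fixed question set scored by exact-match accuracy. Here is a minimal sketch of that scoring loop; `ask_model` and the toy questions are hypothetical stand-ins for illustration, not any vendor's actual evaluation harness:

```python
# Illustrative sketch of how a static multiple-choice benchmark is scored.
# `ask_model` is a hypothetical stand-in for a real model API call.

def ask_model(question: str, choices: list[str]) -> str:
    """Pretend model: here it just picks the first choice."""
    return choices[0]

def score_benchmark(items: list[dict]) -> float:
    """Exact-match accuracy over a fixed question set."""
    correct = 0
    for item in items:
        prediction = ask_model(item["question"], item["choices"])
        if prediction == item["answer"]:
            correct += 1
    return correct / len(items)

# A toy two-question set in the spirit of GPQA/MMLU items (invented here).
toy_benchmark = [
    {"question": "Which organelle produces ATP?",
     "choices": ["Mitochondrion", "Ribosome"], "answer": "Mitochondrion"},
    {"question": "What is 7 * 8?",
     "choices": ["54", "56"], "answer": "56"},
]

print(score_benchmark(toy_benchmark))  # pretend model gets 1 of 2 right -> 0.5
```

Note what the loop never sees: whether the question resembles anything a real user would ask. The score is only as meaningful as the question set behind it.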

Jesse Dodge, a scientist at the Allen Institute for AI, the nonprofit AI research organization, says the industry has reached a “crisis of evaluation.”

“Benchmarks are typically static and narrowly focused on evaluating a single capability, such as a model’s factuality in a single domain, or its ability to solve multiple-choice mathematical reasoning questions,” Dodge told TechCrunch in an interview. “Many benchmarks used for evaluation are more than three years old, from when AI systems were mostly used for research and did not have many real users. In addition, people use generative AI in many ways — they are very creative.”

Wrong measurements

It’s not that the most widely used benchmarks are completely useless. Someone is no doubt asking ChatGPT PhD-level math questions. However, as generative AI models are increasingly positioned as mass-market, do-it-all systems, the old benchmarks are becoming less applicable.

David Widder, a postdoctoral researcher at Cornell who studies AI and ethics, notes that many of the skills common benchmarks test — from solving grade-school-level math problems to identifying whether a sentence contains an anachronism — will never be relevant to the majority of users.

“Earlier AI systems were often built to solve a specific problem in a context (e.g. medical AI expert systems), making a deep understanding of what constitutes good performance in that particular context more possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘general purpose,’ this is less possible, so we’re increasingly seeing a focus on testing models across a variety of benchmarks in different fields.”

Errors and other defects

In addition to misalignment with use cases, there are questions about whether some benchmarks are properly measuring what they are supposed to measure.

One analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of the test questions contained typos and “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark pointed to by vendors such as Google, OpenAI, and Anthropic as evidence that their models can reason through logic problems, asks questions that can be solved through rote memorization.

[Image: Test questions from the HellaSwag benchmark.]

“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant] article quickly enough and answer the question, but that doesn’t mean I understand the causal mechanism, or that I could use my understanding of that causal mechanism to actually reason through and solve new and complex problems in unpredictable contexts. Neither can a model.”
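Widder’s memorization point can be made concrete: a scorer that simply picks the answer option sharing the most words with a memorized snippet can “answer” such a question without any causal reasoning at all. A hypothetical sketch, with all text invented for illustration:

```python
# Sketch: answering a multiple-choice question by surface word overlap
# with a memorized snippet -- no reasoning involved. All data is invented.

MEMORIZED = "the mitochondrion is the powerhouse of the cell and produces atp"

def overlap_answer(question: str, choices: list[str]) -> str:
    """Pick the choice sharing the most words with the memorized text."""
    corpus_words = set(MEMORIZED.split())

    def overlap(choice: str) -> int:
        return len(set(choice.lower().split()) & corpus_words)

    return max(choices, key=overlap)

choices = ["the ribosome", "the mitochondrion"]
print(overlap_answer("Which organelle produces ATP?", choices))
# picks "the mitochondrion" purely by keyword association
```

A benchmark scored by exact match cannot distinguish this lookup behavior from genuine understanding, which is precisely the gap Widder describes.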

Fixing what’s broken

So benchmarks are broken. But can they be fixed?

Dodge believes so – with more human involvement.

“The right way forward, here, is a combination of evaluation benchmarks with human evaluation,” he said, “prompting a model with a real user question and then hiring a human to evaluate how good the response is.”
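Dodge’s proposed hybrid could be as simple as collecting human ratings of model responses to real user prompts and aggregating them alongside automated scores. A minimal sketch, where the 1–5 rating scale and field names are assumptions made for illustration:

```python
# Sketch of the human-evaluation half of Dodge's proposal.
# The 1-5 rating scale and record fields are assumptions for illustration.

def human_eval_summary(records: list[dict]) -> dict:
    """Aggregate human ratings (1-5) of model responses to real user prompts."""
    ratings = [r["rating"] for r in records]
    return {
        "n": len(ratings),
        "mean_rating": sum(ratings) / len(ratings),
        "pct_acceptable": sum(r >= 4 for r in ratings) / len(ratings),
    }

# Each record: a real user prompt plus a human's rating of the model response.
records = [
    {"prompt": "Draft a polite reply to this email", "rating": 5},
    {"prompt": "Explain my lab results simply", "rating": 3},
    {"prompt": "Help me write a cover letter", "rating": 4},
]

summary = human_eval_summary(records)
print(summary["mean_rating"])      # 4.0
print(summary["pct_acceptable"])   # 2 of 3 responses rated >= 4
```

The prompts here deliberately mirror the everyday tasks mentioned earlier (emails, cover letters) rather than PhD-level quiz questions, which is the whole point of grounding evaluation in real usage.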

As for Widder, he’s less optimistic that benchmarks today — even with corrections for the most obvious mistakes, like typos — can be improved to the point where they would be informative to the vast majority of AI model users. Instead, he believes that tests of models should focus on the downstream effects of those models and whether the effects, good or bad, are seen as desirable by those affected.

“I would ask what specific goals we want AI models to be used for, and assess whether they would be – or are – successful in such contexts,” he said. “And hopefully that process also includes evaluating whether we should be using AI in such contexts at all.”
