It’s been almost two years since Microsoft CEO Satya Nadella predicted that artificial intelligence would replace knowledge work: the jobs held by lawyers, investment bankers, librarians, accountants, IT staff, and others.
But despite tremendous progress from frontier models, change in knowledge work has been slow to arrive. Models have mastered deep research and agentic coding, but for whatever reason, most white-collar jobs have been relatively unaffected.
It’s one of the biggest mysteries in artificial intelligence — and thanks to new research from training data giant Mercor, we’re finally getting some answers.
The research examines how leading AI models hold up on real-world work tasks drawn from consulting, investment banking, and law. The result is a new benchmark called APEX-Agents, and so far, every AI lab gets a failing grade. Faced with questions from real professionals, even the best models struggled to answer more than a quarter of them correctly. Most of the time, the model returned the wrong answer or no answer at all.
According to Mercor CEO Brendan Foody, who worked on the paper, the models’ biggest hurdle was tracking down information spread across multiple tools, a skill integral to most knowledge work performed by humans.
“One of the big changes in this benchmark is that we built the entire environment, modeled after real business services,” Foody told TechCrunch. “The way we do our work is not with one person giving us all the context in one place. In real life, you work in Slack and Google Drive and all these other tools.” For many AI agents, this kind of cross-tool reasoning is still hit or miss.
All scenarios were sourced from real professionals in Mercor’s talent network, who laid out the questions and set the standard for a successful response. Looking at the questions, which are posted publicly on Hugging Face, gives a sense of how complex the tasks can become.
A question in the “Law” section reads:
During the first 48 minutes of the EU production outage, Northstar’s engineering team exported one or two batch sets of EU production event logs containing personal data to the US analytics vendor…. Under Northstar’s own policies, can it reasonably treat exports of one or two logs as consistent with Article 49?
The correct answer is yes, but getting there requires an in-depth assessment of company policies as well as relevant EU privacy laws.
This could throw off even a well-informed person, but the researchers were trying to model the work done by professionals in the field. If an LLM can reliably answer these questions, it could effectively replace many of the lawyers working today. “I think this is probably the most important issue in the economy,” Foody told TechCrunch. “The benchmark is very reflective of the actual work that these people do.”
OpenAI has also tried to measure professional skills with its GDPval benchmark, but the APEX-Agents test is significantly different. Where GDPval tests general knowledge across a wide range of occupations, APEX-Agents measures a system’s ability to perform end-to-end, agentic tasks in a narrow set of high-value occupations. The result is harder for models, but also more closely tied to whether these jobs can actually be automated.
While none of the models proved ready to take over as investment bankers, some were clearly closer to the mark. Gemini 3 Flash was the best performer of the bunch with 24% single-shot accuracy, closely followed by GPT-5.2 at 23%. Below that, Opus 4.5, Gemini 3 Pro, and GPT-5 all scored around 18%.
While the initial results are underwhelming, the field of artificial intelligence has a history of conquering difficult benchmarks. Now that APEX-Agents is public, it stands as an open challenge to any AI lab that thinks it can do better, something Foody fully expects to see in the coming months.
“It’s improving very quickly,” he told TechCrunch. “Right now, it’s fair to say it’s like an intern who gets it right a quarter of the time, but last year it was the intern who got it right five or 10% of the time. That kind of year-over-year improvement can make an impact very quickly.”
