AI labs are racing to build data centers as big as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in "scaling": the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.
However, a growing chorus of AI researchers says that scaling up large language models (LLMs) may be reaching its limits, and that other breakthroughs may be needed to keep improving AI performance.
That’s the bet Sara Hooker, Cohere’s former VP of AI and a Google Brain alumna, is making with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it’s built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month as it begins hiring more broadly.
In an interview with TechCrunch, Hooker says that Adaption Labs is building AI systems that can continuously adapt and learn from their experiences in the real world, and do so extremely efficiently. She declined to share details about the methods behind this approach or whether the company’s systems are built on LLMs or another architecture.
“There’s an inflection point now where it’s very clear that the formula of just scaling these models — scaling approaches, which are attractive but extremely boring — has not produced intelligence that is able to navigate or interact with the world,” Hooker said.
Adaptation is the “heart of learning,” according to Hooker. For example, stub your toe when walking past your dining room table and you’ll learn to step more carefully next time. AI labs have attempted to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. However, today’s RL methods do not help AI models in production — that is, systems already in use by customers — learn from their mistakes in real time. They just keep stubbing their toes, so to speak.
Some AI labs offer consulting services to help businesses adapt AI models to their custom needs, but that comes at a price. OpenAI reportedly requires customers to spend upwards of $10 million with the company before it offers its fine-tuning consulting services.
“We have a handful of frontier labs that define this set of AI models that are served the same way to everyone, and it’s very expensive to adapt,” Hooker said. “And actually, I think that doesn’t have to be the case anymore, and AI systems can very effectively learn from an environment. Proving that will completely change the dynamics of who can control and shape AI and really who these models serve at the end of the day.”
Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is wavering. A recent paper by MIT researchers found that the world’s largest artificial intelligence models may soon experience diminishing returns. The vibes in San Francisco seem to be changing as well. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with renowned AI researchers.
Richard Sutton, a Turing Award winner considered “the father of RL,” told Patel in September that LLMs are not truly scalable because they don’t learn from real-world experience. This month, OpenAI founding member Andrej Karpathy told Patel that he had reservations about the long-term potential of RL to improve AI models.
These types of fears are not unheard of. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining — in which models learn patterns from massive datasets — was delivering diminishing returns. Until then, pretraining had been the secret sauce that helped OpenAI and Google improve their models.
Those scaling concerns have since shown up in the data, but the AI industry has found other ways to improve its models. In 2025, breakthroughs in AI reasoning models, which take additional time and computing resources to work through problems before answering, have pushed the capabilities of AI models even further.
AI labs seem convinced that scaling RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they believed it would scale well. Researchers at Meta and Periodic Labs recently published a paper exploring how RL could further scale performance — a study that reportedly cost more than $4 million, highlighting how costly current approaches remain.
Adaption Labs, by contrast, aims to find the next breakthrough and prove that learning from experience can be much cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch decks. They say the round has since closed, although the final amount is unclear. Hooker declined to comment.
“We’re prepared to be very ambitious,” Hooker said when asked about her investors.
Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks, a trend Hooker wants to keep pushing forward.
She has also built a reputation for expanding access to AI research globally, recruiting research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire around the world.
If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be huge. Billions of dollars have already been invested in scaling LLMs, on the assumption that bigger models will lead to general intelligence. But it’s possible that true adaptive learning could prove not only more powerful, but far more efficient.
Marina Temkin contributed reporting.
