We are at a unique time for AI companies building their own foundation models.
First, there’s a whole generation of industry veterans who made their names at big tech companies and are now going it alone. You also have legendary researchers with vast experience but ambiguous commercial ambitions. There is a distinct possibility that at least some of these new labs will become OpenAI-sized behemoths, but there is also room for them to do interesting research without worrying too much about commercialization.
The end result? It becomes hard to tell who is really trying to make money.
To make things simpler, I recommend some sort of sliding scale for any company that builds a foundation model. It’s a five-level scale where it doesn’t matter if you actually earn money – only if you try. The idea here is to measure ambition, not success.
Think of it in these terms:
- Level 5: We are already earning millions of dollars every day, thank you very much.
- Level 4: We have a detailed multi-step plan to become the richest people on Earth.
- Level 3: We have many promising product ideas, which will be revealed in the fullness of time.
- Level 2: We have the outlines of a concept of a design.
- Level 1: True wealth is when you love yourself.
The big names are all at Level 5: OpenAI, Anthropic, Gemini and so on. The scale becomes more interesting with the new generation of labs starting now, with big dreams but ambitions that can be harder to read.
The important thing is that these labs can generally choose whatever level they want. There is so much money in AI right now that no one is going to press them on a business plan. Even if the lab is just a research project, investors will be happy to participate. If you are not particularly motivated to become a billionaire, you may well live a happier life at Level 2 than at Level 5.
The problems arise because it’s not always clear where an AI lab lands on the scale — and much of the AI industry’s current drama stems from this confusion. Much of the anxiety surrounding OpenAI’s transition from a non-profit came from the lab spending years at Level 1 and then jumping to Level 5 almost overnight. On the other hand, you could argue that Meta’s early AI research was firmly at Level 2, when what the company really wanted was Level 4.
With that in mind, here’s a quick review of four of the biggest modern AI labs and how they measure up.
Humans&
Humans& was the big AI news this week and part of the inspiration to come up with this whole scale. The founders have an exciting pitch for the next generation of AI models, with scaling laws giving way to an emphasis on communication and coordination tools.
Yet for all the glowing press, Humans& was tight-lipped about how this would translate into actual monetizable products. Asked directly whether it wants to build products, the group simply won’t commit to anything specific. The most they’ve said is that they’re going to create some sort of AI tool for the workplace, replacing products like Slack, Jira, and Google Docs, but also redefining how these tools work at a fundamental level. Workplace software for after workplace software!
It’s my job to know what that means, and I’m still pretty confused about that last part. But it’s specific enough that I think we can put them at Level 3.
Thinking Machines Lab
This one is very difficult to rate! In general, if you have a former OpenAI CTO and ChatGPT project lead raising a $2 billion round, you have to assume there’s a pretty concrete roadmap. Mira Murati doesn’t strike me as someone who jumps in without a plan, so coming into 2026, I’d have felt good putting TML at Level 4.
But then the last two weeks happened. The departure of CTO and co-founder Barret Zoph has made most of the headlines, in part because of the unusual circumstances involved. But at least five other employees left with Zoph, many citing concerns about the company’s direction. Just over a year in, nearly half of TML’s founding team no longer works there. One way to read the facts is that they thought they had a solid plan to become a world-class artificial intelligence lab, only to find that the plan wasn’t as solid as they thought. Or in terms of the scale, they wanted to be a Level 4 lab, but realized they were at Level 2 or 3.
There’s not enough evidence yet to warrant a downgrade, but it’s getting close.
World Labs
Fei-Fei Li is one of the most respected names in artificial intelligence research, best known for establishing the ImageNet challenge that launched modern deep learning techniques. She currently holds a Sequoia Endowed Chair at Stanford, where she directs two different artificial intelligence labs. I won’t bore you by going through all the different honors and academy positions, but suffice it to say that if she wanted to, she could spend the rest of her life receiving awards and being told how great she is. Her book is also very good!
So in 2024, when Li announced that she had raised $230 million for a spatial AI company called World Labs, you might think we were operating at Level 2 or lower.
But that was over a year ago, which is a long time in the AI world. Since then, World Labs has shipped a full world model into production and a commercialized product built on top of it. At the same time, we’ve seen real signs of demand for world modeling from both the video game and special effects industries — and none of the major labs have built anything that can compete. The result looks a lot like a Level 4 company, perhaps graduating to Level 5 soon.
Safe Superintelligence (SSI)
Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) seems like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI insulated from commercial pressures, to the point of rejecting a takeover attempt by Meta. There are no product cycles, and aside from the still-baking superintelligent base model, there doesn’t seem to be any product at all. With this pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and every indication is that this is a truly scientific project at heart.
That said, the world of AI is moving fast — and it would be foolish to count SSI out of the commercial realm entirely. In his recent Dwarkesh appearance, Sutskever gave two reasons why SSI might pivot: either “if the timelines turned out to be long, which they could” or because “there’s a lot of value in the best and most powerful AI out there impacting the world.” In other words, if research goes really well or really badly, we might see SSI jump up a few levels in a hurry.
