The Alchemist Accelerator has a new batch of AI startups pitching their products today, if you're interested in watching, and the program itself is making some international moves, in Tokyo and Doha. Read on for our picks of the lot.
Speaking with Alchemist CEO and founder Ravi Belani ahead of demo day (today at 10:30 a.m. Pacific) about this batch, it was clear that ambitions for AI startups have shrunk — and that this is not a bad thing.
No early-stage startup today is at all likely to become the next OpenAI or Anthropic; those companies' lead in foundational large language models is simply too big right now.
“The cost of building a foundational LLM is prohibitively high. You're into hundreds of millions of dollars just to make one. The question is, as a startup, how do you compete?” Belani said. “VCs don't want wrappers around LLMs. We're looking for companies where there's a vertical play, where they own the end user and there's a network effect and lock-in over time.”
That was my read as well: the companies selected for this batch are all very specific in their applications, using AI to solve a particular problem in a particular domain.
One example is healthcare, where AI models to aid in diagnosis, care planning and so on are increasingly common but still being tested carefully. The specter of liability and bias hangs heavy over this heavily regulated industry, but there are also many legacy processes that could be replaced with real, tangible benefit.
Equality AI isn't trying to revolutionize cancer care or anything like that — the goal is to ensure that the models being put in place don't violate the important anti-discrimination safeguards in AI regulation. This is a serious risk, because if your care or diagnosis model is found to be biased against a protected class (for example, assigning higher risk to a Muslim or queer person), it could sink the product and open you up to lawsuits.
Do you want to trust the model's maker or seller on this? Or would you rather have a disinterested (in the original sense of having no conflicting interests) expert who knows the ins and outs of the policies, and also how to properly evaluate a model?
“We all deserve the right to trust that the AI behind the medical curtain is safe and effective,” CEO and founder Maia Hightower told TechCrunch. “Healthcare leaders are struggling to keep pace with the complex regulatory environment and rapidly changing AI technology. Over the next two years, AI compliance and the risk of litigation will continue to increase, leading to widespread adoption of responsible AI practices in healthcare. The risk of non-compliance and penalties as severe as loss of certification makes our solution very timely.”
It's a similar story for Cerevox, which works to eliminate hallucinations and other errors from today's LLMs. But not just in a general sense: it works with companies to structure their pipelines and data so that these bad habits of AI models can be minimized and monitored. It's not about keeping ChatGPT from inventing a physicist when you ask it about a nonexistent discovery from the 1800s; it's about preventing a risk-assessment engine from extrapolating from a data column that should be there but isn't.
They're working first with fintech and insurtech companies, which Belani acknowledged is “a non-sexy use case, but it's a path to building a product.” And a trail of paying customers, which is, you know, how you start a business.
Quickr Bio builds on the new world of biotech based on CRISPR-Cas9 gene editing, which brings new risks along with new opportunities. How do you verify that the edits you made are the right ones? Being 99% sure isn't enough (again: regulations and liability), but the testing needed to increase your confidence can be time-consuming and expensive. Quickr claims that its method of quantifying and characterizing the actual modifications made (as opposed to the theoretical ones — ideally these are identical) is up to 100 times faster than existing methods.
In other words, they aren't creating a new paradigm, just aiming to be the best tool for empowering the existing one. If they can demonstrate even a fraction of their claimed effectiveness, they could become indispensable in many labs.
You can see the rest of the cohort here — you'll find the companies above are representative of the vibe. Demos begin at 10:30 a.m. Pacific.
As for the program itself, it's getting some serious backing for new programs in Tokyo and Doha.
“We think it's an inflection point in Japan — it's going to be an exciting place to see stories and companies coming from,” Belani said. A recent change in tax policy should free up early-stage capital for startups, and investment shifting out of China is landing in Japan, particularly Tokyo, where a new (or rather renovated) tech hub is expected to emerge. The fact that OpenAI is building a satellite office there is, he suggested, really all you need to know.
Mitsubishi is investing through one arm or another, and the Japan External Trade Organization is also involved. I'll definitely be interested to see what a newly awakened Japanese startup economy produces.
Alchemist Doha, in an interesting twist, comes with a $13 million commitment from the government there.
“The mandate there is focused on emerging-market founders — the 90 percent of the world orphaned from where a lot of tech innovation occurs,” Belani said. “We found that some of the best companies in the U.S. are not from the U.S. There's something about having an outside perspective that creates amazing companies. There's also a lot of volatility out there, and that talent needs a home.”
He noted that they will be making larger investments out of this program, from $200,000 up to $1 million, which is likely to change the type of companies involved.