Many in the industry believe the winners of the AI model market have already been decided: Big Tech will own it (Google, Meta, Microsoft, and to a lesser extent Amazon), along with its model makers of choice, notably OpenAI and Anthropic.
But Arcee AI, a tiny startup of 30 people, disagrees. The company has just released a truly and permanently open (Apache-licensed) general-purpose foundation model called Trinity, and Arcee claims that at 400B parameters, it is one of the largest open source foundation models ever trained and released by a US company.
Arcee says Trinity is comparable to Meta’s Llama 4 Maverick 400B and Z.ai’s GLM-4.5, a high-performance open source model from the Chinese company spun out of Tsinghua University, according to benchmark tests run on the base models (very shortly after training).
Like other state-of-the-art (SOTA) models, Trinity is designed for coding and multi-step processes like agents. But despite its size, it is not yet a true SOTA competitor, because it currently supports only text.
More features are in the works: a vision model is currently in development, and a speech-to-text version is on the roadmap, CTO Lucas Atkins (pictured above, left) told TechCrunch. By comparison, Meta’s Llama 4 Maverick is already multimodal, supporting text and images.
But before adding more AI capabilities to its roster, Arcee says, it wanted a base LLM that would impress its primary target customers: developers and academics. The startup is particularly keen to lure US companies of all sizes away from Chinese open-weight models.
“Ultimately, the winners of this game, and the only way to really win usage, is to have the best open-weight model,” Atkins said. “To win the hearts and minds of developers, you have to give them the best.”
Benchmarks show that the base Trinity model, which is in preview while further training takes place, largely holds its own and, in some cases, slightly outperforms Llama in tests of coding and math, common sense, cognition, and reasoning.
The progress Arcee has made so far in becoming a competitive AI lab is impressive. The large Trinity model follows two smaller models released in December: the 26B-parameter Trinity Mini, a fully trained reasoning model for tasks ranging from web apps to agents, and the 6B-parameter Trinity Nano, an experimental model designed to push the boundaries of tiny-but-chatty models.
More importantly, Arcee trained them all in six months for a total of $20 million, using 2,048 Nvidia Blackwell B300 GPUs. That’s out of about $50 million the company has raised so far, said founder and CEO Mark McQuade (pictured above, right).
That kind of cash was “a lot for us,” said Atkins, who led the modeling effort. However, he acknowledged that it pales in comparison to how much larger labs are currently spending.
The six-month timeline “was very calculated,” said Atkins, whose pre-LLM career included building voice agents for cars. “We’re a younger startup that’s very hungry. We have a tremendous amount of talent and bright young researchers who, when given the opportunity to spend this amount of money and train a model of this size, we believed they would rise to the occasion. And they certainly did, with many sleepless nights, many long hours.”
McQuade, formerly of the open source model hub Hugging Face, says Arcee didn’t start out wanting to be a new AI lab in the US: The company initially customized models for large enterprise clients like SK Telecom.
“We just did post-training. So we’d take other people’s great work: We’d take a Llama model, we’d take a Mistral model, we’d take a Qwen model that was open source, and then train it to make it better” for a company’s intended use, he said, including with reinforcement learning.
But as the client list grew, Atkins said, having a model of their own became a necessity, and McQuade worried about relying on other companies. At the same time, many of the best open models came from China, and American companies were reluctant, or outright forbidden, to use them.
It was a nerve-wracking decision. “I think there are less than 20 companies in the world that have trained and released their own model” at the size and level that Arcee was shooting for, McQuade said.
The company started small, trying its hand at a tiny 4.5B-parameter model created in partnership with training-data company DatologyAI. That project’s success encouraged bigger efforts.
But if the US already has Llama, why does it need another open-weight model? Atkins says that by choosing the Apache open source license, the startup is committing to keeping its models open forever. This comes after Meta CEO Mark Zuckerberg said last year that his company might not always make its most advanced models open source.
“Llama can be considered not truly open source, as it uses a Meta-controlled license with commercial and usage restrictions,” he said. This has led some open source organizations to claim that Llama doesn’t qualify as open source at all.
“Arcee exists because the US needs a permanently open, Apache-licensed, frontier-quality alternative that can actually compete on today’s frontier,” said McQuade.
All Trinity models, large and small, can be downloaded for free. The larger version will come in three flavors. Trinity Large Preview is a lightly instruction-trained model, meaning it has been trained to follow human instructions rather than just predict the next word, which orients it toward general conversational use. Trinity Large Base is the base model without that further training.
Then there’s TrueBase, a model without any instruction or post-training data at all, so businesses or researchers who want to customize it won’t have to untangle someone else’s data, rules, or assumptions.
Arcee AI will eventually offer a hosted version of its general-release model at what it says will be competitive API pricing. That release is up to six weeks away as the startup continues to refine the model’s reasoning training.
API pricing for Trinity Mini is $0.045 per million input tokens and $0.15 per million output tokens, and there is also a rate-limited free tier. Meanwhile, the company continues to sell post-training and customization services.
