Companies are increasingly curious about AI and the ways it can (potentially) boost productivity. But they are also wary of its risks. In one recent survey, businesses cited the timeliness and reliability of underlying data, potential bias, and security and privacy as the top barriers to implementing AI.
Sensing a business opportunity, Scott Clark, who previously co-founded the AI training and experimentation platform SigOpt (acquired by Intel in 2020), set out to build what he describes as “software that makes AI safe, reliable and secure.” Clark founded a company, Distributional, to launch the initial release of this software, with the goal of scaling and standardizing testing across different AI use cases.
“Distributional is building the modern business platform for AI testing and evaluation,” Clark told TechCrunch in an email interview. “As AI applications grow in power, so does the risk of harm. Our platform is built for AI product teams to proactively and continuously identify, understand and address AI risk before it harms their customers in production.”
Clark was inspired to launch Distributional after facing AI challenges at Intel following its acquisition of SigOpt. While overseeing a team as Intel’s vice president and general manager of artificial intelligence and high-performance computing, he found it nearly impossible to ensure that high-quality AI testing was happening at a steady pace.
“The lessons I learned from these converging experiences showed me the need for AI testing and evaluation,” Clark continued. “Whether from hallucinations, volatility, inaccuracy, embedding or dozens of other potential challenges, teams often struggle to identify, understand and address AI risk through testing. Proper AI testing requires an understanding of depth and distribution, which is difficult to get right.”
Distributional’s core product aims to identify and diagnose AI “breakage” from large language models (à la OpenAI’s ChatGPT) and other types of AI models by trying to semi-automatically determine what, how and where to test models. The software gives organizations a “complete” view of AI risk, Clark says, in a sandbox-like preproduction environment.
“Most groups choose to bear the risk of model behavior and accept that models will have problems,” Clark said. “Some may try ad hoc manual testing to identify these issues, which is resource-intensive, disorganized and inherently incomplete. Others may try to catch these issues with passive monitoring tools after the AI is in production… [That’s why] our platform includes an extensible test framework for continuous testing and analysis of stability and robustness, a configurable test dashboard for visualizing and understanding test results, and an intelligent test suite for planning, prioritizing and building the right test combination.”
For now, Clark is tight-lipped about the specifics of how this all works, and about the broad outlines of Distributional’s platform for that matter. It’s too early, he said in his defense; Distributional is still in the process of co-designing the product with corporate partners.
So given that Distributional is pre-revenue and pre-launch, with no paying customers to speak of, how can it hope to compete with the AI testing and evaluation platforms already on the market? There are several, after all, including Kolena, Prolific, Giskard and Patronus, many of which are well funded. And as if the competition weren’t fierce enough, tech giants like Google Cloud, AWS and Azure also offer model evaluation tools.
Clark believes Distributional differentiates itself with the enterprise slant of its software. “Since day one, we’ve been building software capable of meeting the data privacy, scalability and complexity requirements of large enterprises in both unregulated and highly regulated industries,” he said. “The types of businesses we’re designing our product with have requirements that extend beyond the existing offerings available in the market, which tend to be individual developer-centric tools.”
If all goes according to plan, Distributional will start generating revenue sometime next year, once its platform goes into general availability and some of its design partners convert to paying customers. Meanwhile, the startup is raising money from VCs. Distributional announced today that it has closed an $11 million round led by Martin Casado of Andreessen Horowitz, with participation from Operator Stack, Point72 Ventures, SV Angel, Two Sigma and angel investors.
“We hope to start a virtuous cycle for our customers,” Clark said. “With better testing, teams will have more confidence in deploying AI in their applications. As they develop more AI, they will see its impact grow exponentially. And as they see that scale of impact, they’ll apply it to more complex and meaningful problems, which in turn will need even more testing to make sure it’s safe, reliable and secure.”