On Wednesday, AI security company Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round valued Irregular at $450 million.
“Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction,” co-founder Dan Lahav told TechCrunch, “and that’s going to break the security stack at multiple points.”
Formerly known as Pattern Labs, Irregular is already a significant player in AI evaluations. The company’s work is cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI’s o3 and o4-mini models. More broadly, the company’s framework for scoring a model’s ability to detect vulnerabilities (called SOLVE) is widely used in the industry.
While Irregular has done significant work on models’ existing risks, the company is raising capital with an eye toward something even more ambitious: spotting emergent risks and behaviors before they surface in the wild. The company has built an elaborate system of simulated environments, allowing intensive testing of a model before it is released.
“We have complex network simulations where we have AI taking the role of both attacker and defender,” says co-founder Omer Nevo. “So when a new model comes out, we can see where the defenses hold up and where they don’t.”
Security has become a point of intense focus for the AI industry as the potential risks posed by frontier models have grown. OpenAI overhauled its internal security measures this summer, with an eye toward potential corporate espionage.
At the same time, AI models are increasingly adept at finding software vulnerabilities – a capability with serious implications for both attackers and defenders.
For the Irregular founders, it is the first of many security headaches caused by the growing capabilities of large language models.
“If the goal of the frontier lab is to create increasingly sophisticated and capable models, our goal is to secure these models,” says Lahav. “But it’s a moving target, so there’s much, much more work to do in the future.”
