Companies and governments are looking for ways to run AI locally in an effort to reduce cloud infrastructure costs and build sovereign capability. Quadric, a chip-IP startup founded by veterans of early bitcoin mining company 21E6, is trying to fuel this shift by scaling beyond automotive to laptops and industrial devices with its on-device inference technology.
That expansion is already paying off.
Quadric saw $15 million to $20 million in licensing revenue in 2025, up from about $4 million in 2024, CEO Veerbhan Kheterpal (pictured above, center) said in an interview with TechCrunch. The company, which is based in San Francisco and has an office in Pune, India, is targeting up to $35 million this year as it builds a royalty-based on-device AI business. That growth lifted the company’s post-money valuation to between $270 million and $300 million, up from about $100 million in its 2022 Series B, Kheterpal said.
It also helped attract investors. Quadric last week announced a $30 million Series C round led by ACCELERATE Fund, managed by BEENEXT Capital Management, bringing its total funding to $72 million. The raise comes as investors and chipmakers look for ways to push more AI workloads from centralized cloud infrastructure to devices and local servers, Kheterpal told TechCrunch.
From cars to everything
Quadric started in the automotive industry, where on-device AI can power real-time functions such as driver assistance. Kheterpal said the spread of transformer-based models in 2023 pushed the company to expand to “everything,” creating a sharp inflection in the business over the past 18 months as more companies look to run AI on-device rather than in the cloud.
“Nvidia is a powerful platform for data center artificial intelligence,” said Kheterpal. “We were looking to create a similar CUDA-like or programmable AI infrastructure on the device.”
Unlike Nvidia, Quadric doesn’t make its own chips. Instead, it licenses programmable AI processor IP, which Kheterpal described as a “blueprint” that customers can build into their own silicon, along with a software stack and toolchain to run models, including vision and voice, on the device.
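Quadric’s own toolchain is proprietary and not detailed in this story, but the pattern it targets, exporting a trained model once and running inference entirely on the device, can be illustrated with open tooling. The sketch below uses ONNX Runtime purely as a stand-in for a vendor stack; the model file, tensor shapes and class labels are hypothetical.

```python
# Illustrative only: on-device inference with ONNX Runtime as a stand-in
# for a vendor toolchain. The model file and tensor shapes are hypothetical.
import numpy as np
import onnxruntime as ort

# Load a pre-exported model from local storage; no network calls are made.
session = ort.InferenceSession(
    "keyword_spotter.onnx",
    providers=["CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name

# Dummy audio-feature tensor standing in for microphone input.
features = np.random.rand(1, 40, 100).astype(np.float32)

# Inference runs on the local processor; no data leaves the device.
(scores,) = session.run(None, {input_name: features})
print("top class:", int(scores.argmax()))
```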
The startup’s customers include Kyocera and Japanese auto supplier Denso, which makes chips for Toyota vehicles, and its IP is going into printers, cars and laptops. The first products based on Quadric’s technology are expected to launch this year, starting with laptops, Kheterpal told TechCrunch.
However, Quadric is now looking beyond traditional commercial deployments and into markets pursuing “sovereign AI” strategies to reduce reliance on US-based infrastructure, Kheterpal said. The startup is exploring customers in India and Malaysia, he added, and counts Moglix CEO Rahul Garg as a strategic investor to help shape its sovereign-AI approach in India. Quadric employs nearly 70 people worldwide, including about 40 in the US and about 10 in India.
The push is driven by the rising cost of centralized AI infrastructure and the difficulty many countries face in building hyperscale data centers, Kheterpal said. That is prompting more interest in “distributed AI” setups, where inference runs on laptops or small servers in offices rather than relying on cloud services for every query.
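What such a distributed setup looks like in practice varies, but one common pattern is to send inference requests to a server on the local network and fall back to a cloud endpoint only when that fails. The sketch below is a generic illustration; both URLs and the payload format are hypothetical, not any specific vendor’s API.

```python
# Illustrative only: prefer a local inference server, fall back to the cloud.
# Both endpoints and the request/response format are hypothetical.
import requests

LOCAL_URL = "http://192.168.1.50:8080/v1/infer"   # small server in the office
CLOUD_URL = "https://api.example.com/v1/infer"    # hosted fallback

def run_inference(payload: dict) -> dict:
    """Try the on-premises server first; use the cloud only if it fails."""
    try:
        resp = requests.post(LOCAL_URL, json=payload, timeout=2)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # Local box unreachable or overloaded: fall back to the cloud API.
        resp = requests.post(CLOUD_URL, json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()

if __name__ == "__main__":
    print(run_inference({"text": "summarize this maintenance report"}))
```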
The World Economic Forum highlighted this shift in a recent article, as AI inference moves closer to users and away from purely centralized architectures. Likewise, EY said in a November report that the sovereign AI approach has gained traction as policymakers and industry groups push for domestic AI capabilities spanning compute, models and data, rather than relying entirely on foreign infrastructure.
For chipmakers, the challenge is that AI models are evolving faster than hardware design cycles, Kheterpal said. He argued that customers need programmable processor IP that can keep pace through software updates rather than requiring costly redesigns every time architectures shift, as they did from earlier vision-focused models to today’s transformer-based systems.
Quadric is pitching itself as an alternative to chip vendors like Qualcomm, which typically ships its AI technology inside its own processors, as well as IP vendors like Synopsys and Cadence, which sell blocks of neural processing engines. Kheterpal said Qualcomm’s approach can lock customers into its silicon, while traditional IP vendors offer engine blocks that many customers struggle to program.
Quadric’s programmable approach lets customers support new AI models through software updates rather than hardware redesigns, an edge in an industry where chip development can take years while model architectures change within months.
However, Quadric remains early in the game, with few signed customers so far, and much of its long-term upside depends on converting current licensing deals into high-volume shipments and recurring royalties.
