There’s an old adage in management: What you measure matters. And, usually, you get more than you bargain for.
Software engineers have debated productivity metrics for decades, starting with lines of code. But as the new generation of AI coding agents delivers more code than ever before, what their managers should be measuring is less clear.
Huge token budgets (essentially, the amount of AI processing power a developer is authorized to consume) have become a badge of honor among Silicon Valley developers, but that's a very strange way to think about productivity. Measuring an input to the process doesn't make sense when what you actually care about is the output. It might make sense if you're trying to encourage more AI adoption (or sell AI tools), but not if you're trying to become more efficient.
Consider the data from a new class of companies operating in the "developer productivity insight" space. They find that developers using tools like Claude Code, Cursor, and Codex produce far more code that gets accepted than before. But they also find that engineers must return to rework that accepted code far more often than before, undercutting claims of increased productivity.
Alex Circei is the CEO and founder of Waydev, which builds an intelligence layer to monitor these dynamics. His company works with 50 clients that together employ more than 10,000 software engineers. (Circei has contributed to TechCrunch before, but this reporter had never met him before.)
He says engineering managers are seeing code acceptance rates of 80% to 90% (that is, the share of AI-generated code that developers approve and keep) but miss the churn that occurs when engineers have to rework that code in the weeks that follow, which reduces the real-world acceptance rate to between 10% and 30% of the code produced.
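The arithmetic behind that gap is straightforward. A minimal sketch, using the ranges Circei cites as illustrative inputs (the function name and formula are this article's simplification, not Waydev's actual methodology):

```python
def real_acceptance(initial_acceptance: float, churn_rate: float) -> float:
    """Share of AI-generated code still in place after later rework.

    initial_acceptance: fraction of AI output approved up front (e.g. 0.90)
    churn_rate: fraction of that accepted code later reworked or removed
    """
    return initial_acceptance * (1.0 - churn_rate)

# 90% accepted up front, but two-thirds of it later reworked:
print(round(real_acceptance(0.90, 2 / 3), 2))  # prints 0.3
```

With a 90% initial acceptance rate, it only takes two-thirds of that code being reworked later to land at the bottom of the 10%-to-30% real-world range Circei describes.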
The rise of AI coding tools has led Waydev, founded in 2017 to provide developer analytics, to completely rework its platform over the past six months. Now the company is releasing new tools that track the metadata created by AI agents, offering insight into the quality and cost of their code and giving engineering managers a clearer view of both the adoption and the effectiveness of AI.
While analytics firms have an incentive to point out the problems they find, there is growing evidence that large organizations are still figuring out how to use AI tools effectively. Big companies are taking notice: Atlassian bought DX, another engineering intelligence startup, for $1 billion last year to help its clients understand the return on their investment in coding agents.
Data from across the industry tells a consistent story: More code is being written, but a disproportionate amount of it isn’t sticking.
GitClear, another company in this space, published a report in January that found AI tools increased productivity, but also that its data showed "regular AI users had an average of 9.4 times higher code churn than their non-AI counterparts," more than double the productivity gains those tools provided.
Faros AI, an engineering analytics platform, drew on two years of customer data for its March 2026 report. The finding: code churn (lines of code deleted relative to lines added) had increased by 861% among teams with high AI adoption.
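As described here, churn is a simple ratio. A minimal sketch of that definition (the function name and inputs are illustrative, not Faros AI's actual metric pipeline):

```python
def churn_ratio(lines_deleted: int, lines_added: int) -> float:
    """Code churn as defined above: lines deleted relative to lines added."""
    if lines_added == 0:
        raise ValueError("no lines added; churn is undefined")
    return lines_deleted / lines_added

# A team that deletes 250 lines for every 1,000 it adds has 25% churn:
print(churn_ratio(250, 1000))  # prints 0.25
```

The point of the metric is that AI agents inflate the denominator quickly; if the numerator grows even faster, the extra volume is being thrown away.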
Jellyfish, which bills itself as an intelligence platform for AI-embedded engineering, collected data on 7,548 engineers in the first quarter of 2026. The company found that engineers with the largest token budgets generated the most pull requests (proposed changes to a shared codebase), but the productivity improvement did not scale: they achieved roughly twice the output at 10 times the token cost. In other words, the tools produced volume, not value.
These kinds of statistics ring true when you talk to developers, who find that code review and technical debt pile up even as they enjoy the freedom of new tools. A common finding is the difference between senior and junior engineers, with the latter accepting much more AI-generated code and consequently facing a greater volume of rewrites.
But even as developers work to figure out exactly what their agents are up to, they don't foresee the agents going away anytime soon.
“This is a new era of software development and you have to adapt and you’re forced to adapt as a company,” Circei told TechCrunch. “It’s not like it’s going to be a passing cycle.”
