When we talk about the cost of AI infrastructure, the focus is usually on Nvidia and GPUs, but memory is an increasingly important part of the picture. As hyperscalers prepare to build new billion-dollar data centers, the price of DRAM chips has soared roughly sevenfold over the past year.
At the same time, a new discipline is emerging around orchestrating all that memory, making sure the right data gets to the right agent at the right time. Companies that master it will be able to serve the same queries with fewer tokens, which can be the difference between folding and staying in business.
Semiconductor analyst Doug O'Loughlin has an interesting look at the importance of memory chips on his Substack, where he talks with Val Bercovici, head of AI at Weka. They're both semiconductor people, so the focus is more on the chips than on the broader architecture, but the implications for AI software are significant too.
I was particularly struck by this passage, in which Bercovici examines the growing complexity of Anthropic's prompt caching documentation:
The tell is Anthropic's prompt caching pricing page. It started as a very simple page six or seven months ago, especially as Claude Code came out: just "use caching, it's cheaper." Now it's an encyclopedia of advice on exactly how many cache writes to pre-purchase. You have 5-minute tiers, which are very common across the industry, or 1-hour tiers, and nothing more. That's a really important point. Then, of course, you have all kinds of arbitrage opportunities around pricing for cache reads based on the number of cache writes you've pre-purchased.
The question here is how long Claude keeps your prompt in cache: You can pay for a 5-minute window, or pay more for an hour-long window. Pulling data that's still in cache is much cheaper than processing it fresh, so if you manage it well, you can save a lot. But there's a catch: each new piece of data you add to the query may displace something else from the cache window.
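To make that concrete, here is a minimal sketch of what prompt caching looks like in a call to Anthropic's Messages API. Per Anthropic's documentation, cache reads are billed at a steep discount to normal input tokens while cache writes carry a premium, which is where the pre-purchase math Bercovici describes comes in. The model name, the context file, and the shape of the 1-hour option are assumptions that may have drifted from the current docs; treat this as an illustration, not a reference implementation.

```python
# Minimal sketch of Anthropic prompt caching, based on the public docs.
# Model name and the exact 1-hour option are assumptions -- check the
# current documentation before relying on them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

LONG_CONTEXT = open("codebase_summary.txt").read()  # hypothetical large, stable prefix

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model name
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_CONTEXT,
            # Mark the big, unchanging block as cacheable. "ephemeral" is the
            # 5-minute tier; the docs describe a paid 1-hour tier selected via
            # a "ttl" field on this same object.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[
        # Only the short, changing part of the prompt comes after the cached
        # prefix. If it came first, every call would invalidate the cache.
        {"role": "user", "content": "Summarize the open TODOs in this codebase."}
    ],
)

# The usage block reports how many input tokens were written to or read from
# the cache, which is how you'd verify the savings actually materialize.
print(response.usage)
```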
This is complex stuff, but the bottom line is pretty simple: memory management is going to be a huge part of the AI business going forward. Companies that do it well will rise to the top.
And there is plenty of progress still to be made in this new field. Back in October, I covered a startup called Tensormesh that was working on a piece of the stack focused on cache optimization.
Opportunities exist elsewhere in the stack, too. Lower down, there's the question of how data centers use the different types of memory they have. (The interview includes a nice discussion of when DRAM chips get used instead of HBM, though it gets pretty deep into the hardware.) Further up, end users are figuring out how to structure their model clusters to take advantage of shared caches.
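Here is a small sketch of that structuring idea, under the assumption that the fleet is calling an Anthropic-style API: keep the stable material (style guides, tool definitions, reference docs) byte-identical and first in every request, so each agent's call can hit the same cached prefix instead of paying for its own cache write. The helper and the placeholder documents are hypothetical; the principle is simply prefix ordering.

```python
# Sketch of the "shared prefix" idea behind shared caching. The helper and the
# task strings are hypothetical; the point is that the stable material is
# identical and comes first, so every agent's request can reuse the same
# cached prefix rather than paying for its own cache write.

STYLE_GUIDE = "..."      # stand-in for a large, rarely changing document
TOOL_REFERENCE = "..."   # stand-in for shared tool definitions / API reference

SHARED_PREFIX = [
    {"type": "text", "text": STYLE_GUIDE},
    {
        "type": "text",
        "text": TOOL_REFERENCE,
        # Cache breakpoint at the end of the stable block; everything up to
        # this marker is eligible to be served from cache on later calls.
        "cache_control": {"type": "ephemeral"},
    },
]

def build_request(task: str) -> dict:
    """Assemble a request whose variable part sits strictly after the cached prefix."""
    return {
        "model": "claude-sonnet-4-5",  # assumed model name
        "max_tokens": 1024,
        "system": SHARED_PREFIX,       # byte-identical across agents -> cache hits
        "messages": [{"role": "user", "content": task}],
    }

# Ten agents doing different work still share one cached prefix, because the
# content before the cache breakpoint is identical in every request.
requests = [build_request(f"Draft release notes for module {i}") for i in range(10)]
```

The orchestration problem Bercovici describes is, roughly, this same idea at data-center scale: routing requests so that calls sharing a prefix land where that prefix is already warm in memory.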
As companies get better at orchestrating memory, they will use fewer tokens and inference will get cheaper. Meanwhile, models themselves are becoming more efficient at processing each token, pushing costs down even further. As serving costs come down, many applications that don't seem viable now will start to look profitable.
