Google just made a major move in the AI infrastructure arms race, elevating Amin Vahdat to chief AI infrastructure technologist, a new position that reports directly to CEO Sundar Pichai, according to a memo first reported by Semafor and later confirmed by TechCrunch. It’s a sign of how critical this work has become as Google pushes capital spending to $93 billion by the end of 2025, a figure parent company Alphabet expects to be much higher next year.
Vahdat is not new to the game. The computer scientist, who holds a PhD from UC Berkeley and started out as a research intern at Xerox PARC in the early ’90s, has been quietly building Google’s AI backbone for the past 15 years. Before joining Google in 2010, he was an associate professor at Duke University and later a professor at UC San Diego, where he held the SAIC chair in computer science and engineering. His academic credentials are formidable (395 published papers, by one count), and his research has long focused on making computers work more efficiently at massive scale.
Vahdat already maintains a high profile at Google. Just eight months ago, at Google Cloud Next, he took the stage to unveil the company’s seventh-generation TPU, called Ironwood, in his role as vice president and general manager of ML, Systems and Cloud AI. The specs he touted at the event were staggering: more than 9,000 chips per pod delivering 42.5 exaflops of computation, which he said was more than 24 times the power of the world’s No. 1 supercomputer at the time. “Demand for AI computing has grown by 100 million times in just eight years,” he told the audience.
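As a rough sanity check of those pod numbers, the math works out to a few petaflops per chip. This is a back-of-the-envelope sketch only; it assumes a full pod of 9,216 chips, where the article says only “more than 9,000”:

```python
# Rough sanity check of the Ironwood pod figures quoted above.
# Assumption: a full pod holds 9,216 chips (the article only says
# "more than 9,000"), so the per-chip number is an estimate.
POD_EXAFLOPS = 42.5
CHIPS_PER_POD = 9_216  # assumed full-pod configuration

flops_per_chip = POD_EXAFLOPS * 1e18 / CHIPS_PER_POD
print(f"~{flops_per_chip / 1e15:.1f} petaflops per chip")  # ~4.6 petaflops
```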
Behind the scenes, as Semafor notes, Vahdat orchestrates the unglamorous but essential work that keeps Google competitive, including those custom TPU chips for AI training and inference that give Google an edge over rivals like OpenAI, as well as Jupiter, the ultra-fast internal network that allows all of its servers to talk to each other and move data around en masse. (In a blog post late last year, Vahdat said Jupiter now scales to 13 petabits per second, enough bandwidth, he explained, to theoretically support a video call for all 8 billion people on Earth simultaneously.)
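That comparison holds up as a rough order-of-magnitude estimate. A minimal sketch, assuming a typical video call needs on the order of 1 to 1.5 Mbps per participant (an assumed figure, not one from the blog post):

```python
# Back-of-the-envelope check of the Jupiter bandwidth comparison.
# Assumption: a typical video call needs roughly 1-1.5 Mbps per person.
JUPITER_BITS_PER_SEC = 13e15   # 13 petabits per second
WORLD_POPULATION = 8e9         # 8 billion people

per_person_mbps = JUPITER_BITS_PER_SEC / WORLD_POPULATION / 1e6
print(f"~{per_person_mbps:.2f} Mbps per person")  # ~1.62 Mbps
```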
Vahdat has also been deeply involved in the ongoing development of Borg, Google’s cluster management system, which acts as the brain coordinating all the work that happens in its data centers. And he has said he oversaw the development of Axion, Google’s first custom Arm-based general-purpose processor designed for data centers, which the company revealed last year and continues to develop.
In short, Vahdat is central to Google’s AI story.
Indeed, in a market where top AI talent commands astronomical compensation and rivals poach constantly, Google’s decision to elevate Vahdat to the C-suite may also be about retention. When you’ve spent 15 years relying on someone as a linchpin of your AI strategy, you make sure they stick around.