Microsoft CEO Satya Nadella on Thursday tweeted a video of his company's first deployed massive AI system, or AI "factory," as Nvidia likes to call them. He promised it is the "first of many" such Nvidia AI factories that will be deployed across Microsoft Azure's global data centers to run OpenAI workloads.
Each system is a cluster of more than 4,600 Nvidia GB300 rack computers sporting Nvidia's Blackwell Ultra GPU chip and connected through Nvidia's super-fast networking technology called InfiniBand. (In addition to AI chips, Nvidia CEO Jensen Huang had the foresight to corner the market on InfiniBand when his company acquired Mellanox for $6.9 billion in 2019.)
Microsoft promises to deploy "hundreds of thousands of Blackwell Ultra GPUs" as it rolls these systems out worldwide. While the scale of these systems is eye-catching (and the company shared plenty more technical details for hardware enthusiasts to pore over), the timing of this announcement is also notable.
It comes shortly after OpenAI, Microsoft's partner and well-documented frenemy, inked high-profile data center deals with Nvidia and AMD. In 2025, OpenAI has, by some estimates, racked up $1 trillion in commitments to build its own data centers. And CEO Sam Altman said this week that more are coming.
Microsoft clearly wants people to know that it already has the data centers, more than 300 of them in 34 countries, and that they are "uniquely positioned" to "meet the demands of frontier AI today," the company said. These monster AI systems will also be able to run next-generation models with "hundreds of trillions of parameters," it said.
We expect to hear more about how Microsoft is scaling up to serve AI workloads later this month. Microsoft CTO Kevin Scott will speak at TechCrunch Disrupt, which takes place October 27 through October 29 in San Francisco.
