IBM releases the latest version of its mainframe hardware, with new updates intended to accelerate AI adoption.
The hardware and consulting company announced the IBM z17 on Monday, the latest version of its mainframe computer hardware. The fully encrypted mainframe is powered by an IBM Telum II processor and is designed for more than 250 AI use cases, according to the company, including AI agents and generative AI.
Mainframes may seem like old hat, but they are used today by 71% of Fortune 500 companies, according to one source. In 2024, the mainframe market was worth about $5.3 billion, per a market consulting firm.
The z17 can process 450 billion inference operations in a day, a 50% increase over its predecessor, the IBM z16, which was released in 2022 and ran on the company's original Telum processor. The system is designed to integrate fully with other hardware, software, and open source tools.
Tina Tarquinio, vice president of product management and design for IBM Z, told TechCrunch that this mainframe upgrade has been in the works for five years, long before the current AI frenzy kicked off with the release of OpenAI's ChatGPT in November 2022.
IBM spent more than 2,000 hours of research gathering feedback from more than 100 customers as it created the z17, Tarquinio said. She finds it interesting to see, five years later, how that feedback aligned with where the market ended up heading.
“It was wild to know that we were introducing an AI accelerator and then to see, especially in the latter half of 2022, all the changes in the AI industry,” Tarquinio told TechCrunch. “It was really exciting. I think the biggest point was [that] we don’t know what we don’t know is coming, right? So the possibilities are truly unlimited in terms of what AI can help us do.”
The z17 is set up to adapt and grow wherever AI heads next, Tarquinio said. The mainframe will support 48 IBM Spyre AI accelerator chips at release, with plans to bring that number up to 96 within 12 months.
“We deliberately built in headroom,” Tarquinio said. “We deliberately built in flexibility for AI. So, as new models are introduced, [we’re] making sure we have headroom built in for bigger and bigger models, models that may need more local memory to talk to each other. We built that in because we know the approach is really going to change, right? New models will come and go.”
Tarquinio said that one of the biggest highlights of this latest hardware, though she joked it was like being asked to pick a favorite child, is that the z17 is more energy efficient than its predecessor and its would-be competitors.
“Doing AI on-chip, we increase the AI acceleration by seven and a half times, but it’s five and a half times less energy than you would need to do, say, multi-model [work] on another type of accelerator or platform in the industry,” Tarquinio said.
The z17 mainframes will be generally available on June 8.
