OpenAI released its latest model, GPT-5.2, on Thursday amid growing competition from Google, touting it as its most advanced model yet designed for developers and everyday professional use.
OpenAI’s GPT-5.2 comes to ChatGPT paid users and developers via the API in three versions: Instant, a speed-optimized model for routine queries such as information retrieval, writing, and translation; Thinking, which excels at complex structured tasks such as coding, analyzing large documents, and mathematics; and Pro, the flagship model that aims to provide maximum accuracy and reliability for difficult problems.
“We designed 5.2 to unlock even more economic value for people,” OpenAI Chief Product Officer Fidji Simo said Thursday during a briefing with reporters. “It’s better at creating spreadsheets, creating presentations, writing code, perceiving images, understanding big context, using tools, and then connecting complex multi-step projects.”
GPT-5.2 lands in the middle of an arms race with Google’s Gemini 3, which tops LMArena’s leaderboard in most categories (except coding, where Anthropic’s Claude Opus 4.5 still has a lock).
Earlier this month, The Information reported that CEO Sam Altman circulated an internal “code red” memo to staff amid declining ChatGPT traffic and worries that the company is losing consumer market share to Google. The code red called for a shift in priorities, including delaying commitments like introducing ads and instead focusing on building a better ChatGPT experience.
GPT-5.2 is OpenAI’s push to reclaim leadership, even as some employees, according to The Information, asked to delay the model’s launch to give the company more time to improve it. And despite indications that OpenAI will focus its attention on consumer use cases by adding more personalization and customization to ChatGPT, the release of GPT-5.2 looks set to boost its business opportunities.
The company is specifically targeting developers and its ecosystem of tools, aiming to become the default foundation for building AI-powered applications. Earlier this week, OpenAI released new data showing that business use of AI tools has grown dramatically over the past year.
This comes as Gemini 3 is tightly integrated into Google’s product and cloud ecosystem for multimodal and agentic workflows. Google this week released managed MCP servers that make it easy to connect agents to Google cloud services like Maps and BigQuery. (MCP, the Model Context Protocol, is the link between AI systems and external data and tools.)
OpenAI says GPT-5.2 sets new benchmarks in coding, mathematics, science, vision, long-context reasoning, and tool use, which the company claims could lead to “more reliable workflows, production code, and complex systems operating in large environments and real-world data.”
These capabilities put it in direct competition with Gemini 3’s Deep Think feature, which has been touted as a major reasoning advance aimed at math, logic, and science. In OpenAI’s benchmark chart, GPT-5.2 Thinking outperforms Gemini 3 and Anthropic’s Claude Opus 4.5 on nearly every reasoning benchmark, from real-world software engineering tasks (SWE-Bench Pro) to PhD-level science knowledge (GPQA Diamond) and abstract pattern recognition (ARC-AGI).
Research leader Aidan Clark said stronger math scores weren’t just about solving equations. Mathematical reasoning, he explained, is a proxy for whether a model can follow multi-step logic, keep numbers consistent, and avoid subtle errors that compound over time.
“These are all qualities that really matter across a wide range of different workloads,” Clark said. “Things like financial modeling, forecasting, data analysis.”
During the briefing, OpenAI product lead Max Schwarzer said that GPT-5.2 “makes substantial improvements to code generation and debugging” and can walk through complex math and logic step-by-step. Coding startups like Windsurf and CharlieCode, he added, are reporting strong agentic coding performance and measurable gains in complex, multi-step workflows.
Beyond coding, Schwarzer said GPT-5.2 Thinking’s responses contained 38 percent fewer errors than its predecessor’s, making the model more reliable for everyday decision-making, research, and writing.
GPT-5.2 appears to be less a reinvention than a consolidation of OpenAI’s last two upgrades. GPT-5, which dropped in August, was a reset that laid the groundwork for a unified system with a router that switches between a fast default model and a deeper “Thinking” mode. November’s GPT-5.1 focused on making this system warmer, more conversational, and better suited to agentic and coding tasks. The latest model, GPT-5.2, builds on all of these developments, making it a more reliable foundation for production use.
For OpenAI, the stakes have never been higher. The company has made $1.4 trillion in commitments to build AI infrastructure over the next few years to support its growth — commitments it made when it still had first-mover advantage among AI companies. But now that Google, which initially lagged behind, is moving forward, that bet may be what drives Altman’s “code red.”
OpenAI’s renewed focus on reasoning models is also a risky bet. The systems behind the Thinking and Deep Research features are more expensive to run than standard chatbots because they consume more compute. By doubling down on this kind of model with GPT-5.2, OpenAI risks creating a costly cycle: spend more on compute to win the leaderboards, then spend even more to serve these high-cost models at scale.
OpenAI is reportedly already spending more on compute than it previously let on. As TechCrunch recently reported, most of OpenAI’s inference costs — the money it spends on the computation to run a trained AI model — are paid in cash rather than through cloud credits, suggesting that the company’s compute costs have grown beyond what partnerships and credits can subsidize.
During the call, Simo suggested that as OpenAI scales, it is able to offer more products and services to generate more revenue to pay for additional computing.
“But I think it’s important to place it in the big arc of efficiency,” Simo said. “You get, today, a lot more intelligence for the same amount of computation and the same amount of dollars as you did a year ago.”
Despite its focus on reasoning, one thing missing from today’s launch is a new image model. Altman reportedly said in his code red memo that image generation would be a top priority moving forward, particularly since Google’s Nano Banana (the nickname for Google’s Gemini 2.5 Flash Image model) had a viral moment after its release in August.
Last month, Google released Nano Banana Pro (aka Gemini 3 Pro Image), an upgraded version with even better text rendering, world knowledge, and an eerily realistic quality to its images. It also integrates better with Google products, appearing in tools and workflows like Google Labs’ Mixboard for automated presentation creation.
OpenAI reportedly plans to release another new model in January with better images, improved speed and a better personality, though the company did not confirm those plans on Thursday.
OpenAI also said Thursday it was rolling out new safety measures around mental health use and age verification for teens, but didn’t spend much of the launch promoting those changes.
This article has been updated with more information about the state of OpenAI’s compute spending.
Do you have a sensitive tip or confidential documents? We cover the inner workings of the AI industry — from the companies shaping its future to the people affected by their decisions. Contact Rebecca Bellan at rebecca.bellan@techcrunch.com or Russell Brandom at russell.brandom@techcrunch.com. For secure communication, you can reach them via Signal at @rebeccabellan.491 and russellbrandom.49.
