Apple is bringing agentic coding to Xcode. On Tuesday, the company announced the release of Xcode 26.3, which allows developers to use agentic coding tools, including Anthropic’s Claude Agent and OpenAI’s Codex, directly inside Apple’s official app development suite.
The Xcode 26.3 Release Candidate is available to all Apple developers today from the developer site and will appear in the App Store shortly thereafter.
This latest update comes on the heels of last year’s release of Xcode 26, which introduced support for ChatGPT and Claude in Apple’s integrated development environment (IDE), the software developers use to build apps for iPhone, iPad, Mac, Apple Watch and other Apple hardware platforms.
The integration of agentic coding tools allows AI models to leverage more of Xcode’s capabilities as they work, enabling more complex automation.
Models will also have access to Apple’s current developer documentation, ensuring they use the latest APIs and follow best practices when building apps.
At startup, agents can help developers explore their project and understand its structure and metadata, then build the project and run tests to surface any bugs, fixing them if found.
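As a rough illustration of that build-and-test step, an agent can drive Apple’s xcodebuild command-line tool; the sketch below wraps it in Swift’s Process API. The scheme name and simulator destination are hypothetical placeholders, not values Apple has published:
```swift
import Foundation

// Minimal sketch of the kind of step an agent automates: build a scheme and
// run its tests with xcodebuild, then check the exit code for failures.
// "MyApp" and the simulator destination are made-up placeholders.
let build = Process()
build.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
build.arguments = [
    "xcodebuild", "test",
    "-scheme", "MyApp",
    "-destination", "platform=iOS Simulator,name=iPhone 16"
]
try build.run()
build.waitUntilExit()
print(build.terminationStatus == 0 ? "Tests passed" : "Tests failed")
```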
To prepare for this launch, Apple said it worked closely with both Anthropic and OpenAI to design the new experience. In particular, the company said it did a lot of work to optimize token usage and tool calling so that agents work efficiently in Xcode.
Xcode uses MCP (Model Context Protocol) to expose its capabilities to agents and connect them to its tools. This means that Xcode can now work with any external MCP-compliant agent for things like project discovery, code changes, file management, previews and snippets, and access to the latest documentation.
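MCP is built on JSON-RPC 2.0: a client invokes named “tools” that a server exposes. Purely as a sketch of that wire format (the tool name and arguments here are hypothetical, not Xcode’s documented tools), a tool invocation encoded from Swift might look like this:
```swift
import Foundation

// An MCP client asks a server to run one of its exposed tools with a
// JSON-RPC "tools/call" request. "build_project" and its arguments are
// hypothetical placeholders, not documented Xcode tool names.
struct MCPToolCall: Encodable {
    let jsonrpc = "2.0"
    let id: Int
    let method = "tools/call"
    let params: Params

    struct Params: Encodable {
        let name: String                // the tool to invoke (hypothetical here)
        let arguments: [String: String] // tool-specific inputs
    }
}

let request = MCPToolCall(
    id: 1,
    params: .init(name: "build_project", arguments: ["scheme": "MyApp"])
)
let json = try JSONEncoder().encode(request)
// Prints something like (key order may vary):
// {"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"build_project","arguments":{"scheme":"MyApp"}}}
print(String(data: json, encoding: .utf8)!)
```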
Developers who want to test the agentic coding features should first download the agents they want to use from Xcode’s settings. They can also connect their accounts with the AI providers by signing in or adding an API key. An in-app drop-down menu lets developers select the model version they want to use (e.g., GPT-5.2-Codex vs. GPT-5.1 mini).
In a prompt box on the left side of the screen, developers can use natural-language commands to tell the agent what kind of project they want to create or what changes they want made to existing code. For example, they could direct Xcode to add a feature to their app that uses one of Apple’s frameworks, and describe how it should look and behave.
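To make that concrete, here is a plausible, hand-written example of what an agent might generate for a prompt like “Add a bar chart of daily step counts using Swift Charts.” The view, data model, and sample numbers are all illustrative, not Apple’s output:
```swift
import SwiftUI
import Charts  // Apple's Swift Charts framework

// Hypothetical data model the agent might introduce; values are made up.
struct DailySteps: Identifiable {
    let id = UUID()
    let day: String
    let steps: Int
}

// A simple SwiftUI view rendering the sample data as a bar chart.
struct StepsChartView: View {
    let data: [DailySteps] = [
        .init(day: "Mon", steps: 5200),
        .init(day: "Tue", steps: 7400),
        .init(day: "Wed", steps: 6100),
    ]

    var body: some View {
        Chart(data) { entry in
            BarMark(
                x: .value("Day", entry.day),
                y: .value("Steps", entry.steps)
            )
        }
        .padding()
    }
}
```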


As the agent runs, it breaks tasks down into smaller steps, making it easy to see what’s happening and how the code is changing. It will also look up the documentation it needs before it starts coding. Changes are visually highlighted within the code, and a project transcript on the side of the screen lets developers know what’s going on under the hood.
This transparency could especially help new developers who are learning to code, Apple believes. To that end, the company is hosting a code-along workshop on Thursday on its developer site, where users can watch and learn how to use the agentic coding tools while coding along in real time in their own copy of Xcode.
At the end of its process, the AI agent verifies that the code it generated works as expected. Armed with the results of those tests, the agent can iterate on the project further if necessary to fix bugs or other problems. (Apple noted that asking the agent to think through its plans before writing code can sometimes improve the process, as it forces the agent to do some pre-planning.)
Additionally, if developers are not happy with the results, they can easily revert their code to its original state at any time, as Xcode creates a milestone every time the agent makes a change.
