
MacRumors

macrumors bot · Original poster · Apr 12, 2001


With Xcode 26.3, Apple is adding support for agentic coding, allowing developers to use tools like Anthropic's Claude Agent and OpenAI's Codex right in Xcode for app creation.

[Image: Xcode running on macOS 26]

Agentic coding will allow Xcode to complete more complex app development tasks autonomously. Claude, ChatGPT, and other AI models have been available in Xcode since Apple added intelligence features in Xcode 26, but until now, their capabilities were limited and they could not take action on their own. That will change with the option to use an AI coding assistant.

AI models can access more of Xcode's features to work toward a project goal, and Apple worked directly with Anthropic and OpenAI to configure their agents for use in Xcode. Agents can create new files, examine the structure of a project in Xcode, build a project directly and run tests, take image snapshots to double-check work, and access full Apple developer documentation that has been designed for AI agents.

Adding an agent to Xcode takes a single click in the Xcode settings, and agents can be updated automatically as AI companies release new versions. Developers will need to set up an Anthropic or OpenAI account to use those coding tools in Xcode, paying fees based on API usage.

Apple says that it aimed to ensure that Claude Agent and Codex run efficiently, with reduced token usage. It is simple to swap between agents in the same project, giving developers the flexibility to choose the agent best suited for a particular task.

While Apple worked with OpenAI and Anthropic for Xcode integration, the Xcode 26.3 features can be used with any agent or tool that uses the open standard Model Context Protocol. Apple is releasing documentation so that developers can configure and connect MCP agents to Xcode.
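For readers unfamiliar with MCP: it is an open protocol built on JSON-RPC 2.0, and a client session begins with an `initialize` handshake. The sketch below shows roughly what that first message looks like; the client name and version are hypothetical, the protocol revision date is an assumption based on a published spec version, and none of this is Xcode's actual implementation.

```python
import json

# A minimal sketch of the JSON-RPC 2.0 "initialize" request an MCP
# (Model Context Protocol) client sends to a server. Field names follow
# the public MCP spec; the clientInfo values here are made up, and the
# protocolVersion is an assumed spec revision.
def initialize_request(request_id: int = 1) -> str:
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return json.dumps(message)

print(initialize_request())
```

Because MCP is an open standard, any tool that can speak this handshake over stdio or HTTP can in principle be connected, which is what lets non-Anthropic, non-OpenAI agents plug into Xcode.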

Using natural language commands, developers are able to instruct AI agents to complete a project, such as adding a new feature to an app. Xcode then works with the agent to break down the instructions into small tasks, and the agent is able to work on its own from there. Here's how the process works:
  • A developer asks an integrated agent to add a new feature to an app.
  • The agent looks at the current project to see how it's organized.
  • The agent checks all relevant documentation, looking at code snippets, code samples, and the latest APIs.
  • The agent begins working on the project, adding code as it goes.
  • The agent builds the project, then uses Xcode to verify its work.
  • If there are errors or warnings, the agent continues to work until all issues are addressed. It is able to access build logs and revise until a project is perfect.
  • The agent wraps up by providing a summary of everything that happened so developers have a clear view of the implementation.
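The loop those steps describe — propose a change, build, read the log, revise, repeat — can be sketched in a few lines. Everything below is a hypothetical stand-in invented for illustration (the `develop`, `propose`, `revise`, and `build` functions are not Apple's actual agent API):

```python
# A hedged sketch of the build-verify-revise loop described above.
# All function names are hypothetical stand-ins, not Xcode APIs.

def develop(request, propose, revise, build, max_attempts=5):
    """Ask the agent for a patch, build it, and revise until clean."""
    patch = propose(request)
    for _ in range(max_attempts):
        succeeded, issues = build(patch)   # agent builds the project
        if succeeded and not issues:
            return patch                   # errors and warnings all addressed
        patch = revise(patch, issues)      # read the build log, try again
    return None                            # gave up after max_attempts

# Toy demonstration: the first patch fails to build, the revision succeeds.
propose = lambda request: "v1"
revise = lambda patch, issues: "v2"
build = lambda patch: (patch == "v2",
                       [] if patch == "v2" else ["error: missing symbol"])

result = develop("add dark mode toggle", propose, revise, build)
print(result)  # -> v2
```

The cap on attempts matters in practice: since each revision consumes API tokens, an unbounded loop on a patch that never converges would simply run up the developer's bill.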
In the project sidebar, developers can follow along with a transcript of the agent's actions and click through to see where code was added. At any point, developers can roll back to before an agent or model made a modification, so there are options to undo unwanted results or to try multiple approaches to introducing a new feature.

Apple says that agentic coding will allow developers to simplify workflows, make changes more quickly, and bring new ideas to life. Apple also sees it as a learning tool that gives developers the opportunity to learn new ways to build something or to implement an API in an app.
"At Apple, our goal is to make tools that put industry-leading technologies directly in developers' hands so they can build the very best apps," said Susan Prescott, Apple's vice president of Worldwide Developer Relations. "Agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation."
The release candidate of Xcode 26.3 is available for developers as of today, and a launch will likely follow in the next week or so.

Article Link: Xcode 26.3 Lets AI Agents From Anthropic and OpenAI Build Apps Autonomously
 
To be clear, I trust AI slop code, and therefore “agents” about as much as I trust Siri to turn on the correct light switch
As someone who has been coding for 20 years and now does a lot of vibe coding, you couldn't possibly be more misinformed.

Edit: LOVE the downvotes by the people being left behind in the dust.
 
Explain to me again why we are having to rent (subscribe to) Apple software when increasingly it's not even human-made?
 
The agent harness is more important than people think. I'm not confident Xcode is better than Claude Code.

As someone who has been coding for 20 years and now does a lot of vibe coding, you couldn't possibly be more misinformed.

Edit: LOVE the downvotes by the people being left behind in the dust.

You are right. The world has changed, and so has software development. Not everyone is a vibe coder with zero dev experience.

Learning to properly code is the first step to effectively vibe coding
Then it's no longer vibe coding, it's vibe engineering. Vibe coding is just accepting everything and not even looking at code.
 
As someone who has been coding for 20 years and now does a lot of vibe coding, you couldn't possibly be more misinformed.

Edit: LOVE the downvotes by the people being left behind in the dust.
I think the downvotes are for your use of the term vibe coding. The way I treat that term is that you perform ZERO checks on what the AI produces. Literally telling AI to write an app, then you submit that app for approval or check the code into git as-is.

However, I hope you do what I do and ground the AI quite heavily and scrutinize the output. That, to me, is not vibe coding. We have literal advertising career people with zero programming skills doing vibe coding. That is what vibe coding truly means.
 
Embrace this, it's the future. I have coded with AI and it's undeniable. If you're not leveraging AI then you're going to be left behind.

People who are highly skilled in any profession are never left behind because they are highly skilled field leaders and the model training depends on learning from them.

All AI does here is let lots of people generate lots of trash quickly. Go to Reddit's MacApps and marvel at the hundreds of vibe-coded garbage apps posted every week, and some of these people have the audacity to ask for subscription fees for a little vibe-coded garbage utility.
 
People who are highly skilled in any profession are never left behind because they are highly skilled field leaders and the model training depends on learning from them.

Now put AI in the hands of those skilled people, and you will watch them increase their productivity while still being aware of what the agent is doing. You should not let it just do whatever like vibe coders allow it to.

Being 100% against this technology is just asinine at this point. If you are in the workforce, corporations will force you to use it eventually. You can learn now and be ahead of your peers (thus increasing your value), or be mandated by management down the line.
 
That's debatable, considering that to vibe code means you literally don't look at (and therefore can't judge) the code.

True. The term vibe coding meant just accepting whatever the model says to accept and just keep going until your credits run out and the AI company charges you more money. It’s like sitting at a slot machine and calling yourself a gambling expert.
 