iOS 27 Extensions: Pick Gemini, Claude, or ChatGPT
May 7, 2026
Apple is preparing to do at scale what it has so far done only through one bilateral deal: let users pick which third-party AI powers Apple Intelligence. According to a Bloomberg report from Mark Gurman dated May 5, 2026[^1], iOS 27 will introduce a new framework — internally called Extensions — that lets users select which AI service handles Siri, Writing Tools, and Image Playground. Google's Gemini and Anthropic's Claude are already being tested internally, and OpenAI's ChatGPT — the only third-party model integrated since iOS 18.2 in December 2024[^2] — will move from a special-case partnership into one option among many.
The change is slated to be formally announced at WWDC 2026, which Apple confirmed runs June 8 through 12 with the keynote on Monday, June 8 at 10am Pacific[^3]. The actual public release of iOS 27, iPadOS 27, and macOS 27 is expected in the fall.
TL;DR
- What changed: Apple's iOS 27 will introduce an "Extensions" framework letting users route Apple Intelligence requests to third-party models like Google Gemini, Anthropic Claude, and OpenAI ChatGPT — not just Apple's own foundation models.
- Why it matters: This ends ChatGPT's de facto exclusivity (in place since iOS 18.2 in December 2024) and turns the iPhone into a neutral host for whichever AI a user prefers.
- How it works: Providers add Extensions support to their existing App Store apps; users pick one in Settings. The choice is system-wide and routes Siri, Writing Tools, and Image Playground requests to the selected model.
- When: Expected unveiling at the WWDC 2026 keynote on June 8, 2026; consumer release with iOS 27 in fall 2026.
- What's separate: This is distinct from Apple's January 2026 deal to use a custom 1.2-trillion-parameter Gemini model behind the scenes for the next-generation Siri[^4]. Extensions is user-facing; that deal is invisible plumbing.
What you'll learn
- What "Extensions" actually is and how it differs from a model swap
- Why this is a much bigger architectural shift than the existing ChatGPT integration
- How the user experience changes for Siri, Writing Tools, and Image Playground
- How this is different from Apple's behind-the-scenes Gemini-powered Siri rebuild
- What questions remain unanswered until WWDC 2026
What "Extensions" actually is
The Extensions name is borrowed from Apple's existing app-extension architecture, where a third-party app can advertise capabilities the system can call into — share sheets, photo editing extensions, keyboards, and so on. According to onboarding text Bloomberg discovered in test builds of iOS 27, the new AI Extensions are described as letting users access generative AI capabilities from installed apps on demand, surfaced through Siri, Writing Tools, Image Playground, "and more"[^1].
The mechanics described in the report are simple from a user perspective:
- Install the AI provider's iOS app — for example, the Gemini app, the Claude app, or the ChatGPT app.
- The app declares Extensions support to the OS, the same way an app today declares share-extension or shortcut support.
- In Settings, the user picks one provider as their preferred Apple Intelligence backend.
- From that point, asking Siri to summarize an email, asking Writing Tools to rewrite a paragraph, or asking Image Playground to generate an image all route to the chosen model — not Apple's own foundation models.
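Apple has published no API for this, so the shape below is purely a mental model of the reported flow, with every type and method name hypothetical: a provider app exposes a handler, and a single system-wide selection routes all three surfaces to it.

```swift
// Hypothetical sketch — Apple has not published the Extensions API.
// Models the reported flow: a provider app declares AI capabilities,
// the user picks one provider in Settings, and system surfaces route
// requests to that provider.

enum AISurface { case siri, writingTools, imagePlayground }

protocol AIProviderExtension {
    var providerName: String { get }
    func handle(_ request: String, from surface: AISurface) -> String
}

// A provider app (Gemini, Claude, ChatGPT) would conform, roughly:
struct ExampleProvider: AIProviderExtension {
    let providerName: String
    func handle(_ request: String, from surface: AISurface) -> String {
        "[\(providerName)] response to: \(request)"
    }
}

// The system-wide selection Bloomberg describes: one provider for
// all surfaces, not a per-feature toggle.
struct AppleIntelligenceRouter {
    var selectedProvider: (any AIProviderExtension)?

    func route(_ request: String, surface: AISurface) -> String {
        // Falls back to Apple's own foundation models when no
        // third-party provider is chosen.
        guard let provider = selectedProvider else {
            return "[Apple] response to: \(request)"
        }
        return provider.handle(request, from: surface)
    }
}
```

With `selectedProvider` set to a Gemini-backed extension, a Siri summary, a Writing Tools rewrite, and an Image Playground prompt would all land on the same provider — which is exactly the single-toggle trade-off discussed below.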
There are two design choices in that flow worth flagging:
The selection is system-wide, not feature-by-feature. A single toggle decides Siri, Writing Tools, and Image Playground all at once. That's friendlier than a settings panel with twenty switches, but it forces users to pick an all-rounder rather than mixing best-in-class models per task.
Voices can differ between providers. Bloomberg reports Apple will let each provider supply its own Siri voice, so a Claude-routed query sounds audibly different from one handled by Apple's in-house model. That's a small touch with a big psychological effect: it makes the routing perceptible without a notification banner.
Why this is bigger than the current ChatGPT integration
When Apple shipped ChatGPT support in iOS 18.2 in December 2024, the experience was a one-off integration powered initially by GPT-4o[^2] and later upgraded to GPT-5 with iOS 26 in fall 2025. Siri can ask ChatGPT for help when it doesn't know the answer; Writing Tools can pipe a passage to ChatGPT for a rewrite. But ChatGPT was the only outside option, the routing decisions were Apple's to make, and there was no clear path for other providers to join.
The Extensions framework is structurally different in three ways:
| Dimension | iOS 18.2 ChatGPT integration | iOS 27 Extensions |
|---|---|---|
| Number of providers | One (OpenAI) | Many — any provider that builds Extensions support |
| Who chooses | Apple decides when to invoke ChatGPT | User picks the default provider in Settings |
| Routing scope | Specific features call out to ChatGPT | System-wide swap of the Apple Intelligence backend |
| Provider integration path | Bilateral deal with OpenAI | Public framework added to a provider's App Store app |
The shift is from "Apple negotiated a partnership" to "Apple shipped a market." Once the SDK and App Store guidelines are public, players that already ship consumer iOS apps — Perplexity, Mistral's Le Chat, xAI's Grok, and open-weight model wrappers — could plug in without a phone call to Cupertino.
How this differs from the Gemini-powered Siri
It's easy to conflate two stories. They're not the same thing.
Story 1 — the behind-the-scenes Gemini deal (announced January 12, 2026). Apple and Google publicly confirmed a multi-year collaboration in which a custom 1.2-trillion-parameter Gemini model would help power Siri's "summarizer" and "planner" components, running on Apple's Private Cloud Compute infrastructure. Bloomberg reported the deal was worth roughly $1 billion per year[^4]. The first phase of this rebuilt Siri shipped in iOS 26.4 in early 2026; a more conversational version is planned for iOS 27. Crucially, end users do not see Google branding — to them, it's just Siri.
Story 2 — the iOS 27 Extensions framework (leaked May 5, 2026). This is the user-visible choice. The user picks Gemini, Claude, ChatGPT, or another provider as their default. The provider's branding, voice, and behavior are exposed.
You can think of it as two layers. The Gemini deal upgrades Apple's own Apple Intelligence stack; Extensions lets users opt out of that stack altogether and use someone else's. Both can coexist on the same iPhone at the same time, and that's by design.
For a deeper history of the Gemini-Siri tie-up, see our earlier coverage in Apple Siri's Gemini AI overhaul. For the iOS 26 baseline that this is being layered on top of, see Exploring the Exciting Features of iOS 26.
What changes for the three Apple Intelligence surfaces
Siri
Today, Siri uses Apple's own on-device 3-billion-parameter foundation model and its server-side Parallel-Track Mixture-of-Experts (PT-MoE) model on Private Cloud Compute[^5], with a fallback to ChatGPT for complex queries when the user opts in. With Extensions enabled and a third-party provider selected, complex Siri requests would instead route to the chosen model. According to Bloomberg, the assistant could carry that model's distinct voice, making it easier to tell which brain answered[^1].
Writing Tools
The "Rewrite," "Proofread," and "Summarize" actions across system text fields would call into the user's chosen provider. A request to "rewrite this in a friendlier tone" might hit Claude or Gemini instead of Apple's server foundation model.
Image Playground
Today's Image Playground generates from Apple's image-generation stack. With Extensions, a user who prefers a particular external image model could route generation through the provider's app, assuming the provider exposes image generation through its Extension.
The Bloomberg report uses the language "and more"[^1], suggesting the framework will eventually cover other Apple Intelligence surfaces. Notification summaries, Genmoji, and Image Wand are obvious candidates, though none has been confirmed.
Liability and the "not our content" disclaimer
Apple plans to disclaim responsibility for outputs produced by third-party AI providers. That sounds boring, but it's the legal scaffolding that makes the whole thing possible: if a Gemini or Claude response is incorrect, biased, or harmful, Apple's position is that the model's developer owns the outcome. This mirrors Apple's posture with the existing iOS 18.2 ChatGPT integration but extends it to a wider field.
What's unclear from the leak is how privacy will be communicated. Apple's existing Private Cloud Compute provides cryptographic guarantees — verifiable software images, ephemeral processing, no data retention by Apple. Once a request is routed to Gemini's or Claude's servers, those guarantees end at the boundary. The user is then operating under the third-party provider's privacy terms. How clearly Apple surfaces that hand-off in Settings will be one of the more interesting WWDC reveals.
What we still don't know
Several details have not been reported and likely won't be confirmed until the WWDC keynote on June 8 or the developer documentation that follows:
- The SDK shape. Is this a structured-output API (provider implements a defined contract for "summarize," "rewrite," etc.) or a freer pipe? The structured-API path gives Apple more control over user experience consistency; the free-pipe path gives providers more headroom.
- Whether Apple will charge providers. A revenue share, a listing fee, and "free if you ship a great app" are all plausible models.
- Geographic and regulatory rollout. Apple Intelligence has lagged in the EU and China relative to the US; whether Extensions inherits that staggered rollout is unclear.
- How "default model" interacts with on-device privacy. Some Apple Intelligence requests run entirely on-device today. If a third-party provider is selected, will simple requests still stay on-device using Apple's models, or will all routed work go to the provider's cloud?
- Voice cloning constraints. Letting Claude or Gemini speak through Siri implies a voice pipeline. Whether providers ship their own voice models or feed text into Apple's voice synthesis hasn't been described.
- Enterprise and MDM controls. IT administrators will want to lock the Extension to a specific provider — or block it entirely — in regulated industries.
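To make the first of those unknowns concrete, here is a rough sketch of the two SDK shapes — all names hypothetical, since Apple has published nothing:

```swift
// Hypothetical sketches of the two possible SDK shapes — not a real
// Apple API.

// Option A: structured contract. Apple defines the task vocabulary,
// so every provider answers the same typed requests. Apple keeps UX
// consistency; providers get less headroom.
enum StructuredTask {
    case summarize(text: String)
    case rewrite(text: String, tone: String)
    case generateImage(prompt: String)
}

protocol StructuredProvider {
    func perform(_ task: StructuredTask) -> String
}

// Option B: free pipe. Apple forwards raw text and the provider
// decides everything. Maximum provider headroom, minimal
// system-level consistency guarantees.
protocol FreePipeProvider {
    func respond(to prompt: String) -> String
}
```

The difference matters for growth: under option A, a new surface like notification summaries requires Apple to extend the task vocabulary; under option B it requires only a prompt convention, which is why the choice shapes how quickly the "and more" surfaces can arrive.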
Why Apple is doing this now
A few forces converge on this announcement:
Regulatory pressure. The EU's Digital Markets Act has pushed Apple toward platform-level openness in browsers, app stores, and default applications. AI is the next logical front. Letting users choose their AI provider is a credible "we are not gatekeeping AI" answer to inquiries about default-app fairness.
Strategic hedging. Apple's own foundation models are competitive at certain sizes but not state-of-the-art at the frontier. By letting users plug in Gemini, Claude, or others, Apple insulates itself from criticism that "Siri is dumb" — the user can always opt out.
Distribution leverage. Owning the surface where AI runs is more valuable than owning the model. Apple effectively turns the iPhone into a router for AI demand, which is structurally similar to how it monetizes the App Store.
The China problem. Apple Intelligence has not officially launched in mainland China as of May 2026 — the company's Alibaba and Baidu partnerships are awaiting Cyberspace Administration of China approval, and a brief accidental rollout in March 2026 was pulled. Extensions gives Apple an architectural escape hatch: any Chinese-government-approved AI provider could plug in via a local app, decoupling per-region launches from a single global integration.
What developers and businesses should do now
If you build a consumer AI product on iOS, the Extensions framework is your single most important upcoming OS event. You'll want to:
- Watch the WWDC 2026 keynote on Monday, June 8, and follow up with the Platforms State of the Union session and the developer videos that drop that week.
- Audit your current iOS app for the surface area you'd want to expose: text generation, summarization, image generation, voice. The Extensions API will likely be capability-scoped.
- Decide whether you want to be a default-replacement provider (compete with Apple's stack head-on) or a feature-specific extension (excel at one job).
If you're an enterprise IT lead, plan to evaluate the Extension's behavior under your existing MDM stack. The privacy boundary moves the moment a user picks a third-party provider, and a security review of any externally routed Apple Intelligence traffic will be a legitimate ask.
Bottom line
Extensions is a structural change, not a feature. It moves Apple from "we built one AI integration" to "we built a market for AI integrations" — and the difference matters more than any single model swap. If the framework ships as described, by late 2026 the question on every iPhone will not be which AI is built into Apple Intelligence but which AI did the user pick today. That's a different operating system, and a different power balance between the model labs and the device makers, than the one we have on May 7, 2026.
Footnotes
[^1]: Mark Gurman, "Apple to Let Users Choose Rival AI Models Across Its iOS 27 Features," Bloomberg, May 5, 2026. https://www.bloomberg.com/news/articles/2026-05-05/ios-27-features-apple-plans-to-let-users-swap-models-across-apple-intelligence

[^2]: "Apple releases new Apple Intelligence features including ChatGPT integration with iOS 18.2," TechCrunch, December 11, 2024. https://techcrunch.com/2024/12/11/apple-releases-new-apple-intelligence-features-including-chatgpt-integration-with-ios-18-2/

[^3]: "Apple's Worldwide Developers Conference returns the week of June 8," Apple Newsroom, March 23, 2026. https://www.apple.com/newsroom/2026/03/apples-worldwide-developers-conference-returns-the-week-of-june-8/

[^4]: "Joint statement from Google and Apple," Google Blog, January 12, 2026. Bloomberg's reporting (Nov 5, 2025) priced the partnership at roughly $1 billion per year for a custom 1.2-trillion-parameter Gemini model running on Apple's Private Cloud Compute. https://blog.google/company-news/inside-google/company-announcements/joint-statement-google-apple/

[^5]: "Apple Intelligence Foundation Language Models Tech Report 2025," Apple Machine Learning Research. Describes the ~3-billion-parameter on-device model and the server-side Parallel-Track Mixture-of-Experts (PT-MoE) model. https://machinelearning.apple.com/research/apple-foundation-models-tech-report-2025