Apple’s $1B Siri Deal with Google Gemini
Apple is reportedly paying Google $1B a year for Gemini to power Siri, a sharp sign that its AI push needs outside help.

Apple is reportedly paying Google $1 billion a year to power the next version of Siri with Gemini models. That is a wild number for a company that built its brand on owning the whole stack, from chips to software to services.
The deal matters because it changes more than Siri. It hints that Apple’s long-promised AI upgrade is less about building everything in-house and more about buying time, buying capability, and hoping users care more about results than who wrote the model.
Project Campos is Apple’s AI reset
Apple’s Siri overhaul is internally known as Project Campos. The new assistant is expected to arrive with iOS 26.4, after an official preview at WWDC in June 2026. Apple wants Siri to do things the old version never handled well: understand what is on your screen, remember earlier parts of a conversation, and complete tasks across apps without constant hand-holding.

That is a tall order, and Apple appears to have decided that its own models were not ready. Instead, it is plugging in Google’s Gemini family to do the heavy lifting. If the reporting is accurate, this is one of the clearest signs yet that consumer AI is moving from “who built it?” to “who made it useful?”
- Reported annual payment: about $1 billion
- Expected launch window: 2026, tied to iOS 26.4
- Core features: on-screen awareness, memory, and multi-step task execution
- Strategic shift: Apple is outsourcing the assistant’s intelligence layer
The interesting part is not just that Apple made a deal. It is what the deal says about its internal timeline. Siri launched in 2011 and was once the face of voice assistants. By the mid-2020s, it had become the example people used when they wanted to describe a product that fell behind. Apple spent years trying to catch up, and this agreement reads like the company finally admitting the gap was bigger than a software update.
Why Google gets the better end of the bargain
On paper, Apple is the one writing the check. In practice, Google may be the bigger winner. If Gemini powers Siri across roughly 2 billion active Apple devices, Google gets a distribution channel most AI companies can only dream about. That reach matters more than the annual fee, because consumer AI products live or die by habit and default placement.
There is also a branding angle here. Even if users never see “Gemini” splashed across Siri, Google’s models will be doing the work behind one of the world’s most used consumer assistants. That puts Google’s technology inside a product people touch every day, on a scale that dwarfs most standalone AI apps.
Apple, meanwhile, is paying for speed. It needs a capable assistant now, not after another few years of model training. That urgency explains the price. A $1 billion annual bill may sound huge, but for Apple it is a manageable expense if it helps Siri stop being the butt of jokes.
“Apple is very much in the position of being able to say, ‘We’re going to buy the best technology and integrate it into our products,’” said Tim Cook in Apple’s Q3 2023 earnings call, when asked about generative AI.
Cook’s comment is useful because it captures the company’s current posture without any marketing gloss. Apple is not pretending it has to invent every layer itself. It is choosing integration over purity, and that is a meaningful change for a company that usually prefers total control.
That choice also lines up with Apple’s broader history. It ditched Intel for its own chips when it wanted tighter control over performance and power. With Siri, though, Apple seems to have judged that the fastest route to a competitive assistant is to buy intelligence from outside and wrap it in Apple software, design, and privacy messaging.
What Siri will actually do differently
The most talked-about feature in Project Campos is on-screen awareness. In plain English, Siri will be able to look at what is displayed on your device and use that context when answering or acting. If you are reading an email, looking at a photo, or reviewing a document, Siri should be able to understand the content instead of treating your request like it came from nowhere.

Apple is also adding context memory, which means Siri can remember earlier parts of a conversation and reference them later. The third piece is agentic behavior: the assistant should be able to complete multi-step actions across apps, rather than just answering questions or setting timers. That is the feature set that makes Siri feel less like a voice shortcut and more like a real digital helper. A rough sketch of how an app could expose such an action follows the list below.
- On-screen awareness: Siri interprets what is visible on the display
- Context memory: Siri remembers prior conversation details
- Agentic tasks: Siri can act across apps to finish a workflow
- Underlying model: Google Gemini, not Apple’s own foundation model
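To ground the agentic piece: Apple already ships the App Intents framework, which lets apps expose typed actions that Siri and Shortcuts can invoke, and it is a plausible surface for multi-step workflows. The sketch below uses the real AppIntents API, but the intent itself (SendInvoiceIntent, its parameters, and its dialog) is hypothetical, not anything Apple has announced for Project Campos.

```swift
import AppIntents

// Hypothetical action: the intent name, parameters, and behavior are
// illustrative. Only the AppIntents framework and its protocol shapes
// (AppIntent, @Parameter, IntentResult) are real Apple API.
struct SendInvoiceIntent: AppIntent {
    static var title: LocalizedStringResource = "Send Invoice"

    @Parameter(title: "Recipient")
    var recipient: String

    @Parameter(title: "Amount in USD")
    var amount: Double

    // An agentic Siri could call perform() as one step in a chain,
    // for example after pulling the recipient's name off the screen.
    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app-specific work (building and sending the invoice)
        // would happen here.
        return .result(dialog: "Invoice sent.")
    }
}
```

An assistant that can discover and chain actions like this one across many apps is what separates “agentic tasks” from the timer-setting Siri of the past decade.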
The privacy question is where things get complicated. Apple will almost certainly keep promoting its device-level privacy architecture, but the assistant still needs access to the content on your screen to do its job. That means the real test is not whether Apple uses privacy language. The test is whether the implementation keeps sensitive data local, minimizes exposure, and gives users meaningful control.
That is a high bar, especially when the intelligence behind the feature comes from a company whose business is built on data, ads, and scale. Even if the system is carefully designed, Apple will have to convince users that Siri can read context without turning every interaction into a trust exercise.
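Apple has not published how any of this will work, so the following is purely illustrative: a minimal sketch of the kind of on-device minimization pass that test implies, using Foundation’s real NSDataDetector to strip obvious identifiers from captured screen text before anything is sent to a remote model. The function name and the redaction policy are assumptions, not Apple’s design.

```swift
import Foundation

// Purely illustrative: Apple's actual pipeline is not public.
// On-device minimization pass that redacts obvious identifiers
// (phone numbers, addresses, links) from captured screen text
// before any of it could be forwarded to a remote model.
func redactSensitiveContent(_ screenText: String) throws -> String {
    let types: NSTextCheckingResult.CheckingType = [.phoneNumber, .address, .link]
    let detector = try NSDataDetector(types: types.rawValue)
    var redacted = screenText

    // Process matches back to front so earlier ranges stay valid
    // while we replace text.
    let fullRange = NSRange(screenText.startIndex..., in: screenText)
    for match in detector.matches(in: screenText, options: [], range: fullRange).reversed() {
        if let range = Range(match.range, in: redacted) {
            redacted.replaceSubrange(range, with: "[REDACTED]")
        }
    }
    return redacted
}

let safe = try redactSensitiveContent("Call Dana at 555-867-5309 about the lease.")
print(safe) // e.g. "Call Dana at [REDACTED] about the lease."
```

Whether Apple ships anything like this is unknown; the point is that “keeps sensitive data local” is a testable implementation property, not a slogan.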
How this compares with Apple’s other big deals
The easiest way to understand the size of this move is to compare it with Apple’s existing Google relationship. Apple reportedly earns around $20 billion a year from Google for default search placement on iPhone and other devices. That is a deal where Apple has leverage. Google pays because access to Apple users is valuable and hard to replace.
The Siri arrangement feels different. Apple is the buyer, not the seller, and that changes the power balance. A $1 billion annual payment for AI models across a massive device base suggests Apple needed a competitive answer more than Google needed the cash. In other words, the search deal is about distribution. The Siri deal is about catching up.
There is also a broader market comparison worth making. If Apple and Google both rely on Gemini in important consumer experiences, then the AI assistant market starts to look less open than it did a year ago. OpenAI, Anthropic, and other model companies can still compete in chat apps, enterprise tools, and developer products, but the default assistant layer on mobile could become far more concentrated.
- Apple-Google search deal: about $20 billion a year to Google
- Apple-Google Siri deal: about $1 billion a year to Google
- Search deal dynamic: Apple has leverage because Google wants distribution
- Siri deal dynamic: Apple needs capability fast
That is why this story matters beyond Apple fandom or Google rivalry. It shows that model quality is becoming a commodity for the biggest platforms, while distribution and product integration are where the real power sits. If Apple can make Gemini feel like Siri, most users will care about the experience, not the vendor behind it.
Still, there is a risk here. If Siri becomes good enough to rely on but inconsistent enough to feel unpredictable, Apple will own the blame while Google gets the credit for the model. That is a tricky split, and it could shape how both companies talk about AI over the next year.
Apple’s real test starts at WWDC
The first public proof point will come at WWDC in June 2026. Apple will have to show that Project Campos is more than a demo and more than a licensing story. It needs to prove that Siri can handle real tasks, respect privacy expectations, and work well enough that people actually change their habits.
If Apple gets this right, the company will have done something pragmatic rather than glamorous: it will have accepted that owning every model is less important than shipping an assistant people can use every day. If it gets it wrong, the deal will read like a very expensive confession.
My bet is that Apple will make Siri noticeably better, but keep the most sensitive features tightly constrained. That would fit the company’s style: useful, polished, and carefully limited. The bigger question is whether users will accept a Siri that feels smarter because Google is inside it. If they do, Apple may have found the fastest path back into the AI race.