Apple Intelligence is about to hit the mainstream. Within the next couple of weeks, Apple is likely to ship iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, all bringing the first features in the suite that Apple has been talking about since June. If you have any doubt, just go watch a TV show with ads and I guarantee that in 15 minutes or less you’ll see Snoop Dogg hawking it as a selling point of the new iPhone 16.
Apple isn’t usually the first to enter a brand-new market, but it does have a reputation for helping usher technologies into the mainstream and for setting the bar for its rivals, who often imitate it. With millions of Apple Intelligence-capable devices already in customers’ hands, it’s sure to make a big splash.
But as Apple prepares to embark upon this new venture, it might be worth zooming out a little bit and looking at the company’s overall strategy. That includes both how it’s rolling out these features and where the company is ultimately aiming.
Round 1: The table stakes
Despite being Apple’s opening bid in the generative AI feature department, the upcoming point-one releases aren’t likely to blow anybody’s hair back. The main Apple Intelligence features include rewriting tools, summarizations, and the ability to remove unwanted elements from photos—all tasks that other products, including those from Apple’s competitors, already do and often do well.
But more to the point, very few of them, if any, have a particularly Apple feel. Could “only Apple” summarize web pages or emails? Making these tools pervasive throughout the entire operating system is certainly something that only a limited number of companies can do, but there’s not much that’s unique or novel about most of these features.
That’s likely a byproduct of the fact that Apple came late to the generative AI game. This first round of features was probably the easiest to get out the door, letting Apple plant its flag and signal that it’s serious about this market. They might not break any new ground (the exception being, perhaps, notification integrations), but they provide just enough for Apple to point to them as examples of how the company is all in on Apple Intelligence and how this is just the beginning.
Overall, though, these are tools, not destinations. Because they’re focused on dealing with existing material (emails you’ve written, for example, or web pages, or texts you’ve received), they aren’t exactly features users are likely to show off (except in the all-too-common case where they go wrong). For that, we’ll need to wait a bit longer.
Round 2: Raising the stakes
The second round of Apple Intelligence features will likely surface by the end of the year, including Genmoji, Image Playground, and Siri integration with ChatGPT. All of these are more whiz-bang applications that will offer much more visible and prominent results of Apple Intelligence: custom-generated emoji, AI-generated imagery, answers from ChatGPT about what Apple has dubbed “world knowledge.”
The second phase of Apple Intelligence will let Apple know what kind of demand there is for AI features.
These features are closer in line with the elements of generative AI that have already captured the public’s attention. But they also present some of the largest risks for Apple: what happens, for example, when somebody gets Image Playground to generate something unpleasant or offensive? Apple’s already trying to cover itself a bit; the company has been very clear that users will be prompted before Siri sends queries to ChatGPT, and that no identifiable information will be sent to OpenAI or stored.
This, I think, is the tipping point of Apple Intelligence, because it will be the moment when it becomes clear whether or not users really want these features. I remain worried about Image Playground because it’s one place where I think the risk is greater than the reward: the chance that people will misuse these features exceeds the dopamine hit you get from making generic-looking images of Craig Federighi’s dog.
Round 3: The payoff
Ultimately, the end goal here is–and probably always has been–Siri. The third round of Apple Intelligence features are all about Apple’s virtual assistant getting a significant upgrade to be exactly what it should always have been: a time-saving tool that actually understands you. There’s a reason that Apple’s started advertising this even though the Siri-related features won’t ship for months: they’re legitimately compelling. Who hasn’t wanted a tool that can remind them where they met that person?
These Siri upgrades are also the Apple Intelligence features that feel the most like, well, Apple. While many of us might use voice assistants already, we’ve basically had to train ourselves to adapt to them—learn which words to use, in what order to say them, and what they can or can’t do. Apple’s promise of a context-aware Siri that can carry out tasks in your other apps feels not just like a dream, but specifically like a dream that Apple has always tried to sell: that the computer is a bicycle for your mind.
Over 14 years after its release, Siri could finally be having its moment.
But these features are also undoubtedly the trickiest of the ones Apple has promised, which is one reason they’ll be among the last to arrive. And even once they do, they’ll be subjected to a lot of scrutiny: we’ve seen plenty of promises about improving Siri over the years, and they haven’t always borne out. But if these do, they could make a huge difference in users’ lives.
And on and on?
The question is: what then? It’s hard to imagine that Apple will be done once it rolls out these initial features. After all, you don’t create a whole brand and marketing plan around something like this if you’re not planning on capitalizing on it.
Moreover, there are hints that Apple is wary about the future of generative AI. Take, for example, a soon-to-be-published paper in which Apple researchers showed that AI algorithms don’t actually reason, and that it’s fairly easy to trip them up by changing small details that wouldn’t affect human reasoning (names, quantities) or by adding extraneous information that the models then misinterpret.
To me, this feels like stage-setting. That’s not to say Apple won’t continue to play in generative AI, because it knows the market wants it to, but it also seems very aware that the tendency to sprinkle AI into anything and everything is a bubble that will probably burst at some point, no less than the dot-coms, cryptocurrency, and NFTs did. There are limits to what AI can do and what it’s good at, and while Apple may want to push those limits as far as they can go, the company also seems well aware that there’s a brink it doesn’t want to careen over. Meanwhile, the rest of us will be waiting with bated breath to see if it manages to hit the brakes in time.