Mark Gurman’s latest Power On newsletter has several interesting tidbits about upcoming Apple products, but perhaps the most fascinating concerns Apple’s plans for future AI-powered wearables.
We’ve heard about these before—Apple is working on smart glasses (similar to the Meta Ray-Bans), AirPods Pro with cameras, and some sort of pin/pendant item. All are at various stages of development, and all of them will apparently lean heavily on Visual Intelligence.
That’s Apple’s brand for the application of AI to things your device’s camera sees. It launched as part of the iPhone 16 Pro and then came to other devices with expanded capabilities. You can take a photo of something around you to get contextual information about it, or even take a screenshot and do the same.
You can ask ChatGPT about the subject as well, and the system is smart enough to change your options contextually. If you’re looking at an event poster with dates and times, you can simply add it to your calendar. If it’s a restaurant, you can look up reviews, hours, or the menu. You can identify plants or animals, or run a Google image search to find similar objects online.
Apparently, Tim Cook sees this area of AI technology as central to Apple’s upcoming AI devices. Apple is building its own visual models and intends to make this technology—contextual awareness based on what the AI “sees”—a central pillar of future devices.
For example, you could simply look at your plate of food to get information on ingredients, portions, or nutritional content. Turn-by-turn directions could use visual landmarks instead of just street names or distances. Reminders could be triggered when you walk up to and see something, not just by times and locations.
Cook has been singling out the feature in recent appearances. He gave it a shout-out at the company’s last earnings call, and at an all-hands meeting in which he discussed the company’s AI ambitions. It’s a little odd to bring it up so consistently when it’s not exactly new and hasn’t changed much in the last year or more. Clearly, the technology is on his mind, likely because he’s focused on the company’s upcoming new products.
Obviously, privacy is central to AI that processes what it sees around you. And in this area, Apple has an advantage—strong neural processors in hundreds of millions of devices enable more on-device processing than most competitors can offer, and the company’s Private Cloud Compute architecture ensures that anything processed in the cloud protects your privacy by design, too.



