Living with Apple Intelligence: A Month of Experience
I’ve been using a beta version of Apple Intelligence on my iPhone 16 Pro for over a month now, and honestly, my day-to-day experience hasn’t changed much since its features arrived.
If you haven’t tried the public beta yet, you’ll soon get the chance. Apple is finally launching its highly anticipated artificial intelligence features with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. These updates are now being rolled out for select iPhones, iPads, and Macs.
Apple Intelligence, powered by the company’s large language models, has been marketed as one of the main reasons to purchase the new iPhone 16, iPad Mini, or iMac. At the Worldwide Developers Conference (WWDC) this past June, Tim Cook claimed it would elevate the user experience of Apple products to “new heights.” However, the reality is that the current experience feels somewhat flat.
An Unusual Rollout
The rollout of Apple Intelligence is different from what we usually see from Apple. Typically, the company releases all its flagship features in one major update alongside new devices. In this case, iOS 18.1 arrives just a month after iOS 18 and the iPhone 16 series. After you install iOS 18.1, you’ll need to join a waitlist to access Apple Intelligence—provided your device is compatible. This waitlist approval usually takes only a few hours. However, many of the most exciting features won’t be available until the upcoming iOS 18.2 update.
Features Available Now
So, what can you do with Apple Intelligence right now? One of the standout additions is Writing Tools, which helps you rewrite, proofread, or summarize text throughout the operating system. For instance, the Rewrite function can shift a sentence’s tone from casual to professional, while the Proofread feature corrects typos and cleans up grammar.
Unfortunately, the feature is easy to forget about, since it only appears when you highlight text. It might be more discoverable as a dedicated button on the virtual keyboard.
You can also now type commands to Siri, although this isn’t entirely new. Previously, this was an accessibility feature, but Apple has now made it a default capability, catching up to Alexa and Google Assistant, which have offered this for years. Siri also has a new design, featuring a glowing effect around the screen and improved understanding of questions, even if you stumble while asking. Despite these updates, using Siri still feels familiar and somewhat underwhelming.
In the Messages and Mail apps, you can send Smart Replies—quick, AI-generated responses based on the context of your conversation, like “Thank you” or “Sounds good.” While this feature is convenient, it’s hard to feel excited about something Gmail has offered since 2017.
Summarizing content is another key feature of Apple Intelligence. You can get quick overviews of web pages and notifications. For example, if you have multiple messages in a group chat, the summary will highlight the important points, allowing you to click through for full details. However, I’ve found this feature can be unreliable; my summaries often come out as a jumbled mess. Once, it mentioned a “medical emergency” in my work emails, prompting me to check my inbox unnecessarily.
More Practical Features
Some of the most useful features of Apple Intelligence are the Clean Up tool and real-time transcription in the Phone, Notes, and Voice Memos apps. Clean Up lets you erase unwanted objects from photos, similar to a feature Google introduced on its Pixel phones in 2021. Open a photo, tap Edit, choose Clean Up, and select the items you want removed; they disappear fairly seamlessly.
The transcription feature is particularly handy for me as a journalist. You can record in Notes, in Voice Memos, or during a phone call, and the transcription is saved automatically.
A lesser-known but valuable feature is the enhanced search function in Apple Photos. Now, when you search for a photo, you have more flexibility. For instance, you can type “at the park with [your spouse’s name],” and it will pull up all relevant images in your library. This functionality relies on features like labeling people and pets, similar to Google Photos.
Looking Ahead
The few features available in this initial Apple Intelligence update are largely things competitors have offered for years. It may not matter much that Apple is playing catch-up; the difference is that these capabilities now arrive in a more private and secure manner, thanks to Apple’s Private Cloud Compute.
However, Apple should have waited to launch Apple Intelligence until all key features were ready to go. Siri’s reputation has been shaky for some time, and while Apple Intelligence promises improvement, Siri still feels largely the same in iOS 18.1. Most queries return the standard response: “Here’s what I found on the web.” The more advanced features will come with iOS 18.2, which includes integration with ChatGPT, allowing for more open-ended questions and detailed responses.
The upcoming update will also enhance Siri’s functionality, enabling it to understand the context of your screen. For example, if someone texts you an address, you can ask Siri to save it to their contact card. Siri will also be able to access your emails and messages, providing personalized responses based on your context. You might ask, “When do I need to leave for the airport to pick up my sister?” and Siri will respond by considering flight information and traffic estimates.
The most exciting features, which might truly elevate the iPhone experience, will arrive with iOS 18.2. These include Image Playground, which generates images from text; Genmoji, which creates custom emojis from text; and Visual Intelligence, which identifies objects in your environment and provides contextual information (like the name of an actor on a poster). I anticipate that Genmoji will become quite popular, as who wouldn’t want to create their own emojis?
I’ve downloaded the developer beta of iOS 18.2, which was released recently. So far, I’ve only accessed Visual Intelligence, but it’s more engaging than many of the current Apple Intelligence features. To use it, press and hold the Camera Control button and point your camera at something in the real world you want to learn more about. You can then “Ask” through ChatGPT or “Search” through Google. Is this similar to Google Lens, which has been around for seven years? Yes, but I’ve found myself using this feature more in the past week than any other Apple Intelligence functions.
For example, my wife was curious about our neighbor’s flowers and wondered why they were still blooming beautifully. She pulled out Google Lens on her Pixel, while I used Visual Intelligence on my iPhone. We both snapped pictures of the flowers, and both technologies confirmed we were looking at daisies, noting that some varieties can bloom in the fall with proper care.
If you’re feeling underwhelmed by Apple Intelligence, especially with Siri, rest assured that the next update aims to enhance this long-standing assistant and bring it up to par with the competition. However, given Siri’s history, it’s reasonable to be skeptical about whether it will ever be useful for anything beyond checking the weather.