At the Worldwide Developers Conference (WWDC) 2023 on Monday, Apple announced a range of new software features for its devices, including Apple Watch, Apple TV, AirPods, iPad, iPhone, Mac computers, and the new Apple Vision Pro headset. Most of these features are powered by artificial intelligence (AI), or, in the term Apple prefers, “machine learning” (ML). In keeping with its commitment to user privacy and security, Apple built these AI features to rely primarily on on-device processing rather than transferring user data to the cloud. Let’s take a closer look at some of the most exciting AI-powered features coming to Apple devices.

New Features

Apple Vision Pro

Apple Vision Pro, the new augmented reality headset, was the star of the event. The device, which is set to release in early 2024 at a starting price of $3,499, looks similar to chunky ski goggles and allows users to see graphics overlaid on their view of the real world. It packs many remarkable features, such as support for many existing mobile apps, and it even lets Mac computer interfaces be moved into floating digital windows in mid-air. One major innovation Apple showed off on the Vision Pro, known as Persona, depends heavily on ML. This feature uses built-in cameras to scan a user’s face and quickly create a lifelike, interactive digital doppelganger. That way, when a user dons the device and joins a FaceTime call or other video conference, a digital twin appears in place of the headset-clad user, mirroring their expressions and gestures in real time.

iOS 17 Autocorrect

Apple’s current built-in autocorrect for texting and typing can sometimes be wrong and unhelpful, suggesting words that are not even close to what the user intended (“ducking” instead of…another word that rhymes but begins with “f”). However, the company claims that with iOS 17, the new autocorrect will offer improved word prediction. It uses a “transformer model” — the same neural-network architecture that underlies large language models such as GPT-4 and Claude. Autocorrect now also offers suggestions for entire sentences and presents its suggestions in-line, similar to the smart compose feature found in Google’s Gmail.

Live Voicemail

The new “Live Voicemail” feature for the iPhone’s default Phone app is one of the most useful features Apple showed off. It comes into play when someone calls a recipient with an iPhone, can’t reach them, and begins to leave a voicemail. The Phone app then displays a text-based transcript of the in-progress voicemail on the recipient’s screen, word by word, as the caller speaks. Essentially, it turns audio into text, live and on the fly. Apple said this feature is powered by its neural engine and “occurs entirely on device… this information is not shared with Apple.”
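Live Voicemail itself is not a public API, but the on-device speech-to-text capability it relies on has been available to developers for some time through Apple’s Speech framework. The sketch below is only an illustration of that underlying capability, transcribing a local recording (the `audioURL` parameter is a placeholder) while forcing recognition to stay on the device; it is not how Apple implements Live Voicemail.

```swift
import Speech

// Transcribe a recorded audio file entirely on-device.
// Illustrative sketch only: `audioURL` is a placeholder for any local
// recording, and speech-recognition permission must be granted first
// via SFSpeechRecognizer.requestAuthorization.
func transcribeOnDevice(audioURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else { return }

    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    // Keep the audio off remote servers, as Live Voicemail does.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }

    recognizer.recognitionTask(with: request) { result, _ in
        if let result = result {
            // Partial results arrive incrementally, much like the
            // word-by-word transcript shown during a live voicemail.
            print(result.bestTranscription.formattedString)
        }
    }
}
```

The key line is `requiresOnDeviceRecognition = true`, which guarantees that audio never leaves the device — the same privacy posture Apple describes for Live Voicemail.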

Journal App

Apple has introduced the Journal app, designed to help users “reflect and practice gratitude,” powered by “on-device ML.” The new app on iOS 17 automatically pulls in recent photos, workouts, and other activities from a user’s phone and presents them as an unfinished digital journal entry, which users can edit, adding text and new multimedia as they see fit. Apple is also releasing a new API, Journaling Suggestions, which lets app developers surface their apps’ content as possible journal material for users.

AirPods Personalized Volume

Personalized Volume is a feature for AirPods that “uses ML to understand environmental conditions and listening preferences over time” and automatically adjusts the volume to what it thinks users want.

Photo Recognition Feature

Apple’s previous on-device ML systems for iPhone and iPad allowed its default photo organization app, Photos, to identify different people based on their appearance. However, they clearly left someone out: our furry companions. Well, no more. At WWDC 2023, Apple announced that, thanks to an improved ML program, the photo recognition feature now works on cats and dogs, too.
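The per-pet grouping itself lives inside the Photos app, but Apple’s Vision framework has long shipped a related, publicly available building block: `VNRecognizeAnimalsRequest`, which detects cats and dogs in an image. The sketch below uses that general-purpose API as an illustration; it is not the Photos app’s new recognition pipeline.

```swift
import Vision

// Detect cats and dogs in a CGImage using Vision's animal recognizer.
// Illustrative sketch: this is the general-purpose developer API, not
// the Photos app's new per-pet grouping, which remains internal to Apple.
func detectPets(in image: CGImage) throws {
    let request = VNRecognizeAnimalsRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // Each observation carries a bounding box plus labels
            // for the supported animals (cats and dogs).
            for label in observation.labels {
                print("\(label.identifier): confidence \(label.confidence)")
            }
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

Because the request runs through Vision, inference happens on-device, consistent with the privacy approach described throughout the keynote.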

Apple TV

Apple did not announce a new physical Apple TV box, but it did unveil a major new feature: FaceTime for Apple TV, which takes advantage of a user’s nearby iPhone or iPad (presuming they have one) and uses that as its incoming video camera while displaying other FaceTime call participants on a user’s TV. Another new aspect of the FaceTime experience is a presentation mode. This allows users to present an app or their computer screen to others in a FaceTime call, while also displaying a live view of their own face or head and shoulders in front of it. One view shrinks the presenter’s face to a small circle that they can reposition around the presentation material, while the other places the presenter’s head and shoulders in front of their content, allowing them to gesture to it like they are a TV meteorologist pointing at a digital weather map.

Apple has introduced many exciting new features powered by AI to its devices. These features are primarily reliant on on-device processing power, which is a huge relief for users who prioritize privacy and security. From the new augmented reality headset, Apple Vision Pro, to the new autocorrect and Live Voicemail features, to the Journal App and photo recognition feature, it’s clear that Apple is committed to making its devices even more user-friendly and innovative.
