Apple Intelligence To Bring Personal AI To Mac, iPhone And iPad

Apple has incorporated its Neural Engine neural processing unit (NPU) into all of its processors for several generations, using it for local machine learning such as intelligent photo processing. Now, during the recent WWDC keynote, Apple announced it is going to use those Neural Engines to bring generative AI and large language models (LLMs) that run on-device to its main products. This is a classic Apple approach: laying the groundwork well in advance, then releasing a new product feature when it feels it can deliver the right “Apple” level of user experience and solve customer pain points.

After the first hour of the keynote describing updates to the various Apple operating systems (hey, the iPad got an amazing calculator), Apple CEO Tim Cook introduced the company’s answer to generative AI – Apple Intelligence. Apple Intelligence will be able to run on local devices, but there will also be an Apple cloud element for more complex queries, and even the option to use OpenAI’s ChatGPT and other third-party generative AI (GenAI) products. This lets Apple users choose privacy or get additional functionality from third-party solutions, allowing Apple to remain competitive with Microsoft and Google.

Cook indicated that Apple wanted to build an AI that is powerful enough to be effective, intuitive enough to be easy to use, personal enough to be a true assistant, and deeply integrated into the Apple experience, with privacy incorporated from the ground up. Cook called Apple Intelligence “personal intelligence.” Apple Intelligence will be released in the fall in iOS 18, iPadOS 18, and macOS Sequoia.

Not all Apple products will be able to support Apple Intelligence – only Mac and iPad models with M-series processors (all the way back to the M1) and last year’s iPhone 15 Pro with the A17 Pro processor. In addition to a certain level of TOPS (tera operations per second) performance from the Neural Engine in the Apple silicon, it also appears that Apple needs at least 8GB of DRAM to hold the GenAI models (the A16 Bionic in the iPhone 14 Pro has only 6GB of DRAM, even though its 17 TOPS Neural Engine exceeds those in the M1 and M2 processors).

Because Apple has been incorporating the Neural Engine NPU for several generations of Apple silicon, it can run Apple Intelligence on older systems, unlike Microsoft, which requires new AI PC silicon to run Copilot+ features. This could be a significant advantage for Apple as it rolls out Apple Intelligence to a wide ecosystem already in place, without having to build a brand-new ecosystem and installed base as Microsoft is attempting to do. In fact, Apple will support Apple Intelligence as far back as the 2020 MacBook Air and MacBook Pro, which used the M1 processor.

The initial applications of Apple Intelligence are rather modest in scope. Initially, it will be used to enhance Apple’s existing services. But, in conjunction with Siri, it will also be able to take certain actions on your behalf across Apple apps.

Because Apple Intelligence will be deeply integrated into the operating system and able to use your personal context, Apple hopes it will become a tightly integrated part of its product interfaces. The company will also use Apple Intelligence to supercharge the Siri personal assistant, with better natural language processing and on-screen context awareness. Apple Intelligence is also multimodal, covering speech, written language, and images.

Apple will ship an improved Siri that uses AI for richer language processing and a better understanding of relevant on-device context. Apple will also add typing as an input to Siri. The enhanced Siri can describe device features and controls or find information, even across apps. It will also be aware of the information on the display when in use.

For example, while Apple didn’t focus on GenAI doing the creative writing for you, the company talked about how Apple Intelligence will be able to rewrite, proofread and summarize existing work. This may be Apple’s way to avoid the issue of GenAI replacing creative professionals.

Another example is Apple Mail, where the GenAI function can summarize an email instead of just showing the first few lines. It can also prioritize important messages and notifications and cut back notifications for less important messages.

One of the more fun uses of Apple Intelligence will be creating custom AI-generated emoji characters, which Apple calls “Genmoji.” Image Playground takes a typed or spoken description of the image and a style, and generates the customized emoji. The image generator can also draw on your own photo library. Apple Intelligence will allow natural language image search across photos and videos and let you create your own memory movie based on your inputs.

While Apple Intelligence has some distinct boundaries on functionality and privacy, Apple is providing support in Siri for third-party GenAI solutions such as ChatGPT. However, the user will have to explicitly grant permission before ChatGPT is used. Apple’s writing and image tools, such as Compose, will support ChatGPT for free and allow access to premium ChatGPT subscriptions directly through Apple’s interfaces.

With a collection of new APIs and the App Intents framework, it will also be possible for third-party applications to use the on-device Apple Intelligence GenAI services.
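For a concrete sense of how a third-party app plugs into this, the sketch below uses Apple’s existing App Intents framework to expose an app action the system (and eventually Siri with Apple Intelligence) could invoke. The intent, its parameter, and the dialog text are hypothetical; only the App Intents types themselves are real, and Apple has not published the full Apple Intelligence integration details.

```swift
import AppIntents

// Hypothetical example of a third-party journaling app exposing an action
// through the App Intents framework. The intent name, parameter, and dialog
// are invented for illustration; only the framework types are Apple's.
struct CreateJournalEntryIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Journal Entry"

    // A parameter the system can fill in from a typed or spoken request.
    @Parameter(title: "Entry Text")
    var text: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // App-specific persistence would go here; this stub just confirms.
        return .result(dialog: "Added your journal entry.")
    }
}
```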

The goal of Apple Intelligence is to ground your interactions with the computer, phone, or tablet in personal information and real-time capture of what is on the screen. This gives the local AI context from real-time, personalized information, and privacy can be maintained by keeping that data on-device.

Apple Intelligence is based on compact large language and diffusion foundation models that can access an on-device semantic index of your personal data. More details can be found in this Apple blog post. The Apple Intelligence on-device foundation model is about 8 billion parameters. Apple uses Low-Rank Adaptation (LoRA) adapters along with a mix of 2-bit and 4-bit integer quantization to reduce model size and memory requirements without losing accuracy.
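As a rough sanity check on why 8GB of DRAM matters, the arithmetic below estimates weight storage for a model of the size quoted above under 2-bit and 4-bit quantization. The parameter count and bit widths come from the article; the bounds and conclusions are back-of-the-envelope assumptions, not Apple’s published numbers.

```swift
import Foundation

// Back-of-the-envelope memory estimate for the on-device model, using the
// ~8 billion parameter figure quoted in the article and the 2-bit / 4-bit
// quantization range. Everything else is a rough assumption for illustration.
let parameters = 8_000_000_000.0
let bytesPerGiB = 1_073_741_824.0

let weightsAt2Bit = parameters * 2.0 / 8.0 / bytesPerGiB   // if every weight were 2-bit
let weightsAt4Bit = parameters * 4.0 / 8.0 / bytesPerGiB   // if every weight were 4-bit

print(String(format: "2-bit lower bound: %.1f GiB", weightsAt2Bit))   // ~1.9 GiB
print(String(format: "4-bit upper bound: %.1f GiB", weightsAt4Bit))   // ~3.7 GiB

// A mixed 2-/4-bit model would need very roughly 2-4 GiB just for weights,
// before activations, the OS, and foreground apps -- tight next to the 6GB of
// an iPhone 14 Pro, workable on an 8GB iPhone 15 Pro or M-series device.
```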

If the local model cannot handle the complexity of a request, Apple has created specialized servers for larger models that it calls Private Cloud Compute (PCC), built on Apple’s own silicon. More details are provided in this detailed blog post from Apple. In practice, if Apple’s on-device evaluation determines the local AI is insufficient, the relevant data is sent to PCC for processing.
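Conceptually, this follows an on-device-first, cloud-fallback pattern. The sketch below is a hypothetical illustration of that flow; none of these types or function names are Apple’s, and Apple has not published a public API for this routing decision.

```swift
// Hypothetical sketch of the on-device-first / Private Cloud Compute fallback
// pattern described above. All names are invented for illustration.
enum ExecutionTarget {
    case onDevice
    case privateCloudCompute
}

struct AIRequest {
    let prompt: String
    let estimatedComplexity: Int   // e.g. a score derived from the task type
}

// Stay local when the request fits the on-device model; otherwise send only
// the data relevant to the request to the PCC servers.
func route(_ request: AIRequest, onDeviceLimit: Int = 10) -> ExecutionTarget {
    request.estimatedComplexity <= onDeviceLimit ? .onDevice : .privateCloudCompute
}

let simple = AIRequest(prompt: "Summarize this email", estimatedComplexity: 3)
let complex = AIRequest(prompt: "Plan a trip from my calendar and mail", estimatedComplexity: 42)

print(route(simple))    // onDevice
print(route(complex))   // privateCloudCompute
```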

Apple has taken a very conservative approach to AI, adding easy-to-integrate enhancements to existing functions for its Mac, iPad, and iPhone products. Support for older products also builds a larger installed base, which will entice developers. What was missing from the supported products was the Apple Vision Pro XR headset. Here’s a place where adding AI support for hands-free task directives would be greatly appreciated, and generative images would be even more profound in mixed reality. We will have to wait until the fall release of the Apple Intelligence functions to see the impact on battery life and how it performs compared with Microsoft’s Copilot+ PCs.

Blog – Private Cloud Compute: A new frontier for AI privacy in the cloud – Apple Security Research
