
Apple Was Trailing In AI; Now, After 'Apple Intelligence,' It's Trailing On Purpose

  • Apple Intelligence may lack novelty, trailing the AI innovators, but it could succeed if its AI simply works as intended.

Karan Kamble | Jun 17, 2024, 10:51 AM | Updated Aug 05, 2024, 03:57 PM IST
Apple Intelligence was unveiled on 10 June after a long wait for the company's AI offering.

Apple satisfied widespread curiosity about what its artificial intelligence (AI) offering would be earlier this week. The Cupertino company went nearly two-thirds of the way into the presentation at its annual developer conference before reluctantly uttering the words “artificial intelligence,” only to then hurry off in a different direction, rebranding its AI as “Apple Intelligence.”

Apple’s reluctance follows a relatively long period of apparent obliviousness to the generative AI wave sparked by OpenAI’s launch of ChatGPT towards the end of 2022. While big technology companies and Apple rivals, Microsoft, Google, and Samsung among them, followed suit by launching their own AI, Apple chose to watch from the sidelines.

The speculation was that Apple was either cooking up a great large language model (LLM) — the thing that makes conversational AI tick — with its silence building up to a bang, or falling short of cooking up a decent enough LLM all by itself. After all, the LLM — or, more broadly, generative AI — was the hot iron, and Apple wasn’t striking. (Otherwise, generally speaking, Apple and others have been at AI for several years.)

That was until 10 June. At its annual Worldwide Developers Conference, Apple unveiled Apple Intelligence. Apple’s AI software is held together by a small suite of generative models, all built in-house. Principally, there is a roughly 3-billion-parameter on-device language model and a larger server-based language model that runs on a private, secure cloud (Private Cloud Compute) on Apple silicon servers.

Other models include a diffusion model to help users express themselves visually — for example, in the Messages app — and a coding model for developers.

These AI models aren’t even close to the scale of, say, a GPT-4. Even OpenAI’s older GPT-3 is a 175-billion-parameter LLM! However, Apple has deliberately built smaller models and served up the option to step up two tiers — the second purely optional — to process more advanced tasks.

A state-of-the-art model, like OpenAI’s, requires a remote server, which means user data must be transferred to the cloud for processing. This data exchange also necessitates an internet connection. And because the data leaves the device, the user loses control over it; the data travels to places where it can be stored or misused.

Moreover, cloud-based LLMs rely on the security measures of the cloud provider. Any vulnerabilities or breaches in the provider’s infrastructure can expose sensitive data. The user is also called upon to trust the cloud provider — in a non-trusting environment — with data security.

On the other hand, when the LLM operates on the device, as Apple’s primarily does, all data processing occurs locally. With sensitive information never leaving the device, the risk of data interception or unauthorised access during transmission is reduced. Further, as no data is sent to a remote server, there’s less exposure to potential breaches or attacks on cloud infrastructure.

Other advantages of on-device LLMs include working without an internet connection, faster response times, and simpler adherence to privacy regulations like the European Union’s General Data Protection Regulation (GDPR). Most importantly, a small model fits within the limited computational and storage budget of a smartphone. Hence Apple’s trade-off in model size and complexity.
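For a rough sense of that trade-off, the back-of-envelope sketch below compares the storage that model weights alone would demand. The parameter counts are the ones cited in this article (about 3 billion on-device versus 175 billion for GPT-3); the byte-per-weight precisions are illustrative assumptions, not Apple’s published figures.

    // Rough weight-storage footprints for models of different sizes (Swift).
    // Parameter counts are from the article; the precisions (2 bytes for 16-bit
    // weights, 0.5 bytes for 4-bit quantisation) are assumptions for illustration.
    func approximateFootprintGB(parameters: Double, bytesPerWeight: Double) -> Double {
        (parameters * bytesPerWeight) / 1_000_000_000
    }

    let onDeviceParams = 3.0e9   // ~3-billion-parameter on-device model
    let gpt3Params = 175.0e9     // 175-billion-parameter GPT-3

    print(approximateFootprintGB(parameters: onDeviceParams, bytesPerWeight: 2.0)) // ≈ 6 GB
    print(approximateFootprintGB(parameters: onDeviceParams, bytesPerWeight: 0.5)) // ≈ 1.5 GB
    print(approximateFootprintGB(parameters: gpt3Params, bytesPerWeight: 2.0))     // ≈ 350 GB

Even with aggressive quantisation, a GPT-3-class model would not fit comfortably on a phone, which is why models of that scale stay in the cloud.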

That being said, it must have been clear to Apple that its users might want to carry out larger, more complex tasks of the kind they probably already perform routinely on today’s AI chatbots. So it built an expandable generative model infrastructure, additionally providing a server-based language model, which will run on its private cloud and won’t store any data, and an optional ChatGPT service for the most advanced queries.
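The sketch below is a hypothetical illustration of how such a tiered setup could route a request; none of these types or functions are Apple APIs, they simply mirror the three options described here: the on-device model, the Private Cloud Compute model, and the opt-in ChatGPT service.

    // Hypothetical three-tier routing, mirroring the article's description (Swift).
    enum ProcessingTier {
        case onDevice          // ~3B-parameter model; data never leaves the device
        case privateCloud      // larger server model on Apple silicon; no data stored
        case externalChatGPT   // optional third-party service, used only with consent
    }

    struct Request {
        let complexity: Int           // hypothetical 1-10 difficulty score
        let userAllowsChatGPT: Bool   // explicit opt-in, as the article describes
    }

    func route(_ request: Request) -> ProcessingTier {
        switch request.complexity {
        case ...4:
            return .onDevice
        case 5...7:
            return .privateCloud
        default:
            // The most advanced queries leave Apple's infrastructure only if the
            // user opts in; otherwise they stay on the private cloud.
            return request.userAllowsChatGPT ? .externalChatGPT : .privateCloud
        }
    }

    print(route(Request(complexity: 3, userAllowsChatGPT: false)))  // onDevice
    print(route(Request(complexity: 9, userAllowsChatGPT: true)))   // externalChatGPT

The design point the sketch captures is that the private, on-device path is the default, and anything beyond Apple’s own infrastructure requires the user’s say-so.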


There’s little new to see here, though the utility and practical benefit for anyone using the most advanced iPhones, iPads, and Macs — for Apple Intelligence will only appear on the latest and upcoming devices — is undeniable. The deep integration at the operating system level probably makes Apple’s AI a breeze to use.

In addition, it helps that Siri is smarter and more useful than ever before. One might be able to speak with or type to Apple’s digital assistant more naturally — finally! — and it’s likely to be able to quickly pluck out relevant information from within various apps to provide useful answers and perform simple, essential tasks. A Siri revamp was very long overdue, but it’s here, and Apple users — again, only those in possession of the very latest models — will feel grateful for it.

It’s safe to say, though, that Apple has played it small and safe with its AI. It has promised a very good upgrade to a subset of its users, who will no doubt love the features on offer, but it hasn’t moved the needle much in the larger, ever-evolving AI landscape. (It’s worth noting that expectations of any AI offering are presently sky-high and hard to match.)

Whether or not Apple Intelligence does wonders for its users or anyone else, the Cupertino giant can still pick up a satisfactory win if its AI, in line with Steve Jobs’ legendary catchphrase, simply “just works.”

“Apple is taking baby steps because right now that is all we can count on to deliver successfully. This is a solid first effort by Apple focusing on real benefits to users (ex. Siri),” Tony Fadell, known as “the father of the iPod,” said on X. He also made the point that “Today's AI LLMs are mostly glorified demos for the really interesting applications,” implying that Apple, unlike the other AI players in the big tech race, has put out a mature product that does what it’s supposed to.

If it turns out later this year that Apple Intelligence works just as intended, it will count as a triumph amid a slew of flopped ‘AI in a box’ releases, such as the Humane Ai Pin and the Rabbit r1, both of which shipped as worse-than-half-baked products this past year. Even Apple’s rivals have released embarrassing AI products, most recently and notably Google’s AI Overviews and Microsoft’s Recall.

Against this backdrop, Apple would do well simply to present a finished product — one that emphasises the practical benefits of AI, ease of use, and privacy over mind-blowing, novel applications and the spectre of AI surpassing human intelligence and spelling doom for the world.

Having said that, Apple might still have to watch from the sidelines as other companies push the boundaries of AI, even if messily, while it goes about perfecting the ABCs. It will be interesting to see whether and how these two approaches intersect in the future, and what it will mean for the AI story.
