News & Analysis

AI or Apple Intelligence: Which Works Better?

Apple’s late entry into the AI minefield signals a wholesale shift in mindset: towards actual use cases and away from a whole lot of fluff

After two years of hype, artificial intelligence (AI) is facing its sternest test, battered by a bad reputation, recalled AI tools and growing security concerns. That Apple decided to launch its own AI (Apple Intelligence) at this juncture reeked of a daftness hitherto unknown to a company that moves at its own pace and believes others will follow. 

However, a closer study of all that Apple announced at WWDC 2024 might tell a different story, one that will have to be proven once the company puts the beta version out there for us to test. Maybe it is just the way Apple thinks that makes the difference – the close integration of software and hardware that defines its products. 

Apple has done Apple things with Apple Intelligence

Apple showcased iOS 18 at WWDC, and what strikes us is its cautious approach to launching AI features. Barring a few fluffy items like the AI emojis, the company has attempted to launch AI that is useful on a daily basis. So, we have Apple Intelligence coming to everyday apps. 

There is additional support for writing and proofreading, AI summaries and transcripts, smart responses and smarter search, prioritized notifications, photo editing, and a new “Do Not Disturb” feature that auto-responds and lets important messages through while holding back the rest. Of course, none of these is as exciting as ChatGPT, which promises to assist us with everything barring making a cup of tea. 

Maybe that’s what Apple Intelligence aims to be. No need to worry about chatbots that hallucinate when they are confused. No fear of having to recall search features after they advise users to use glue as a binder on pizza. And no need to switch to a default-off mode when a recording feature ships with security flaws. Hey! We didn’t make these up – these are issues that OpenAI, Google and Microsoft have faced in recent times. 

Apple Intelligence – a custom layer on existing Apps

Based on what Apple revealed at WWDC, it is safe to say that its version of AI will help users glean the really important stuff from long texts as well as notifications, which we believe is a good start for saving time. In addition, users could get answers to queries like “who’s in this photo” or have audio transcribed in seconds. 

That it can fix grammar, syntax and spelling, rewrite text in different styles and suggest common responses is a given. So is its ability to do basic photo edits, like removing unwanted objects, or even create images on request within some really top-notch guardrails that Apple researchers have put in place. 
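For developers, the “layer on existing apps” framing is quite literal: standard text views are meant to pick up these writing features with little work. Below is a minimal sketch of how an app might opt a text view into the system Writing Tools, assuming the UIKit property and enum names Apple previewed at WWDC 2024; the shipping SDK may differ.

```swift
import UIKit

// Minimal sketch: opting a text view into the system Writing Tools
// (proofread, rewrite, summarize). Property and case names follow
// Apple's WWDC 2024 preview and are an assumption, not a guarantee.
final class NotesViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)

        // Request the full Writing Tools experience in this view.
        // Apps handling sensitive text could choose .limited or .none instead.
        textView.writingToolsBehavior = .complete
    }
}
```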

Now, one could turn around and ask: how is any of this AI? And the answer would be that it isn’t AI in the hyped-up sense. These are existing tools made smarter, and that is probably the intention, as Apple says it is targeting specific, solvable use cases instead of the massive-scale exception handling that AI chatbots require. 

Who needs LLMs? Apple is going with Small Language Models

The idea is to narrow the focus to assured, expected results rather than the meme-worthy responses its competitors have been delivering in their efforts to stay ahead. Moreover, the company appears to have limited the security and privacy dangers of AI misuse by focusing on small language models rather than LLMs. 

By doing so, Apple is leaving LLM-led tasks such as writing a poem or a script to ChatGPT. What it takes ownership of is support for creators: summarizing a large volume of text, or perhaps drafting a reply to an email. It makes business sense too, as Apple has a large user base of creators who are baulking at ChatGPT anyway. 
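To make that division of labour concrete, here is a purely hypothetical sketch of the routing the article describes. None of these types are real Apple APIs; they simply capture the idea of keeping narrow, predictable tasks on a small on-device model while open-ended requests are offered to ChatGPT.

```swift
import Foundation

// Hypothetical illustration of the split described above.
// Invented types; not Apple or OpenAI APIs.
enum AssistantTask {
    case summarize(text: String)
    case proofread(text: String)
    case smartReply(thread: [String])
    case openEnded(prompt: String)   // e.g. "write me a poem"
}

enum Destination {
    case onDeviceModel               // small language model, runs locally
    case chatGPT(requiresConsent: Bool)
}

func route(_ task: AssistantTask) -> Destination {
    switch task {
    case .summarize, .proofread, .smartReply:
        // Narrow jobs with bounded, expected output stay local.
        return .onDeviceModel
    case .openEnded:
        // Creative, unbounded requests fall outside the guardrails,
        // so the system offers a hand-off instead of answering itself.
        return .chatGPT(requiresConsent: true)
    }
}
```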

The same holds for images. Apple Intelligence creates them on the fly when you are texting a friend but refrains from doing so, or from prompting users to create an AI image, when the chat turns explicit or veers into inappropriate topics. There appear to be guardrails on what the AI may do in such situations, which should effectively limit the harm it can cause in unforeseen circumstances. 

Strong guardrails and a safety-first approach

Be it adding images to Keynote or in Image Playground, Apple Intelligence guides users towards suggestions and limits the available styles. That means photorealistic deepfakes are off the table. Once again, Apple has kept its creator base happy. What it has done smartly is shift users over to ChatGPT when prompts fall outside the guardrails.

So, if you ask Siri a question it doesn’t have an answer for, Apple Intelligence offers to switch you over to ChatGPT. This way, if there is a screw-up, the onus is on Sam Altman! In some ways, Apple’s innovation isn’t actually a chat with AI but a way of leveraging AI to narrow down use cases where a single click transforms text or images intuitively.
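A hypothetical sketch of that hand-off, with invented names, might look like the following: the query only leaves the device after an explicit “yes” from the user.

```swift
import Foundation

// Hypothetical consent gate before a Siri query is forwarded to ChatGPT.
// Every name here is invented for illustration only.
struct SiriRequest {
    let prompt: String
    let isAnswerableOnDevice: Bool
}

func handle(_ request: SiriRequest,
            askUserConsent: () -> Bool,
            answerLocally: (String) -> String,
            sendToChatGPT: (String) -> String) -> String {
    if request.isAnswerableOnDevice {
        // Handled by Apple's own models; nothing is shared externally.
        return answerLocally(request.prompt)
    }
    // Out-of-scope query: offer the switch, and only forward on consent.
    guard askUserConsent() else {
        return "Okay, I won't send that to ChatGPT."
    }
    return sendToChatGPT(request.prompt)
}
```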

Both fanboys and detractors of Apple would agree that Apple Intelligence is, at best, an added layer on existing apps. That it solves everyday problems more quickly using AI is all one can expect, and all one is likely to get when the beta launches this winter. Compared to OpenAI, Gemini and others, Apple Intelligence is boring. Maybe that’s intentional too.