On Apple Intelligence, LLM limitations and the next AI winter

Apple Intelligence is a project that has been in the works for a while now, rumored to be a major feature of iOS 18 and the new iPhone 16. However, the project has faced several delays, and it seems that Apple is struggling to get it right. In this post, I will share my thoughts on the reasons behind these delays and what they mean for the future of Apple Intelligence and AI.

Disclaimer

  • This is a personal blog, and the opinions expressed here are my own.
  • I am not an Apple employee, and I do not have any insider information about the company or its products.
  • There are not a lot of facts to back my opinions, so take everything with a grain of salt.

Is Apple really bad at AI?

From an outsider's perspective, yes, Apple looks like it is struggling. They promised Apple Intelligence, which is supposed to perform a lot of “contextual” actions using your data, but what has been implemented so far is not very different from your typical text LLM: it can reformat your text, write something on your behalf, or do some text/image post-processing.

So yes, Apple did promise us AI, but what we got isn’t far from what we already had.

If we stop here, the conclusion is that Apple is lagging behind and other companies will crush them. But let’s think about why Apple is struggling.

What happened?

I don’t think Apple is incapable of building what they promised. On the contrary, I think they are the most capable of doing so, since they control both the hardware and the software of the whole ecosystem. So what went wrong?

What I believe is that Apple really tried to deliver on their promises but was caught off guard by the current state of AI, LLMs and transformers; they thought they could build a system that understands context and user intent and performs complex tasks on behalf of users, but they found the technology is not there yet.

One indication of this is Apple’s “controversial” paper The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, which argues that LLMs can’t really reason beyond a certain threshold of problem complexity (which Apple Intelligence requires). You can also see other LLM SaaS companies struggling with their new releases: every couple of months we get a small benchmark improvement on top of the previous one, and the promise of AGI gets pushed to the next release, as if we understood what AGI is well enough to put an ETA on it!

To put it simply, I think Apple promised their users a lot more than what current AI tech can deliver; they didn’t really do all of the due diligence, and now they are paying for it. They’re scrambling to get something out the door that brings value to their users and isn’t too embarrassing for them. Let’s see how this plays out in the next couple of months.

The promise of AI, Transformers and LLMs

There has been a lot of talk about whether the current AI progress is just hype and the tech isn’t really there yet. While I do agree that current AI capabilities are extremely oversold and hyped, I still believe there’s merit in the current tech: it is not just hot air, and it can be used in genuinely helpful ways.

LLMs are not superintelligence, and we don’t have a clear path to making them more intelligent than they already are. They are just tools that extract patterns from data and use that information in a way that looks intelligent, but they don’t really understand the world or possess any real intelligence. Should we just abandon ship and call it a day? No! LLMs are still useful tools for building applications that help people, but we need to be realistic with our expectations.

What does this mean for AI? Who’s the culprit?

Honestly, for someone who works in this field, it’s a little bit scary and exciting. When people realize that LLMs are not really AGI (whatever that is), that they are not capable of superhuman intelligence, and that the next model from [ your favorite LLM SaaS] will not be AGI but a small improvement on top of what we currently have, people will start losing interest in AI and the whole field will suffer.

This is what happened in the previous AI winters, and we might be heading towards something similar. If that ever happens, just remember: it’s not really the tech that’s lacking, it’s all the stuff around it; the overvalued startups, the greedy investors and gamblers, your LinkedIn guru trying to sell you his n8n course that will supposedly bring you millions of dollars in leads. It’s everything around the tech, but not really the tech itself.