Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes:
Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.
Step 2 Agentic. Allow all parts of the OS and apps to inter-operate via “function calling”; a kernel-process LLM can schedule and coordinate work across them given user queries.
Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, “always on”, and contextual way. No going around copy-pasting information, prompt engineering, etc. Adapt the UI accordingly.
Step 4 Initiative. Don’t just perform a task given a prompt; anticipate the prompt, suggest, initiate.
Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon is very helpful and well-suited here), but allow optional dispatch of work to the cloud.
Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).
Step 7 Privacy. <3
We’re quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.
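The “function calling” idea in Step 2 can be sketched in a few lines. This is a toy illustration, not Apple’s actual design: the tool names, the `register_tool` decorator, and the keyword-based routing (standing in for the LLM’s structured tool-call output) are all invented for the example.

```python
# Hypothetical sketch of Step 2's "function calling": apps register
# capabilities as callable tools, and a coordinating "kernel-process"
# LLM (mocked here as keyword matching) picks which one to invoke.
# All names (register_tool, dispatch, the tool IDs) are invented.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def register_tool(name: str):
    """Decorator an app would use to expose a function to the OS-level agent."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("calendar.create_event")
def create_event(arg: str) -> str:
    return f"event created: {arg}"

@register_tool("messages.send")
def send_message(arg: str) -> str:
    return f"message sent: {arg}"

def dispatch(query: str) -> str:
    # A real system would have the LLM emit a structured tool call;
    # here we fake the routing with keyword matching.
    if "remind" in query or "meeting" in query:
        return TOOLS["calendar.create_event"](query)
    return TOOLS["messages.send"](query)

print(dispatch("schedule a meeting with Sam at 3pm"))
# -> event created: schedule a meeting with Sam at 3pm
```

The registry pattern is the point: once every app exposes its capabilities through a uniform interface, a single coordinating model can plan across all of them.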
Founding member of a company that stands to make a fortune through a product endorses said product.
Yikes. Just hit em with the ol’ “<3” for privacy. Does not inspire confidence.
#trustmebro
<3
I thought the original post was satire - list all of the privacy issues, then throw in “Privacy <3” at the end. Seriously, almost every one of those points has a potential privacy issue.
Guess I was being too generous.
How so? Many people want to use AI privately, but it’s currently too hard for most people to set it up for themselves.
Having AI tools at the OS level, so you can use them in almost any app with a guarantee that everything is processed on device, will be very useful if done right.
You think your iPhone isn’t collecting data on you? Is that what you’re saying?
This isn’t satire? What?
Check out OP defending Apple in every comment in this thread. It would be funny if it weren’t so… yeah.
I am just sitting here like… how. Am I too autistic to distinguish satire from non-satire?
X
twitter
NONE of the features on this list are in Apple Intelligence. Apple AI is such a flop. They released the iPhone 16 lineup saying it’s for Apple AI and it’s not even going to be released on them for at least another year. What a fail.
I look forward to Apple Marketing coming up with their usual line of nonsense, like a meaningless name for an existing capability that they are claiming to have invented.
“and it just works”
has he even used an llm before?
He sort of invented it, so you have to think he’s commenting on the concept here, not the implementation.
I have tried a lot of medium and small models, and there is just no good replacement for the larger ones for natural text output. And they won’t run on device.
Still, fine-tuning smaller models can do wonders, so my guess would be that Apple Intelligence is really 20+ small, fine-tuned models that kick in based on which action you take.
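The guess above amounts to a routing layer over specialist models. A minimal sketch, assuming a simple action-to-model table (the model names and actions here are entirely made up; nothing about Apple’s actual setup is known from this thread):

```python
# Speculative sketch of the "many small fine-tuned models" guess:
# a tiny router maps the user's current action to a specialized
# on-device model, falling back to a general one otherwise.
# All model/action names are invented for illustration.

ROUTES = {
    "summarize_email": "summarizer-3b-finetune",
    "rewrite_text": "rewriter-3b-finetune",
    "reply_suggestion": "reply-1b-finetune",
}

def pick_model(action: str) -> str:
    # Fall back to a general on-device model when no specialist exists.
    return ROUTES.get(action, "general-7b-base")

print(pick_model("summarize_email"))  # -> summarizer-3b-finetune
print(pick_model("translate"))        # -> general-7b-base
```

The appeal of this design is that each specialist can be small enough to fit in device memory, since only the one relevant to the current action needs to be loaded.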
An LLM has no comprehension of what it says. It’s just a puppy that is really good at performing for treats. This will always yield nonsense a meaningful proportion of the time.
I don’t care how statistically good your model can be under certain constraints and inputs. At the end of the day, all you’ve done is classically condition your computer.
It goes a tad beyond classical conditioning… LLMs provide a much better semantic experience than any previous technology, and are great for relating input to meaningful content. Think of it as an improved search engine that gives you more relevant info / actions / tool suggestions, etc., based on where and how you are using it.
Here’s a great article that gives some insight into the knowledge features embedded into a larger model: https://transformer-circuits.pub/2024/scaling-monosemanticity/
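The “improved search engine” framing can be made concrete with a toy relevance ranker. This sketch uses bag-of-words cosine similarity purely for illustration; a real system would use learned embeddings, and the candidate actions here are invented.

```python
# Toy illustration of the "improved search engine" framing: score how
# related a query is to candidate actions using bag-of-words cosine
# similarity (a real system would use learned embeddings instead).

import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query: str, candidates: list[str]) -> list[str]:
    """Return candidates ordered by similarity to the query."""
    q = Counter(query.lower().split())
    return sorted(candidates,
                  key=lambda c: cosine(q, Counter(c.lower().split())),
                  reverse=True)

actions = ["set an alarm", "send an email", "play some music"]
print(rank("email my boss the report", actions)[0])  # -> send an email
```

The gap between this and an LLM is exactly the commenter’s point: the model relates input to meaning rather than to surface word overlap, but the “rank what’s relevant to what you’re doing” shape is the same.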
What the hell is the fella smoking if he thinks Apple would ever let others use their on-device LLM? Like, the company that deems it too dangerous if apps could change a wallpaper?
The amount of corporate speak makes me sick. Especially the buzzwords mixed with shit like “KERNEL PROCESS”; shit’s cursed.
Now just need one of those headsets that read your vocal cord movements in order to “read your thoughts”, and I can silently make the AI do anything.
Kernel process LLM
God I hope not. That sounds extremely insecure. Definitely do not do this in the kernel.
Why not just have the LLM replace the kernel?