
senttoschool

macrumors 68030
Nov 2, 2017
2,625
5,477
How can they have been caught with their pants down when they’ve been working toward this holistic offering since before the M1 was announced?

Critical-thinking people, it’s time to stop just repeating the “common wisdom” of the CNBCs and “analysts” of the world.

None of this was just cobbled together in the last one, two, or even three years…

This post makes no sense. How was Apple caught with its pants down? Nvidia is a great investment for training, but not so much for inference. There is a reason Groq, Amazon, and Google are all investing in custom chips for inference. Apple does use Nvidia for training, but it can run inference on custom hardware. Forget Apple: there are users who connect a few Mac Studio Ultras and run distributed models for inference, very cost-effective compared to Nvidia.
Apple's models run most tasks locally on device. A few people have started playing with Apple's models on the iPhone 15 Pro, and these models appear to consume around 4 GB of memory. It will be interesting to see whether Apple bumps RAM on all iPhone 16 models or restricts Apple Intelligence to Pro models with more memory.
Apple was so prepared for the GenAI era that:
  • The iPhone 15 and iPhone 15 Plus, selling in stores right now, will NOT support Apple Intelligence
  • Apple doesn't have a server NPU for its Apple Intelligence cloud while all the other big tech companies have their own; reports are that it has to use the M2 Ultra, which is not designed for server inference at all
  • Apple has to use OpenAI's model for Siri, and is open to using Google's Gemini as well, while it has no cutting-edge LLM of its own
Apple got caught with their pants down in the GenAI era. It's not a controversial opinion.
 
Last edited:

galad

macrumors 6502a
Apr 22, 2022
598
484
1. Right, because everyone else is already running their massive LLMs locally on their Android phones.
2. We have no idea.
3. They aren't using OpenAI's model for Siri; ChatGPT is an optional feature. Apple has its own models that run locally and on its servers.
 

histeachn81

macrumors member
Sep 30, 2015
37
46
I’m curious about this, plus the mentions of how processing will be done on device to maintain privacy. How would the juggling between on-device AI and the cloud likely affect battery life?
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,620
1,089
  • Love
Reactions: tenthousandthings

TechnoMonk

macrumors 68030
Oct 15, 2022
2,551
4,026
Apple was so prepared for the GenAI era that:
  • The iPhone 15 and iPhone 15 Plus, selling in stores right now, will NOT support Apple Intelligence
  • Apple doesn't have a server NPU for its Apple Intelligence cloud while all the other big tech companies have their own; reports are that it has to use the M2 Ultra, which is not designed for server inference at all
  • Apple has to use OpenAI's model for Siri, and is open to using Google's Gemini as well, while it has no cutting-edge LLM of its own
Apple got caught with their pants down in the GenAI era. It's not a controversial opinion.
lol. Who else is even trying to do on-device inference? As someone who has been following Apple AI/ML for the past 2-3 years, this is an ignorant comment. What Apple released didn’t magically appear from thin air. They have been releasing a lot of those models and their tuning and fine-tuning work for a while now. Apple is building an intelligent OS based on ML/AI.

Go read up on Apple's technical details and follow their GitHub repositories. The models Apple is releasing are estimated to take 4.2 GB of RAM. It’s no surprise that phones with less memory won’t be supported. There is no guarantee the iPhone 16 will support it; it may be a Pro-only option unless Apple bumps RAM to at least 8 GB on all models.
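Rough arithmetic makes the memory point concrete. A sketch (the ~3B parameter count and ~3.5-bit quantization are figures Apple has published for its on-device model; everything else here is back-of-the-envelope, not Apple's actual accounting):

```python
# Back-of-the-envelope: weight memory for an on-device LLM at
# various quantization levels. Figures are illustrative.

def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate bytes needed to hold the model weights alone."""
    return n_params * bits_per_weight / 8

N_PARAMS = 3e9  # Apple's on-device foundation model is ~3B parameters

for bits in (16, 8, 3.5):
    gb = weight_bytes(N_PARAMS, bits) / 1e9
    print(f"{bits:>4} bits/weight -> ~{gb:.1f} GB for weights")

# At ~3.5 bits the weights alone are ~1.3 GB; the KV cache,
# activations, and the rest of the runtime push total usage
# toward the ~4 GB figure quoted in this thread.
```

That gap between 1.3 GB of weights and ~4 GB of total usage is also why an 8 GB device with a 5 GB per-app ceiling is roughly the floor, as discussed below.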
 

steve123

macrumors 65816
Original poster
Aug 26, 2007
1,151
716
Where Apple has come up a bit 'short' is in shrinking/compressing the models. This 'punt extra compute to cloud' is structured (layered on top of a thinned-out iOS) so that as the Apple Silicon devices get a RAM uplift, more and more of the compute can relatively easily migrate to the client devices (possibly with almost no changes at all, except for where the threshold-to-cloud point is set).
This makes a lot of sense. Their devices are generally deficient in RAM, and they did not have the foresight to add RAM to the base configurations years ago. So their PCC is really a band-aid to allow running larger models on their RAM-deficient devices to remain competitive in AI. The implications of this extend far beyond the handheld devices. All those base-RAM Macs are also limited in what they can do and will experience the latency of going to the cloud, not to mention that it requires an internet connection, so if you are unplugged for some reason you are out of luck.


Where Apple missed was the hype train of making the models as large as possible as quickly as possible (the "piled higher and deeper is better" mania).
Indeed. These are choppy waters. It remains to be seen how all of this will work out.

8GB Macs are likely a thing of the past. Even 16 GB is looking like it may be questionable in a few years. Will they jump to 32 GB for a base model? The AI models will not be getting smaller as time goes on; they will only grow. The situation Apple is now experiencing is a good example of how a fixed RAM configuration threatens Apple. I wonder how MS will respond when they release their new Surface Pro ... will they take the opportunity to push the base configuration to 32 GB?

And what to make of the base RAM for phones and tablets?

This all reminds me of 640K.
 
Last edited:
  • Like
Reactions: dk001

TechnoMonk

macrumors 68030
Oct 15, 2022
2,551
4,026
This makes a lot of sense. Their devices are generally deficient in RAM, and they did not have the foresight to add RAM to the base configurations years ago. So their PCC is really a band-aid to allow running larger models on their RAM-deficient devices to remain competitive in AI. The implications of this extend far beyond the handheld devices. All those base-RAM Macs are also limited in what they can do and will experience the latency of going to the cloud, not to mention that it requires an internet connection, so if you are unplugged for some reason you are out of luck.



Indeed. These are choppy waters. It remains to be seen how all of this will work out.

8GB Macs are likely a thing of the past. Even 16 GB is looking like it may be questionable in a few years. Will they jump to 32 GB for a base model? The AI models will not be getting smaller as time goes on; they will only grow. The situation Apple is now experiencing is a good example of how a fixed RAM configuration threatens Apple. I wonder how MS will respond when they release their new Surface Pro ... will they take the opportunity to push the base configuration to 32 GB?

And what to make of the base RAM for phones and tablets?

This all reminds me of 640K.
Actually, Apple is using quantized 3.5-bit models heavily optimized for low memory on devices. They also have adapter-based models to minimize the memory footprint. Any mobile device, iPhone or iPad, with 8 GB of RAM will be OK. iOS and iPadOS have a 5 GB limit per app, so devices with 8 GB should be fine. It’s just the first iteration, and low-hanging fruit to improve.
macOS will definitely be challenging, given that multiple apps can take up a lot of memory and that models would be running on 8 GB base models without the memory optimization of iOS and iPadOS. Apple would have to move the Mac base models to 12 GB or 16 GB at the least. I doubt Apple will ever use local inference that needs 32 GB of RAM; even laptops running an Nvidia 4090 are limited to 16 GB of GPU RAM.
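The adapter point is worth quantifying. A minimal sketch of why LoRA-style adapters keep the footprint small: instead of storing a full fine-tuned copy of each weight matrix per feature, you store two low-rank factors. The 4096x4096 matrix size and rank 16 below are hypothetical, chosen only to illustrate the scaling:

```python
# LoRA-style adapter sizing: a rank-r adapter for a (d x k) weight
# matrix stores factors A (d x r) and B (r x k) instead of a full
# (d x k) fine-tuned copy. Dimensions here are hypothetical.

def full_matrix_params(d: int, k: int) -> int:
    """Parameters in a full fine-tuned copy of one weight matrix."""
    return d * k

def adapter_params(d: int, k: int, rank: int) -> int:
    """Parameters in a low-rank adapter for the same matrix."""
    return rank * (d + k)

d, k, r = 4096, 4096, 16
full = full_matrix_params(d, k)   # 16,777,216 params
lora = adapter_params(d, k, r)    # 131,072 params
print(f"adapter is {full // lora}x smaller per matrix")
```

So per specialized feature, the incremental memory is a small fraction of the base model, which is how many task-specific "models" can share one set of quantized base weights.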
 
  • Like
Reactions: steve123

Chuckeee

macrumors 68040
Aug 18, 2023
3,005
8,628
Southern California
Apple isn't charging for "Apple Intelligence"... which means no user is paying for these servers, at least directly, which means the 'recovery of costs' on these servers is low. More people paying more money in the aggregate might drive down costs.
Alternatively, everyone will be paying for it when they buy new hardware. It doesn’t matter if or how much you utilize the AI features; everyone pays. Nothing is free. AI isn’t special; this is true for many features in the system (e.g., Bluetooth, the headphone jack, Thunderbolt 4, the Genius Bar). Its cost is part of the total cost; you are paying for it whether you use it or not.
 

TechnoMonk

macrumors 68030
Oct 15, 2022
2,551
4,026
Alternatively, everyone will be paying for it when they buy new hardware. It doesn’t matter if or how much you utilize the AI features; everyone pays. Nothing is free. AI isn’t special; this is true for many features in the system (e.g., Bluetooth, the headphone jack, Thunderbolt 4, the Genius Bar). Its cost is part of the total cost; you are paying for it whether you use it or not.
Apple could charge at a later point, like Google and OpenAI do, bundled with iCloud storage plans or Apple One. Google offers 30 TB of space with a Gemini Pro subscription. Either way, it will be covered by the device cost or a subscription.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Today's announcement at WWDC that the Apple Intelligence Private Cloud will be powered by Apple Silicon is likely the most significant one. Nvidia has pretty much been the sole supplier of AI hardware, and that limits any company's ability to adopt AI. Apple's decision to use its own silicon suggests it will be better able to accelerate its plunge into AI because it will have a larger supply of AI chips. Moreover, it bodes well that Apple will continue to aggressively innovate the ANE hardware to be competitive with Nvidia.

This is fantastic news.
No one should be surprised by this.

Apple is probably paying by the wafer for everything TSMC makes for them.

Of course they are going to use Apple Silicon, and probably a stripped-down OS called AIOS or serverOS to do what it needs.

After 4 years, Apple probably has a warehouse full of binned Apple Silicon parts that have too many unusable cores or can't meet the targeted frequency. Why spend $100k per server with Nvidia when they have their own goldmine?

Full transparency: none of us know what the yield rate is for Apple Silicon parts. They could have enough to start this off for the first 6 months (likely longer); they may have enough for the first few years. There's no logical way we can know.

With that said though, it’s not like Apple HASN’T built SoCs before. It’s not like Apple has never taken a fully developed operating system and made it purpose-built before.

Apple has done this more times than Intel made schedule for fabricating 10nm processors and then some.
 
Last edited:
  • Like
Reactions: LockOn2B

Chuckeee

macrumors 68040
Aug 18, 2023
3,005
8,628
Southern California
With that said though, it’s not like Apple HASN’T built SoCs before. It’s not like Apple has never taken a fully developed operating system and made it purpose-built before.

Apple has done this more times than Intel made schedule for fabricating 10nm processors and then some.
That is a valid and very interesting perspective.
 

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
I wonder if Apple can come up with a home network appliance for a 'local apple intelligence cloud': encrypt everything, let it train off your idevice additions - notes, pics, etc - and use that to fine tune any foundational llm. Always working to stay local and fresh and incorporating your latest ideas. Even allow for different fine tuning for each family member.

They can slap a speaker/screen/backup disk on it and call it some kind of pod... but the enticing part of it will be some quantifiably better performing AI.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,492
4,052
Alternatively, everyone will be paying for it if you buy new hardware.

The context I was responding to was that the price of the Mac Studio and/or Mac Pro would go down. If 'everyone' is paying for it, that would include the Studio and MP buyers also. With more 'costs' being coupled to those devices, why would their prices go down? Transferring it to 'everyone' isn't going to make the Studio/MP cheaper.

It doesn’t matter if or how much you utilize AI features, everyone pays. Nothing is free. AI isn’t special, this is true for many features in the system (e.g., Bluetooth, headphone jack, thunderbolt 4, Genius Bar). Its cost is part of the total cost, you are paying for it whether you use it or not.

Again, there already is a kitchen sink of 'free' that Apple puts on its devices (Pages, Numbers, an extensive list of other app software, free iCloud storage, free iMessage brokering, etc.). And none of Apple's devices are cheaper than the average Windows PC or Android phone. All of those 'freebies' are not free and drive unit system costs up, not down. Most of the stuff in the App Store is free, so the 'for pay' apps are charged 30% overhead (all those free apps don't drive Apple's tax on developer revenue lower).

Similarly, just about every year Amazon makes the 'kitchen sink' of Amazon Prime features get bigger and the end user cost doesn't go down over time.

Apple is going to 'tax' all of the systems to cover the costs. The folks who pay will pay more as Apple's cloud services costs go up with little to no cloud services revenue increase. Pushing anonymized data to ChatGPT APIs with limited feedback probably costs money as well. (Apple handling common and/or basic queries/prompts on its own stuff probably helps keep the OpenAI charges down somewhat.)
 

deconstruct60

macrumors G5
Mar 10, 2009
12,492
4,052
No one should be surprised by this.

Apple is probably paying by the wafer for everything TSMC makes for them.

Of course they are going to use Apple Silicon and probably a stripped down OS called AIOS, or serverOS to do what it needs.

After 4 years, Apple probably has a warehouse full of binned Apple Silicon parts that have too many cores not useable, or not able to meet the targeted frequency. Why spend $100k per server with NVidia when they have their own goldmine?

Full transparency, none of us know what the yield rate is for Apple Silicon parts. They could have enough to start this off for the first 6 months (likely higher), they may have enough for the first few years. Theres no logical way we can know.

No logical way?

iPhones sell in a range an order of magnitude higher than Macs. This 'Cloud Compute' rolled out to iPhones is likely going to swamp the number of Max chips made. The largest-selling Macs are the MBA and the MBP 13/14" with the plain Mn SoC in them. The Mn Pro is more affordable than the Mn Max, so it likely also substantively outsells the Max.

Max dies are likely in the single-digit millions range. The iPhone Pro is likely in the tens of millions range.

The bigger factor here is how many users 'opt in' to Apple Intelligence. It is not going to be 'on' by default.
If only 10% of folks opt in, then the iPhone number could drop to the single-digit millions range. But if it is 50%, then they have substantive issues.

The M2 is on N5P. At this point, it is more than several years old. The defects are not just going to land in specific CPU/GPU cores. A defect in the SSD controller will kill the die as something that can boot iBoot. A defect in the memory controller likely renders it ineffective as an "inference server" too (it is a bandwidth-intensive task).

Apple also sells binned Max dies, so this is all not coming from the 'cannot be sold' defect pile. A larger number of 'out of the garbage can' servers isn't going to help. If they need twice as many server boards, that takes up twice as much datacenter space, eats more electricity, and drives up network switching costs. Dies that are mostly garbage do not necessarily lower long-term ongoing service operating costs. (Early on, Google bought lots of stuff out of the discount bargain bins at Fry's. That really didn't help long term to deliver five-nines-like service stability.)

What the typical load factor is and how well they can aggregate service is what will matter. If it is in a similar ballpark to competing hardware and Apple doesn't charge itself the 'Apple Tax' for hardware, then yes, there is upside in using their own 'dogfood' here. It isn't 'free' hardware, though. Minimally, the REST of the server board (Ethernet, NAND chips, RAM, etc.) isn't going to be free at all. (Apple bought none of those wafers from TSMC.)


With that said though, it’s not like Apple HASN’T built SOCs before. It’s not like Apple has never taken a fully developed operating system and made it purpose built before.

Apple has not built a specific SoC for the entry iPad, the Apple TV, the iPhone SE, the iPad Air, or the Mac Pro. The majority of Apple's lineup is all about reusing SoCs that led out on other products.



Apple has done this more times than Intel made schedule for fabricating 10nm processors and then some.

That is pretty much comparing "apples to oranges". The number of dies that Intel has rolled out on 10nm is likely bigger. Intel's product line is much broader: Intel has Xeon D and Xeon SP, a couple of desktop dies (Xeon E, Core i3, i5, i9), multiple laptop dies, Atom processors, and Celeron/Pentium (and, while it had Altera, an FPGA 10nm product). Intel sells an order of magnitude more stuff to two orders of magnitude more system vendors than the Apple Silicon group does.

Apple mutated its A-series "X" dies into Mn dies to keep the die variant count down.


P.S. If you go back to post 9:

Apple basically says it is largely a trimmed-down iOS. The primary point is basically to run just a larger variant of the same ML software they are running locally: the same software, just in a bigger RAM/core container, but on a long-latency external line.

They aren't trying to do "mega LLM #42". All of that is a 'punt' to 3rd parties.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,492
4,052
Apple was so prepared for the GenAI era that:
  • The iPhone 15 and iPhone 15 Plus, selling in stores right now, will NOT support Apple Intelligence

Apple Maps' initial rollout: bad. Apple Music's social media rollouts on the first couple of iterations: bad. SNAFUs downloading iOS/macOS fall releases on day 1: bad.

Yes, their cloud services team is more experienced now, but they also really haven't done this service before with a relatively new operating system (and management tools) either.

Rolling this out to too many users too fast will likely lead to a system failure. Apple had Xcode Cloud in 'beta' for many, many, many months before it opened to any dev for production use.

Even the iPhone 15 Pro is still tens of millions of users. With a 20% opt-in rate, that would be 2+ million users right there. See if that is stable, and then make it bigger.

Once Apple has a system that is stable and has a proven track record for scaling, there is nothing in what they have presented so far that they couldn't scale to more iPhones at the cost of longer latency. I suspect a bigger factor is server cost recovery (if they get a model where people are paying, that too will likely get more coverage, because they can more easily pay for the servers and ongoing service costs).
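The opt-in arithmetic in this post, spelled out. All inputs are assumptions, taking the low end of "tens of millions" for the installed base:

```python
# Illustrative scaling math: even a modest opt-in rate on tens of
# millions of iPhone 15 Pro units is millions of concurrent-capable
# users. The installed base and rates below are assumptions.

def opted_in_users(installed_base: float, opt_in_rate: float) -> float:
    return installed_base * opt_in_rate

base = 10e6  # low end of "tens of millions"
for rate in (0.1, 0.2, 0.5):
    print(f"{rate:.0%} opt-in -> {opted_in_users(base, rate) / 1e6:.0f}M users")
```

At 20% opt-in that is the 2+ million figure above, and at 50% the server-side demand is a different problem entirely, which is the scaling risk this post describes.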


  • Apple doesn't have a server NPU for its Apple Intelligence cloud while all the other big tech companies have their own; reports are that it has to use the M2 Ultra, which is not designed for server inference at all

It isn't all about NPU cores. RAM capacity and bandwidth are going to matter about as much. Apple's solution runs stateless (between sessions). That isn't what other folks are doing for training at all. This is very much a specific, inference-focused solution, even more so in that it "has to" run Apple's code/model that runs on iOS/macOS. It isn't meant to run generic ML workloads (or boot off generic SSDs).

The long-term goal is to push more stuff local and back off the cloud. That is exactly the opposite of what the cloud services vendors want to do long term. They want stuff that is locked into cloud services for deployment for the long term.



  • Apple has to use OpenAI's model for Siri, and is open to using Google's Gemini as well, while it has no cutting-edge LLM of its own

The primary point here is to DIVERT most common usage away from OpenAI. It is NOT to try to drive the maximum traffic there. "Hey Siri, open my calendar and share my dentist appointment next week with my Mom" isn't going to go to OpenAI at all.

Siri has had the 'feature' of punting requests to a 3rd party when it 'knew' the request was beyond the scope it could accurately handle. The bulk of the "49er gold rush" hyper-large LLMs tend to drift into hallucinations. They are supposed to know "everything", so they will just make something up if they don't have it. I don't think Apple wants to cover that. And the more they can punt that kind of stuff to a 3rd party, the more it is "not Apple's fault" when it goes sideways.
 

steve123

macrumors 65816
Original poster
Aug 26, 2007
1,151
716
Early on, Google bought lots of stuff out of the discount bargain bins at Fry's. That really didn't help long term to deliver five-nines-like service stability.
LOL!!!! I did not know this.


Even the iPhone 15 Pro is still tens of millions of users. With a 20% opt-in rate, that would be 2+ million users right there. See if that is stable, and then make it bigger.
The bigger issue is that all of them will try it out for a few days.


The bulk of the "49er gold rush" hyper-large LLMs tend to drift into hallucinations. They are supposed to know "everything", so they will just make something up if they don't have it.
It is all in the training. The problem with the large LLMs is how they are trained: feed them garbage and that is what they learn to say. One thing this AI movement brings into crystal-clear view is how important education and truth are to our kids.
 
  • Like
Reactions: LockOn2B

Chuckeee

macrumors 68040
Aug 18, 2023
3,005
8,628
Southern California
It is all in the training. The problem with the large LLM's is how they are trained. You feed them garbage and that is what they learn to say. One thing this AI movement brings into crystal clear view is how important education and truth is to our kids.
This is THE problem with most of the current major LLM-trained AI. The emphasis was on the pure quantity of the training material, with zero regard for whether the material was accurate or even whether they had the legal right to use it. The goal was an AI that looked and sounded “good” and avoided an “I don’t know” response at any cost, assuming accuracy would all “eventually” get checked and worked out.

Perhaps Apple’s attempt will be different.
 
  • Like
Reactions: steve123

dk001

macrumors demi-god
Oct 3, 2014
11,121
15,472
Sage, Lightning, and Mountains
Yes. It confirms that Apple is definitely committed to AI and the next Mac Pro announcement is going to be something to watch for.

Sounds more like Apple got caught off guard and pushed into AI sooner than they wanted, or else we would never have seen the ChatGPT tie-in.

Maybe by 2026 or later we’ll see something top-notch. My thought is: where or what is Apple using for data, as they have no access to what Google and MS do?

Should be interesting.
 
  • Like
Reactions: steve123

steve123

macrumors 65816
Original poster
Aug 26, 2007
1,151
716
Sounds more like Apple got caught off guard and pushed into AI sooner than they wanted, or else we would never have seen the ChatGPT tie-in.

Maybe by 2026 or later we’ll see something top-notch. My thought is: where or what is Apple using for data, as they have no access to what Google and MS do?

Should be interesting.
I completely agree with you on all points. I think this was a watershed moment for Apple: get in the game or call it a day. Truly, a number of decisions had to be made that are existential to Apple. The first one was how to deal with the RAM limitation they imposed on all their deployed devices. It appears they may thread the needle on that one.

As you say, it should be interesting now.
 
  • Like
Reactions: dk001

smulji

macrumors 68030
Feb 21, 2011
2,996
2,889
My thought is where or what is Apple using for data as they have no access to what Google and MS do.

“We do not use our users’ private personal data or user interactions when training our foundation models,” Apple says. But this leads to an obvious question, given that AI models have to be trained on something. How did Apple train these models?

According to the research document, Apple trained its models on “licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. Web publishers have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.”


This is the research document this part of the article is referencing: https://machinelearning.apple.com/research/introducing-apple-foundation-models
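For what it's worth, the "data usage control" mentioned above is a robots.txt directive. My understanding (worth checking against Apple's current crawler documentation) is that Apple distinguishes ordinary search crawling from AI-training use via a separate `Applebot-Extended` user agent, so a site opting out of training while staying searchable would look roughly like:

```txt
# robots.txt sketch: allow Applebot for search indexing, but opt out
# of foundation-model / Apple Intelligence training.
User-agent: Applebot
Allow: /

User-agent: Applebot-Extended
Disallow: /
```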
 

dk001

macrumors demi-god
Oct 3, 2014
11,121
15,472
Sage, Lightning, and Mountains

“We do not use our users’ private personal data or user interactions when training our foundation models,” Apple says. But this leads to an obvious question, given that AI models have to be trained on something. How did Apple train these models?

According to the research document, Apple trained its models on “licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. Web publishers have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.”


This is the research document this part of the article is referencing: https://machinelearning.apple.com/research/introducing-apple-foundation-models

I got that; however, it is far inferior to what others can access, privacy claim or not. Apple needs something more, or they will need to continue relying on other AIs.

By the way, who saw the latest OpenAI hire? The former head of the NSA.
 

NEPOBABY

Suspended
Jan 10, 2023
697
1,687
Wall Street Silver and Kimdotcom regularly post conspiracy theories to manipulate followers, many of whom are nutty extremists who think the world is going to end.

It’s good to be skeptical when ex-government people join companies, but corporations like to employ them not for “spying” purposes but just to rub elbows and be friendly with regulators.

Look at the names of the people who were on Theranos’s board. Looks freaky, right? They all left with egg on their faces. No big-brother conspiracy needed.
 