
senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Mac sales came in at $7.6 billion in Q4 2023, down 34 percent from $11.5 billion in the year-ago quarter. (Source)

According to the latest report from Gartner, global PC shipments totaled 63.3 million units in the fourth quarter of 2023, increasing for the first time after eight consecutive quarters of decline. (Source)

Initially, M-chips did cause an increase in sales, but there were two contributing factors:
  • Covid (all PC sales increased)
  • Pent-up demand. Switch to AS was telegraphed well in advance so people stopped buying Intel-based Macs.
In the end, Mac market share is where it was before the switch to AS.
Mac market share has increased, both in the US and worldwide. Any basic Google search would show that.

Furthermore, I'm not sure why you're using Q4 2023 when we already have Q1 2024.

It's hard to discern market share since most estimates are just guesses, and you have to factor in things like growth or decline of the whole industry.

However, one telling statistic came from the last earnings call:

2:26 pm: "Mac generated revenue of $7.8 billion and returned to growth despite one less week of sales this year. This represents a significant acceleration from the September quarter and we faced a challenging compare due to the supply disruptions and subsequent demand recapture we experienced a year ago. Customer response to our latest iMac and MacBook Pro models powered by the M3 chips has been great. And our Mac installed base reached an all time high with almost half of Mac buyers during the quarter being new to the product. Also, 451 Research recently reported customer satisfaction of 97% for Mac in the US."

Half of Mac buyers are new to the product. Unless you think half of existing Mac users are switching to PCs, Mac market share must be increasing.

Lastly, I think it's important to remember that Apple owns the lion's share of premium PCs sold (can't remember the source), and Apple likely has significantly higher margins due to cutting out middlemen on chips. I.e., it costs Apple around $50 to make the M1 chip but $250+ to buy a lower-powered Intel chip for the MacBook Air.

I have no doubt that Apple could increase market share significantly anytime. However, they'd have to sacrifice margin % to do so. They could release a MacBook SE for $750-$800 permanently. They could increase the RAM and SSD size on standard models. Apple has obvious ways to increase market share. You can't say the same about PC makers. In fact, Dell has such a hard time differentiating that their new XPS computers have a Touch Bar-like feature that we know doesn't work. But they can't differentiate in other ways, so they're forced to use gimmicks, much as Apple did on its Intel MacBooks.
 
Last edited:
  • Wow
Reactions: gusmula

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Ada is significantly faster yet — the 4090 mobile features a whopping 76 SMs running at 1.4-1.7 GHz — that's around 2x-2.4x more theoretical compute than the M3 Max! Ada also features larger caches and faster RT — all this allows it to achieve a commanding lead of 2x over the M3 Max and the 3080 Ti mobile in Blender.

This is the situation we have today. What about tomorrow though?

Well, the obvious next step for Apple is to "pull an Ampere" and add FP32 capability to either the FP16 or the INT pipe....

Actually, the obvious 'next step' is a 'two times' M3 Max, which would fundamentally close the gap on any '2x commanding lead'. You don't really "have to" wait another whole silicon generation.
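Rough back-of-the-envelope numbers show why (theoretical peak only: FP32 lanes x 2 ops per FMA x clock; the clock figures and the 40-core Max configuration below are assumptions for illustration, not measured specs):

Code:
# Back-of-the-envelope theoretical FP32 throughput (assumed clocks and core counts).
def peak_fp32_tflops(units, fp32_lanes_per_unit, clock_ghz):
    # Each lane can retire one FMA per clock, which counts as 2 floating-point ops.
    return units * fp32_lanes_per_unit * 2 * clock_ghz / 1000.0

rtx4090_mobile = peak_fp32_tflops(units=76, fp32_lanes_per_unit=128, clock_ghz=1.7)  # ~33 TFLOPS
m3_max_40core  = peak_fp32_tflops(units=40, fp32_lanes_per_unit=128, clock_ghz=1.4)  # ~14 TFLOPS
doubled_m3_max = 2 * m3_max_40core                                                   # ~29 TFLOPS

print(rtx4090_mobile / m3_max_40core)   # roughly the quoted 2x-2.4x gap
print(rtx4090_mobile / doubled_m3_max)  # a "two times" Max largely closes it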

The bar is going to move for both Apple and Nvidia when both sides get to take a 'whack' at new GPU cores built on incrementally newer fab processes.


(I think the FP16 pipe is a more likely candidate, as this would retain useful concurrent FP+INT execution). If Apple goes this route, each of their cores would be capable of 256 FP32 + 128 INT32 ops per clock, making them 2x more capable than Nvidia SMs. This should instantly boost Apple's performance on FP-heavy code by 30-60%, without increasing clocks. And this should be fairly easy for Apple to do, as their register files and caches already support the operand pressure.

Support the operand pressure at the magnitude of the current operand flow? Yeah. Latent support already there for an operand flow that is 2x what is deployed now? Probably not. Is there a substantive extra 1x of 'horsepower' buffer sitting around doing nothing 98% of the time? I suspect that substantive parts of that 'extra' data-pressure handling are assigned to something other than FP32, and that this "other stuff" will still be around in the future config to claim what it substantively and concurrently uses.


In theory, they might go even further: make all pipes symmetrical and do 3x dispatch per cycle, but that would likely be very expensive.

As long as the GPU is sharing transistor budget with P, E, NPU, A/V, etc. cores (and with higher-than-average caches to offset the bandwidth constraints of "poor man's HBM") on the same die, that makes it relatively even more expensive.


At any rate, if we look at Apple's progress with the GPUs, I think one can see a long-term plan. Each generation incrementally adds new features that are used to unlock further features in the next generation. This is a well-executed multi-year plan that has been delivering consistent performance and capability improvements every single release.

In the general sense, that 'plan' is about as applicable to Intel as it is to Apple (never mind Nvidia and AMD).

There is relatively little in Apple's moves so far to indicate that they are ramping toward an x090 (Nvidia) / x900 (AMD) 'killer' kind of path. They are incrementally making progress, but it is at best the upper-mid to lower-high range they are targeting. And AMD and Nvidia are steadily folding their bigger stuff back down into that range (with shrinks and bandwidth upgrades as the costs for those fall). The 4090 and 4080 Ti/Super/etc. are not the bulk of the cards that Nvidia will sell this generation. Many of these threads about Apple getting 'killed' in the desktop are about pointing at some of the lowest-volume cards that Nvidia sells and saying Apple isn't there. Apple doesn't 'have to' be there to be generally competitive.


I doubt that the current Apple GPUs architecture is close to its plateau, simply because there are still obvious things they can do to get healthy performance boosts. The same isn't really true for Nvidia or AMD. I just don't see how Nvidia can further improve Ada SM design to make it significantly faster —

1. Faster at what? Faster in RT and Tensor? Yes, there probably is, if you get out of the FP32-myopic focus. (Apple using Metal in iOS/iPadOS to drag uplift into the macOS context... Nvidia doing the same thing with its AI/ML foothold to pull more mainstream Windows/Linux users onto larger, faster workloads wouldn't be shocking. If Microsoft pragmatically mandates that Windows 11+ has an NPU present, that growth factor is pretty much a given.)

2. Apple being boat-anchored on LPDDR while competitors are on GDDR, and cache sizes plateauing because they are not scaling on new fab processes, means Apple's monolithic dies are going to face headwinds in performance gains. Apple has already engaged workarounds for those limitations. Those are 'cards already played', just like Nvidia has lots of already-played cards.


they can either continue increasing the SM count and raising the clocks, or they have to design a fundamentally new SM architecture that boosts compute density. Definitely looking forward to seeing what Blackwell will bring — does Nvidia intend to continue pushing their successful SM model with bigger and bigger designs, or will they do something new? For Nvidia's sake, I hope it is the latter.

Apple is on N3 already; Nvidia is on N4. Apple has already used the shrink, while Nvidia can still cram 'more' into the same size dies they are using now. And if Nvidia waits until N3P to jump into the 'just as big' game, then all the more so. (Nvidia doing just midrange on N3E wouldn't be surprising. It also wouldn't be surprising for them to basically walk away from the low end.) There is also a decent chance of an even bigger gap between the architecture of B100 and the 'mainstream raster, high frame rate' oriented stuff.

More than decent chance that the 5090 will be more expensive than the 4090 is. Reports are that AMD cancelled their "as expensive as possible" 5090 competitor. For AMD, even higher prices probably wouldn't work well marketwise, and limited CoWoS throughput is likely devoted solely to MI300 (and up), which has better margins to go with the lower relative volume. If AMD puts a hefty focus on providing performance value in the midrange, then that is likely a bigger 'desktop' problem for Apple in the immediate future than Nvidia's next moves are.


Even higher likelihood that Nvidia is going to attack MI300-like competition with chiplets as well (not just a bigger monolithic die). There is a pretty good chance the cache ratio isn't going to be the same if they go 3D stacking. (And Apple won't in the interim future; the expense and performance/watt wouldn't be laptop-optimized.)

Apple sells its GPUs at 'workstation GPU card' price levels so Nvidia's higher prices really aren't going to hurt them much in head-to-head with Apple on that aspect.
 

turbineseaplane

macrumors P6
Mar 19, 2008
17,368
40,147
I have no doubt that Apple could increase market share significantly anytime. However, they'd have to sacrifice margin % to do so. They could release a MacBook SE for $750-$800 permanently. They could increase the RAM and SSD size on standard models

That would increase market share?

I keep getting told everywhere here that all the Apple base model decisions make total sense and are "plenty" for most users...

Are you suggesting that people are using Windows or Linux instead of macOS, just to avoid the component upgrade gouging by Apple?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
....
It's hard to discern market share since most estimates are just guesses, and you have to factor in things like growth or decline of the whole industry.

However, one telling statistic came from the last earnings call:


: "Mac generated revenue of $7.8 billion and returned to growth despite one less week of sales this year. This represents a significant acceleration from the September quarter and we faced a challenging compare due to the supply disruptions and subsequent demand recapture we experienced a year ago. Customer response to our latest iMac and MacBook Pro models powered by the M3 chips has been great. And our Mac installed base reached an all time high with almost half of Mac buyers during the quarter being new to the product. Also, 451 Research recently reported customer satisfaction of 97% for Mac in the US."


Half of Mac buyers are new to the product. Unless you think half of existing Mac users are switching to PCs, Mac market share must be increasing.

New Mac buyers are not necessarily former PC buyers. That term is equally applicable to folks who only had an iPhone (or an Android phone or a non-Windows tablet) and no classic PC form factor, and who finally got around to buying something in the 'classic' form factor.

The installed base reaching an all-time high is also indicative that folks are perhaps not trashing their Intel Macs as fast as Apple would hope. There is no way folks are rapidly dumping old Macs while the renewal rate isn't cracking 50% of Macs bought. If there were a massive, rapid stampede off of Intel Macs, then the "new to Mac" percentage should be shrinking, not doubling.

Pretty good chance this is really a "we are having to rely more on current Apple customers for sales (iPhone/iPad synergies) than on bringing in Windows expats" situation being spun as if they are pulling away gobs of Windows folks.

The key factor buried here is how long folks are holding onto their Macs. If folks held onto Macs for an average of just 3 years pre-pandemic and now they are holding onto them for an average of 4 years... guess what, the installed base grows. Stuff just isn't being retired as fast.
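A toy steady-state model shows the effect (the unit numbers are purely illustrative, not Apple figures):

Code:
# Steady-state installed base ~ annual unit sales x average holding period.
# Illustrative numbers only; not actual Apple sales data.
annual_mac_sales_millions = 25.0

base_3yr_hold = annual_mac_sales_millions * 3   # ~75M active Macs if retired after ~3 years
base_4yr_hold = annual_mac_sales_millions * 4   # ~100M active Macs if retired after ~4 years

growth = (base_4yr_hold - base_3yr_hold) / base_3yr_hold
print(f"Installed base grows ~{growth:.0%} with zero growth in yearly unit sales")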

That is fine for revenue growth if Apple is going to get recurring subscription revenue out of them. Otherwise it is also a somewhat bigger cost basis.


P.S. It probably won't show up significantly in installed-base growth or the aggregate new/leave numbers, but in the Mac Pro space in particular there probably is substantive outflow. The "hyper modular" focused folks are likely heading for classic "box with slots" pastures.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
That would increase market share?

I keep getting told everywhere here that all the Apple base model decisions make total sense and are "plenty" for most users...

Are you suggesting that people are using Windows or Linux instead of macOS, just to avoid the component upgrade gouging by Apple?
Yes. There are enough of them to make an impact on market share. It’s not controversial to state that increasing base RAM and SSD would increase Mac market share.
 

vantelimus

macrumors 6502
Feb 16, 2013
334
554
Seeing the latest earnings report from NVIDIA, it just shows how dominant NVIDIA currently is; nobody even comes close. Why not embrace the best technologies the industry has to offer for the Mac Pro?

Even Google, who builds their own custom GPUs, has embraced NVIDIA in the end. Amazon also builds custom ARM chips, but they also heavily rely on NVIDIA; there is no way around it.
What do earnings have to do with which technology path is the best? Apple has its own GPU architecture with an integrated memory architecture that allows direct access to 192 GB of main memory by the GPUs. How does cramming Nvidia cards into an Apple machine and paging data into them over a slow interface make that better?
 
  • Like
Reactions: MacPowerLvr

vantelimus

macrumors 6502
Feb 16, 2013
334
554
The installed base reaching an all-time high is also indicative that folks are perhaps not trashing their Intel Macs as fast as Apple would hope. There is no way folks are rapidly dumping old Macs while the renewal rate isn't cracking 50% of Macs bought. If there were a massive, rapid stampede off of Intel Macs, then the "new to Mac" percentage should be shrinking, not doubling.
No doubt that this is true. I have a 2013 Mac that is certainly capable of handling the current content on the web without performance problems. The PC/Mac market is mature. There are no new categories of large adopters left. Machines are sold to replace previous machines and to give to children who are getting their first machine.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,669
1. Faster at what? Faster in RT and Tensor? Yes, there probably is, if you get out of the FP32-myopic focus.

I hoped it was clear from the context that I was talking about general-purpose GPU compute. Matrix stuff — yes, Nvidia is miles ahead in raw performance here, mainly because they have a matmul unit in the GPU itself. If Apple wants to compete at that front they would have to build one as well

Regarding matmul support on Apple GPUs... suppose they indeed go the 2x FP32 route; that's 256 FP32 MAC units per GPU core. If they arrange them in a grid similar to the AMX units and add limited-precision support, they could get 512*2 = 1024 BF16 grid OPS per cycle — close enough to the current Nvidia SM (of course, Nvidia still has many more SMs...)
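Spelling that arithmetic out (the per-SM dense BF16 figure for Ada below is my assumption, not an Nvidia-published number):

Code:
# Hypothetical Apple GPU core with a 2x FP32 layout repurposed as a matmul grid.
fp32_mac_units_per_core = 256                        # 2 x 128-lane pipes, as discussed above
bf16_macs_per_cycle = fp32_mac_units_per_core * 2    # assume 2x packing at BF16 -> 512 MACs
bf16_ops_per_cycle = bf16_macs_per_cycle * 2         # 1 MAC = multiply + add = 2 OPS -> 1024

# Rough dense BF16 throughput of a current Ada SM's tensor cores (assumed figure).
ada_sm_bf16_ops_per_cycle = 1024

print(bf16_ops_per_cycle, ada_sm_bf16_ops_per_cycle) # comparable per clock, per core/SM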

Actually, the obvious 'next step' is a 'two times' M3 Max, which would fundamentally close the gap on any '2x commanding lead'.

Sure, but I was commenting on the GPU architecture and how it can be improved.


Support the operand pressure at the magnitude of the current operand flow? Yeah. Latent support already there for an operand flow that is 2x what is deployed now? Probably not. Is there a substantive extra 1x of 'horsepower' buffer sitting around doing nothing 98% of the time? I suspect that substantive parts of that 'extra' data-pressure handling are assigned to something other than FP32, and that this "other stuff" will still be around in the future config to claim what it substantively and concurrently uses.

The current architecture already supports 2x 32-bit operand dispatch per SIMD (one for FP32 and one for INT32). Let's say they make the current FP16 unit FP32 capable. Then they can do 256 FP32 ops per clock per GPU core with no changes to the data paths.
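For concreteness, the per-core arithmetic behind that 256 figure, assuming the commonly cited organization of four 32-wide SIMDs per Apple GPU core (an assumption on my part, not an Apple-published layout):

Code:
# Hypothetical per-core issue width if the FP16 pipe gains FP32 capability.
simds_per_core = 4      # assumed: four 32-wide SIMD partitions per Apple GPU core
simd_width = 32
pipes_32bit = 2         # two 32-bit-capable pipes fed per SIMD per clock (today: FP32 + INT32)

fp32_ops_per_core_per_clock = simds_per_core * simd_width * pipes_32bit
print(fp32_ops_per_core_per_clock)  # 256, with no change to the existing operand data paths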
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,518
19,669
How do you envision Apple can scale their GPU to NVIDIA level? NVIDIA H200 Tensor Core GPUs
pack up to 288 gigabytes of the HBM3e memory.

H200 is a data center part that goes into a custom carrier board, and a single such system is priced at hundreds of thousands of dollars. I am not sure how this is relevant to the current discussion. You are not buying an H200 for your workstation either. Entirely different market.

If Apple were interested in that kind of market (and so far they show zero interest), they would probably stack a bunch of AMX units with a few CPU cores to feed them, a lot of cache, and a bunch of memory controllers with HBM.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,660
OBX
Does anyone know if Apple "eats its own dog food" WRT to AI training using Metal (aka their own hardware) and not using any Nvidia or AMD AI hardware?

If they want to train all this AI goodness that is supposed to come in iOS 18 and macOS 15, how are they doing it now?
 

leman

macrumors Core
Oct 14, 2008
19,518
19,669
Does anyone know if Apple "eats its own dog food" WRT to AI training using Metal (aka their own hardware) and not using any Nvidia or AMD AI hardware?

If they want to train all this AI goodness that is supposed to come in iOS 18 and macOS 15, how are they doing it now?

They 100% use Nvidia GPUs. People I know working on Apple ML infrastructure use Amazon AWS. It would be entirely nonsensical for them to use their own machines for training. If you are a car company building family sedans you don't freight them over land using your own sedans, you charter trucks or trains.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,660
OBX
They 100% use Nvidia GPUs. People I know working on Apple ML infrastructure use Amazon AWS. It would be entirely nonsensical for them to use their own machines for training. If you are a car company building family sedans you don't freight them over land using your own sedans, you charter trucks or trains.
Nvidia/AMD uses their own hardware for training, why would it be weird for Apple to?
 

leman

macrumors Core
Oct 14, 2008
19,518
19,669
Nvidia/AMD uses their own hardware for training, why would it be weird for Apple to?

Because Apple doesn't make any hardware for training at large scale. Current Apple Silicon is perfectly adequate for prototyping smaller models (either using the GPU or the AMX units), and Apple has long been a leader in energy-efficient ML inference on simple models. But it is simply not the right tool for the job for the big stuff. Even if Apple overhauls their GPU architecture to be 4-8x better at matrix multiplication, it still won't be suitable for large training jobs, where you need multiple GPUs with very fast interconnects. That is a datacenter niche, and Nvidia currently reigns supreme there.
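To illustrate why the interconnect is the crux: a typical large training job is data-parallel across many GPUs, and every optimizer step ends with an all-reduce of gradients over NVLink/InfiniBand. A minimal PyTorch sketch of that pattern (assuming a CUDA cluster with NCCL; the model and loop are placeholders, not a real workload):

Code:
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launched via `torchrun --nproc_per_node=8 train.py`; NCCL performs the
    # gradient all-reduce over NVLink/InfiniBand on every step.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    model = torch.nn.Linear(4096, 4096).cuda()   # placeholder model
    model = DDP(model)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                       # placeholder training loop
        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).square().mean()
        loss.backward()                          # gradients all-reduced across GPUs here
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()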

However, given the right investments and some time, Apple Silicon might become an attractive target for local inference and model development for large models.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,660
OBX
Because Apple doesn't make any hardware for training at large scale. Current Apple Silicon is perfectly adequate for prototyping smaller models (either using the GPU or the AMX units), and Apple has long been a leader in energy-efficient ML inference on simple models. But it is simply not the right tool for the job for the big stuff. Even if Apple overhauls their GPU architecture to be 4-8x better at matrix multiplication, it still won't be suitable for large training jobs, where you need multiple GPUs with very fast interconnects. That is a datacenter niche, and Nvidia currently reigns supreme there.

However, given the right investments and some time, Apple Silicon might become an attractive target for local inference and model development for large models.
I see. It just seems like Nvidia (and others) will always have a place in any sort of AI training at scale.
What models has AMD trained on their own hardware? Do you have any links on this?
It took a bit to find anything at all. AMD has a lot of marketing fluff, but it appears the LUMI Supercomputer uses AMD hardware and it appears ComPatAI has been trained on AMD's Instinct Accelerators.
first-finnish-lumi-projects-chosen-advancing-cancer-research-developing-digital-twins-of-the-earth-and-more

I'm not sure why it is so hard to find anything on it.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,660
OBX
Thinking on this more, wouldn't Apple need something to continuously train their Autonomous car project? I know Tesla uses Nvidia for training, though they are trying to replace it with Dojo (their own in house hardware) and so far are failing on that (as far as I know).


How often are LLMs retrained?
 

leman

macrumors Core
Oct 14, 2008
19,518
19,669
I see. It just seems like Nvidia (and others) will always have a place in any sort of AI training at scale.

"Always" is a long word :) ML is still a very young discipline, and the current generation of models does little more than fancy compression and regurgitation of vast bodies of data, which makes it incredibly inefficient. We don't even know if this style of ML will still be relevant in 20 years.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
AMD has a lot of marketing fluff, but it appears the LUMI Supercomputer uses AMD hardware and it appears ComPatAI has been trained on AMD's Instinct Accelerators.
first-finnish-lumi-projects-chosen-advancing-cancer-research-developing-digital-twins-of-the-earth-and-more
The link talks about an AMD customer, not AMD itself, using AMD hardware for machine learning.

Hugging Face lists the models offered by AMD. It is unclear whether AMD trained them on its own hardware, although it adapts them to work with AMD's NPU.

Out of curiosity, those are the models offered by Nvidia and Apple.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,660
OBX
"Always" is a long word :) ML is still a very young discipline, and the current generation of models does little more than fancy compression and regurgitation of vast bodies of data, which makes it incredibly inefficient. We don't even know if this style of ML will still be relevant in 20 years.
At least in the automotive realm it will probably be relevant for far longer (or at least until people driving cars is banned).

Not sure if Apple will really waste(?) their time working on an autonomous car (either in their own vehicle or via licensed hardware), but training on edge cases will be the name of the game for a long while (imo).
The link talks about an AMD customer, not AMD itself, using AMD hardware for machine learning.

Hugging Face lists the models offered by AMD. It is unclear whether AMD trained them on its own hardware, although it adapts them to work with AMD's NPU.

Out of curiosity, those are the models offered by Nvidia and Apple.
Oh that is interesting. Yeah AMD probably doesn't make any models (at least they haven't talked about any, from what I could find anyways) in-house.
 

Chuckeee

macrumors 68040
Aug 18, 2023
3,060
8,722
Southern California
I hoped it was clear from the context that I was talking about general-purpose GPU compute. Matrix stuff — yes, Nvidia is miles ahead in raw performance here, mainly because they have a matmul unit in the GPU itself. If Apple wants to compete at that front they would have to build one as well
They 100% use Nvidia GPUs. People I know working on Apple ML infrastructure use Amazon AWS. It would be entirely nonsensical for them to use their own machines for training. If you are a car company building family sedans you don't freight them over land using your own sedans, you charter trucks or trains
However, given the right investments and some time, Apple Silicon might become an attractive target for local inference and model development for large models.
This just illustrates a general issue with the Macintosh architecture that has existed since the beginning. Someone somewhere has to develop software to take full advantage of the hardware. It's nothing new, but it often seems that the full utility of the Mac hardware is limited by the available software.
 
  • Like
Reactions: tenthousandthings

leman

macrumors Core
Oct 14, 2008
19,518
19,669
This just illustrates a general issue with the Macintosh architecture that has existed since the beginning. Someone somewhere has to develop software to take full advantage of the hardware. It's nothing new, but it often seems that the full utility of the Mac hardware is limited by the available software.

To be honest, I don't think this argument applies to ML that much. Machine learning is done using frameworks, and frameworks can be updated to run on different hardware. What's more, there is now a standard Python interface for array APIs that makes different frameworks compatible with each other. This means you can write code in one framework and then run it using another one. And this cross-framework functionality will only become more common as time goes by. Also, Macs are very popular with developers and researchers, so these things never really take too long to become available.
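As a small illustration of that framework-agnostic style (a minimal sketch; the softmax helper is mine, NumPy is simply the backend passed in, and any array-API-conforming namespace could be substituted):

Code:
import numpy as np

def softmax(x, xp):
    """Numerically stable softmax written only against array-API-style calls,
    so the same code can run on any namespace that follows the standard."""
    shifted = x - xp.max(x, axis=-1, keepdims=True)
    e = xp.exp(shifted)
    return e / xp.sum(e, axis=-1, keepdims=True)

# CPU/NumPy today; hypothetically, swap in another conforming backend
# (e.g. a GPU framework's namespace) without touching the function body.
print(softmax(np.asarray([[1.0, 2.0, 3.0]]), np))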

The real issue holding Apple Silicon back on ML is hardware performance. If Apple were competitive on matmul with high-end Nvidia GPUs, we’d already have dozens of third-party implemented ML frameworks.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
The real issue holding Apple Silicon back on ML is hardware performance. If Apple were competitive on matmul with high-end Nvidia GPUs, we’d already have dozens of third-party implemented ML frameworks.
What about the power and ease of use of NVIDIA's CUDA vs. Apple's GP-GPU API? Remember that a lot of people doing ML are scientists rather than programmers.

Of course, given the size and history of the CUDA community, CUDA is currently going to be far in the lead when it comes to the collection of NVIDIA-provided pre-built application code, aftermarket pre-built application code (e.g., CUDA code downloadable from government and university websites that a scientist can adapt for their own needs), as well as integration with other software systems, like CUDA Python.
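As one concrete example of that scientist-friendly on-ramp, something like CuPy (shown purely as an illustration of the NumPy-like route onto CUDA; it assumes an NVIDIA GPU and CuPy installed, and is only one of several "CUDA Python" options) makes GPU matmul look like ordinary array code:

Code:
import cupy as cp   # drop-in NumPy-like API backed by CUDA

# Random matrices allocated directly in GPU memory.
a = cp.random.rand(4096, 4096)
b = cp.random.rand(4096, 4096)

c = a @ b                # matrix multiply runs on the GPU (cuBLAS under the hood)
print(float(c.sum()))    # pull a scalar result back to the host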

But if Apple offered the hardware and a great API, then an Apple ML community, and the attendant code base, would accumulate for Apple as well.

EDIT: I just noticed your response to @Chuckeee. It sounds like you're saying CUDA's powerful and (relatively) user-friendly API, and all the pre-built CUDA code, don't give NVIDIA an advantage when it comes to ML.
 
Last edited:

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Seeing the latest earnings report from NVIDIA, it just shows how dominant NVIDIA currently is; nobody even comes close. Why not embrace the best technologies the industry has to offer for the Mac Pro?

Even Google, who builds their own custom GPUs, has embraced NVIDIA in the end. Amazon also builds custom ARM chips, but they also heavily rely on NVIDIA; there is no way around it.
What if the Apple Silicon CPU architecture is superior to the Neoverse V2 design NVIDIA uses in the Grace CPU of its Grace Hopper superchip? By the above argument, shouldn't NVIDIA be embracing Apple's approach for its CPU stage? [Yes, the Grace CPU is more of a server-type design, so Apple's current CPU architecture wouldn't translate directly; but then NVIDIA's GPU cores aren't designed to accommodate Apple's UMA approach either.]

Of course, neither of the above is going to happen. Even when Apple could use discrete chips, they chose AMD instead. I don't know where the apportionment of the blame for this lies, but suffice it to say that Apple and NVIDIA don't play well together. Plus, of course, Apple has no interest in making chips for 3rd parties.
 
Last edited: