
deconstruct60

macrumors G5
Mar 10, 2009
12,492
4,052
Similar to how Apple jacked up the 2023 Mac Pro by $1k, knowing that demand is softening due to changing use cases and most people preferring the Mac Studio for a pro desktop.

The MP 2023 price is higher in order to adhere to Apple's rigid BTO component pricing.

i. The base RAM is up because the M2 Ultra 'floor' on RAM capacity is 64GB instead of the 32GB of the MP 2019. (Apple's 32GB -> 64GB upgrade pricing is +$400.) [running total: +$400]

ii. The base SSD for the M2 Ultra is 1TB instead of the 0.5TB of the MP 2019. (Apple's SSD capacity pricing is $400/TB, i.e. $100 per 0.256TB, so the extra 0.512TB is $200.) [running total: +$600]

iii. The base GPU of the M2 Ultra is better than a W5700X, whereas the MP 2019 base was a 580X (later a 5500X). The W5700X was about a $400 add-on. [running total: +$1,000]

The 'floor' of the SSD, GPU, and RAM is higher, so the price is higher. It has little to do with demand and FAR more to do with Apple being consistent with its sky-high pricing for BTO options across all the Mac products (not just the Mac Pro), as if those prices were 'normal' (and a 'bargain').
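
To make the arithmetic explicit, here is a minimal tally (a sketch; the per-step deltas are just the BTO prices quoted in points i-iii, and the $5,999 base is the MP 2019 list price):

```python
# Toy tally of the BTO 'floor' deltas from points i-iii above.
# Each delta is Apple's own BTO upgrade price; the running totals
# match the bracketed figures.
floor_deltas = [
    ("RAM: 32GB -> 64GB", 400),                    # point i
    ("SSD: 0.5TB -> 1TB ($400/TB * 0.5TB)", 200),  # point ii
    ("GPU: 580X-class -> W5700X-class", 400),      # point iii
]

running = 0
for label, delta in floor_deltas:
    running += delta
    print(f"{label}: +${delta} (running total: +${running})")

mp2019_base = 5_999
print(f"Implied MP 2023 floor: ${mp2019_base + running:,}")  # $6,999 -- the '$1k jack-up'
```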

The baseline Xeon W-3225 listed for $1,319.
https://ark.intel.com/content/www/u...eon-w3225-processor-16-5m-cache-3-70-ghz.html

$5,999 - $1,319 = $4,680: the MP 2019 with no CPU was just plain expensive. People are complaining about the $3K more for PCIe slots, but that has been basically true since 2019; it isn't really 'new' at all. It already had a 'low volume' tax on it.

The only chance Apple had, within their pricing structure, to keep the $5,999 price would have been to do a "Max" Mac Pro. That probably had technical challenges, like provisioning the PCIe output (no UltraFusion in the package) to make the two-input PCIe switch work right. It is also a bit problematic to permanently stick the Mac Pro with half the RAM and a smaller GPU for the long term. It would also have problems versus a Max Mac Studio in an xMac enclosure for just a couple of cards (e.g., if stuck with just x8 PCIe v4 over the slots, that gap isn't as big; it would also likely draw even more 'hate'). Even the Thunderbolt socket count would drop (like the Max Mac Studio).



Apple putting a higher RAM 'floor' on models is not a "response to lower demand". That is just Cupertino kool-aid.
If you need a bigger GPU, then you also have to buy more RAM and CPU cores... again, that is not lower user demand driving it.

It is more Apple targeting customers who have more 'up front' money, not folks who are 'revenue poor' and looking to cobble together a 'bigger' system over 3-4 years. Is the 'up front' market smaller than both of those two combined? Sure. Demand is really a matter of whom they are actually targeting. Apple left most of the folks looking for an 'affordable box to fill later' behind in 2019, if not 2013. It has been a while. There is no huge new shift with the MP 2023.
 
  • Like
Reactions: CWallace

Longplays

Suspended
May 30, 2023
1,308
1,158
The MP 2023 price is higher in order to adhere to Apple's rigid BTO component pricing.

i. The base RAM is up because the M2 Ultra 'floor' on RAM capacity is 64GB instead of the 32GB of the MP 2019. (Apple's 32GB -> 64GB upgrade pricing is +$400.) [running total: +$400]

ii. The base SSD for the M2 Ultra is 1TB instead of the 0.5TB of the MP 2019. (Apple's SSD capacity pricing is $400/TB, i.e. $100 per 0.256TB, so the extra 0.512TB is $200.) [running total: +$600]

iii. The base GPU of the M2 Ultra is better than a W5700X, whereas the MP 2019 base was a 580X (later a 5500X). The W5700X was about a $400 add-on. [running total: +$1,000]

The 'floor' of the SSD, GPU, and RAM is higher, so the price is higher. It has little to do with demand and FAR more to do with Apple being consistent with its sky-high pricing for BTO options across all the Mac products (not just the Mac Pro), as if those prices were 'normal' (and a 'bargain').

The baseline Xeon W-3225 listed for $1,319.
https://ark.intel.com/content/www/u...eon-w3225-processor-16-5m-cache-3-70-ghz.html

$5,999 - $1,319 = $4,680: the MP 2019 with no CPU was just plain expensive. People are complaining about the $3K more for PCIe slots, but that has been basically true since 2019; it isn't really 'new' at all. It already had a 'low volume' tax on it.

The only chance Apple had, within their pricing structure, to keep the $5,999 price would have been to do a "Max" Mac Pro. That probably had technical challenges, like provisioning the PCIe output (no UltraFusion in the package) to make the two-input PCIe switch work right. It is also a bit problematic to permanently stick the Mac Pro with half the RAM and a smaller GPU for the long term. It would also have problems versus a Max Mac Studio in an xMac enclosure for just a couple of cards (e.g., if stuck with just x8 PCIe v4 over the slots, that gap isn't as big; it would also likely draw even more 'hate'). Even the Thunderbolt socket count would drop (like the Max Mac Studio).



Apple putting a higher RAM 'floor' on models is not a "response to lower demand". That is just Cupertino kool-aid.
If you need a bigger GPU, then you also have to buy more RAM and CPU cores... again, that is not lower user demand driving it.

It is more Apple targeting customers who have more 'up front' money, not folks who are 'revenue poor' and looking to cobble together a 'bigger' system over 3-4 years. Is the 'up front' market smaller than both of those two combined? Sure. Demand is really a matter of whom they are actually targeting. Apple left most of the folks looking for an 'affordable box to fill later' behind in 2019, if not 2013. It has been a while. There is no huge new shift with the MP 2023.


These are basic business practices. Calling it "kool-aid" makes you sound condescending and impolite.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,492
4,052
Not really, and not to compete with others to be the compute king, but to squeeze out a little more from existing SoCs in cases where there are fewer thermal restrictions (Max in the Studio and Ultra in the Mac Pro). Is it worth it to overclock a modest 10-30%, assuming perfect scaling on all subsystems such as RAM? Marketing for sure would love it.

There is likely not going to be perfect scaling on all subsystems. LPDDR isn't designed for 'overclocking' flexibility. Similar issues likely apply to the internal mesh and the data interchange bus (which already puts CPU clusters on a bandwidth budget for QoS reasons).

I think you miss one of the primary points of Apple's design approach. Apple doesn't try to 'save' the clock uplift just for the upper 10% of the line-up. When Apple picks a new fab process, they push a healthy chunk of the clock uplift through the whole line-up. The plain M3 will likely get a 10+% uplift. If they took most of the uplift from the new design and fab process, where does the Mac Pro get yet another 25-30% piled on top of that? (So now cumulatively in the 35-40% range.) If the fab process is designed to afford a 20% uplift and you are looking for 40%, then you are either way, way out on the diminishing-returns curve (which is BAD for very high core count designs) or very significantly mutating the cores into bigger versions to deal with all the adverse side effects.
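
Back-of-the-envelope on the compounding (a sketch; the 10% and 25-30% figures are the hypotheticals from the paragraph above, not measured numbers):

```python
# Clock uplifts compound multiplicatively, they don't add.
fab_uplift = 0.10             # hypothetical whole-lineup uplift from the new process
mac_pro_extra = (0.25, 0.30)  # hypothetical extra uplift reserved for a Mac Pro bin

for extra in mac_pro_extra:
    cumulative = (1 + fab_uplift) * (1 + extra) - 1
    print(f"+{fab_uplift:.0%} then +{extra:.0%} => +{cumulative:.1%} cumulative")
# ~+37.5% and ~+43.0%: the 'cumulatively 35-40% range' that lands a
# high-core-count design far out on the diminishing-returns curve.
```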

Marketing would have to be thoroughly in "monkey see, monkey do" mode to be excited about that (let's mindlessly copy what Intel/AMD do). Apple can go to higher core counts and NOT lose single-thread speed with a 'regular' die (as opposed to some 'princess and the pea' die that just "happens to work"). Which of their competitors can do that? Not AMD/Intel. If you can do that with 'regular' dies, you also likely have a competitive advantage on SoC cost, or margin, or both. If Marketing is spending most of their time matching solutions to customers, they have something. Imitating Intel's product matrix exactly isn't going to buy them much; it is a widely different set of customers.

Any kind of customer app with a blended 30/60 or 60/30 ST/MT mix is going to run pretty well on an SoC that is good at both, as opposed to an SoC that is just aiming at some outlier benchmark 'win'.
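
As a toy illustration of that point (made-up speed numbers, not benchmark data): total runtime is roughly the ST share of the work divided by ST speed plus the MT share divided by MT speed, so a design that trades away ST speed for an MT benchmark 'win' loses on blended work.

```python
# Toy blended-workload model. Speeds are relative throughputs of two
# hypothetical SoCs; the blends approximate the ST/MT mixes above as
# 60/40 and 30/70. Lower runtime is better.
def runtime(st_frac, mt_frac, st_speed, mt_speed):
    return st_frac / st_speed + mt_frac / mt_speed

blends = [(0.6, 0.4), (0.3, 0.7)]
balanced = {"st_speed": 1.0, "mt_speed": 1.0}    # good at both
mt_outlier = {"st_speed": 0.5, "mt_speed": 1.6}  # halves ST chasing an MT 'win'

for st, mt in blends:
    t_bal = runtime(st, mt, **balanced)
    t_out = runtime(st, mt, **mt_outlier)
    print(f"ST/MT {st:.0%}/{mt:.0%}: balanced={t_bal:.2f}, MT-outlier={t_out:.2f}")
# balanced=1.00 in both blends; the outlier comes in at ~1.45 and ~1.04.
```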



I read the post but can't find references to clocking existing SoCs higher. The putative Extreme chip is a 4X Max, not an overclocked Ultra. The hypothesis is that an overclocked Ultra would be cheaper than a completely new 4X Max design.

The 4x Max is far more likely 'putative' because it is, at best, a dubious (if not bad) chiplet design in the first place, rather than because of some chronic problem of being clocked too slow.
 
  • Like
Reactions: CWallace

dmccloud

macrumors 68040
Sep 7, 2009
3,122
1,883
Anchorage, AK
My apologies for the confusion.

What I meant was that Nvidia saw the projections of desktop dGPU sales and adjusted prices to cover the falling demand.

Similar to how Apple jacked up the 2023 Mac Pro by $1k, knowing that demand is softening due to changing use cases and most people preferring the Mac Studio for a pro desktop.

Mining ain't as profitable as before, and China banned it, so that impacted desktop dGPU sales.

Actually, when GPU prices spiked during the cryptomining boom, Nvidia did nothing to quell the massive spike in prices, then just kept that structure in place with subsequent cards. Nvidia is slowly pricing themselves out of the reach of most potential customers, and releasing cards at boneheaded price points like the 4060Ti (which isn't selling in significant quantities anywhere). While there is an overall drop in GPU sales, it's primarily from Nvidia since they control the majority of the GPU market. Intel's dGPUs barely make a dent in the overall market, and AMD is holding steady mainly because they're offering better price points than Nvidia.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
Actually, when GPU prices spiked during the cryptomining boom, Nvidia did nothing to quell the massive spike in prices, then just kept that structure in place with subsequent cards. Nvidia is slowly pricing themselves out of the reach of most potential customers, and releasing cards at boneheaded price points like the 4060Ti (which isn't selling in significant quantities anywhere). While there is an overall drop in GPU sales, it's primarily from Nvidia since they control the majority of the GPU market. Intel's dGPUs barely make a dent in the overall market, and AMD is holding steady mainly because they're offering better price points than Nvidia.
Even with Nvidia's high prices, they & Intel are gaining market share at the expense of AMD.

[chart: desktop dGPU market share by vendor]


Shipments of desktop dGPUs are just softening overall, with AMD & Nvidia taking a hit while Intel doubles its market share.

[chart: desktop dGPU shipment volumes]


This reflects a 20-year downward trend to a 20-year all-time low. No wonder AMD/Intel/Nvidia are investing heavily in the growth market of A.I. chips.

[chart: 20-year desktop dGPU shipment trend]



To me this is just a changing market, wherein "good enough" is preferred when upgrading.

People who insist on the last 1% of performance will always opt for a separate CPU, GPU, RAM and SSD for "perfect".

That market appears to be shrinking, its relevance approaching that of the mainframe market.

Apple's Mac SoCs, with GPU cores that are the best performing among all iGPUs (not dGPUs), are perfectly positioned to take advantage of this.

For AMD/Intel/Nvidia, in its place are A.I. chips, which briefly pushed Nvidia's market cap to $1 trillion.


“There’s an old Wayne Gretzky quote that I love. ‘I skate to where the puck is going to be, not where it has been.’ And we’ve always tried to do that at Apple. Since the very very beginning. And we always will.” - Steve Jobs
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,492
4,052
Actually, when GPU prices spiked during the cryptomining boom, Nvidia did nothing to quell the massive spike in prices, then just kept that structure in place with subsequent cards.

In the first case, "mania"-driven demand very likely leads to a bust: "pet rocks", "Tickle Me Elmo" dolls, etc. Demand heavily fueled by FOMO usually gets to the point where most folks move on to the next FOMO fad, or just come to their senses, or both.

If Nvidia had tossed an even larger supply of GPUs into the crypto craze, the 'bust glut' now would be just that much deeper and more problematic for releasing new product.

As to keeping that price structure: before the crypto craze, were the GPU cards really all that profitable? A steady release of 'hacks of the month' to goose games x, y, and z faster, or to fix a bug from the driver side.


Everyone and their mother chasing TSMC wafers isn't making them any cheaper. Nvidia's 'do anything' approach of the biggest possible die is only going to get more expensive at the top of the line-up. That cost going up pulls the rest of the line-up higher (at least on initial list prices). It is now roughly the same size die from a substantively more expensive wafer, because most folks are clamoring for more performance at higher resolutions and/or faster frame rates.

The creep in GPU power consumption isn't going to keep board costs down either.



Nvidia is slowly pricing themselves out of the reach of most potential customers, and releasing cards at boneheaded price points like the 4060Ti (which isn't selling in significant quantities anywhere). While there is an overall drop in GPU sales, it's primarily from Nvidia since they control the majority of the GPU market.

Nvidia has the biggest 'bust supply glut' out there. As much as folks want to paint a picture that they didn't do anything to try to match the crypto-craze demand spike... they did, to some extent.

And Nvidia has far more profitable products to spend N4 wafers on. Crypto died off, but the AI and data center stuff is all 'red hot'. So if they make twice as much selling one H100 as selling 5-6 4060 Tis, then a 4060 Ti slowdown doesn't matter as much. Nvidia may drop out of the bottom half of the consumer GPU card market; they have plenty of other revenue sources now. It is a company with a much broader set of products than it had 5-10 years ago.
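
As a toy opportunity-cost sketch (the dollar figures are hypothetical placeholders; only the ratio from the paragraph above matters):

```python
# Hypothetical per-unit margins, chosen so one H100 is worth roughly
# twice the margin of 5-6 4060 Tis, as claimed above. Not real figures.
h100_margin = 20_000
gpu_4060ti_margin = 1_800

ratio = h100_margin / gpu_4060ti_margin
print(f"One H100 ~= {ratio:.1f}x the margin of a single 4060 Ti")  # ~11x, i.e. ~2x of 5-6 cards
# Every N4 wafer spent on consumer dies instead of H100s carries that
# opportunity cost, so a 4060 Ti slowdown barely stings.
```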


Intel's dGPUs barely make a dent in the overall market, and AMD is holding steady mainly because they're offering better price points than Nvidia.

Intel's and AMD's iGPUs (and Apple's iGPU) will give them a leg up on covering the bottom 1/3 of whatever is left of the dGPU market. They'll have the volume to amortize the fixed-cost overhead of selling into that market. Nvidia doesn't have a huge driver, yet, for large volume in that space. They probably will retreat over time (and, for example, drop most of the 'Ti' offerings except toward the very top of the line-up). The pricing of the 4060 Ti isn't the 'mistake' so much as offering a 4060 Ti in the first place; it probably was not necessary. It was probably on a roadmap years ago, so there were commitments to release it, but it is a dubious product outside of chronic shortages and increasing desktop placements (i.e., market conditions from years and even more years ago).
 
  • Like
Reactions: CWallace

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,249
I think you miss one of the primary points of Apple's design approach.
It's not much to miss, is it? There are very few SoC versions compared to the Intel/AMD/NVIDIA CPU and GPU versions, and no higher clock speeds. Good for economies of scale but less good for the edge cases. I gather your answer to the question put by the OP is "no". You are likely correct. However, personally I would explore the possibility of having varying clock speeds in different systems as a differentiator. None but Apple knows if it is a good approach.
 
  • Like
Reactions: BenRacicot

CWallace

macrumors G5
Aug 17, 2007
12,510
11,509
Seattle, WA
It's not much to miss, is it? There are very few SoC versions compared to the Intel/AMD/NVIDIA CPU and GPU versions, and no higher clock speeds. Good for economies of scale but less good for the edge cases.

Apple does not seem to be interested in the "edge cases". Even if they had been able to put an SoC with 48 compute cores, 152 GPU cores, and access to 384GB of RAM into the 2023 Mac Pro, this forum would still be full of people complaining about no GPU cards, "only" 384GB of RAM, and that the 152-core GPU can't run Call of Duty with as many FPS as a 4090 Ti. And don't forget the howling at whatever five-figure price Apple would charge for such a configuration, even though the "real" workstations people would compare it to have six-figure price tags. :p

However, personally I would explore the possibilities to have varying clock speeds in different systems as a differentiator. None but Apple knows if it is a good approach.

I believe it is a good approach. Apple is not a CPU/GPU company. The M family of SoCs is designed solely to fit the needs and objectives Apple has for them. They are not trying to sell them to third-party OEMs and BYO enthusiasts like Intel/AMD/Nvidia do, so they have no need to artificially differentiate them to create graduated marketing tiers with graduated MSRPs.

Apple could have started with the Ultra and then successively crippled it in core counts, memory bandwidth, video processors, etc., like Intel/AMD/Nvidia do to make chips "cheaper", but then the lower-end models suffer from the compromises (look at the 4060 Ti, which is only a little better than the 3060 Ti, and in a number of games actually worse, thanks to the latter's wider memory bus).
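
On the memory-bus point, a rough bandwidth comparison (using the commonly cited specs for the two cards; treat the numbers as approximate):

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate in Gbps.
cards = {
    "RTX 3060 Ti": (256, 14.0),  # 256-bit GDDR6 @ 14 Gbps (commonly cited)
    "RTX 4060 Ti": (128, 18.0),  # 128-bit GDDR6 @ 18 Gbps (commonly cited)
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name}: {bus_bits / 8 * gbps:.0f} GB/s")
# ~448 GB/s vs ~288 GB/s: the newer card's narrower bus leaves it with far
# less raw bandwidth, which is why it can lose to its predecessor in some games.
```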

As Mac consumers, this is a benefit to us since it means each SoC model is optimized for the machine it is going into and the use cases Apple expects to be generally run on those machines. Rather than being saddled with a sub-optimal SoC with serious tradeoffs to lower the sales price, each Mac is the "best" it can be. And as expensive as Macs are, think how much more expensive they would be if every one of them had an Ultra or Max SoC in them, just with much of its functionality disabled.
 
Last edited:

thenewperson

macrumors 6502a
Mar 27, 2011
990
908
It not much to miss is it? There are very few SoCs version as compared to Intel/AMD/NVIDIA CPU and GPU version and no higher clock speeds. Good for the economy of scale but less good for the edge cases. I gather your answer to the question put by the OP is "no". You are likely correct. However, personally I would explore the possibilities to have varying clock speeds in different systems as a differentiator. None but Apple knows if it is a good approach.
They seem to be going down that path anyway. The 16” Max runs at 3.8GHz instead of the typical 3.5 (I’m assuming it’s the same for the Studio & MP), and there were rumours of the MP Ultra running at 4.2GHz in tests, but that doesn’t seem to have happened.
 

BenRacicot

macrumors member
Original poster
Aug 28, 2010
80
45
Providence, RI
They seem to be going down that path anyway. The 16” Max runs at 3.8GHz instead of the typical 3.5 (I’m assuming it’s the same for the Studio & MP), and there were rumours of the MP Ultra running at 4.2GHz in tests, but that doesn’t seem to have happened.
Oh, the M2 is running faster clocks than the M1?
 