
altaic

macrumors 6502a
Jan 26, 2004
711
484
It does, but it's also very expensive to get. For an M1 Ultra you need a $4,000 Mac, and it comes with a measly 64GB of RAM, a 1TB SSD, and a binned 48-core GPU.

[Going on a tangent]
The 7950X is $699, say an RX 7800 is $899, and the rest of the system is $1,500, so the total = $3,100.

That would be $900 less than a Mac Studio. Apple's value just took a very hard hit, and the perf is not even top of the range, as it's lacking in ST.
I sincerely doubt that the rest of that system would cost $1500. Just the “measly” 64GB of DDR5 6400 costs $600-700, which is nearly half of your budget. And better hope that mobo has more than 4 memory slots or you’ll be stuck in measlytown waiting for higher density RAM while Mac users are having decadent 128GB parties just down the road.
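Tallying the parts with the RAM estimate above makes the squeeze obvious. A quick sketch (all prices are figures quoted in this thread, not current retail):

```python
# Rough BOM tally for the hypothetical PC build being debated.
# Prices are the posters' figures from this thread, not retailer quotes.
parts = {
    "Ryzen 9 7950X": 699,
    "RX 7800 (rumored)": 899,
    "64GB DDR5-6400": 650,   # midpoint of the $600-700 estimate above
    "rest of system": 850,   # what's left of the claimed $1,500 after RAM
}
total = sum(parts.values())
print(total)  # 3098 -- roughly the claimed $3,100, but RAM eats ~half the "rest"
```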
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
It does, but it's also very expensive to get. For an M1 Ultra you need a $4,000 Mac, and it comes with a measly 64GB of RAM, a 1TB SSD, and a binned 48-core GPU.

[Going on a tangent]
The 7950X is $699, say an RX 7800 is $899, and the rest of the system is $1,500, so the total = $3,100.

That would be $900 less than a Mac Studio. Apple's value just took a very hard hit, and the perf is not even top of the range, as it's lacking in ST.

The AMD system also supports user upgrades like SSD and RAM, and the motherboard will even support Zen 5 for future CPU upgrades, plus GPU upgrades that are much cheaper than Apple's. AMD supports 32-bit apps, carries all the baggage of x86, and is still showing excellent numbers.

So this raises the question: why is Apple soldering everything in their desktop Macs, and what's the point of removing 32-bit support when AMD has shown it's possible to get great perf while maintaining backwards compatibility?

Apple needs to update the Mac Studio with a faster CPU and GPU. It's already outdated, as AMD's CPU has very high ST performance and great perf per watt too.
You are disregarding power draw, and that is part of the appeal of the Mac Studio, Apple's flawed fan curve notwithstanding.

A more direct Mac Studio equivalent could exist if AMD makes Phoenix Point available for socket AM5. We'll see about that; previously they have been reluctant to make their nicer APUs available for desktop builders.
 

pastrychef

macrumors 601
Sep 15, 2006
4,754
1,453
New York City, NY
It can, but most people don't want the hassle of running an OpenCore build. I haven't done a Hackintosh since July 2021, when I received my M1 Mac mini.

I have set up quite a few hackintoshes. (Please see my signature below. I'm still running Monterey on a Dell laptop.) But I will not waste any money investing in hardware that will no longer be supported by macOS in the very near future. It's a dead end. It makes much more sense to put the money towards a real Apple Silicon Mac now.
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
I have set up quite a few hackintoshes. But I will not waste any money investing in hardware that will no longer be supported by macOS in the very near future. It's a dead end. It makes much more sense to put the money towards a real Apple Silicon Mac now.

I think that's what most are doing, as I see much less interest in Hackintosh these days. An x86 system doesn't necessarily go to waste, as you can just use it as a Windows or Linux system if you have the interest and need. I do need to run Windows from time to time and also like to sometimes play with Linux. My primary desktop, and laptops, though, are all macOS these days.
 

pastrychef

macrumors 601
Sep 15, 2006
4,754
1,453
New York City, NY
I think that's what most are doing, as I see much less interest in Hackintosh these days. An x86 system doesn't necessarily go to waste, as you can just use it as a Windows or Linux system if you have the interest and need. I do need to run Windows from time to time and also like to sometimes play with Linux. My primary desktop, and laptops, though, are all macOS these days.

Personally speaking, I have never used Windows or Linux as my primary system and have no desire to ever do so. That's just not an option for me. I don't even miss Bootcamp. It took a little while but there are now Mac equivalents to all the apps that I need.

I do use Linux for server stuff but those are "set it and forget it" type stuff that I don't have to deal with daily. For that, I use Raspberry Pis for their low power consumption and passive cooling. I'm gradually leaving the X86/X64 ecosystem.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
... But is the I/O die doing anything? ...
Yeah, the I/O die contributes significantly to the power consumption of Zen 3. It hosts the L3 cache and the memory controllers (as pointed out by R!TTER). You can clearly see it in power consumption tests here: https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8 - note the huge difference between core power and package power, almost 20-30 watts for lower loads. If AMD has managed to reduce this to some meaningful figure like 5-10 watts, the entry-level CPUs will gain a lot of breathing room within their TDP bracket.

That would be more tractable if AMD had not also added to the list of tasks the I/O chip has to do. I should have phrased what I said differently: if the I/O chip has 20 functions and it is only doing 3 of them, is it relatively doing anything?

This is similar to having a benchmark that simply loaded memory, added 1 to the value, and stored it. Would that give a robust illumination of the potential power usage of a CPU core? Not really.
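For illustration, a hypothetical load/add/store "benchmark" of that sort might look like the sketch below (not any vendor's actual test); the point is how little of the chip such a loop exercises:

```python
# A deliberately trivial "benchmark" kernel: load a value, add 1, store it
# back. (Hypothetical sketch, not AMD's or anyone's actual workload.)
# The argument above is that a loop this simple exercises so little of the
# chip -- no heavy I/O, no memory-controller pressure -- that it says
# almost nothing about realistic package power draw.
def naive_increment(buf):
    for i in range(len(buf)):
        buf[i] += 1  # load, add 1, store
    return buf

print(naive_increment([0, 1, 2]))  # [1, 2, 3]
```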

If you are talking about this chart:

[Image: AnandTech chart of per-core vs. total package power for the Ryzen 9 5950X]


that ~20W gap isn't just at low core usage; it is pretty much in that range all the way across. There is a large internal backhaul network in the I/O die to shuttle high-end PCIe data, memory data, and port data in/out of the chip. If you try to turn off that internal network, the cores starve for data. To get consistently low latencies they are probably leaving it running all the time at some decent speed. (That probably won't change much in the new version: same latency problem if the disconnected chiplets are put too deeply to sleep.) For this desktop loading it is likely running all the time. Doing relatively longer-distance I/O to the outside world is power expensive. That is one reason why Apple avoids it (no DIMMs, less than a handful of PCIe lanes, Thunderbolt on retimers to get to the edge, etc.). [The 7000-series chiplets are packed closer to the I/O die. AMD is avoiding it a bit more this time.]

AMD is on a new fab process for the new I/O die, but they also added to its heap of tasks: faster DDR5 DIMMs to deal with, more automatic memory overclocking, PCIe 5.0, and over 100 ALUs of a GPU subsystem. The internal network is faster to deal with the higher bandwidth demands. If they power gate the GPU subsystem and put it to sleep, it won't add to the power consumption. But if it is lit up, is that really going to be a net power decrease?


Those 60W, 105W, and 170W geometric-mean samplings AMD did were for a variety of CPU-core-focused workloads. There is a pretty good chance that they did not weave in loads that also 'lit up' most of the I/O die at the same time. That ~20W budget for the I/O die could still be there. The improvement would be that AMD got better at putting unused parts to sleep.
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
Those 60W, 105W, and 170W geometric-mean samplings AMD did were for a variety of CPU-core-focused workloads. There is a pretty good chance that they did not weave in loads that also 'lit up' most of the I/O die at the same time. That ~20W budget for the I/O die could still be there. The improvement would be that AMD got better at putting unused parts to sleep.

I hope that they have a long list of efficiency tricks in their bag as I like more efficient processors. I do not plan to get one of these as my 2020 Windows build is fine for the few times that I need Windows but it's a consideration for the future unless Windows on ARM gets really compelling.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
It does, but it's also very expensive to get. For an M1 Ultra you need a $4,000 Mac, and it comes with a measly 64GB of RAM, a 1TB SSD, and a binned 48-core GPU.

[Going on a tangent]
The 7950X is $699, say an RX 7800 is $899, and the rest of the system is $1,500, so the total = $3,100.

Ultra (20 cores): 23K / 20 ≈ 1,150
7950X (16 cores): 24K / 16 = 1,500
13900K (24 cores): 24K / 24 = 1,000
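Those per-core figures are just each chip's multicore score divided by its core count. Sketched out (scores are the rough Geekbench numbers quoted above):

```python
# Per-core multicore throughput, as computed in the post above.
# Scores are the approximate Geekbench figures quoted in this thread.
scores = {
    "M1 Ultra": (23_000, 20),
    "7950X": (24_000, 16),
    "13900K": (24_000, 24),
}
per_core = {name: round(score / cores) for name, (score, cores) in scores.items()}
print(per_core)  # {'M1 Ultra': 1150, '7950X': 1500, '13900K': 1000}
```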




So this raises the question: why is Apple soldering everything in their desktop Macs, and what's the point of removing 32-bit support when AMD has shown it's possible to get great perf while maintaining backwards compatibility?

But these + a dGPU + the required thermal systems would not fit in a Studio, or a Mac laptop.
Apple isn't treating the desktops substantially differently than the laptops. 75+% of what Apple sells is laptops, so why would the desktops be driving the overall SoC design? That is the 'answer' to the question. Look at the desktop enclosures Apple has rolled out since October 2020 and see whether they match the 7950X's thermal envelope or not.

Apple has less than a 10% overall market share. That puts desktops down in the less-than-2.5% range. Skim out the basic Mini and iMac 24" (which are certainly good enough systems for average folks) and it's probably down to less than 1.5%. So does it make any financial sense at all for Apple to develop a mid-range desktop SoC for such a price point ($600-800 range)? [In contrast, in the overall PC market, the 90% × 25% (non-Apple, desktop) share is 2x the size of the whole Mac market (22.5%). AMD and Intel can split that in half and still each sell more SoCs than Apple does in total. For them, spending the R&D to cover that isn't a big question.]
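The share arithmetic above, spelled out (all percentages are the poster's rough figures, not official market data):

```python
# Back-of-envelope market sizing from the post above.
# Shares are the poster's rough estimates, not official data.
mac_share = 0.10                  # Apple's approximate overall PC market share
non_apple_desktop = 0.90 * 0.25   # non-Apple share x desktop share of the market
print(non_apple_desktop)          # 0.225 -- about 2.25x the entire Mac market
```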


Apple needs to update the Mac Studio with a faster CPU and GPU. It's already outdated, as AMD's CPU has very high ST performance and great perf per watt too.

Studio orders are backlogged for months. The "needs to" is not well motivated. Even if sales cool off, they'll still be selling every Studio they make. That isn't a "problem".

Can the Studio last 1.5 years on the market without an update? Probably not. Could it run until June 2023 without an update without incurring major sales issues? Sure.

It would be more than loopy Looney Tunes for Apple to sync up their whole desktop SoC development schedule to AMD's and Intel's dog and pony shows. They should work on stuff and release it when it is ready. Same thing for Intel and AMD trying to sync up their laptop SoC product introductions to Apple's. That is nuts.

One reason that AMD's dog and pony show is this week while the CPUs ship at the end of September is that they have substantive BIOS bugs that are still being worked out. When companies get so caught up in "one-upping" their competitors' dog and pony shows, as opposed to just doing their product well, it typically leads to cruftier, buggier products.

Apple's far, far, far bigger problem than the Mac Studio is the gap they are leaving at the Mac Pro level. The lower 'half' of the Mac Pro 2019 lineup is going to get whipped by these systems (especially with heavyweight cooling systems attached to them and the dGPUs). The Threadripper Pro WX5000 is doling out a beatdown on the 'upper half' of the 2019 lineup. That is only going to get worse as the RX 7000 and RTX 4000 series roll out over the next two quarters. (And that is with Zen 3 cores on the WX5000. They are essentially holding a WX7000 in reserve that they could drop at any time if they really need to 'answer' anyone encroaching on their performance. They can make fatter profits allocating those chiplets to Epyc products, but they can respond when they want to.)


The Studio doesn't have a huge problem. It delivers decent bang-for-buck for what it is. The very high ST 'problem' is more an issue for the folks in these tech forums who get all wrapped up in "who is fastest" contests (and tech 'news' sites that need to generate excitement to drive ad views). The Ultra is plenty fast enough to get real work done for real clients.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I hope that they have a long list of efficiency tricks in their bag as I like more efficient processors. I do not plan to get one of these as my 2020 Windows build is fine for the few times that I need Windows but it's a consideration for the future unless Windows on ARM gets really compelling.

Windows on ARM is likely to remain a primarily laptop-focused thing as well. AMD's laptop solution will likely be monolithic (no Infinity Fabric power overhead) and will cut the number of provisioned PCIe 5.0 lanes significantly.

These BIOS glitches and launch delays are one reason AMD doesn't get more traction in laptop design wins, even though they are getting better at competing with Intel in that space on benchmarks. The vendor support maturity is still not there for folks to trust their major revenue earners to a hiccup.
 

exoticSpice

Suspended
Original poster
Jan 9, 2022
1,242
1,952
Ultra (20 cores): 23K / 20 ≈ 1,150
7950X (16 cores): 24K / 16 = 1,500
13900K (24 cores): 24K / 24 = 1,000
The Ultra has 16 P cores and 4 E cores. Apple's E cores are actually efficiency cores in the traditional sense, so Apple's M1 Ultra is effectively a 16-core CPU with 4 not-really-powerful extra cores. Also, the Ultra is clocked much lower at 3.2 GHz, compared to >4.5 GHz on the 7950X.
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
Can the Studio last 1.5 years on the market without an update? Probably not. Could it run until June 2023 without an update without incurring major sales issues? Sure.

The Studio is a pretty good value relative to the mini so it may be able to last 18 months with the current backlog. I think that they should have priced it a little higher as there's no room for an M1 Pro mini with the current pricing. Especially when the M2 mini comes out. I'm actually surprised that it is so popular - I didn't realize that so many people need that level of compute. I have an M1 mini and an M1 Pro MacBook Pro and the mini has the compute that I currently need. The M1 Pro is actually overkill but I wanted a 16 inch screen.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
I think that they should have priced it a little higher as there's no room for an M1 Pro mini with the current pricing
There definitely is. $1,099 (same price as the higher-end mini) or $1,299 (current price + $200 premium for 16GB of RAM) seem like good places to be for a base M2 Pro.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
AMD is on a new fab process for the new I/O die, but they also added to its heap of tasks: faster DDR5 DIMMs to deal with, more automatic memory overclocking, PCIe 5.0, and over 100 ALUs of a GPU subsystem. The internal network is faster to deal with the higher bandwidth demands. If they power gate the GPU subsystem and put it to sleep, it won't add to the power consumption. But if it is lit up, is that really going to be a net power decrease?

You make a good argument. I just think that lower I/O die power consumption is the simplest explanation of how AMD can extract more performance at lower TDPs. I just kind of doubt that they would manage to improve CPU core efficiency by 50% in a single generation and a single node step.

Hopefully we will see some benchmarks next month. It's unfortunate that all the good reviewers stepped down...
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
You make a good argument. I just think that lower I/O die power consumption is the simplest explanation of how AMD can extract more performance on lower TDPs. I just kind of doubt that they would manage to improve the CPU core efficiency by 50% in a single generation and a single node step.

Hopefully we will see some benchmarks next month. It's unfortunate that all the good reviewers stepped down...

Gamers Nexus already has one, but I guess they got it at the announcement, and I don't think they've had time to run benchmarks on it yet.
 

R!TTER

macrumors member
Jun 7, 2022
58
44
AMD is on a new fab process for the new I/O die, but they also added to its heap of tasks: faster DDR5 DIMMs to deal with, more automatic memory overclocking, PCIe 5.0, and over 100 ALUs of a GPU subsystem. The internal network is faster to deal with the higher bandwidth demands. If they power gate the GPU subsystem and put it to sleep, it won't add to the power consumption. But if it is lit up, is that really going to be a net power decrease?
DDR5, PCIe 5.0, and everything power-hungry is also accompanied by faster IF, or did you miss the most important part of the I/O die equation? Not to mention they get over the "data taking longer to travel" issue with a massive L3 or X3D cache. In fact, IIRC the 5800X3D is the most efficient chip in their lineup.
You make a good argument. I just think that lower I/O die power consumption is the simplest explanation of how AMD can extract more performance on lower TDPs. I just kind of doubt that they would manage to improve the CPU core efficiency by 50% in a single generation and a single node step.
At the right "TDP" they can easily achieve that & more!
[Image: AMD Ryzen 7000 Tech Day keynote slide showing performance uplift at capped TDPs]
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
I'm impressed by the pricing. 5xxx pricing was eye-popping, though that may have been due to the pandemic and crypto-mining. It was impossible to get new parts at MSRP too. I will be looking for reviews and watching OpenCore progress in October.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
The Ultra has 16 P cores and 4 E cores. Apple's E cores are actually efficiency cores in the traditional sense, so Apple's M1 Ultra is effectively a 16-core CPU with 4 not-really-powerful extra cores. Also, the Ultra is clocked much lower at 3.2 GHz, compared to >4.5 GHz on the 7950X.

Geekbench uses them... you can't throw them out if you're going to use those scores.

The Ultra being clocked much lower is extremely likely to stay the same across the entire M-series lineup going forward. Apple had wider execution pipelines, much wider memory bandwidth paths, and bigger caches. They are losing ground on the first (the myth that x86-64 was permanently stuck at a fixed width should be fading at this point). They still have the wider-paths-to-memory advantage. The caches are backsliding a bit (once at fab process parity) this iteration. (That edge will also fade with new fab processes, as memory density isn't moving at the same rate as logic density increases.)

Pretty good chance that the M-series doesn't retake the ST lead going forward, but doesn't fall much further behind either. As long as there is incremental uplift over time, that is highly likely good enough. It isn't going to be a single-threaded drag-racing-focused SoC. More of a "get work done" truck/car focus than a "top fuel dragster" focus on a narrow niche.

As long as AMD/Intel are willing to toss perf/watt efficiency out the window to eke out some single-digit-percentage uplift in ST benchmark scores, they will likely win.

The primary 'point' of an Ultra is the GPU, not the CPU. So I'm not sure why there would be any expectation that it was out to win some CPU ST benchmark title, any more than someone putting GPU title expectations on the 2-CU RDNA2 GPU in the 7950X.
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
I had a look at Ryzen 5k parts and they are at very attractive prices. I recall lots of people buying 3k parts well after the 5k parts were out. I still really love Apple Silicon, and I just need the CPU of a mini. The attraction of a 7k system would be the ease of adding RAM, storage, and displays, but I'd want a 45- or 65-watt CPU.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
You make a good argument. I just think that lower I/O die power consumption is the simplest explanation of how AMD can extract more performance on lower TDPs. I just kind of doubt that they would manage to improve the CPU core efficiency by 50% in a single generation and a single node step.

CPU core execution increases are a small part of the overall story.



[Image: Zen 4 IPC uplift breakdown chart]


https://arstechnica.com/gadgets/202...icial-launching-september-27-starting-at-299/

this is a substantively 'wider' and bigger-cache microarchitecture. (So much for the myth that x86-64 was pragmatically stuck at a 4-wide instruction width.)

You seem to be positioning the I/O die power as some kind of magic bullet that got them most of the win. That probably isn't it. The IPC uplift is a substantive factor. The fab process allowing them to crank clocks higher is a substantive factor (base clocks running at 70-80% of the 'old' turbo; ditto for the memory).

Pragmatically, it is also not a single generation. Pretty good chance that some of the Ryzen 6000 (Zen 3+) optimizations (power gating, density library usage, etc.) were folded back into Ryzen 7000 (Zen 4), whereas the comparisons being made are original Zen 3 to Zen 4.

For the more mainstream Ryzen 7000 desktops sold without a >$500-800 GPU and running on just the iGPU, this magical silver bullet you are pointing at probably has substantively less benefit. The I/O die is tuned to run at a lower non-iGPU wattage around 60W because that is about where the chip package TDP will be, and they will need to fit the iGPU consumption in as well (which is highly likely not being measured here). However, with the iGPU turned on, there likely isn't as much "extra power" going to be delivered to the CPU chiplets, as the GPU will consume it.

That 'extra' budget is there because of balloon-squeezing the power consumption elsewhere (into a dGPU), not because the overall system dropped by a large amount. Remove the balloon squeeze and you're pretty much right back to an incremental drop from the 'old', less capable I/O die's power consumption numbers.



Hopefully we will see some benchmarks next month. It's unfortunate that all the good reviewers stepped down...

Benchmarks are not going to find it if they are not looking for it. The ones constructed around keeping the dGPU constant across system configurations are not looking for it. Few will test the iGPU option. Probably even fewer will test with a PCIe 5.0 SSD at high loads.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Pretty good chance that the M-series doesn't retake the ST lead going forward, but doesn't fall much further behind either.
Agreed, but note that it still retains the ST mobile crown and, precisely because of the tactics Intel and AMD need to use to get their desktop ST speeds to beat AS (lots of power), there's a good chance that won't change.

The mobile crown is significant -- worldwide laptop unit sales exceed those of desktops by about 2:1 (79 million desktops versus 171 million laptops, est. for 2023; source: https://www.makeuseof.com/tag/should-i-get-a-laptop-or-a-desktop-computer/ )

Further, this ST performance is available even in Apple's entry-level laptops.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
DDR5, PCIe 5.0, and everything power-hungry is also accompanied by faster IF, or did you miss the most important part of the I/O die equation?

Faster Infinity Fabric will contribute to higher power consumption, as will the other off-die I/O interfaces, with a lower rate of contribution depending on the chiplet interconnect technology used (old-school 2D bumps versus 3D micro-bumps). That power increase is better offset by the fab "power savings" options they can use, if they don't burn all of that up with the increased bandwidth-processing overhead. (A fab node improvement giving X performance increase or Y power decrease is a trade-off: if you grab all the performance uplift, you tend not to get much power savings, and vice versa.)


What I'm saying is that the I/O die is probably getting bigger power savings from more power gating (e.g., dynamically turning off ports that are not being used) than from some huge windfall where the bandwidth demands stayed almost constant and it pocketed nearly all of the fab node's power savings. The bisection bandwidth is up here, not stagnant. So some of the fab node improvement is going to need to go into a performance increase, not all into power savings.

Similarly, an "even bigger cache" isn't going to save power. If they had kept the cache size the same, perhaps they could have gotten some power savings, but if they grow the cache (with smaller transistor elements), that size growth is likely to eat into the potential power savings.

3D cache comes with a trade-off. So far that has also meant limiting the clocks on the CPU cores. It isn't a 'free lunch'.


Not to mention they get over the "data taking longer to travel" issue with a massive L3 or X3D cache. In fact, IIRC the 5800X3D is the most efficient chip in their lineup.

Does the 3D cache's overall power consumption go down? Or is the same power balloon-squeezed into different allocations, which pushes one particular efficiency metric up?




At the right "TDP" they can easily achieve that & more!
[Image: AMD Ryzen 7000 Tech Day keynote slide showing performance uplift at capped TDPs]

But who is really going to buy a 7950X and hard-cap it at 65W?

And for the mid-range Ryzen 7000 parts that are supposed to run in the 65-99W range, are you really going to see a +74% uplift, or something far closer to the mid-30% shown at the middle-right of the diagram, when using the likely configurations for that range?

There is a very good chance that 74% metric is a crafted, cherry-picked number that is a bit contrived. It may work specifically for some configurations that the 7950X will often fall into, but the fallacy is stretching that into an overall Ryzen 7000 accomplishment. Pretty good chance that doesn't pan out. (Somewhat similar to the rather premature declarations of 'horrible' E cores for Intel Gen 12 (Alder Lake), tested solely on the highest-end K-package setups with market-skewed clock settings. Really bad demographic sampling (flawed experimental design) led to lots of noise percolating on these forums.)
 