
Matty_TypeR

macrumors 6502a
Oct 1, 2016
641
555
UK
The 4090 is a monster, lol, as is the Intel Xeon W9-3495X, a true workstation-class pro chip. Even the proposed M3, two years out, will be left to drown its sorrows against 5090 performance, which will knock Apple's M3 out of the park.

The new Mac Pro 8.1 should drop the "Pro" part, as it's far from a pro machine.
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,793
Wow. If those numbers are accurate, they make the Mac ‘doh look like a real rip-off sad sack. Those are for middling machines, not high end! And that’s just the regular CPU; forget the spanking it takes on its GPU… Wow. And they have the gall to charge $7k for this mess.
The more I think about these benchmarks, the more shocked I am. The “Ultra” version of this chip doesn’t even keep up with a middling Intel desktop. Its graphics do not keep up with a woefully old 6900XT. Yet they have the audacity to charge $7k for this pile of garbage.

Short headline version: Apple woefully loses the desktop and cannot compete in any way.

Wow.

Apple is just dead at the desktop level and completely cannot compete. What is pathetic is that their top-of-the-line machine at $7k cannot even compete in the mid-level desktop market… we don’t even get to the high end. Oh my goodness, what a tragedy.

Further, that means *the only way* Apple could even begin to hope to become competitive is to get an M3 Extreme out as soon as possible and give it third-party graphics card support, or they are out of the competition permanently.

Perhaps I’m misunderstanding something. Please, someone correct me. I’m honestly in shock at how badly these numbers compare. I’m praying I’m missing something here. This is just too sad to be correct?!
 
Last edited:

goMac

Contributor
Apr 15, 2004
7,663
1,694
I'm wondering how things would look if the "M2 Extreme" had actually happened.

Expansion would still be a concern. ECC would still be a concern. But performance might have looked a lot better. And they'd have 48 free lanes of PCIe Gen 4 instead of 16.
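
For rough context on what that lane count means in bandwidth terms, here's a back-of-the-envelope sketch. The per-lane figure is the standard PCIe 4.0 theoretical number (16 GT/s with 128b/130b encoding), not something from this thread:

```python
# Theoretical PCIe Gen 4 bandwidth per direction.
# 16 GT/s per lane with 128b/130b encoding -> ~1.97 GB/s usable per lane.
GB_PER_LANE = 16 * (128 / 130) / 8  # ~1.97 GB/s

for lanes in (16, 48):
    print(f"{lanes} lanes: ~{lanes * GB_PER_LANE:.1f} GB/s per direction")
# 16 lanes: ~31.5 GB/s per direction
# 48 lanes: ~94.5 GB/s per direction
```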
 
  • Like
Reactions: ZombiePhysicist

thunng8

macrumors 65816
Feb 8, 2006
1,032
417
The more I think about these benchmarks, the more shocked I am. The “Ultra” version of this chip doesn’t even keep up with a middling Intel desktop. Its graphics do not keep up with a woefully old 6900XT. Yet they have the audacity to charge $7k for this pile of garbage.

Short headline version: Apple woefully loses the desktop and cannot compete in any way.

Wow.

Apple is just dead at the desktop level and completely cannot compete. What is pathetic is that their top-of-the-line machine at $7k cannot even compete in the mid-level desktop market… we don’t even get to the high end. Oh my goodness, what a tragedy.

Further, that means *the only way* Apple could even begin to hope to become competitive is to get an M3 Extreme out as soon as possible and give it third-party graphics card support, or they are out of the competition permanently.

Perhaps I’m misunderstanding something. Please, someone correct me. I’m honestly in shock at how badly these numbers compare. I’m praying I’m missing something here. This is just too sad to be correct?!
Where did you get the idea that its graphics do not keep up with the 6900XT? As more benchmarks come out, it is much faster than the 6900XT. E.g. Blender: almost 70% faster.


Geekbench Compute using Metal, not OpenCL as posted before.


The 76-core Ultra gets approx. 280K, which is significantly higher than the 6900XT.
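
Putting that next to the ~235K 6900XT figure quoted elsewhere in this thread, the gap is roughly 19%. A trivial sanity check (both numbers are the approximate ones from these posts, not official):

```python
# Approximate Geekbench compute scores quoted in this thread.
m2_ultra_76core = 280_000  # ~280K, 76-core M2 Ultra (Metal)
rx_6900xt = 235_000        # ~235K, quoted elsewhere in the thread

gap = 100 * (m2_ultra_76core / rx_6900xt - 1)
print(f"M2 Ultra ahead by ~{gap:.0f}%")  # -> ~19%
```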

When newer benchmarks come out for video and photo processing, the Ultra will destroy any Intel CPU and any Nvidia or AMD GPU, including the RTX 4090.
 
Last edited:

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,793
Where did you get the idea that its graphics do not keep up with the 6900XT? As more benchmarks come out, it is much faster than the 6900XT. E.g. Blender: almost 70% faster.


Geekbench Compute using Metal, not OpenCL as posted before.


I saw a report earlier in one of these threads that the M2 Ultra was doing around 220,000, and then saw that the 6900XT was doing about 235,000.

I see what you link to shows the Ultra doing 267,000! So what I said isn’t correct. Someone else posted that Geekbench changed its metrics from 6.0 to 6.1, so you can no longer compare across versions and must do runs on the same version (either 6.0 or 6.1); maybe that has something to do with the differing M2 Ultra scores we’ve seen in these threads.

Anyway, if what you link to is accurate, then yes, the M2 Ultra beats out the old 6900XT by a fair amount, which is great (and still loses big time to a 7900... much less any Nvidia stuff).

I happily stand corrected; I was wrong, and thank you for pointing it out. It seems, at least based on your link, that the M2 Ultra is indeed faster than the old 6900.
 

avro707

macrumors 68020
Dec 13, 2010
2,247
1,628
The 4090 is a monster, lol, as is the Intel Xeon W9-3495X, a true workstation-class pro chip. Even the proposed M3, two years out, will be left to drown its sorrows against 5090 performance, which will knock Apple's M3 out of the park.

The new Mac Pro 8.1 should drop the "Pro" part, as it's far from a pro machine.
You aren’t supposed to compare it with ECC-memory workstations, because this isn’t a workstation; it’s a machine carefully crafted for the demands of its users. Lol. ;)

I agree with you; they shouldn’t have bothered updating the Mac Pro until they had something worthy of the name.
 

ikir

macrumors 68020
Sep 26, 2007
2,174
2,355
It’s still slower than a 6900XT, which I have, and that is a sad old card at this point. But it is good progress, and if the M3 Extreme continues to scale well, that may bode well *if* they can put out an upgrade in a year.
In real life, Apple Silicon is much faster thanks to its encoders/decoders and efficiency. I still love Radeon GPUs, but Apple’s GPU with unified memory is a game changer.
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,793
In real life, Apple Silicon is much faster thanks to its encoders/decoders and efficiency. I still love Radeon GPUs, but Apple’s GPU with unified memory is a game changer.
What you say only holds for encoding/decoding video. There are many more things a GPU can do. Your encoder won’t improve game frame rates. It won’t do 3D faster. It won’t do mass AI training faster. It won’t do crypto faster. It won’t do scientific/engineering work faster. Etc., etc.

If it works for you, that’s great. It still is not a Mac Pro.
 

Dopemaster

macrumors newbie
Original poster
Mar 9, 2022
29
17
Current assumption...?
  • 220K = 60-core GPU
  • 280K = 76-core GPU
That seems to be the early indication. If proven correct, Apple has solved the issue of scaling, and I’ll be putting in an order for the 76-core.

Ternus’s quote re dGPUs seemed pretty final, so I will be opting for the Studio over the Pro.

The upside of the small form factor is that I’ll be able to take all that horsepower to international shoots, hopefully using the Vision Pro to screen/edit rushes on a personal IMAX screen!
 
  • Like
Reactions: ikir

Kronsteen

macrumors member
Nov 18, 2019
76
66
Impossible to say, given GB Compute doesn’t identify GPU core count in the Ultras, but it seems as though the 220K results may be the base Ultra models arriving early, and the 76-core configs may be hitting up to 280K. Very impressive scaling if so:

View attachment 2216258

Where does this data come from?

I’ve just looked at the Metal benchmark table on the Geekbench website (in the ‘Benchmark charts’ section of the Geekbench browser); it is still showing 220551 for the M2 Ultra.

It’s curious, though, that 220551 × 76 / 60 is approx. 280,000… 🤠
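
Spelling that arithmetic out as a naive linear-scaling model (the 220551 figure is the Geekbench chart entry; 60 and 76 are the two known Ultra GPU core counts):

```python
# Naive assumption: Metal compute score scales linearly with GPU cores,
# and the 220551 chart entry corresponds to a 60-core M2 Ultra.
base_score, base_cores = 220_551, 60

predicted = base_score * 76 / base_cores
print(f"Predicted 76-core score: ~{predicted:,.0f}")
# -> ~279,365, close to the ~280K sightings
```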
 

Dopemaster

macrumors newbie
Original poster
Mar 9, 2022
29
17
Where does this data come from?

I’ve just looked at the Metal benchmark table on the Geekbench website (in the ‘Benchmark charts’ section of the Geekbench browser); it is still showing 220551 for the M2 Ultra.

It’s curious, though, that 220551 × 76 / 60 is approx. 280,000… 🤠
It was on the Metal benchmark table when just five machines had run benchmarks. It has now dropped down… But yes, the arithmetic you mention makes me think that perhaps some top-specced machines in the hands of reviewers ran benchmarks before the first wave of 60-core machines reached consumers (the 76-core configs are shipping with a slight delay). Anyway, we will know soon enough.
 

randy85

macrumors regular
Oct 3, 2020
150
136
Those scores are actually looking pretty good. With all the optimisation yet to come, plus the much better H.265 performance (over Intel), I'm sure it actually feels incredibly quick for a lot of work.

Obviously the big issue will always be that you can get the same performance for £3k less with the Studio...
 
  • Like
Reactions: ikir

Kronsteen

macrumors member
Nov 18, 2019
76
66
It was on the Metal benchmark table when just five machines had run benchmarks. It has now dropped down… But yes, the arithmetic you mention makes me think that perhaps some top-specced machines in the hands of reviewers ran benchmarks before the first wave of 60-core machines reached consumers (the 76-core configs are shipping with a slight delay). Anyway, we will know soon enough.
Thanks. That makes complete sense.

If our hypothesis, that this is showing the distinction between 60- and 76-core GPUs, is correct, it perhaps also implies a problem with the GB website: it does not allow for the fact that a single processor (in this case, the M2 Ultra) actually has two GPU variants. I suppose the answer may simply be to use a name that identifies which variant is being used. Otherwise, it is going to end up with a number somewhere between 220K and 280K that is inaccurate for both.
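
As an illustration of that last point, if the chart entry is just a mean over whatever submissions happen to come in, a mixed pool of 60- and 76-core results lands on a number that describes neither config (the submission mix below is made up):

```python
# Hypothetical mix of submissions under one "M2 Ultra" chart entry.
scores = [220_000] * 8 + [280_000] * 2  # eight 60-core, two 76-core runs

print(sum(scores) / len(scores))  # 232000.0 -> accurate for neither config
```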

If, by the way, this arithmetic really is correct, it suggests impressively linear scaling (of both the workload and the GPU’s performance).
 

Dopemaster

macrumors newbie
Original poster
Mar 9, 2022
29
17
Thanks. That makes complete sense.

If our hypothesis, that this is showing the distinction between 60- and 76-core GPUs, is correct, it perhaps also implies a problem with the GB website: it does not allow for the fact that a single processor (in this case, the M2 Ultra) actually has two GPU variants. I suppose the answer may simply be to use a name that identifies which variant is being used. Otherwise, it is going to end up with a number somewhere between 220K and 280K that is inaccurate for both.

If, by the way, this arithmetic really is correct, it suggests impressively linear scaling (of both the workload and the GPU’s performance).
Hmmm, this seems to destroy the hypothesis; the reviewer confirms 220K on a 76-core: https://techcrunch.com/2023/06/12/apple-m2-ultra-mac-studio-same-shell-far-more-firepower/

Which, if that initial 280K was in error, now makes me wonder where all the 60-core scores are. The plot thickens…
 

Kronsteen

macrumors member
Nov 18, 2019
76
66
Hmmm, this seems to destroy the hypothesis; the reviewer confirms 220K on a 76-core: https://techcrunch.com/2023/06/12/apple-m2-ultra-mac-studio-same-shell-far-more-firepower/

Which, if that initial 280K was in error, now makes me wonder where all the 60-core scores are. The plot thickens…
Curiouser and curiouser … (Well spotted, anyway!)

The TechCrunch article you found is interesting, although I’m not entirely sure about the comparison with the Nvidia 4070 and 4080. That is to say, the author is presumably using OpenCL figures for the 4070 and 4080 (I don’t think there can be any Geekbench 6 Metal figures for the Nvidia GPUs?).

If so, the comparison seems to be predicated on an assumption that GB6 Metal and OpenCL figures are directly comparable. At one level, I think they are: I believe they are based on the same suite of tests (Gaussian blur, particle physics and so on). But surely there is also an implicit assumption that the Metal and OpenCL implementations being used are equally efficient.

That is what I’m unsure about. For instance, in the case of the M2 Ultra, the OpenCL figure is *much* lower than the Metal one. Hardly surprising, given Apple’s abandonment of OpenCL. So it’s surely possible that the OpenCL figures for Nvidia GPUs are not a true representation. Their CUDA results might (should?) be much better.

In fact, GB5 CUDA figures for the 4090 are mostly well over 400,000, but the OpenCL numbers are quite a bit lower. I realise that GB5 and GB6 are not directly comparable, but this may, nonetheless, indicate that the GB6 OpenCL figures materially understate the Nvidia GPUs’ true performance.
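
To make the concern concrete, here is a toy model of the same hardware measured through two APIs of different implementation quality. The efficiency factors are invented purely for illustration, not measurements of CUDA or OpenCL:

```python
# Toy model: observed score = hardware potential x API implementation quality.
def observed_score(hw_potential: float, api_efficiency: float) -> float:
    return hw_potential * api_efficiency

potential = 450_000  # hypothetical ceiling for a high-end GPU
print(observed_score(potential, 0.95))  # well-tuned API: 427500.0
print(observed_score(potential, 0.70))  # neglected API: 315000.0
# Comparing the 315K figure against another vendor's well-tuned API
# number would understate the first GPU's real capability.
```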
 
Last edited:
  • Like
Reactions: Dopemaster

Boil

macrumors 68040
Oct 23, 2018
3,466
3,157
Stargate Command
M2 Ultra is slower than the Intel and AMD CPUs. It's slaughtered in Handbrake, Cinebench, and Blender.

Slaughtered in two benchmarks that heavily favor Intel/AMD, shocking...!

ASi/Metal has made great progress in Blender (a lot of that due to Apple engineers working with the Blender team), but Nvidia still has OptiX going for it; M3-series SoCs should bring stronger GPU cores and hardware ray tracing, so maybe that gap will close...?
 

impulse462

macrumors 68020
Jun 3, 2009
2,097
2,878
Slaughtered in two benchmarks that heavily favor Intel/AMD, shocking...!

ASi/Metal has made great progress in Blender (a lot of that due to Apple engineers working with the Blender team), but Nvidia still has OptiX going for it; M3-series SoCs should bring stronger GPU cores and hardware ray tracing, so maybe that gap will close...?
None of this is relevant for people who buy this Mac Pro because... shocker... it isn't upgradeable.
 

Dopemaster

macrumors newbie
Original poster
Mar 9, 2022
29
17
Curiouser and curiouser … (Well spotted, anyway!)

The TechCrunch article you found is interesting, although I’m not entirely sure about the comparison with the Nvidia 4070 and 4080. That is to say, the author is presumably using OpenCL figures for the 4070 and 4080 (I don’t think there can be any Geekbench 6 Metal figures for the Nvidia GPUs?).

If so, the comparison seems to be predicated on an assumption that GB6 Metal and OpenCL figures are directly comparable. At one level, I think they are: I believe they are based on the same suite of tests (Gaussian blur, particle physics and so on). But surely there is also an implicit assumption that the Metal and OpenCL implementations being used are equally efficient.

That is what I’m unsure about. For instance, in the case of the M2 Ultra, the OpenCL figure is *much* lower than the Metal one. Hardly surprising, given Apple’s abandonment of OpenCL. So it’s surely possible that the OpenCL figures for Nvidia GPUs are not a true representation. Their CUDA results might (should?) be much better.

In fact, GB5 CUDA figures for the 4090 are mostly well over 400,000, but the OpenCL numbers are quite a bit lower. I realise that GB5 and GB6 are not directly comparable, but this may, nonetheless, indicate that the GB6 OpenCL figures materially understate the Nvidia GPUs’ true performance.
You're right, it's never going to be an apples-to-apples comparison (excuse the pun) between OpenCL and Metal testing. Helpful as a rough guide, but better to wait for real-world testing. Another point is that Geekbench Compute benchmarking leans heavily on still-image processing, which may not be relevant for many use cases.

Here's an interesting one: in DaVinci Resolve, the M2 Ultra actually beats the RTX 4090 in render tests:

View attachment: Screen Shot 2023-06-13 at 3.42.54 pm.png
 
Last edited:
  • Like
Reactions: Macintosh IIcx

thunng8

macrumors 65816
Feb 8, 2006
1,032
417
You're right, it's never going to be an apples-to-apples comparison (excuse the pun) between OpenCL and Metal testing. Helpful as a rough guide, but better to wait for real-world testing. Another point is that Geekbench Compute benchmarking leans heavily on still-image processing, which may not be relevant for many use cases.

Here's an interesting one: in DaVinci Resolve, the M2 Ultra actually beats the RTX 4090 in render tests:

View attachment 2217272
I bet the Ultra will also do extremely well at photo editing, most likely beating out the highest-end Intel and AMD processors paired with an Nvidia RTX 4090.

These are real-world application tests; you cannot get this from synthetic benchmarks.
 
  • Like
Reactions: Dopemaster