The more I think about these benchmarks the more shocked I am. The “Ultra” version of this chip doesn’t even keep up with a middling Intel desktop. Its graphics do not keep up with a woefully old 6900 XT. Yet they have the audacity to charge $7k for this pile of garbage.

Wow. If those numbers are accurate, it makes the Mac Pro look like a real rip-off sad sack. Those are middling machines, not high end! And that’s just for the regular CPU, forget the spanking it takes on its GPU… Wow. And they have the gall to charge $7k for this mess.
Where did you get that its graphics do not keep up with the 6900 XT? More benchmarks are coming out; it is much faster than the 6900 XT. E.g. Blender: almost 70% faster.
Short headline version: Apple woefully loses the desktop and cannot compete in any way.
Wow.
Apple is just dead at the desktop level and completely unable to compete. What is pathetic is that their top-of-the-line machine at $7k cannot even compete in the mid-level desktop market… we don’t even get to the high end. Oh my goodness, what a tragedy.
Further, that means *the only way* Apple could even begin to hope to become competitive is to get an M3 Extreme out as soon as possible and give it 3rd-party graphics card support, or they are out of the competition permanently.
Perhaps I’m misunderstanding something. Please, someone correct me. I’m honestly in shock at how badly these numbers compare. I’m praying I’m missing something here. This is just too sad to be correct?!
Blender Open Data (opendata.blender.org): a platform to collect, display and query the results of hardware and software performance tests, provided by the public.
Geekbench compute using Metal, not OpenCL as posted before.
You aren’t supposed to compare workstations with ECC memory, because this isn’t a workstation; it’s a machine carefully crafted for the demands of its users. Lol. The 4090 is a monster, as is the Intel Xeon W9-3495X, a true workstation pro chip. Even the proposed M3 in 2 years will see the 5090 drown its sorrows in performance and kick Apple’s M3 out of the park.
The new Mac Pro 8,1 should drop the “Pro” part, as it’s far from a pro machine.
Current assumption...?
- 220K = 60-core GPU
- 280K = 76-core GPU
In real life Apple Silicon is much faster thanks to the encoder/decoder and efficiency. I still love Radeon GPUs, but the Apple GPU is a game changer with unified memory.

It’s still slower than a 6900 XT, which I have, and which is a sad old card at this point. But it is good progress, and if the Extreme M3 continues to scale well, that may bode well *if* they can put out an upgrade in 1 year.
What you say only holds for encoding/decoding video. There are many more things a GPU can do. Your encoder won’t improve game frame rates. It won’t do 3D faster. It won’t do mass AI training faster. It won’t do crypto faster. It won’t do scientific/engineering work faster. Etc. etc.
That seems to be the early indication. If proven correct, Apple have solved the issue of scaling, and I’ll be putting in an order for the 76-core.
Impossible to say, given GB compute doesn’t identify GPU core count in the Ultras, but it seems as though the 220K results may be the base Ultra models arriving early, and the 76-core configs may be hitting up to 280K. Very impressive scaling if so:
View attachment 2216258
Where does this data come from?

It was on the Metal benchmark table, when just 5 machines had run benchmarks. It has now dropped down… But yes, the arithmetic you mention makes me think that perhaps some top-specced machines in the hands of reviewers ran benchmarks before the first wave of 60-core machines arrived to consumers (the 76-core configs are shipping with a slight delay). Anyway, we will know soon enough.
I’ve just looked at the Metal benchmark table on the Geekbench website (in the ‘Benchmark charts’ section of the Geekbench browser); it is still showing 220551 for the M2 Ultra.
It’s curious, though, that 220551 * 76 /60 is approx. 280000 …. 🤠
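For what it’s worth, the arithmetic checks out under a simple linear per-core model. A quick sketch (assuming, as hypothesised in this thread, that 220551 is the 60-core config’s Metal score):

```python
# Sketch: check whether the two Geekbench Metal figures are consistent
# with linear per-GPU-core scaling on the M2 Ultra.

def projected_score(base_score: float, base_cores: int, target_cores: int) -> float:
    """Project a score to a different core count, assuming linear scaling."""
    return base_score * target_cores / base_cores

base = 220551  # Metal score of the presumed 60-core config (from this thread)
proj = projected_score(base, base_cores=60, target_cores=76)
print(round(proj))  # 279365 -- right in line with the ~280K results
```

So if the 220K and 280K figures really are the 60- and 76-core variants, the implied scaling is essentially perfectly linear.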
Thanks. That makes complete sense.
Hmmm, this seems to destroy the hypothesis; the reviewer confirms 220K on a 76-core: https://techcrunch.com/2023/06/12/apple-m2-ultra-mac-studio-same-shell-far-more-firepower/
If our hypothesis, that this is showing the distinction between 60- and 76-core GPUs, is correct, it perhaps also implies a problem with the GB website: it does not allow for the fact that a single processor (in this case, the M2 Ultra) actually has two GPU variants. I suppose the answer may simply be to use a name that identifies which variant is being used. Otherwise, it is going to end up with a number somewhere between 220K and 280K that is inaccurate for both.
If, by the way, this arithmetic really is correct, it suggests impressively linear scaling (of both the workload and the GPU’s performance).
Curiouser and curiouser … (Well spotted, anyway!)
Which, if that initial 280K was in error, now makes me wonder where all the 60 core scores are. The plot thickens….
M2 Ultra is slower than the Intel and AMD CPUs. It's slaughtered in Handbrake, Cinebench, and Blender.
Slaughtered in two benchmarks that highly favor Intel/AMD, shocking...!

None of this is relevant for people who buy this Mac Pro because... shocker... it isn’t upgradeable.
ASi/Metal has made great progress in Blender (a lot of that due to the work of Apple engineers working with Blender), but Nvidia still has OptiX going for it; M3-series SoCs should bring stronger GPU cores and hardware ray-tracing, so maybe that gap will close...?
You're right, it's never going to be an apples-to-apples comparison (excuse the pun) between OpenCL and Metal testing. Helpful as a rough guide, but better to wait on real-world testing. Another point is that Geekbench compute benchmarking leans heavily into still-image processing, which may not be relevant for many use cases.
The TechCrunch article you found is interesting, although I’m not entirely sure about the comparison with the Nvidia 4070 and 4080. That is to say, the author is presumably using OpenCL figures for the 4070 and 4080 (I don’t think there can be any Geekbench 6 Metal figures for the Nvidia GPUs?).
If so, the comparison seems to be predicated on an assumption that GB6 Metal and OpenCL figures are directly comparable. At one level, I think they are: I believe they are based on the same suite of tests (Gaussian blur, particle physics and so on). But surely there is also an implicit assumption that the Metal and OpenCL implementations being used are equally efficient.
That is what I’m unsure about. For instance, in the case of the M2 Ultra, the OpenCL figure is *much* lower than the Metal one. Hardly surprising, given Apple’s abandonment of OpenCL. So it’s surely possible that the OpenCL figures for Nvidia GPUs are not a true representation. Their CUDA results might (should?) be much better.
In fact, GB5 CUDA figures for the 4090 are mostly well over 400,000, but the OpenCL numbers are quite a bit lower. I realise that GB5 and 6 are not directly comparable but this may, nonetheless, indicate that the GB6 OpenCL figures materially understate the Nvidia GPUs’ true performance.
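To make that concern concrete, here is a toy model of the argument. All the efficiency numbers below are made-up placeholders, not measurements; the point is only that an observed score conflates hardware capability with how well-optimised the API backend is:

```python
# Toy model: observed benchmark score ~ hardware capability x API efficiency.
# Every number here is an illustrative placeholder, NOT a real measurement.

def observed_score(hw_capability: float, api_efficiency: float) -> float:
    """Score a benchmark would report for a given hardware/API pairing."""
    return hw_capability * api_efficiency

nvidia_hw = 400_000   # hypothetical "true" capability of the GPU
opencl_eff = 0.75     # hypothetical: a neglected OpenCL path leaves perf on the table
cuda_eff = 1.00       # hypothetical: CUDA as the well-optimised path

print(observed_score(nvidia_hw, opencl_eff))  # 300000.0 -- what OpenCL would report
print(observed_score(nvidia_hw, cuda_eff))    # 400000.0 -- what CUDA would report
# Comparing the OpenCL figure against another vendor's Metal figure
# understates the Nvidia GPU by the ratio cuda_eff / opencl_eff.
```

Under this (hypothetical) model, an OpenCL-vs-Metal comparison systematically penalises whichever vendor’s OpenCL backend is least maintained, which is exactly the worry with the TechCrunch numbers.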
I bet the Ultra will also do extremely well at photo editing, most likely beating out the highest-end Intel and AMD processors with the Nvidia RTX 4090.
Here's an interesting one in DaVinci Resolve, the M2 Ultra actually beats the RTX 4090 in render tests:
View attachment 2217272