
MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,199
I guess over the weekend we will get a good insight into where the Ultra stands in real usage.
At least I will on April 24, and for those who are into Maya, I can help with some results.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I guess over the weekend we will get a good insight into where the Ultra stands in real usage.
At least I will on April 24, and for those who are into Maya, I can help with some results.
Yeah, looking forward to seeing where the numbers are over the next few days/weeks.
 

MauiPa

macrumors 68040
Apr 18, 2018
3,438
5,084
Isn't Geekbench compute a test of OpenCL, which has been deprecated on Macs for a while, and therefore completely meaningless?
 

neuropsychguy

macrumors 68030
Sep 29, 2008
2,683
6,642
Isn't Geekbench compute a test of OpenCL, which has been deprecated on Macs for a while, and therefore completely meaningless?
It supports OpenCL, CUDA, Metal, and Vulkan. What gets run for a cross-platform comparison depends on what APIs are available on a platform.

It’s not meaningless, but people need to understand the limitations of a benchmark. So far, few people have acknowledged those limitations.
 

MauiPa

macrumors 68040
Apr 18, 2018
3,438
5,084
Why not accurate? Benchmarks are always just a specific metric for performance. If Geekbench 5 is computationally optimized for Apple silicon (which they said it is), then a low score just means it does not perform as well in this particular metric.
They might at some point want to update their method, so we will have a Geekbench 6 for those chips at the top end.
Sorry dude, you sound great, but this is what Geekbench says:

Comparing Scores
Each Compute workload has an implementation for each supported Compute API. While it is possible to compare scores across APIs (e.g., an OpenCL score with a Metal score), it is important to keep in mind that due to the nature of Compute APIs, the performance difference can be due to more than differences in the underlying hardware (e.g., the GPU driver can have a huge impact on performance).

 

MauiPa

macrumors 68040
Apr 18, 2018
3,438
5,084
If companies were less marketing-oriented, they would fix the benchmark mess immediately.

But controversy makes headlines, and I firmly believe this is part of Apple's marketing campaign.
Yah, producing great hardware and software, it's all marketing, all marketing. It's a strategy. And, no offense, but gamers don't count, because no gamer would ever buy a Mac in the first place. Everyone completely gets that.
 

ikir

macrumors 68020
Sep 26, 2007
2,176
2,366
There I can't see the tests between the Ultra and the 3090...
Here I was giving my view from what I've seen between these two. And besides that, I'm not into gaming, but some of my friends are, especially with that WoW title.
You will see tests there, but they need to receive the new machines first! The Mac Studio is released today! Also, there you can see many M1 Max benchmarks; the base Mac Studio will have similar performance, maybe a little faster thanks to the bigger chassis.
 

PsykX

macrumors 68030
Sep 16, 2006
2,745
3,922
Sorry for my complete lack of technical understanding here.

1. Does it mean the benchmark score should be higher?

2. Why does it happen with the M1 Max and M1 Ultra and not with other GPUs like the M1 Pro or the 3090? (Benchmarks aren't made for that many GPU cores? Benchmarks aren't made for such high memory bandwidth with the processor?)

3. Do real-world games have the same problem as the way the benchmark was conceived?

-

For me, the only real-world test that counts at this point is World of Warcraft, because it's probably the only native game. We can only hope devs will work more on bringing their AAA games to the Mac, but that will take more Mac vs. Windows market share, as well as a few more years of Apple's commitment to making cream-of-the-crop GPUs.

Also, Windows and everything that surrounds it (Office, Visual Studio, Teams, BI...) has improved a lot in the last 5 years. Macs are starting to lag in a few areas. Apple Silicon can bring them much more market share, of course, but right now the Mac's Achilles' heel is macOS.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
producing great hardware and software, it's all marketing, all marketing.
Marketing, like makeup, embellishes reality.

Can you imagine Apple engineers reporting to their bosses with such a mysterious chart that we still don't know how to recreate?
 

Homy

macrumors 68030
Jan 14, 2006
2,507
2,459
Sweden
You will see tests there, but they need to receive the new machines first! The Mac Studio is released today! Also, there you can see many M1 Max benchmarks; the base Mac Studio will have similar performance, maybe a little faster thanks to the bigger chassis.
I don't think we'll see any gaming benchmarks from MrMacRight or Andrew Tsai on the Mac Studio. They're small channels with no sponsors, and they put all their money into buying several MacBook Pros with the M1 Pro/Max just a few months ago.

Update! Looks like I was wrong. Andrew Tsai has published videos of a Mac Studio (48-core GPU, 64 GB RAM) benchmarking Tomb Raider and Metro Exodus.
 

clevins

macrumors 6502
Jul 26, 2014
413
651
Benchmarks are only ever indicators. A bursty benchmark is crap at telling you the performance of the device under sustained loads, and vice versa. Companies will always trumpet the ones that make them look good, but anyone looking to see what the performance really is will use several different benchmarks to get a picture of how the device works in different scenarios.

All this is telling us is that for bursty workloads, the 3090 will be better. For sustained loads, the Ultra will be very close to it.

The gaming performance isn't really important though, since so few AAA games exist on the Mac and even fewer will be AS-optimized. WoW is one such game, but it's not graphically demanding. I can run it on Ultra (10 on a scale of 1-10) and get 60 FPS... on my Air. So, yeah.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,199
Yes, WoW is not demanding at 60fps, but what about 120fps, or 240fps at 4K or 5K, and so on... you can push a non-demanding game to some level.
Again, WoW can be a point of comparison to see what fps the Ultra can reach on ultra settings compared to the 3090, for example (no fps cap).
The higher the resolution and fps, the better the comparison. I don't know how WoW would run at 8K on ultra settings even with a 3090; not that anyone would play at 8K, but just for the purpose of the benchmark.
 

Juraj22

macrumors regular
Jun 29, 2020
179
208
Power management is what makes the M1 GPUs great. Could AMD or Nvidia ever build a GPU with 15 mW idle?
Testing on an M1 Max with the 32-core GPU.
Watching the GPU clocks with powermetrics during the Geekbench Metal test, I can see that the max frequency is ~600 MHz and the power used is 6 W. That is still too low, because this GPU can easily run at 1,200 MHz. Check the attached pics.
Terminal only: idle.
Check the power used and the frequency. The Geekbench Metal tests are too short to ramp the frequency up to max. In other words, the test is not showing the M1 GPU's full potential.

[Attachments: idle.png, shadertoy.png, geekbench.png]
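For anyone who wants to reproduce this, here is a minimal sketch (my own illustration, nothing official) that drives the stock macOS powermetrics tool from Swift and filters for the GPU frequency and power lines. The gpu_power sampler and flags are the standard ones, but the exact output labels vary a bit by machine and macOS version, and powermetrics has to run as root:

```swift
import Foundation

// Minimal sketch: run the stock macOS `powermetrics` tool and stream
// its GPU samples (active frequency, power) while a benchmark runs.
// powermetrics requires root, so run as e.g. `sudo swift gpuwatch.swift`.
let proc = Process()
proc.executableURL = URL(fileURLWithPath: "/usr/bin/powermetrics")
// gpu_power sampler, one sample per second, 30 samples total.
proc.arguments = ["--samplers", "gpu_power", "-i", "1000", "-n", "30"]

let pipe = Pipe()
proc.standardOutput = pipe
try! proc.run()

// Print only the lines mentioning GPU frequency or power; the exact
// labels differ across machines and macOS versions.
pipe.fileHandleForReading.readabilityHandler = { handle in
    guard let text = String(data: handle.availableData, encoding: .utf8) else { return }
    for line in text.split(separator: "\n")
    where line.localizedCaseInsensitiveContains("frequency")
       || line.localizedCaseInsensitiveContains("power") {
        print(line)
    }
}
proc.waitUntilExit()
```

Start this in one terminal, kick off the Geekbench Metal run, and you can watch the clocks in real time instead of eyeballing screenshots.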
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Sorry for my complete lack of technical understanding here.

1. Does it mean the benchmark score should be higher?

Yes

2. Why does it happen with the M1 Max and M1 Ultra and not with other GPUs like the M1 Pro or the 3090? (Benchmarks aren't made for that many GPU cores? Benchmarks aren't made for such high memory bandwidth with the processor?)

The M1 has more conservative power management because it needs to offer both high performance and ultra-low power consumption on demand. The M1 GPU only ramps the clocks up if there are enough work requests to warrant it. The post above mine has power usage stats that illustrate this in great detail.

And this Geekbench issue is present with every single M1 GPU. All M1 scores are too low.

3. Do real-world games have the same problem as the way the benchmark was conceived?

No, since games are not short-lived GPU work packages.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
M1 has more conservative power management because it needs to offer both high performance and uktra-low power consumption on demand. M1 GPU only ramps the clocks up if there is enough work requests to warrant it.
This doesn’t make the benchmark inaccurate though. Running under Rosetta 2 makes a benchmark inaccurate for comparing native performance. Comparing benchmarks that aren’t properly optimized for both architectures is inaccurate (as Blender and Cinebench appear to not be).

Not letting the GPU spool up its turbochargers is representative of the performance of the device for workloads of this kind. I’d argue that the declining scalability of GB results across Pro, Max, and Ultra reported in the benchmarks is educational.

This benchmark is accurate but needs to be taken in context with other benchmarks. This doesn’t say how other workloads will compare across devices, but this isn’t a bug or a failure to code it properly. This thread is fast heading towards the conclusion that there’s no benchmark we can trust because they all underrate M1 Ultra— that’s starting to feel a bit like saying “there is no test that can show how truly wonderful the M1 Ultra is, you have to take it on faith.”
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
This benchmark is accurate but needs to be taken in context with other benchmarks.
It seems that only Apple has the correct benchmark. I wonder if Apple has developed a benchmark that measures the performance of M1-based computers more accurately.

Hopefully, Apple can explain soon how they did the benchmarks.
 

Mr. Awesome

macrumors 65816
Feb 24, 2016
1,243
2,881
Idaho, USA
Sorry for my complete lack of technical understanding here.

1. Does it mean the benchmark score should be higher?

2. Why does it happen with the M1 Max and M1 Ultra and not with other GPUs like the M1 Pro or the 3090? (Benchmarks aren't made for that many GPU cores? Benchmarks aren't made for such high memory bandwidth with the processor?)

3. Do real-world games have the same problem as the way the benchmark was conceived?

-

For me, the only real-world test that counts at this point is World of Warcraft, because it's probably the only native game. We can only hope devs will work more on bringing their AAA games to the Mac, but that will take more Mac vs. Windows market share, as well as a few more years of Apple's commitment to making cream-of-the-crop GPUs.

Also, Windows and everything that surrounds it (Office, Visual Studio, Teams, BI...) has improved a lot in the last 5 years. Macs are starting to lag in a few areas. Apple Silicon can bring them much more market share, of course, but right now the Mac's Achilles' heel is macOS.
  1. Yes
  2. It does happen with lower-model M1 chips, but to a less noticeable degree. Essentially, the problem is that Apple has designed the M1 family GPUs with a certain power curve, meaning they don’t ramp up to full performance unless they’re doing something power-hungry for a longer period of time. (This is designed to make the chips more energy-efficient.) Because this benchmark is relatively quick, the M1 Max/Ultra don’t ramp up to full speed before the benchmark is over. It’s like trying to measure the top speed of a car, but cutting it off after just a few seconds. There’s not enough time for it to hit full speed.
  3. No. The processor will gradually increase its power to accommodate the game, so you’ll have full performance.
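To make the ramp-up effect concrete, here's a rough, self-contained Metal sketch (my own illustration, not Geekbench's actual code): it times one short compute dispatch from cold, then keeps the GPU busy for about five seconds and times the identical dispatch again. On an M1-family machine the warmed-up dispatch should come back noticeably faster, for exactly the reason described above.

```swift
import Foundation
import Metal

// Rough sketch: compare one "cold" dispatch against the same dispatch
// after ~5 s of sustained GPU load, to show the clock ramp-up effect.
// Numbers will vary by machine; this is an illustration, not a benchmark.
let device = MTLCreateSystemDefaultDevice()!
let source = """
kernel void spin(device float *buf [[buffer(0)]],
                 uint id [[thread_position_in_grid]]) {
    float x = buf[id];
    for (int i = 0; i < 20000; ++i) { x = x * 1.0000001f + 0.5f; }
    buf[id] = x;
}
"""
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "spin")!)
let queue = device.makeCommandQueue()!
let count = 1 << 20
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// One synchronous compute dispatch over `count` threads; returns wall time.
func timeOneDispatch() -> TimeInterval {
    let start = Date()
    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    enc.setBuffer(buffer, offset: 0, index: 0)
    enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()
    return Date().timeIntervalSince(start)
}

print("cold dispatch:   \(timeOneDispatch()) s")

// Keep the GPU busy so the power manager ramps the clocks...
let warmupEnd = Date().addingTimeInterval(5)
while Date() < warmupEnd { _ = timeOneDispatch() }

// ...then measure the identical dispatch again.
print("warmed dispatch: \(timeOneDispatch()) s")
```

A short benchmark only ever sees the "cold" number; that's the car being cut off after a few seconds.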
 
Reactions: ikir and PsykX

jeanlain

macrumors 68020
Mar 14, 2009
2,460
954
I'd be curious to know if powermetrics can give insights into why the M1 Ultra is slower than the M1 Max in certain GFXBench tests.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
This doesn’t make the benchmark inaccurate though.

It may not make it inaccurate in the narrow sense of the word, but it does make it uninteresting. GB5 compute is essentially measuring GPU performance on very short workloads. That’s not how GPUs are used. For example, according to GB compute, all M1-series GPUs are slower at image processing than some dGPUs, yet they end up being some of the fastest devices in benchmarks done with real-world image processing tools. And yet everyone takes GB5 compute to make quantifying statements about GPU performance.

Running under Rosetta 2 makes a benchmark inaccurate for comparing native performance.

Running stuff under Rosetta 2 is fair in my book - a lot of apps are still Intel, and knowing how fast they run on AS is an important metric. It depends on what you are after, though. I agree that running some sort of CPU test using x86 code and then drawing conclusions about M1 CPU performance is dumb.

Not letting the GPU spool up its turbo chargers is representative of the performance of the device for workloads of this kind. I’d argue that the declining scalability of GB results across Pro, Max, Ultra reported in the benchmarks is educational.

Again, this is not reflected in real-world usage, and this is not how GPUs are utilized. It’s educational in the sense that you don’t program a GPU benchmark this way :)
 