
JimmyjamesEU

Suspended
Jun 28, 2018
397
426
Not really. Those machines are likely to perform much better on real world tasks. For video editing, even the base M1 manages to compare very favorably to much faster hardware.

I would still like to know what the problem with Geekbench is, and why Apple and AMD score lower than expected while Nvidia is doing so well. Frankly, I'm starting to think that all of these benchmarks should post the kernel and setup code, because otherwise we have no idea what they are doing.
That’s fair. I'm also interested to see the results with High Power mode.
 
  • Like
Reactions: ElfinHilon

hefeglass

macrumors 6502a
Apr 21, 2009
760
423
It’s unfortunate because it undermines Apple’s claims.
No it doesn't.. Apple never posted any graphs with Geekbench results or even alluded to Geekbench in any way. For some reason in this thread it's the be-all and end-all.. it's a synthetic benchmark!
 

JimmyjamesEU

Suspended
Jun 28, 2018
397
426
No it doesn't.. Apple never posted any graphs with Geekbench results or even alluded to Geekbench in any way. For some reason in this thread it's the be-all and end-all.. it's a synthetic benchmark!
Well, Apple claimed a 4x improvement from M1 to Max. It scales perfectly from 8 to 16 cores and then much less from 16 to 32. That is strange. So much so that well respected and knowledgeable people have questioned the results.
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,666
OBX
That’s fair. I'm also interested to see the results with High Power mode.
Do we think it is a thermal throttle that the high performance mode unlocks? Seems weird, cause GB doesn't run long enough to thermally throttle (at least not that I have seen).
 

EugW

macrumors G5
Jun 18, 2017
14,917
12,889
Well, Apple claimed a 4x improvement from M1 to Max. It scales perfectly from 8 to 16 cores and then much less from 16 to 32. That is strange. So much so that well respected and knowledgeable people have questioned the results.
If you actually read Apple's test description for individual GPU tests in their press release, the biggest scaling they themselves show for M1 Max is 1.7X as compared to M1 Pro. (The range was 1.4X to 1.7X.) AFAIK, nowhere do they show a real world test with 2X scaling.
 
  • Like
Reactions: hefeglass

l0stl0rd

macrumors 6502
Jul 25, 2009
483
420
Do we think it is a thermal throttle that the high performance mode unlocks? Seems weird, cause GB doesn't run long enough to thermally throttle (at least not that I have seen).
Nah, I'm still of the opinion it's either a Geekbench issue or bad scaling from 16 to 32 cores.

If it is throttling, then their cooling is not as good as they make us believe.
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
Very likely that it's a 32-core machine, but not 100% confirmed. However, based on the number of results, I'm confused that they are all so similar. If "High Performance Mode" turns a 1GHz 32-core GPU into a 1.25GHz GPU, that should result in a clearly noticeable difference. I'm looking at what appears to be an 8% spread; however, I would expect more like a 20-30% spread, assuming that if it is available, SOMEONE would have tried running in normal, and SOMEONE ELSE would have tried running in High Performance.

So I'm wondering if it is still all on Normal, not High Performance, which would provide a perfectly reasonable justification for the slightly lower than expected results that are leaking out. High Performance might not be in macOS 12.0.1 yet.
Yep, I now think it's likely this is the 32 core machine myself. I was thinking about this when I woke up.

I HIGHLY doubt they are testing these machines with High Performance Mode on. Doesn't Apple typically ship reviewer devices with the current RELEASED OS? Not the beta OS? If that's the case, there's no way reviewers are testing with high performance mode. I'm speculating out loud here, but I wonder if the "regular" mode normally runs at around 30-40W, with high performance mode running at the 50-60W they showed when comparing it to the laptop 3080s. I have no evidence to suggest this is actually the case, but it could be really interesting if it was true!
My guess is still the same. M1 Max will match the 3080 Laptop in rendering tasks, not in compute. And it'll do it on battery.
Yep, there's no doubt this will happen. The M1 Max 32 core GPU has fewer compute execution units than the 3080 laptop.
In many ways it's a shame they compared it to a 3080.

Someone mentioned the XDR and how Apple sabotaged themselves with the comparison to the $30k Sony monitor and I totally agree.

It's psychologically like offering someone $100, but then only giving them $80. They should be overjoyed that you gave them $80, but by "anchoring" them at $100 the only thing they're thinking about is the missing $20.
This pretty much sums up my feelings on the matter. That all being said, we still haven't seen the full results, so I'm going to wait and reserve judgement until we actually see it all.
Do we think it is a thermal throttle that the high performance mode unlocks? Seems weird, cause GB doesn't run long enough to thermally throttle (at least not that I have seen).
Like you said, Geekbench doesn't run long enough to thermal throttle (in 90% of cases). There's NO chance this is happening to the new thermal design on either the 14 or 16. I'm also doubtful that these benchmarks are using High Performance Mode, for the reasons I stated above.
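For what it's worth, the clock argument is easy to sanity check. A throwaway Swift sketch, where both clock figures are the purely hypothetical ones from the quote above, not confirmed specs:

    // Hypothetical clocks from the post above, not confirmed specs.
    let baseClockGHz = 1.00          // assumed "normal" mode GPU clock
    let highPowerClockGHz = 1.25     // assumed High Performance mode clock
    let expectedSpread = (highPowerClockGHz / baseClockGHz - 1.0) * 100.0
    print("Expected spread if both modes were in the results: \(expectedSpread)%")  // 25.0%
    // The observed ~8% spread is well short of 25%, which is consistent
    // with all the leaked results running in the same (normal) mode.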
 
  • Like
Reactions: julesme

hefeglass

macrumors 6502a
Apr 21, 2009
760
423
Well, Apple claimed a 4x improvement from M1 to Max. It scales perfectly from 8 to 16 cores and then much less from 16 to 32. That is strange. So much so that well respected and knowledgeable people have questioned the results.
What are you not understanding about "synthetic benchmark"?
...there are many variables that could explain why this isn't scaling properly on Geekbench.. BUT WHO CARES, IT'S GEEKBENCH. Seriously, why is this whole thread hung up on this one stupid benchmark number?
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
What are you not understanding about "synthetic benchmark"?
...there are many variables that could explain why this isn't scaling properly on Geekbench.. BUT WHO CARES, IT'S GEEKBENCH. Seriously, why is this whole thread hung up on this one stupid benchmark number?
Higher benchmark numbers = bigger e-peen.

Until your own numbers are lower, then “benchmarks don’t matter”

This is a universal law of tech forums
 

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
People should compare the sub-tests - there are a few tests that are dragging the M1 Max down. It looks like histogram equalization and face detection are only about doubled vs the M1, but most other things are between 3.5x and 4x faster than the M1.

M1 Mini vs M1 Max
Yes, and you can even compare the M1 Max to the M1 Pro. One subtest does not show any improvement.
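If anyone wants to run that comparison themselves, it's a quick script. A Swift sketch with made-up placeholder scores (substitute the real sub-test numbers from the two Geekbench result pages):

    import Foundation

    // Placeholder scores for illustration only; real numbers come from
    // the Geekbench result pages.
    let m1: [String: Double] = [
        "Histogram Equalization": 1000,
        "Face Detection": 1200,
        "Gaussian Blur": 900,
    ]
    let m1Max: [String: Double] = [
        "Histogram Equalization": 2100,  // ~2x: one of the laggards
        "Face Detection": 2500,          // ~2x
        "Gaussian Blur": 3400,           // ~3.8x: near the expected 4x
    ]
    for (test, base) in m1.sorted(by: { $0.key < $1.key }) {
        guard let score = m1Max[test] else { continue }
        print(String(format: "%@: %.1fx", test, score / base))
    }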
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Higher benchmark numbers = bigger e-peen.

Until your own numbers are lower, then “benchmarks don’t matter”

This is a universal law of tech forums

Pretty much this. Benchmarks are weird. They are an OK approximation in the absence of real data, and some of them are even good approximations. But I would still like to know why AMD GPUs are slower in Geekbench compute than Nvidia GPUs with the same nominal peak throughput...
 
  • Like
Reactions: Zhang

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
Pretty much this. Benchmarks are weird. They are an OK approximation in the absence of real data, and some of them are even good approximations. But I would still like to know why AMD GPUs are slower in Geekbench compute than Nvidia GPUs with the same nominal peak throughput...
There is also a huge difference between the Vulkan and OpenCL versions of the compute test on Windows. It makes me wonder if it performs the same task with two different APIs, or if it's just two different tests.
 

EugW

macrumors G5
Jun 18, 2017
14,917
12,889
Yep, I now think it's likely this is the 32 core machine myself. I was thinking about this when I woke up.

I HIGHLY doubt they are testing these machines with High Performance Mode on. Doesn't Apple typically ship reviewer devices with the current RELEASED OS? Not the beta OS? If that's the case, there's no way reviewers are testing with high performance mode.
No. It appears all review units are running Monterey. This makes sense, because I would expect Apple won't ship any of these machines with Big Sur. Remember, Monterey comes out officially next week anyway.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
Remember: Geekbench GPU is not a graphics rendering benchmark. If we don't see (and it remains to be tested stringently) the expected scaling from doubling GPU resources and bandwidth, it could be connected to other limitations, such as SLC bandwidth not doubling with capacity (since the benchmark predominantly runs from cache), thermal/power throttling, et cetera. We simply can't tell from this data.
 

Irishman

macrumors 68040
Nov 2, 2006
3,449
859
They are not useful at all. Simply put, Nvidia doesn't support Metal and the M1 GPU doesn't support CUDA or DX12. The only middle ground is OpenCL, but neither of them excels in OpenCL like AMD does, so once again it's useless. On the other hand, comparing an AMD GPU to the M1 GPU is not fair at all since we don't know what's going on behind the scenes. Who can tell us if Metal drivers for AMD GPUs are not gimped on purpose or just half-arsed so the M1 GPU can look like a much better option? Apple certainly won't.

The only valid cross-platform comparison in GPU rendering arena will be Octane Render Benchmark once they properly update it. And then there is a question between Linux and Windows since Windows reserves VRAM like it's the only thing it needs.

So get ready to do it yourself https://render.otoy.com/octanebench/


That’s not entirely true. I have no experience with Octane Render, but I do use Blender 3D, and it is very much cross-platform, and it supports the M1.

The Blender Foundation is working on a native M1 version, which is in beta.
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
No. It appears all review units are running Monterey. This makes sense, because I would expect Apple won't ship any of these machines with Big Sur. Remember, Monterey comes out officially next week anyway.
Ah yeah, just looked at it. They all are.
 

terminator-jq

macrumors 6502a
Nov 25, 2012
720
1,515
Just a reminder: Geekbench is just one tool and, as others have mentioned, Geekbench is more tuned for Nvidia GPUs.

The GFXBench numbers are showing the M1 Max closer to (and in some things actually surpassing) the 3080m. Still no way of telling if this is in high power mode so maybe these numbers will be even better.
 

dugbug

macrumors 68000
Aug 23, 2008
1,929
2,147
Somewhere in Florida
Just a reminder: Geekbench is just one tool and, as others have mentioned, Geekbench is more tuned for Nvidia GPUs.

The GFXBench numbers are showing the M1 Max closer to (and in some things actually surpassing) the 3080m. Still no way of telling if this is in high power mode so maybe these numbers will be even better.

GFXBench shows the M1 Max neck and neck with the RTX 3080 laptop GPU. It's actually a bit ahead.

If you filter GFXBench to show only DESKTOP, MacOS-Metal, and cpu=ARM, you get all the M1 family there now. Still no breakout by model or GPU counts.


-d
 
  • Like
Reactions: anticipate

ElfinHilon

macrumors regular
May 18, 2012
142
48
Serious question, because I don't actually know, but what workloads are compute intensive and require the GPU? Can people provide examples for me please?
 

JimmyjamesEU

Suspended
Jun 28, 2018
397
426
To answer the question “why does geekbench matter?”. Simply because it’s pretty much all we have right now.

These are new computers with new processors and gpus. None of us have used them and there aren’t any reviews. All we can do is “read the tea leaves” about performance. Especially gpu performance. Some of us are trying to understand what will be a good value purchase. It’s strange how the scores are scaling, and that could help people decide what to buy.

Once we know more about real world performance, then benchmarks are less important, but for now they are the best approximation of real world performance.

Finally, for many of us, discovering and pondering this stuff is fun.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,231
If you actually read Apple's test description for individual GPU tests in their press release, the biggest scaling they themselves show for M1 Max is 1.7X as compared to M1 Pro. (The range was 1.4X to 1.7X.) AFAIK, nowhere do they show a real world test with 2X scaling.

Apple rates the GPU at 10.4 TFLOPS and explicitly says it’s 4x the M1 and 2x the M1 Pro. No, not every application will see that level of scaling, but in general compute should scale, and it does between the Pro and the M1. So we’re trying to figure out where the bottleneck is.
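For reference, the 10.4 TFLOPS figure falls out of simple arithmetic. A Swift sketch using the commonly reported (not officially published) per-core ALU count and clock:

    import Foundation

    // Peak FP32 throughput = cores x ALUs/core x 2 FLOPs per FMA x clock.
    // 128 ALUs/core and ~1.27 GHz are commonly reported figures, not
    // official Apple specs.
    let alusPerCore = 128.0
    let clockGHz = 1.27
    for cores in [8.0, 16.0, 32.0] {  // M1, M1 Pro, M1 Max
        let tflops = cores * alusPerCore * 2.0 * clockGHz / 1000.0
        print(String(format: "%.0f cores: %.1f TFLOPS", cores, tflops))
    }
    // Prints ~2.6, ~5.2 and ~10.4: peak compute scales linearly with
    // core count, which is why a compute benchmark should scale too.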
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,231
Serious question, because I don't actually know, but what workloads are compute intensive and require the GPU? Can people provide examples for me please?

For consumer/prosumer GPUs? There are scientific applications that use such GPUs for 32-bit compute (many make heavy use of 64-bit compute on professional GPUs, but not all), some cryptocurrency algorithms unfortunately, AI (Apple has a separate NPU in addition to the AMX blocks on the CPU, but most AI research is GPU accelerated with the tensor cores built into the GPU), and rendering. Probably more, but those are off the top of my head. Basically anything compute intensive where the algorithm ranges from “really” parallel to “embarrassingly” parallel (especially the latter) is a good fit for the GPU, with caveats.
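To make "embarrassingly parallel" concrete, here is a minimal Metal compute sketch in Swift (a Mac with a Metal GPU assumed). Every thread squares one element of a buffer with no dependency on any other thread, which is exactly the shape of work that should spread across however many GPU cores are available:

    import Metal

    // GPU kernel: one thread per element, no inter-thread dependencies.
    let shader = """
    #include <metal_stdlib>
    using namespace metal;
    kernel void square(device float *data [[buffer(0)]],
                       uint id [[thread_position_in_grid]]) {
        data[id] = data[id] * data[id];
    }
    """

    let device = MTLCreateSystemDefaultDevice()!
    let library = try! device.makeLibrary(source: shader, options: nil)
    let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "square")!)
    let queue = device.makeCommandQueue()!

    // A million floats in shared (unified) memory.
    var input: [Float] = (0..<1_000_000).map { Float($0) }
    let buffer = device.makeBuffer(bytes: &input,
                                   length: input.count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!

    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    enc.setBuffer(buffer, offset: 0, index: 0)
    // Launch one thread per element; Metal splits the grid into threadgroups.
    enc.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: pipeline.threadExecutionWidth,
                                                       height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()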
 
Last edited:
  • Like
Reactions: Irishman

ElfinHilon

macrumors regular
May 18, 2012
142
48
For consumer/prosumer GPUs? There are scientific applications that use such GPUs for 32bit compute (many make heavy use of 64-bit compute on professional GPUs but not all), some cryptocurrency algorithms unfortunately, and rendering. Probably more but those are off the top of my head. Basically anything compute intensive where the algorithm ranges from “really” parallel to “embarrassingly” parallel (especially the latter) is a good fit for the GPU with caveats.
More specifically, could you tell me tasks that people actually do, e.g., export a video, apply effects to a video, uhhh... dataset number crunching or something?
 