
ElfinHilon

macrumors regular
May 18, 2012
142
48
The Iris must be much slower than the 5600M. How can the same GPU be 4x both? The 14" must be clocked lower.
The left side is a different test against a different GPU than the right side.

Left is doing some sort of render, comparing what's found in the 14-inch to the old Intel Iris graphics.
Right side is doing Redshift on the 16-inch, comparing to the 5600M.

It's a weird comparison, but it makes sense.
 

JimmyjamesEU

Suspended
Jun 28, 2018
397
426
The left side is a different test against a different GPU than the right side.

Left is doing some sort of render, comparing what's found in the 14-inch to the old Intel Iris graphics.
Right side is doing Redshift on the 16-inch, comparing to the 5600M.

It's a weird comparison, but it makes sense.
Thanks, I see that now.
 
  • Like
Reactions: ElfinHilon

Slartibart

macrumors 68040
Aug 19, 2020
3,145
2,819
First of all, you ignored that I was talking about the entire GPU. The 3090's bandwidth is way higher than that. Not only that, having high bandwidth does NOT prove anything. This is about overall performance.
It’s telling that you need comparisons with 300-watt desktop GPUs to make the M1 look mediocre.
 
  • Haha
Reactions: diamond.g

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
If this is accurate, there are three reasons I can think of off the top of my head:

1) Clock speeds are lowered in the Max model to compensate for more cores. Pretty standard practice to keep heat down.
2) Andrei mentioned that the Max GPU actually looks like two separate GPUs on the same die with some shared logic. That could also affect things, depending on the interconnect.
3) Memory bottlenecks: keeping big GPUs fed is hard. However, they did double the bandwidth for the bigger GPU along with doubling the cores relative to the smaller one, so this should be less of an issue than the others (see the quick ratio check below). Latency probably went up along with the doubled bandwidth, though, so it's not entirely implausible that it had an effect.

I should add that, similar to #3, I don't think #2 is why the scaling would be off by this much. The reason most multi-GPU systems have difficulty scaling is the need to communicate across each GPU's separate memory. Obviously we have unified memory here, so that doesn't apply.

That leaves #1, lower clocks, as the most likely reason. If we want to try to rescue the contradictions in Apple's marketing, it could be that some of these numbers are with the high power mode on in the 16" and some are not.
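A quick back-of-the-envelope check on #3, using Apple's quoted memory-bandwidth figures and the commonly cited peak-FP32 numbers for these chips (treat the TFLOPS values as approximate estimates, not measurements):

```python
# Bandwidth available per unit of peak compute. The bandwidth figures are Apple's
# quoted specs; the TFLOPS values are commonly cited peak-FP32 estimates.
configs = {
    "M1 Pro (16-core GPU)": (200, 5.2),    # GB/s, TFLOPS
    "M1 Max (32-core GPU)": (400, 10.4),
}
for name, (gbps, tflops) in configs.items():
    print(f"{name}: {gbps / tflops:.1f} GB/s per TFLOP")
```

Both configurations land at roughly the same ~38 GB/s per TFLOP, which is why the bandwidth explanation looks weaker than the clock one.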
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
Here’s a Mac Pro (2019) with a W5700 getting the same score as the M1 Max https://browser.geekbench.com/v5/compute/3564214

Geekbench is weird.
That is interesting. Still, that's some good performance for a goddamn laptop. Man, this is exciting. This also tells us that compute isn't the end of the world. The W5700 was really quite good when it came out, even though that compute score is miles behind Nvidia cards.
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,665
OBX
I should add that, similar to #3, I don't think #2 is why the scaling would be off by this much. The reason most multi-GPU systems have difficulty scaling is the need to communicate across each GPU's separate memory. Obviously we have unified memory here, so that doesn't apply.

That leaves #1, lower clocks, as the most likely reason. If we want to try to rescue the contradictions in Apple's marketing, it could be that some of these numbers are with the high power mode on in the 16" and some are not.
Can't have lower clocks and get the quoted TFLOPS though, right?
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Can't have lower clocks and get the quoted TFLOPS though, right?

You can if the quoted TFLOPS are in high power mode and the other metrics are not. That's why I say there are a lot of contradictions here. Monday, or whenever the really good reviews come out, should be interesting.
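For a sense of how directly clocks translate into the headline TFLOPS number, here's a rough sketch; the 128 ALUs per GPU core and both clock values are assumptions based on commonly cited M1-family figures, not official specs:

```python
# Peak FP32 throughput: cores * ALUs-per-core * 2 FLOPs per clock (FMA) * clock (GHz).
# The 128 ALUs per GPU core and both clock values are assumptions, not Apple specs.
def peak_tflops(gpu_cores, clock_ghz, alus_per_core=128, flops_per_clock=2):
    return gpu_cores * alus_per_core * flops_per_clock * clock_ghz / 1000

print(f"{peak_tflops(32, 1.27):.1f} TFLOPS at ~1.27 GHz")   # ~10.4, the quoted 32-core figure
print(f"{peak_tflops(32, 1.10):.1f} TFLOPS at ~1.10 GHz")   # ~9.0 if the same GPU clocked lower
```

In other words, for a fixed core count the quoted TFLOPS pins down the clock, so if that figure is only reached in high power mode, the other configurations would necessarily fall short of it.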
 

JimmyjamesEU

Suspended
Jun 28, 2018
397
426
That is interesting. Still, that's some good performance for a goddamn laptop. Man, this is exciting. This also tells us that compute isn't the end of the world. The W5700 was really quite good when it came out, even though that compute score is miles behind Nvidia cards.
I think the W5700 is still a good card.
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
I should add that, similar to #3, I don't think #2 is why the scaling would be off by this much. The reason most multi-GPU systems have difficulty scaling is the need to communicate across each GPU's separate memory. Obviously we have unified memory here, so that doesn't apply.

That leaves #1, lower clocks, as the most likely reason. If we want to try to rescue the contradictions in Apple's marketing, it could be that some of these numbers are with the high power mode on in the 16" and some are not.
Shouldn't there be a fourth option, that the issue has to do with Geekbench itself? At this point, though, I'm inclined to agree that it's the first option you listed.
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
I think the W5700 is still a good card.
It is still a good card, make no mistake. What I'm getting at is that, as someone said on here (I think it was leman), macOS GPUs tend to get stiffed when it comes to compute in Geekbench for whatever reason. Basically, anything not Nvidia/CUDA does rather poorly in Geekbench compute tests.


Case in point, the 6900 XT being on par with a 2080 Ti, lol.

Also, look at the Metal scores vs. the CUDA scores. Comparing same-tier cards, Metal scores significantly lower than CUDA. Can we retire Geekbench compute? XD
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Shouldn't there be a fourth option, that the issue has to do with Geekbench itself? At this point, though, I'm inclined to agree that it's the first option you listed.

Sorry, this list was assuming the compute results so far are accurate to the 32-core GPU's actual compute performance.
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,665
OBX
You can if the quoted TFLOPS are in high power mode and the other metrics are not. That's why I say there are a lot of contradictions here. Monday, or whenever the really good reviews come out, should be interesting.
That is fair. Maybe Apple should have implemented a boost mode based on thermals? Though having a high power mode is a good compromise.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Hahahaha, I more meant: how likely do you think that is the case?

Truthfully, I wouldn't care to guess.

Don't get me wrong: it's plausible, not just "angels dancing on pinheads" level of possible. I just don't feel comfortable specifying the likelihood beyond that.
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Something to consider with all of these scores is whether they are using the High Power Mode found in Monterey (only available on the 16" with the Max chip).


;)


 
  • Like
Reactions: Jorbanead

ElfinHilon

macrumors regular
May 18, 2012
142
48
Something to consider with all of these scores is whether they are using the High Power Mode found in Monterey (only available on the 16" with the Max chip).


;)


Do keep in mind that for both of these, we KNOW that 14" models are being tested. We also know that the one with the Metal score is a 16-inch.

Also, we know that the high power feature isn't out yet. It's in the beta for the new OS. I don't know if the review laptops would ship with the beta OS or not. I am not knowledgeable on those things.

Given that the difference between the Metal and OpenCL scores is about what we'd expect (~15%), I'm doubtful that any of these are being tested with high power mode on. It'll be extremely interesting to see how much that can boost performance. Even a modest 10-15% performance bump from where we are now gets us significantly closer to the 4x performance over the 8-core M1 we thought we were going to get.
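Just to sketch that arithmetic (the 3.3x starting point below is a placeholder for wherever the 32-core scores currently land relative to the 8-core M1, not a real benchmark result):

```python
# Hypothetical illustration only: the 3.3x starting point is a placeholder, not a measured score.
m1_max_vs_m1 = 3.3   # assume the 32-core Max currently benches ~3.3x the 8-core M1
for boost in (0.10, 0.15):
    print(f"+{boost:.0%} from high power mode -> {m1_max_vs_m1 * (1 + boost):.2f}x the M1 (4x is the target)")
```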
 
  • Like
Reactions: Jorbanead

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,298
It is still a good card, make no mistake. What I'm getting at is that, as someone said on here (I think it was leman), macOS GPUs tend to get stiffed when it comes to compute in Geekbench for whatever reason. Basically, anything not Nvidia/CUDA does rather poorly in Geekbench compute tests.


Case in point, the 6900 XT being on par with a 2080 Ti, lol.

Also, look at the Metal scores vs. the CUDA scores. Comparing same-tier cards, Metal scores significantly lower than CUDA. Can we retire Geekbench compute? XD

That 6900 XT score didn't look right compared to the 6800, so I ran it on a lowly reference (non-AIB) 6900 XT. Vulkan > OpenCL, but it proves the point that the API doesn't make a huge difference; in this case it's ~12%.

6900xt Vulkan
https://browser.geekbench.com/v5/compute/3564577


6900xt OpenCL
https://browser.geekbench.com/v5/compute/3564587
 

EugW

macrumors G5
Jun 18, 2017
14,912
12,883
Do keep in mind that for both of these, we KNOW that 14" models are being tested. We also know that the one with the Metal score is a 16-inch.

Also, we know that the high power feature isn't out yet. It's in the beta for the new OS. I don't know if the review laptops would ship with the beta OS or not. I am not knowledgeable on those things.

Given that the difference between the Metal and OpenCL scores is about what we'd expect (~15%), I'm doubtful that any of these are being tested with high power mode on. It'll be extremely interesting to see how much that can boost performance. Even a modest 10-15% performance bump from where we are now gets us significantly closer to the 4x performance over the 8-core M1 we thought we were going to get.
Some of you guys are making way, way too many excuses.

The bottom line is that there isn't perfect linear scaling. Not a single one of Apple's own advertised tests shows linear scaling of GPU performance when comparing 32-core vs. 16-core, whether on the 14" or the 16". And we shouldn't be expecting perfect scaling with real-world workloads anyway. Plus, I'm sure Apple knows how to activate its own 16" beast mode.

I went through their press release, and none of the GPU benchmarks they showed had even close to 2X scaling going from the M1 Pro's 16 GPU cores to the M1 Max's 32. It ranged from 1.4X to 1.7X, despite the doubling of cores.
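Expressed as scaling efficiency (observed speedup divided by the ideal 2x from doubling the GPU cores), those press-release numbers work out to:

```python
# Scaling efficiency relative to the ideal 2x speedup from doubling GPU cores.
for speedup in (1.4, 1.7, 2.0):
    print(f"{speedup:.1f}x observed -> {speedup / 2.0:.0%} efficiency")
```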
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
Some of you guys are making way, way too many excuses.

The bottom line is that there isn't perfect linear scaling. Not a single one of Apple's own advertised tests shows linear scaling of GPU performance when comparing 32-core vs. 16-core, whether on the 14" or the 16". And we shouldn't be expecting perfect scaling with real-world workloads anyway. Plus, I'm sure Apple knows how to activate its own 16" beast mode.

I went through their press release, and none of the GPU benchmarks they showed had even close to 2X scaling going from the M1 Pro's 16 GPU cores to the M1 Max's 32. It ranged from 1.4X to 1.7X, despite the doubling of cores.
The issue here is that without getting close to that 4x figure, there's no way we are in range of a 3080. That's simply false marketing then. At this point we will need to wait and see what happens, but I suspect we'll be surprised in multiple ways: some bad, a lot good.
 

EugW

macrumors G5
Jun 18, 2017
14,912
12,883
The issue here is that without getting close to that 4x figure, there's no way we are in range of a 3080. That's simply false marketing then. At this point we will need to wait and see what happens, but I suspect we'll be surprised in multiple ways: some bad, a lot good.
Apple is well known to pick and choose its benchmarks. I'm sure for several REAL WORLD actions, it is comparable to a 3080. But I'm also sure that for many others, it isn't. However, the key point here is that the benchmarks Apple chooses aren't going to be Geekbench or GFXbench.
 
  • Like
Reactions: PlainBelliedSneetch