
Serban55

Suspended
Oct 18, 2020
2,153
4,344
So, this GPU score should go into the bigger iMac. The current Intel one, which gets replaced next year, has a two-year-old GPU (Radeon Pro 5700 XT)... so if this is indeed the 32 GPU cores, then next year, for the first time, Apple's silicon GPU will be slower than the old device it replaces (which, again, has a two-year-old GPU, if I remember correctly).
Hope this is the 24-core GPU; that way, it could solve next year's iMac as well.
 

gl3lan

macrumors newbie
May 11, 2020
19
22
Then there's also the fact that the 32-core has two-thirds of the compute units that the 3080 does (which they were using to compare in their own keynote), which, if the given numbers are true, genuinely doesn't make sense. This would mean there's some sort of severe bottleneck in Geekbench compute. If we assume a score of ~80K for the actual 32-core, this would be in line with the numbers given for the Razer 3080 laptop (which Apple directly quoted), which has a score of ~120K.
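The expectation above can be sketched as simple arithmetic. The ~120K score and the two-thirds ratio are the rough figures from this thread, not measured values:

```python
# Back-of-envelope check of the compute-unit scaling argument:
# if the 32-core M1 Max has roughly 2/3 of the 3080's compute units,
# and the Razer 3080 laptop scores ~120K in Geekbench compute,
# proportional scaling would predict ~80K for the 32-core.
score_3080 = 120_000   # Razer 3080 laptop, approximate Geekbench compute score
unit_ratio = 2 / 3     # compute units, 32-core M1 Max vs. 3080 (rough)

expected_m1_max = score_3080 * unit_ratio
print(f"expected 32-core score if compute scaled with units: ~{expected_m1_max:,.0f}")
```

A score well below that (like the ~68K floated in this thread) is what suggests either a lower-core-count part or a benchmark bottleneck.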

I genuinely don't believe this to be the 32-core. I think if this were the case, Apple wouldn't have been so adamant in their presentation. Granted, this IS just one benchmark, but the numbers are so far off for the 32-core that I have a hard time believing this isn't the 24-core.

Also, in the Metal score posted above, the core count isn't listed. So we don't actually know for a fact which one this is.
I think those benchmarks correspond to the 32-core version, for three reasons:
  • On the graphs presented during the keynote, the 16-core's estimated peak power consumption is around 30W, while it is around 60W for the 32-core version. Considering the scaling is not linear (twice the consumption usually gives you less than twice the performance), we cannot expect 2x the 16-core performance (or 4x M1 performance).
  • I have a hard time imagining Apple sending 24-core units to reviewers.
  • If you look at the way GPU performance is advertised on the Apple/MacBook Pro website (the part where performance is compared against the 2019 16-inch with the 5600M across several applications), the most favorable comparison for the 32-core against the 16-core is the Final Cut Pro one, which advertises the 32-core as ~70% faster than the 16-core (2.9x/1.7x).
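That last bullet's ~70% figure falls out of Apple's own multipliers. A quick sketch, using only the 2.9x and 1.7x speedups quoted from the keynote:

```python
# Apple's Final Cut Pro speedups vs. the 2019 16" (5600M), per the keynote:
speedup_32 = 2.9   # 32-core M1 Max vs. 5600M
speedup_16 = 1.7   # 16-core M1 Pro vs. 5600M

# Their ratio gives the 32-core's advantage over the 16-core.
relative = speedup_32 / speedup_16
print(f"32-core vs. 16-core: {relative:.2f}x (~{(relative - 1) * 100:.0f}% faster)")
```

So even in Apple's most favorable comparison, doubling the cores buys well under 2x, which is consistent with the non-linear power scaling point above.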
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
I think those benchmarks correspond to the 32-core version, for three reasons:
  • On the graphs presented during the keynote, the 16-core's estimated peak power consumption is around 30W, while it is around 60W for the 32-core version. Considering the scaling is not linear (twice the consumption usually gives you less than twice the performance), we cannot expect 2x the 16-core performance (or 4x M1 performance).
  • I have a hard time imagining Apple sending 24-core units to reviewers.
  • If you look at the way GPU performance is advertised on the Apple/MacBook Pro website (the part where performance is compared against the 2019 16-inch with the 5600M across several applications), the most favorable comparison for the 32-core against the 16-core is the Final Cut Pro one, which advertises the 32-core as ~70% faster than the 16-core (2.9x/1.7x).
Might it be the 32-core in the 14-inch, thermally throttled? What do you think?
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
As I wrote before, Apple GPUs don't do too hot in Geekbench compute. Neither does AMD. It strongly favors Nvidia Ampere for some reason. It's a shame these tests are not open source; one could have analyzed and profiled the code to see if there are any issues.
 
  • Like
Reactions: Irishman

Zhang

macrumors newbie
Oct 20, 2021
13
10
As I wrote before, Apple GPUs don't do too hot in Geekbench compute. Neither does AMD. It strongly favors Nvidia Ampere for some reason. It's a shame these tests are not open source; one could have analyzed and profiled the code to see if there are any issues.
Waiting for the 3DMark Wild Life score.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
Unlikely; Apple usually sends the top configs to reviewers to "show off".
The 32-core GPU is the top config, right? For me, choosing the display size doesn't make the 16" the top config anymore; it's just about choosing the size and, of course, more battery life... so yeah, you could be right, I'm thinking.
 

gl3lan

macrumors newbie
May 11, 2020
19
22
As I wrote before, Apple GPUs don't do too hot in Geekbench compute. Neither does AMD. It strongly favors Nvidia Ampere for some reason. It's a shame these tests are not open source; one could have analyzed and profiled the code to see if there are any issues.
All I am saying is that I would be surprised to see benchmarks showing 4x improvement over M1 (or 2x over M1 Pro), whatever the benchmark.
 

gl3lan

macrumors newbie
May 11, 2020
19
22
The 32-core GPU is the top config, right? For me, choosing the display size doesn't make the 16" the top config anymore.
Yeah, you might have a point here. Historically, the 13" and 15/16" versions were rarely (?) released at the same time, so reviewers never had to choose between screen sizes.
 
  • Like
Reactions: Serban55

EugW

macrumors G5
Jun 18, 2017
14,904
12,880
The embargo for these MBPs lifts Monday 9 AM ET, so I'm hoping some will do this.
I don’t know if things will be different this time or not, but typically Apple doesn’t allow reviewers to publish results from these types of benchmarks.

But no matter. Delivery date for some orders is Tuesday.
 

EugW

macrumors G5
Jun 18, 2017
14,904
12,880
Unlikely; Apple usually sends the top configs to reviewers to "show off".
In this case, with one of the most important Mac releases of all time, that is what I would expect.

However, I note that for the 12" MacBooks, they sent the Core i7 for the 2015 and 2016 models, but not for the 2017. I think it's because the 2017 chips did way better than previous years'.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
The Metal score is fairly disappointing. I think, however, that the architectures of the M1 and a 3080 are so different that synthetic benchmarks are not incredibly useful. At the same time, non-synthetic benchmarks are far more likely to be better optimized for Nvidia cards / Windows, so...

It feels like trying to say whether or not a Tesla is as fast as a Porsche 911. In some respects it is, but in other respects it definitely isn't.

Then again, Apple brought this on themselves by comparing the M1 to a 3080, and if the Metal score truly is only 68K, then that was probably a mistake.
 

Bandaman

Cancelled
Aug 28, 2019
2,005
4,091
The Metal score is fairly disappointing. I think, however, that the architectures of the M1 and a 3080 are so different that synthetic benchmarks are not incredibly useful. At the same time, non-synthetic benchmarks are far more likely to be better optimized for Nvidia cards / Windows, so...

It feels like trying to say whether or not a Tesla is as fast as a Porsche 911. In some respects it is, but in other respects it definitely isn't.

Then again, Apple brought this on themselves by comparing the M1 to a 3080, and if the Metal score truly is only 68K, then that was probably a mistake.
Unless you are talking about a Tesla Roadster, in which case it makes the Porsche 911 seem like a Mazda Miata.
 

Natrium

macrumors regular
Aug 7, 2021
125
246
The Metal score is fairly disappointing. I think, however, that the architectures of the M1 and a 3080 are so different that synthetic benchmarks are not incredibly useful. At the same time, non-synthetic benchmarks are far more likely to be better optimized for Nvidia cards / Windows, so...

It feels like trying to say whether or not a Tesla is as fast as a Porsche 911. In some respects it is, but in other respects it definitely isn't.

Then again, Apple brought this on themselves by comparing the M1 to a 3080, and if the Metal score truly is only 68K, then that was probably a mistake.

Agree. This raw score is 62% better than the 5600M in the last-gen 16-inch, not the 4x repeated over and over by Apple. And the M1 Max GPU uses more power than the 5600M: 60 vs. 50 watts. When fully utilized, the battery will actually drain faster.
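The 62% works out from the rough scores in this thread. The ~68K M1 Max figure was posted earlier; the ~42K for the 5600M is an assumed ballpark, not a number from this thread:

```python
# Rough check of the 62% claim from approximate Metal scores.
m1_max = 68_000        # ~68K, the score discussed in this thread
radeon_5600m = 42_000  # assumed ballpark for the 2019 16" 5600M

gain = m1_max / radeon_5600m - 1
print(f"~{gain * 100:.0f}% faster")
```

Compare that to the 4x that Apple quoted relative to the original M1, and the gap people are reacting to is clear.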
 
  • Like
Reactions: ElfinHilon

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
To everyone who hopes this is the 24-core and not the 32-core version: no. The benchmark clearly states M1 Max. Also, this benchmark was likely run on a laptop given to reviewers, who obviously receive the most powerful version.
The 24c chip is called M1 Max as well.

1634803719916.png
 
  • Like
Reactions: l0stl0rd

edfoo

macrumors 6502
Oct 31, 2013
394
264
Australia
Then again Apple brought this on themselves by comparing the M1 to a 3080, and if the metal score truly is only 68k then that was probably a mistake.
Or maybe Apple is just comparing them as similar in capability for non-gaming tasks such as photo and video editing, but if you compared them playing AAA games, the M1 Max would fall way behind.
 

Zhang

macrumors newbie
Oct 20, 2021
13
10
Or maybe Apple is just comparing them as similar in capability for non-gaming tasks such as photo and video editing, but if you compared them playing AAA games, the M1 Max would fall way behind.
"Performance measured using select industry‑standard benchmarks."that's what apple said.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Or maybe Apple is just comparing them as similar in capability for non-gaming tasks such as photo and video editing, but if you compared them playing AAA games, the M1 Max would fall way behind.

Can't say I agree with this sentiment. M1 GPU is a very competent rasterizer, especially for its low power consumption. With the very high bandwidths and cache sizes Nvidia can only dream about, I expect these chips to be very good for games — of course, provided the game has a first-class macOS version (which not many games do). The initial benchmarks in GFXbench already show this much.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
Or maybe Apple is just comparing them as similar in capability for non-gaming tasks such as photo and video editing, but if you compared them playing AAA games, the M1 Max would fall way behind.

The result that has people most worried isn’t a gaming benchmark but a compute one. Compute tends to be a weakness of the GPU design Apple uses because it utilizes fewer compute shaders to achieve its results. Having said that, this does seem low given other M-GPU results and Apple’s own claimed Tflops. But maybe that’s simply how the GPU scales. We’ll find out when full reviews hit.

Graphics benchmarks can differ between professional and gaming scenarios for some GPUs but that’s not what we’re seeing here. Apple’s GPU should be better on all graphics scenarios including gaming than its compute score.
 