So we really don't know what's going on then yet? Like everything else from today? XD
The 16” MBP has a high-performance mode. I guess they forgot to enable it when doing the benchmarks.

I suppose Apple has this lower power mode to prevent the 14” from overheating.
 
Could be. I'm of the opinion that it's not an option even reviewers can use at the moment, but I don't know. It's just a guess.
 
Oh, is it? Today I learned. I thought the on-screen score was the more important one.

So does that mean that, when it comes to gaming, the M1 Max is actually better than the laptop 3080?!

EDIT: I'm guessing the thumbs up that crazy dave gave you indicates you're correct :p
That would be in line with Apple's claimed performance numbers.

That said, a Mobile RTX 3080 has much more performance than a Mobile RTX 3080. No, I didn't make a mistake there.

Nvidia's mobile GPU lineup is so fragmented it's ridiculous. When you have Max-Q and Max-P variants, and then different TDPs on top of that, it's almost impossible to compare. Apple's presentation describes a 160W 3080 which, according to Apple's undisclosed benchmark suite, narrowly pips the 32-core M1 Max. It also shows a 100W 3080, which gets trounced by the M1 Max.

If you went into a store and saw two laptops with 160W and 100W 3080s side by side, they would not be branded any differently.

The result is we will have benchmarks showing the M1 Max being beaten by 3080s, right alongside the exact same benchmark showing the M1 Max thrashing the 3080.

And that's before we get to the vast architectural differences between Nvidia's Ampere and Apple's custom GPU architecture, which mean that on specific tests the relative difference between the two can easily swing to ridiculous levels.

I think it's clear that in this case we're best off ignoring benchmarks, except for the lowest-level ones (which are useful because you know EXACTLY what they are doing) and the highest-level ones like 3DMark or other specific rendering tests (because those are clearly representative of real workloads).
 
Yeah, it depends on the wattage. More specifically, they showed the Razer Blade's 3080. I don't remember which wattage that is, but I know it doesn't perform as well as the other one they showed (the MSI one), which is ACTUALLY good.
 
So we really don't know what's going on then yet? Like everything else from today? XD
My predictions vs the RTX 3080 Mobile (high-wattage version):

Pure GPU compute - significantly underperforms the RTX 3080
Pure GPU rendering - slightly slower than the RTX 3080, but within the same ballpark
Video encode/decode and editing - will be a revelation, significantly outperforming the RTX 3080. And no, video does not just depend on the media encoding engines. Video editing relies on the GPU to render effects, transitions, colour grading and many other things. Because of the unified architecture, data can be kept in memory and manipulated by the media engine, CPU and GPU without needing multiple copies (see the sketch below).
Application performance for rendering - will be better than the RTX 3080, because most applications rely on both the CPU and GPU, and the unified architecture has advantages here. For example, Apple claims Cinema 4D Redshift is 4x faster than the 5600M, while the Geekbench compute score is less than 1.7x higher.
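
For anyone curious what "no extra copies" actually looks like, here's a rough Metal sketch (Swift) of unified memory on Apple Silicon. The buffer size and contents are made up for illustration; the point is that the CPU and GPU touch the same physical allocation:

```swift
import Metal

// On Apple Silicon, .storageModeShared means the CPU and GPU see the
// same physical memory - no staging buffer, no PCIe upload, no copy.
let device = MTLCreateSystemDefaultDevice()!

// Hypothetical working set: 1 MiB of frame/effect data.
let length = 1 << 20
let buffer = device.makeBuffer(length: length, options: .storageModeShared)!

// The CPU writes straight into the allocation...
let floats = buffer.contents().bindMemory(to: Float.self, capacity: length / 4)
for i in 0..<(length / 4) {
    floats[i] = Float(i) // e.g. decoded frame data, LUTs, grading curves
}

// ...and a compute or render pass can bind `buffer` directly, with no
// upload step. On a discrete GPU like the 3080, the same data would
// first have to be copied across PCIe into VRAM.
```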
 

Well, the RTX 3080 is a gaming GPU; it is not optimized for workstation-related tasks, and it is probably intentionally limited there by Nvidia. Otherwise nobody would buy their professional/workstation GPUs.

So I wouldn't be surprised if the M1 Max can beat the RTX 3080 in non-gaming-related stuff, despite having less raw horsepower.
 
My predictions.

Pure GPU compute - significantly underperforms the RTX 3080
Pure GPU rendering - slightly slower than the RTX 3080, but within the same ballpark
Video encode/decode and editing - will be a revelation, significantly outperforming the RTX 3080. And no, video does not just depend on the media encoding engines. Video editing relies on the GPU to render effects, transitions, colour grading and many other things. Because of the unified architecture, data can be kept in memory and manipulated by the media engine, CPU and GPU without needing multiple copies.
Application performance for rendering - will be better than the RTX 3080, because most applications rely on both the CPU and GPU, and the unified architecture has advantages here. For example, Apple claims Cinema 4D Redshift is 4x faster than the 5600M, while the Geekbench compute score is less than 1.7x higher.
Yeah, I'm betting this is somewhere around where we'll end up. Also, regarding the encoding/decoding, don't forget about the dedicated hardware in the M1 Max ;)
 
Yes, that's what I was hinting at. It's a combination of the media engine, CPU and GPU, all within a unified memory architecture. It will have advantages in many situations - especially for application performance.
 
Yeah, it depends on the wattage. More specifically, they showed the Razer Blade's 3080. I don't remember which wattage that is, but I know it doesn't perform as well as the other one they showed (the MSI one), which is ACTUALLY good.
If we think Wildlife is a reasonable benchmark, the M1 iPad Pro 12.9 gets ~17105, which works out to ~68420 if you scale linearly to the M1 Max's 32 cores. The average score for the 3080 Laptop GPU is 53417. I am not sure if anyone has a way of viewing Mac results on the web page; they hide the iOS results.

Wildlife Extreme results should be ~20164 (M1 Max) versus 23798 (3080). I don't see Unlimited results online.
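
Spelling out the "with maths" step (linear scaling from the M1's 8 GPU cores is the optimistic assumption here; the scores are the ones quoted above):

```swift
// Naive linear scaling from the 8-core M1 GPU to the 32-core M1 Max.
// Real-world scaling is usually sub-linear, so treat this as a ceiling.
let m1Wildlife = 17_105.0            // M1 iPad Pro 12.9" score quoted above
let estimatedM1Max = m1Wildlife * (32.0 / 8.0)   // ≈ 68,420

let rtx3080LaptopAverage = 53_417.0  // 3080 Laptop GPU average quoted above
print(estimatedM1Max / rtx3080LaptopAverage)     // ≈ 1.28x in the M1 Max's favour
```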
My predictions vs the RTX 3080 Mobile (high-wattage version):

Pure GPU compute - significantly underperforms the RTX 3080
Pure GPU rendering - slightly slower than the RTX 3080, but within the same ballpark
Video encode/decode and editing - will be a revelation, significantly outperforming the RTX 3080. And no, video does not just depend on the media encoding engines. Video editing relies on the GPU to render effects, transitions, colour grading and many other things. Because of the unified architecture, data can be kept in memory and manipulated by the media engine, CPU and GPU without needing multiple copies.
Application performance for rendering - will be better than the RTX 3080, because most applications rely on both the CPU and GPU, and the unified architecture has advantages here. For example, Apple claims Cinema 4D Redshift is 4x faster than the 5600M, while the Geekbench compute score is less than 1.7x higher.
Sounds reasonable.
 
Well, the RTX 3080 is a gaming GPU; it is not optimized for workstation-related tasks, and it is probably intentionally limited there by Nvidia. Otherwise nobody would buy their professional/workstation GPUs.

So I wouldn't be surprised if the M1 Max can beat the RTX 3080 in non-gaming-related stuff, despite having less raw horsepower.

Actually the opposite (unless you meant the M1 Max's dedicated hardware for things like ProRes, and Apple's NPU): the 3080 will crush the M1 Max GPU in FP32 compute. It won't be close. Even taking Apple's 4x claim at face value, that's at best ~10 TFLOPS. A full-fat desktop 3080 is roughly 3x that, though obviously the low-wattage mobile versions are less. The biggest differences between the 3080 and Nvidia's pro GPUs are FP64 and tensor ops, but the M1 Max GPU is unlikely to be good at those either; at least, I've never even seen FP64 benchmarks for the M1. Apple's TBDR design means they can get more FPS from less compute, but it also means compute is less emphasized relative to graphics performance.
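
The ~10 TFLOPS figure is easy to sanity-check. The 32 cores, 4096 execution units, and 10.4 TFLOPS total are Apple's published figures; the ~1.27 GHz clock below is just the estimate that makes them agree, since Apple doesn't publish the GPU clock:

```swift
// Rough FP32 throughput: execution units x 2 ops/cycle (FMA) x clock.
func fp32TeraFlops(cores: Int, eusPerCore: Int, clockGHz: Double) -> Double {
    Double(cores * eusPerCore) * 2.0 * clockGHz / 1000.0
}

let m1Max = fp32TeraFlops(cores: 32, eusPerCore: 128, clockGHz: 1.27) // ≈ 10.4
let desktop3080 = 29.8  // Nvidia's published FP32 figure for the desktop card
print(desktop3080 / m1Max)  // ≈ 2.9 - "a full-fat 3080 is roughly 3x that"
```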
 
I went through their press release, and none of the GPU benchmarks they showed had anywhere near 2X scaling going from the 16-core M1 Pro to the 32-core M1 Max. It ranged from 1.4X to 1.7X, despite the doubling of cores.
Those charts were showing scaling based on performance per watt, not scaling by core count.
 
It wasn't from charts. It was from their press release. They were specifically talking about performance - actual render-time speedups compared to the previous Intel MacBook Pros.

e.g. the M1 Pro was, say, 3.6X the speed of the Intel Mac, and the M1 Max was 5X the speed. That means the scaling from M1 Pro to M1 Max is 5/3.6 = 1.39X.

They had a whole bunch of application GPU tests like this, so I went through all of them. The range was 1.39X to 1.71X. Nothing was even close to 2X scaling.
 
Game over. 15 new scores have been posted, all below 70,000. I'd believe Apple would give out some 24-core versions, but not all of them. These scores are for the 32-core... unfortunately.
 
I personally wouldn't be surprised if the 32-core GPU is clocked lower. That could explain the non-linear performance increase but we will know soon.

First post, possibly last. Intriguing thread; I'm interested because I'm expecting these new chips to be in the new iMacs next.

Pressure's comment, along with the High Power Mode in Monterey, would explain some of the discrepancies between Apple's linear (4x) M1 Max marketing and the numbers from the new MacBook Pro models, if:

- The M1 Pro's GPU runs at full clock, i.e. its benchmarks scale linearly.

- The M1 Max's GPU is down-clocked about 25% for reduced power and heat, i.e. its results are (currently) less than linear (the arithmetic is sketched below).

- Monterey will give the 16-inch M1 Max model a full-clock mode plus extra cooling (probably while plugged in) that the 14-inch will not get; this could also explain why there's a small weight difference between the M1 Max and M1 Pro versions of the 16-inch model that doesn't exist for the 14-inch.

If that's right, it would work out well for new 27" iMacs with M1 Max chips running the GPU at full clock - a good bump over the existing 27" iMac, while still being cooler, quieter and more power-efficient in a new design.
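
If the down-clock guess is right, the arithmetic lines up neatly with the 1.39X-1.71X scaling quoted earlier in the thread. To be clear, the 25% figure is this post's assumption, not anything Apple has published:

```swift
// Hypothesis: M1 Max doubles the M1 Pro's GPU cores but drops the clock ~25%.
let coreRatio = 32.0 / 16.0          // 2.0x the cores
let assumedClockRatio = 0.75         // the assumed 25% down-clock
let expectedScaling = coreRatio * assumedClockRatio

print(expectedScaling)  // 1.5x - inside the observed 1.39x-1.71x range,
                        // rather than the naive 2.0x from core count alone
```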
 
Could well be correct.
 
Game over. 15 new scores have been posted, all below 70,000. I'd believe Apple would give out some 24-core versions, but not all of them. These scores are for the 32-core... unfortunately.
Many more results now, all around the same scores. There's no way Apple handed out 24-core versions to ALL reviewers. That means I was right, and I was attacked by you and others for stating my logical argument and conclusion.
 
Game over. 15 new scores have been posted, all below 70,000. I'd believe Apple would give out some 24-core versions, but not all of them. These scores are for the 32-core... unfortunately.

Not yet. The 16” also has a high-performance mode, which they probably did not use.

A reference to it was discovered in a beta, and Apple has confirmed the 16” MBP gets this capability.

So the scores will go up. It will probably end up around 4x the M1 - so still not an RTX 3070 or RTX 3080.
 
Many more results now, all around the same scores. There's no way Apple handed out 24-core versions to ALL reviewers. That means I was right, and I was attacked by you and others for stating my logical argument and conclusion.
Very likely it's a 32-core machine, but not 100% confirmed. However, given the number of results, I'm confused that they are all so similar. If "High Performance Mode" turns a 1GHz 32-core GPU into a 1.25GHz GPU, that should produce a clearly noticeable difference. I'm looking at what appears to be an 8% spread, whereas I would expect more like a 20-30% spread - assuming that, if the mode were available, SOMEONE would have tried running in Normal and SOMEONE ELSE would have tried running in High Performance.

So I'm wondering if everything is still on Normal, not High Performance, which would be a perfectly reasonable explanation for the slightly lower-than-expected results that are leaking out. High Performance might not be in macOS 12.0.1 yet.
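
Putting rough numbers on that expectation (the 1GHz and 1.25GHz clocks are the hypothetical values from above, not published specs):

```swift
// If some reviewers ran a hypothetical 1.0 GHz "Normal" mode and others a
// 1.25 GHz "High Performance" mode, scores should split into two clusters.
let normalClock = 1.0
let highPerformanceClock = 1.25
let expectedSpread = highPerformanceClock / normalClock - 1.0  // 0.25, i.e. 25%

let observedSpread = 0.08  // the ~8% spread in the leaked results
print(expectedSpread > observedSpread)  // true - consistent with everyone
                                        // having run in the same mode
```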
 
Not yet. The 16” also has a high-performance mode, which they probably did not use.

A reference to it was discovered in a beta, and Apple has confirmed the 16” MBP gets this capability.

So the scores will go up. It will probably end up around 4x the M1 - so still not an RTX 3070 or RTX 3080.
... So still not an RTX 3070 or RTX 3080 IN GEEKBENCH...

It seems to do just fine in GFXBench against the mobile 3080.
 
Here's the issue, though. Look at the on-screen FPS, not off-screen. You'll never play a game rendered off-screen like that.

Looking at that, it's closer to a 3060, which is oof.

That link compares it to the desktop 3080, which is a 250+ watt GPU on its own. It's entirely silly to expect a 2kg laptop to outperform a desktop that uses 10 times more power. I mean, Apple's tech is much more power-efficient, but it's not made of pixie dust. Here is the M1 Max vs. the mobile 3080:

 
Here's the issue, though. Look at the on-screen FPS, not off-screen. You'll never play a game rendered off-screen like that.

Looking at that, it's closer to a 3060, which is oof.
That RTX 3080 is the desktop-class GPU, so it's pretty incredible.
nvm, leman already posted
 
GB is more garbage than I thought. You should only compare GB scores from the exact same minor version - so 5.4.1 with 5.4.1, not with 5.3.1 or even 5.4.0.
A test is only as good as the person running it. Who knows what other tasks were running?
 