The comparison uses OGL, which is the slowest of the available APIs. DX performance is better, and for some stupid reason the DX12 and Vulkan results are missing.
So we really don't know what's going on then yet? Like everything else from today? XD
The 16” MBP has a high-performance mode. I guess they forgot to enable it when doing the benchmarks.
Could be. I'm of the opinion that it's not an option that even reviewers can use atm, but I don't know. It's just a guess.
I suppose Apple has this lower power mode to prevent the 14” from overheating.
That would be in line with Apple's claimed performance numbers.
Oh is it? Today I learned. I thought the on screen was more important.
So then does that mean when it comes to gaming, the M1 Max is actually better than the 3080 laptop????
EDIT: I'm guessing the thumbs up that crazy dave gave you indicates you're correct
Yeah, depends on the wattage. More specifically, they showed the Razer Blade's 3080. I don't remember which wattage that is, but I know that one doesn't perform as well as the other one they showed (MSI one) which is ACTUALLY good.
That said, a Mobile RTX 3080 has much more performance than a Mobile RTX 3080. No, I didn't make a mistake there.
Nvidia's Mobile GPU lineup is so fragmented it's ridiculous. When you have Max-P and Max-Q variants, and then different TDPs on top of that, it's almost impossible to compare. In Apple's presentation, they describe a 160W 3080, which, according to Apple's unknown benchmark suite, narrowly pips the 32-core M1 Max. It also shows a 100W 3080, which gets trounced by the M1 Max.
If you went into a store and saw two laptops with 160W and 100W 3080s side by side, they would not be branded any differently.
The result is we will have benchmarks showing the M1 Max being beaten by 3080s, right alongside the exact same benchmark showing the M1 Max thrashing the 3080.
And that's even before we get to the vast architectural differences between NVIDIA's Ampere and Apple's custom GPU architecture, which mean that for specific tests the relative difference between the two can easily swing to ridiculous levels.
I think it's clear that in this case we might be best off ignoring benchmarks, except for the lowest-level ones (which are useful because you know EXACTLY what they are doing) and the highest-level ones like 3DMark or other specific rendering tests (because they are clearly representative of real workloads).
My predictions vs the RTX 3080 Mobile (high-wattage version):
Pure GPU compute - significantly underperforms the RTX 3080.
Pure GPU rendering - slightly slower than the RTX 3080, but within the same ballpark.
Video encode/decode and editing - will be a revelation, significantly outperforming the RTX 3080. And no, video does not just depend on the media encoding engines: video editing relies on the GPU to render effects, transitions, colour grading and many other things. Because of the unified architecture, data can be kept in memory and manipulated by the media engine, CPU and GPU without needing multiple copies (see the sketch after this list).
Application performance for rendering - will be better than the RTX 3080, because most applications rely on both CPU and GPU, and the unified architecture has advantages here. For example, Apple claims Cinema 4D/Redshift is 4x faster than the 5600M, while Geekbench Compute is less than 1.7x faster.
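A minimal Metal sketch of that unified-memory point (an illustration under my own assumptions, not Apple's benchmark code): with .storageModeShared the CPU fills a buffer, a compute kernel modifies it in place, and the CPU reads the result back out of the same allocation, with no staging copy in between.

import Metal

// One allocation visible to both CPU and GPU - the "no extra copies" argument above.
let device = MTLCreateSystemDefaultDevice()!
let queue  = device.makeCommandQueue()!

let count  = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }            // CPU-side write, no upload step

// Trivial kernel, compiled from source just to keep the sketch self-contained.
let src = """
#include <metal_stdlib>
using namespace metal;
kernel void double_it(device float *data [[buffer(0)]],
                      uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0f;
}
"""
let library  = try! device.makeLibrary(source: src, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "double_it")!)

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(buffer, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

print(ptr[10])   // CPU reads the GPU result straight out of the shared buffer: 20.0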
Yeah, I'm betting this is roughly where we'll end up. Also, regarding the encoding/decoding, don't forget about the dedicated hardware in the M1 Max.
Yes, that's what I was hinting at. It's a combination of the media engine, CPU and GPU, all within a unified memory architecture. It will have advantages in many situations - especially for application performance.
If we think Wild Life is reasonable, the M1 iPad Pro 12.9” gets ~17,105, which works out to ~68,420 if you scale linearly from 8 GPU cores to 32. The average score for the 3080 Laptop GPU is 53,417. I am not sure if anyone has a way of viewing Mac results on the web page; they hide the iOS results.
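The same extrapolation as a tiny Swift snippet, for what it's worth (the linear-scaling-by-core-count assumption is mine; real Wild Life results may not scale that cleanly):

// Linear extrapolation from the 8-core M1 GPU to the 32-core M1 Max GPU.
let m1WildLife = 17_105.0                       // score quoted above for the M1 iPad
let m1MaxEstimate = m1WildLife * (32.0 / 8.0)   // assume perfect scaling with core count
let rtx3080LaptopAverage = 53_417.0             // average quoted above
print(m1MaxEstimate, rtx3080LaptopAverage)      // ~68420 vs ~53417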
Sounds reasonable.
Well, the RTX 3080 is a gaming GPU; it is not optimized for workstation-related tasks and is probably intentionally limited there by NVIDIA. Otherwise nobody would buy their professional/workstation GPUs.
So I wouldn’t be surprised if the M1 Max can beat the RTX 3080 in non-gaming related stuff, despite having less raw horsepower.
Ha. That's so true.
I went through their press release, and none of the GPU benchmarks they showed had even close to 2X scaling going from the M1 Pro 16-core to the M1 Max 32-core. It ranged from 1.4X to 1.7X scaling, despite the doubling of cores.
Those charts were showing scaling based on performance versus wattage, not scaling by cores.
It wasn't from charts. It was from their press release. They were specifically talking about performance - actual render time speedups compared to the previous Intel MacBook Pros.
I personally wouldn't be surprised if the 32-core GPU is clocked lower. That could explain the non-linear performance increase but we will know soon.
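A quick back-of-envelope version of that hypothesis in Swift (the 25% downclock is purely an assumed number, not a confirmed spec):

// If the 32-core part ran each core ~25% slower, doubling the core count
// over the M1 Pro would land right in the 1.4x-1.7x range quoted above.
let coreRatio  = 32.0 / 16.0   // M1 Max GPU cores vs M1 Pro GPU cores
let clockRatio = 0.75          // hypothetical 25% lower clock
let expectedScaling = coreRatio * clockRatio
print(expectedScaling)         // 1.5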
Could well be correct.
First post, possibly last. Intriguing thread; I'm interested because I'm expecting these new chips will be in the new iMacs next.
Pressure’s comment, along with the High Power Mode in Monterey, would explain some of the discrepancies between Apple’s M1 Max linear (4x) marketing and the numbers from the new MacBook Pro models, if:
- The M1 Pro’s GFX run at full clock, i.e. the benchmarks scale in a linear way.
- The M1 Max’s GFX are down-clocked about 25% for reduced power and heat, i.e. the results are (currently) less than linear.
- Monterey will provide the 16 inch M1 Max model with a full-clock mode + extra cooling (probably while plugged in) that the 14 inch will not get, which possibly explains why there's a small weight difference between the M1 Max and M1 Pro on the 16 inch model that's not there on the 14 inch model.
If right, this would work out well for the new 27” iMacs with the M1 Max chips running full clock GFX, a good bump over the existing 27” iMac, while still being cooler, quieter and more power efficient in a new design.
Many more results now. All around the same scores. No way Apple handed out 24c versions to ALL reviewers. Means I was right and was attacked by you and others for stating my logical argument and conclusion.
Game over. 15 new scores have been posted. All below 70,000. I’d believe apple would give out some 24 core versions, but not all. These scores are for the 32…unfortunately.
Very likely that it's a 32-core machine, but not 100% confirmed. However, based on the number of results, I'm confused that they are all so similar. If "High Performance Mode" turns a 1GHz 32-core GPU into a 1.25GHz GPU, that should result in a clearly noticeable difference. I'm looking at what appears to be an 8% spread, whereas I would expect more like a 20-30% spread, assuming that if the mode is available, SOMEONE would have tried running in normal and SOMEONE ELSE would have tried running in High Performance.
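Rough numbers for that expectation (the clocks are the hypothetical 1GHz / 1.25GHz figures from the post above, not real specs):

// If some reviewers ran in normal mode and others in High Performance Mode,
// the spread between results should be far wider than what has been posted.
let normalClock = 1.00          // GHz, hypothetical
let highPowerClock = 1.25       // GHz, hypothetical
let expectedSpread = (highPowerClock / normalClock - 1.0) * 100.0
let observedSpread = 8.0        // % spread in the posted results
print("expected ~\(expectedSpread)% vs observed ~\(observedSpread)%")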
... So not a RTX 3070 or RTX 3080 still IN GEEKBENCH...
Not yet. The 16” also has a high performance mode which they probably did not use.
They have discovered a reference to this in a beta and Apple has confirmed the 16” MBP gets this capability.
So the scores will go up. It will probably be 4x M1, so not a RTX 3070 or RTX 3080 still.
Here's the issue though. Look at the on-screen FPS, not off-screen. You'll never play a game with stuff off the screen in that manner.
Looking at that, it's closer to a 3060, which is oof.
That RTX 3080 is the desktop-class GPU, so it's pretty incredible.
A test is only as good as the person performing it. Who knows what other tasks were running?
GB is more garbage than I thought. You should only compare GB scores from the exact same minor version, so 5.4.1 with 5.4.1, not 5.3.1 or even 5.4.0.
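To make the "compare only identical versions" point concrete, here's a small Swift sketch that groups scores by exact version string before averaging; the scores in it are made up for illustration:

struct Result { let version: String; let computeScore: Int }

// Hypothetical compute scores - group by exact Geekbench version
// instead of pooling 5.3.x and 5.4.x results together.
let results = [
    Result(version: "5.4.1", computeScore: 60_100),
    Result(version: "5.4.1", computeScore: 61_300),
    Result(version: "5.3.1", computeScore: 57_800),
]

for (version, group) in Dictionary(grouping: results, by: { $0.version }) {
    let average = group.map(\.computeScore).reduce(0, +) / group.count
    print("Geekbench \(version): average \(average) over \(group.count) result(s)")
}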
Yes, for both encoding and decoding ProRes - I think it's the first of its kind.
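If you want to check for that hardware support yourself, VideoToolbox can report it; a minimal Swift example (ProRes 422 decode only, as an illustration):

import VideoToolbox

// Ask the OS whether hardware-accelerated ProRes 422 decode is available,
// i.e. whether a dedicated media engine (or equivalent) will be used.
let hwProResDecode = VTIsHardwareDecodeSupported(kCMVideoCodecType_AppleProRes422)
print("Hardware ProRes 422 decode supported:", hwProResDecode)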