Can we all just tweet known reviewers and ask them to run some Geekbench tests?! There's an annoying lack of runs there.
> It's not just less than double, it's SIGNIFICANTLY less than double.

I get that you're saying the difference between 16 and 32 cores is smaller than double, but that's due to a massive jump for the 16-core one. That isn't reflected in Geekbench, which makes me wary of comparing different benchmarks.
> The 5600M score is irrelevant because it's on a different architecture. We just use it as a measuring stick to measure the relative difference between the 16- and 32-core M1 GPUs.

In that case, though, wouldn't we see a 2.5x increase in Geekbench? Instead the 16-core is more or less even with the AMD GPU. I don't think we can draw conclusions.
> I thought there were other tests where the difference was more pronounced between the two. Given we can't compare two benchmarks, I'm very reluctant to compare something like Redshift with Geekbench.

The 5600M score is irrelevant because it's on a different architecture. We just use it as a measuring stick to measure the relative difference between the 16- and 32-core M1 GPUs.

Geekbench is doing incredibly simple tasks over and over again, while Redshift is going to (probably) be rendering a multi-gigabyte scene. This probably explains why the M1, with its faster memory and larger cache, is going to crush the 5600M in real-world tasks.
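A rough way to see the working-set effect being described here: stream over arrays of increasing size and watch effective bandwidth fall once the data no longer fits in cache. A minimal sketch in Python/NumPy; the sizes are arbitrary, the numbers will vary wildly by machine, and this measures CPU cache behaviour rather than either GPU, but the principle (working set vs. cache size) is the same one being invoked:

```python
import time
import numpy as np

def effective_bandwidth_gbs(n_floats, repeats=20):
    """Time repeated streaming reads (sums) over an array of float32s
    and return the effective bandwidth in GB/s."""
    a = np.ones(n_floats, dtype=np.float32)
    a.sum()  # warm-up so allocation/first-touch doesn't pollute the timing
    t0 = time.perf_counter()
    for _ in range(repeats):
        a.sum()
    dt = time.perf_counter() - t0
    return n_floats * 4 * repeats / dt / 1e9

# Small working sets stay cache-resident; large ones spill to DRAM,
# and the effective bandwidth drops accordingly.
for mb in (1, 16, 256, 1024):
    n = mb * 1024 * 1024 // 4
    print(f"{mb:>5} MB working set: ~{effective_bandwidth_gbs(n):.1f} GB/s")
```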
> The 5600M score is irrelevant because it's on a different architecture. […] This probably explains why the M1 with its faster memory and larger cache is going to crush the 5600M in real-world tasks.

So basically scenario three I laid out earlier is being realized? See this comment I made earlier for what I mean.
> But this doesn't explain why the M1 Max is only 1.5 times better than the M1 Pro at this test.

As I wrote before, Apple GPUs don't do too hot on Geekbench compute.
I noticed that on the Apple MacBook site, if you go down to "GPU performance", you can actually toggle between different tasks, and they have Redshift there, which is basically a 100% pure test of GPU horsepower.
They claim the 16-core is 2.5x faster than the old machine, and the 32-core is 4x faster. That basically confirms these Geekbench scores for me: 4 ÷ 2.5 = 1.6, the same ~1.6x gap between the two GPUs that Geekbench shows.
> 4x the previous amd gpu. Not the M1.

4x faster than M1 (37.21 min / 4 = 9.3 minutes) would still put it off the chart relative to their dGPU performance comparison claims.
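For what it's worth, here is that arithmetic spelled out; a trivial sketch using only the 37.21-minute M1 figure quoted above:

```python
# The M1's Redshift render time cited in the post above.
m1_minutes = 37.21

# If Apple's "4x faster" really were measured against the M1:
print(f"{m1_minutes / 4:.1f} minutes")  # 9.3 -> off the chart vs. the dGPU claims
```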
> I thought there were other tests where the difference was more pronounced between the two. […]

If you look at the breakdown of tests in Geekbench you can see the incredible variance.

On the SFFT test the 5600M actually beats out the M1 Max, 593 Gflops to 585, and they're fairly close on other tests like Gaussian Blur, Stereo Matching, and Histogram Equalisation.

But the M1 Max absolutely stomps it in Particle Physics, by over 3x, and there are a bunch of other tests where the M1 Max is over twice as fast.

I still think there must be a reason why Apple compared the M1 Max to a 3080, and we're going to see that the Geekbench scores don't reflect real-world performance.

> If you look at the breakdown of tests in Geekbench you can see the incredible variance. […]

This is why I'm hopeful. If the first scenario I described isn't real, I'm leaning towards the third one. I just can't believe that Apple would compare it to a 3080 and say "Yep, basically beats or matches a lower-powered 3080!" and then have it come out that it doesn't even come remotely close to it lmao.
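One way to make the variance point above concrete: a composite score can hide exactly this kind of spread. A minimal sketch of how per-test ratios average out; only the SFFT figures (585 vs 593 Gflops) and the "over 3x" Particle Physics ratio come from this thread, the other ratios are hypothetical placeholders, and Geekbench's real weighting may differ:

```python
from math import prod

# Per-test M1 Max / 5600M performance ratios.
ratios = {
    "SFFT": 585 / 593,               # 5600M slightly ahead (from the thread)
    "Particle Physics": 3.1,         # "over 3x" (from the thread)
    "Gaussian Blur": 1.1,            # hypothetical "fairly close" result
    "Stereo Matching": 1.1,          # hypothetical
    "Histogram Equalisation": 1.05,  # hypothetical
}

# Composite benchmark scores are typically built from a geometric mean,
# which compresses one huge win and several near-ties into a modest ratio.
geo_mean = prod(ratios.values()) ** (1 / len(ratios))
print(f"overall ratio: {geo_mean:.2f}x")  # ~1.3x despite the 3x outlier
```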
> 4x faster than M1 (37.21 mins / 4 = 9.3 minutes) would still put it off the chart relative to their dGPU performance comparison claims.

From the page you posted:
https://www.cgdirector.com/redshift-benchmark-results/
"Redshift Metal isn't as mature as Redshift CUDA yet, and the benchmark runs were done on eGPUs and/or beta macOS versions. Take these scores with a grain of salt. They'll stabilize and improve over time."
> From the page you posted: […]

True, but without dedicated raytracing hardware the M1 Max is never going to get anywhere near the NVIDIA 20/30-series cards at raytracing (unless the scene is larger than the NVIDIA card's VRAM limit).
> The 5600M score is irrelevant because it's on a different architecture. […] This probably explains why the M1 with its faster memory and larger cache is going to crush the 5600M in real-world tasks.

Well put. I'm personally hoping these are good at gaming (gaming on a Mac, I know), video editing, and picture editing. Those are my main use cases for this machine, which is why I'm particularly concerned with the GPU.

A task that disrupts GPU rendering workflows is initially transferring the 3D data from memory to the GPU. Often this will choke or significantly stall commencement of the render: geometry, lights, textures, volumes, everything has to be processed by the renderer before it can start creating an image. This is a reason why GPU rendering is still uncommon in feature-film production pipelines, where CPU renderers such as Arnold are still favoured.
I'm optimistic that this bottleneck will be significantly eased by the Pro and Max's unified memory, and this is where the Mac will have an advantage over traditional CPU/GPU rendering configs. As a 3D artist you are constantly optimising and thinking ahead, trying to design a clean path so the renderer doesn't choke or crash (hello, Octane users), and it appears that Apple's SoC will go a long way toward making this part of the process more interactive, with fewer hurdles.

So when we're looking at GPU performance, it's important to consider the day-to-day life of an artist using these machines, and the ways workflows will benefit from the new architecture: it's not just about Tflops and benchmarks.
Of course I hope they are fast as all hell and destroy everything we're used to, but there are other benefits to be had as well.
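A back-of-envelope sketch of why unified memory could ease that hand-off. The figures are assumptions for illustration (~16 GB/s for a PCIe 3.0 x16 link; 400 GB/s is Apple's quoted M1 Max memory bandwidth), and real scene loading involves far more than a raw copy:

```python
# Rough time to get a scene's worth of data in front of the GPU.
scene_gb = 20          # hypothetical multi-gigabyte production scene
pcie3_x16_gbs = 16     # assumed discrete-GPU transfer link
unified_gbs = 400      # M1 Max's quoted memory bandwidth

print(f"copy over PCIe: ~{scene_gb / pcie3_x16_gbs:.2f} s")  # ~1.25 s
print(f"unified memory: ~{scene_gb / unified_gbs:.2f} s")    # ~0.05 s
```

In practice the renderer still has to parse the scene and build acceleration structures, so the copy is rarely the whole stall, but skipping it entirely (and not being capped by VRAM size) is the advantage being described.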
> From the page you posted:
> "Redshift Metal isn't as mature as Redshift CUDA yet, and the benchmark runs were done on eGPUs and/or beta macOS versions. Take these scores with a grain of salt. They'll stabilize and improve over time."

Metal is an API that is designed to sit as close to the "metal" as possible, and as such it creates everything in the perfect and optimal format to run at high speed on Apple hardware. CUDA works really well on NVIDIA cards because it's essentially NVIDIA's version of Metal, designed specifically for their hardware.

That's a new one. I thought someone said Metal is already perfect as a reason we don't need other APIs.
Yeah, I know that, but that would still give us larger numbers, since the previous GPU was better than the M1, right?
EDIT: Ok yeah, per Apple's site:

"16-inch MacBook Pro with Radeon Pro 5600M and 8GB HBM2"

And the 5600M score:

AMD Radeon Pro 5600M: 42510

So now I'm even MORE confused by what's going on lmao.
Well yes, the scaling should be 2x, in other words, right? That's what I'm confused by, and why I'm thinking this is the 24-core. I'm really eager to see what the actual reviews will be, though.

> Well yes, the scaling should be 2x, in other words, right?

It's really the 1.6x scaling between the 16- and 32-core parts that we should focus on, as that is the scaling that appears to be broken in GB; but the Redshift results are comparable. So the GB scaling may not be broken, and the true scaling between Pro and Max may be about 1.6x, not 2x, at least for compute-heavy tasks. *** However *** there could still be other reasons for this, and we should still wait for reviews. But that's why @jmho brought it up.
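The scaling argument in one place; a sketch in which the two Geekbench scores are hypothetical placeholders chosen only to reproduce the ~1.6x gap under discussion (the core counts are real):

```python
# Ideal vs. observed GPU scaling between M1 Pro (16-core) and M1 Max (32-core).
pro_cores, max_cores = 16, 32
ideal_scaling = max_cores / pro_cores      # 2.0x if performance tracked core count

# Hypothetical Geekbench compute scores illustrating a 1.6x gap.
pro_score, max_score = 38_000, 60_800
observed_scaling = max_score / pro_score   # 1.6x

print(f"ideal {ideal_scaling:.1f}x vs. observed {observed_scaling:.1f}x "
      f"({observed_scaling / ideal_scaling:.0%} of ideal)")
```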
> 4x the previous amd gpu. Not the M1.

I don't know the score of the 5600M in Redshift, but given the 5500M results, it should complete the test in about 21 minutes. 4x faster than that gives 5.3 minutes, about the same as an RTX 3060 with RTX on.

> 4x the previous amd gpu. Not the M1.

"The GPU in M1 Pro is up to 2x faster than M1, while M1 Max is up to an astonishing 4x faster than M1, allowing pro users to fly through the most demanding graphics workflows."
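And the 5600M extrapolation from a couple of posts up, spelled out; note the ~21-minute figure is itself an estimate from 5500M results, not a measured score:

```python
# Estimated 5600M Redshift time, extrapolated from the 5500M results above.
est_5600m_minutes = 21

# If the M1 Max really were 4x faster than the 5600M:
print(est_5600m_minutes / 4)  # 5.25 -> the "about 5.3 minutes" above
```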
> I'm confused. Yes, theoretically we should be seeing scaling of 2x given Apple's claims and the stated core count, if frequency were the same. However, both Redshift and GB show actual scaling to be closer to 1.6x, not 2x. And the Redshift results are from Apple's own website about the 32-core Max, not a 3rd-party reviewer.

Oh, because of the 2.5x to 4x. Got it. So then yeah, that arguably solves that. This is almost certainly the 32-core then. That's rather disappointing. Seems like Scenario 3 is going to happen.