
Zhang

macrumors newbie
Oct 20, 2021
13
10
Can't say I agree with this sentiment. M1 GPU is a very competent rasterizer, especially for its low power consumption. With the very high bandwidths and cache sizes Nvidia can only dream about, I expect these chips to be very good for games — of course, provided the game has a first-class macOS version (which not many games do). The initial benchmarks in GFXbench already show this much.
The 3080 Max-Q results in GFXBench only cover DX11 and OpenCL; they should test DX12 and Vulkan too, so it can show its real performance.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
Or maybe Apple is just comparing them as similar in capability for non-gaming tasks such as photo and video editing, but if you compare them playing AAA games, the M1 Max would fall way behind.
That's why the comparison is so difficult. Photo editing barely needs the GPU, and video editing will be using the hardware encoders/decoders.

Rendering with stuff like Octane is going to require pure GPU horsepower, which is probably not the M1's strong point.

As leman says, the M1's strong point is actually going to be gaming.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Right, but it is still very likely Apple would provide the maxed out 32c version to reviewers
OK, but you're wrong that this is definitely not the 24c version just because it's labeled "M1 Max". We don't know for sure.

Saying that Apple would only provide the 32c version to reviewers is completely different.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
I don’t know if things will be different this time or not, but typically Apple doesn’t allow reviewers to publish results from these types of benchmarks.

But no matter. Delivery date for some orders is Tuesday.
Remember, it's not just the reviewers that got the MBP from Apple...our devices are also coming Monday night.
Such a strange embargo :)) I mean, an embargo even for the HomePod minis...just for the new colours?! :))))
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
To everyone that hopes this is the 24c and not the 32c version: no. The benchmark clearly states M1 Max. Also this benchmark is likely run on a laptop given to reviewers, who obviously receive the most powerful version.
The M1 Max comes with either 24 or 32 GPU cores, but I agree with your point that reviewers would probably receive the top option.
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
The metal score is fairly disappointing. I think however that the architectures of the M1 and a 3080 are so different that synthetic benchmarks are not incredibly useful. At the same time non-synthetic benchmarks are far more likely to be better optimised for NVIDIA cards / Windows, so...

It feels like trying to say whether or not a Tesla is as fast as a Porsche 911. In some respects it is, but in other respects it definitely isn't.

Then again Apple brought this on themselves by comparing the M1 to a 3080, and if the metal score truly is only 68k then that was probably a mistake.

They are not useful at all. Simply put, Nvidia doesn't support Metal, and the M1 GPU doesn't support CUDA or DX12. The only middle ground is OpenCL, but neither of them excels in OpenCL the way AMD does, so once again it's useless. On the other hand, comparing an AMD GPU to the M1 GPU is not fair either, since we don't know what's going on behind the scenes. Who can tell us whether the Metal drivers for AMD GPUs are gimped on purpose, or just half-arsed, so the M1 GPU looks like a much better option? Apple certainly won't.

The only valid cross-platform comparison in GPU rendering arena will be Octane Render Benchmark once they properly update it. And then there is a question between Linux and Windows since Windows reserves VRAM like it's the only thing it needs.

So get ready to do it yourself https://render.otoy.com/octanebench/
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Rendering with stuff like Octane is going to require pure GPU horsepower, which is probably not the M1's strong point.

... and bandwidth and cache, which the M1 has more than plenty of :)

I think that everyone is really underestimating the cache. M1 Max has 64MB of cache + low latency memory access. That's 10 times more than Nvidia flagships. So for workloads that involve complex memory access patterns (like rendering) it's likely to perform very well indeed.
 

locust76

macrumors 6502a
Jan 23, 2009
689
90
Imagine using metrics from the '90s to compare GPUs.

Every GPU can push massive amounts of pixels around, that's not the issue. It's how they process those pixels. How's the M1's shader performance? How's the M1's real time ray tracing performance?

Which is faster? A Honda Civic with ~ 170 Horsepower or the Titanic with ~ 50,000 Horsepower? Hint: One of those two has a top speed of 120+ MPH and gets from 0-60 in 6.8 seconds.

There's a lot more to GPU performance than theoretical TFLOPS and fill rates.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
They are not useful at all. Simply put, Nvidia doesn't support Metal, and the M1 GPU doesn't support CUDA or DX12. The only middle ground is OpenCL, but neither of them excels in OpenCL the way AMD does, so once again it's useless. On the other hand, comparing an AMD GPU to the M1 GPU is not fair either, since we don't know what's going on behind the scenes. Who can tell us whether the Metal drivers for AMD GPUs are gimped on purpose, or just half-arsed, so the M1 GPU looks like a much better option? Apple certainly won't.

I think one should use the best available score among the various APIs available for each platform, assuming the computed results are the same. Isn't that what counts in the end, the ability to do the work as quickly as possible using the best path available? You can (and should) compare CUDA to Metal if the kernels perform the same work.

The only valid cross-platform comparison in GPU rendering arena will be Octane Render Benchmark once they properly update it. And then there is a question between Linux and Windows since Windows reserves VRAM like it's the only thing it needs.

Octane also uses different APIs on different platforms...

Isn't there also Redshift? I think those folks were featured in Apple's keynote along with Octane...
 

vel0city

macrumors 6502
Dec 23, 2017
347
510
I think one should use the best available score among the various APIs available for each platform, assuming the computed results are the same. Isn't that what counts in the end, the ability to do the work as quickly as possible using the best path available? You can (and should) compare CUDA to Metal if the kernels perform the same work.



Octane also uses different APIs on different platforms...

Isn't there also Redshift? I think those folks were featured in Apple's keynote along with Octane...

Yes, Redshift is a GPU renderer running natively on Apple Silicon. I use it with Cinema 4D. It's very reliable and robust and the preferred GPU renderer over Octane for its stability.

According to a Redshift dev on their official forum with regard to the M1 Max "The GPUs in those machines have access to 40GB+ out of that 64GB."

I ordered a 64GB Max specifically for working with C4D+Redshift and will report back with results as soon as it arrives.
 

Zhang

macrumors newbie
Oct 20, 2021
13
10
Yes, Redshift is a GPU renderer running natively on Apple Silicon. I use it with Cinema 4D. It's very reliable and robust and the preferred GPU renderer over Octane for its stability.

According to a Redshift dev on their official forum with regard to the M1 Max "The GPUs in those machines have access to 40GB+ out of that 64GB."

I ordered a 64GB Max specifically for working with C4D+Redshift and will report back with results as soon as it arrives.
Do you have a link for that?
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344

edit: it went down...but it was a Geekbench score of 72,220 on the first/second run, and the last one was around 61k
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
Says "video unavailable" to me. What's that?
edit: it went down...but it was a Geekbench score of 72,220 on the first/second run, and the last one was around 61k
...I'm waiting for the video to come up again...the description said 14" M1 Max, 64GB RAM, 1TB SSD
Maybe it's fake?!
 

jujoje

macrumors regular
May 17, 2009
247
288
Yes, Redshift is a GPU renderer running natively on Apple Silicon. I use it with Cinema 4D. It's very reliable and robust and the preferred GPU renderer over Octane for its stability.

According to a Redshift dev on their official forum with regard to the M1 Max "The GPUs in those machines have access to 40GB+ out of that 64GB."

I ordered a 64GB Max specifically for working with C4D+Redshift and will report back with results as soon as it arrives.

Really curious as to how well Redshift and Octane handle reasonably sized data sets - got to put that memory to good use :) Something like Disney's cloud data set or the Animal Logic Lab test scene. The latter would be interesting given that it uses 14GB of textures, so it should prove a reasonable workout for the card.

If you get Redshift working, I'd be curious to know how the new GPUs compare in terms of performance; will definitely be keeping an eye on that thread in the Maxon forums :)

The only valid cross-platform comparison in GPU rendering arena will be Octane Render Benchmark once they properly update it. And then there is a question between Linux and Windows since Windows reserves VRAM like it's the only thing it needs.

So get ready to do it yourself https://render.otoy.com/octanebench/

Unless I'm missing something, that doesn't work on Metal; I downloaded it and it gave me a "no compatible devices found" message on my MBA.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
edit: it went down...but it was a Geekbench score of 72,220 on the first/second run, and the last one was around 61k
...I'm waiting for the video to come up again...the description said 14" M1 Max, 64GB RAM, 1TB SSD
Maybe it's fake?!

If it's a 14", the GPU likely can't reach its full potential.
 

jujoje

macrumors regular
May 17, 2009
247
288
I think it's only Octane X that supports M1, while OctaneBench is built on OctaneRender which only works on Macs with AMD cards.

Ah, cheers for that; I find Otoy's branding super confusing. No Octane benchmark for now, I guess :)
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,298
This benchmark is likely run on a laptop given to reviewers, who obviously receive the most powerful version.

Correct. Reviewers are getting the highest configuration M1 Max with 32 iGPU cores, 10 CPU cores and max 64GB RAM. Standard industry practice for decades.

https://browser.geekbench.com/v5/compute/3551790

 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
edit: it went down...but it was a Geekbench score of 72,220 on the first/second run, and the last one was around 61k
...I'm waiting for the video to come up again...the description said 14" M1 Max, 64GB RAM, 1TB SSD
Maybe it's fake?!
...or a reviewer who realized they were in breach of their NDA and got scared, so removed it.
 

EugW

macrumors G5
Jun 18, 2017
14,907
12,880
--- dead YouTube link ---

edit: it went down...but it was a Geekbench score of 72,220 on the first/second run, and the last one was around 61k
What score? Geekbench 5 Metal?

If so, then 72220 means about 3.3X to 3.4X scaling vs M1, which would be consistent with a 32-core GPU M1 Max.
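The arithmetic behind that estimate can be sketched quickly. Note the M1 baseline used below is an assumption: public Geekbench 5 Metal listings for the base 8-core-GPU M1 typically land around 21,000-21,500.

```python
# Rough scaling check: how does a 72,220 Metal score compare to the base M1?
m1_metal = 21500          # assumed M1 (8-core GPU) baseline score
m1_max_metal = 72220      # score reported in the thread

scaling = m1_max_metal / m1_metal
core_ratio = 32 / 8       # M1 Max (32-core) vs M1 (8-core) GPU cores

print(f"observed scaling: {scaling:.2f}x")                 # ~3.36x
print(f"ideal core-count scaling: {core_ratio:.0f}x")      # 4x
print(f"scaling efficiency: {scaling / core_ratio:.0%}")   # ~84%
```

Anything in the 3.3x-3.4x range is hard to square with a 24-core part, which would top out near 3x even with perfect scaling.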
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
I think those benchmarks correspond to the 32-core version, for three reasons:
  • On the graphs presented during the keynote, 16-core peak power consumption estimation is around 30W while being around 60W for 32-core version. Considering the scaling is not linear (e.g. twice the consumption usually gives you less than twice the performance), we cannot expect 2x 16-core performance (or 4x M1 performance).
  • I have a hard time imagining Apple sending 24-core units to reviewers.
  • If you look at how GPU performance is advertised on Apple's MacBook Pro page (the part where performance is compared against the 2019 16-inch with the 5600M across several applications), the most favorable comparison of the 32-core against the 16-core is the Final Cut Pro one, which puts the 32-core at ~70% faster than the 16-core (2.9x vs 1.7x).
If they do, then I suspect something is wrong with Geekbench, as we should be seeing much higher scores than that.
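The ~70% figure in the third point checks out arithmetically; the 2.9x and 1.7x multipliers below are the Final Cut Pro numbers quoted above, both measured against the same 2019 16-inch 5600M baseline:

```python
# Sanity-check the "~70% faster" figure from Apple's marketing page.
speedup_32c = 2.9   # 32-core GPU vs 2019 baseline (Final Cut Pro figure)
speedup_16c = 1.7   # 16-core GPU vs the same baseline

relative = speedup_32c / speedup_16c
print(f"32-core vs 16-core: {relative:.2f}x")  # ~1.71x, i.e. ~71% faster
# Well short of the 2x you'd expect from doubling cores, consistent with
# the sub-linear power/performance scaling described in the first point.
```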
 

ElfinHilon

macrumors regular
May 18, 2012
142
48
The result that has people most worried isn’t a gaming benchmark but a compute one. Compute tends to be a weakness of the GPU design Apple uses because it utilizes fewer compute shaders to achieve its results. Having said that, this does seem low given other M-GPU results and Apple’s own claimed Tflops. But maybe that’s simply how the GPU scales. We’ll find out when full reviews hit.

Graphics benchmarks can differ between professional and gaming scenarios for some GPUs, but that's not what we're seeing here. Apple's GPU should do better in all graphics scenarios, including gaming, than its compute score suggests.
It's also quite low because it has around two-thirds of the compute units of the 3080, which Apple directly compared it to.
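For what it's worth, the two-thirds figure roughly matches commonly cited unit counts. Both numbers below are assumptions drawn from public spec summaries, and vendor "core" counts aren't directly comparable:

```python
# "~2/3 of the compute units" back-of-envelope check.
m1_max_alus = 32 * 128      # 32 GPU cores x 128 ALUs per core = 4096 (assumed)
rtx3080_laptop_cuda = 6144  # GA104-based laptop RTX 3080 CUDA cores (assumed)

print(f"ratio: {m1_max_alus / rtx3080_laptop_cuda:.2f}")  # ~0.67
```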
 