
NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Benchmark results don't really represent actual performance. The M1 Max itself is close to a laptop 3060 in gaming performance, so I would say that's meaningless when you actually work with software. Apple Silicon GPU performance is weak; forget it, everyone.
I mean, forget that a MacBook Air can render video faster than the i9 iMac; the only thing that ever matters in the eyes of some is “My Games!”.
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
But that isn't what was claimed when 'weak' was brought up. You're redefining 'weak' in the context of the conversation to be more about optimization and real-world performance than theoretical performance. The comparison of the GPUs' TFLOPS and the benchmark results shows that Apple's GPUs can hang well with their competitors in well-optimized software. Bad optimization doesn't make the GPUs themselves weak for where they are positioned in the overall market.
Nope, the Mac is slow in both benchmarks AND actual software. Don't forget that Nvidia has CUDA and many other factors that increase its performance.

Not quite. There are applications or operations that are certainly faster on AS GPUs. And in some applications, price-to-performance is very much in favor of AS.
Not in the 3D world. Performance is more important than performance per watt. Are you gonna wait a whole day just to render a project? I doubt it. If benchmarks actually represented real performance, we wouldn't even need to test with actual software.
 

aytan

macrumors regular
Dec 20, 2022
161
110
Yeah, recently on my M1 Max I've been using scenes with over 50 million polygons and around 50 subtools with no issues. Even when I have an 8K document size, rendering is nice and quick too (but rendering is less than 10% of my overall 3D work time; the rest is modelling + texturing, etc.). ZBrush is so well optimised I don't understand how it does it - I can only imagine how good it will become now it's under the Maxon umbrella.
ZBrush is a kind of unique piece of software, even if it's not strictly '3D'. I have been familiar with ZBrush since the first version; I guess it was 2004-2005, but maybe I'm wrong about the year. Maxon has started transferring the best pieces of ZBrush into C4D, and I think this is why Maxon purchased ZBrush; it's also not a good idea to build up sculpting and texturing implementations from scratch as a new part of C4D. They have pushed some new features to ZBrush, and some of them are really nice. ZBrush has come a long way; year after year the software has become more mature, and now it works nearly perfectly. I can suggest an Ultra besides the Max. I have used both of them, and ZBrush works way better on the Ultra. A fully maxed-out Threadripper may perform even better; if there are users who own 32/64-core CPUs, they could give us an insight.
 
  • Like
Reactions: singhs.apps

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
It does so because of better encoders and decoders. The M1/M2 MBA GPU is much weaker than a 2020 iMac 5700 16GB GPU
Yes, I fully understand that the additional hardware specifically for that purpose is a *massive* breakthrough in capability for mobile video editing. Great for that profession.

What other tasks is the GPU miles behind in…gaming?
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
It does so because of better encoders and decoders. The M1/M2 MBA GPU is much weaker than a 2020 iMac 5700 16GB GPU
But also this is a silly comparison, because the 5700 in that iMac has a TDP of 130W for just the GPU and the entire M1 or M2 MBA computer is maybe 35W peak, 15W sustained. They're fundamentally different product categories.

The computer which actually replaced that high end 2020 iMac is the Mac Studio, and its GPU is at minimum 4x faster than a base M1 GPU (32c M1 Max GPU vs 7c M1 Air GPU).
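
A rough back-of-envelope on that 4x figure (assuming the per-core GPU design and clocks are essentially the same across the M1 family, which is close to true; real-world scaling is usually a bit lower):

```python
# Back-of-envelope only: scale by GPU core count, clocks assumed equal.
M1_AIR_GPU_CORES = 7    # binned base M1 in the MacBook Air
M1_MAX_GPU_CORES = 32   # full M1 Max, as offered in the Mac Studio

ratio = M1_MAX_GPU_CORES / M1_AIR_GPU_CORES
print(f"Theoretical core-count ratio: {ratio:.1f}x")  # ~4.6x, hence "at minimum 4x"
```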
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
But also this is a silly comparison, because the 5700 in that iMac has a TDP of 130W for just the GPU and the entire M1 or M2 MBA computer is maybe 35W peak, 15W sustained. They're fundamentally different product categories.

The computer which actually replaced that high end 2020 iMac is the Mac Studio, and its GPU is at minimum 4x faster than a base M1 GPU (32c M1 Max GPU vs 7c M1 Air GPU).
Hey, it's Apple that's using laptop parts in desktops. The M1 Max GPU could be even faster in the Mac Studio, but they clock it so low. Why?
There is ample cooling, and using more power on a desktop is fine because it won't affect battery life.

Instead, Apple clocks it the same as a 14"/16" MBP M1 Max. This shows that Apple needs to improve on the desktop.
 
  • Like
Reactions: sunny5

sirio76

macrumors 6502a
Mar 28, 2013
578
416
You asked someone earlier to show you their art, so show me your code.
That’s a good one:)
Unlike certain other ignorant cowards here, I won't hide ;) I don't code; I'm a 3D archviz artist, and after all this is a 3D topic.
I do personally know quite a few people who code for high-end 3D software, and your statement that they are all using a 4090 is false in my experience. Many of them use Windows laptops, a few of them Mac laptops. Of course, if you are developing specifically for a GPU engine you will benefit from a fast graphics card, but as I've said many times, not everything in 3D is GPU bound; actually it's the opposite, since every 3D DCC app's code is developed for the CPU and uses the GPU only for a few selected tasks.
You always start from the wrong assumption that GPUs are essential for every single thing you do in 3D; most of the code does not use Metal or CUDA at all.
Take Cinema 4D, Maya, Max, Modo, Rhino, Houdini, Blender, SolidWorks: what percentage of their code actually uses the GPU?
 

aeronatis

macrumors regular
Sep 9, 2015
198
152
It does so because of better encoders and decoders. The M1/M2 MBA GPU is much weaker than a 2020 iMac 5700 16GB GPU

This is no different from Blender rendering extremely fast thanks to OptiX using the Tensor cores and having hardware RT (see the sketch at the end of this post).

The M1 Max easily goes toe to toe with a desktop RTX 3060 Ti or a laptop RTX 3080 playing Baldur's Gate 3. I agree that this means nothing in practice, though.

Hey, it's Apple that's using laptop parts in desktops. The M1 Max GPU could be even faster in the Mac Studio, but they clock it so low. Why?
There is ample cooling, and using more power on a desktop is fine because it won't affect battery life.

Instead, Apple clocks it the same as a 14"/16" MBP M1 Max. This shows that Apple needs to improve on the desktop.

Probably they couldn't scale it any further, or they saw too much diminishing return from giving it a bigger power envelope. They should definitely improve on the desktop.
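
On the OptiX point above, this is roughly the kind of backend switch involved; a minimal sketch using Blender's Python API (Blender 3.x assumed, and exact preference names can differ slightly between versions), run from Blender's own Python console:

```python
import bpy

# Cycles device preferences live in the add-on preferences.
prefs = bpy.context.preferences.addons["cycles"].preferences

# On an Nvidia card, OptiX enables the hardware RT (and denoising) path;
# on an Apple Silicon Mac the equivalent backend is Metal.
prefs.compute_device_type = "OPTIX"   # use "METAL" on Apple Silicon

prefs.get_devices()                   # refresh the detected device list
for dev in prefs.devices:
    dev.use = True                    # enable every detected compute device

# Render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
```

It's only meant to illustrate why the same .blend can render very differently depending on which backend (OptiX, CUDA, Metal) the hardware exposes.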
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
Take Cinema 4D, Maya, Max, Modo, Rhino, Houdini, Blender, SolidWorks: what percentage of their code actually uses the GPU?
I think we agree but the confusion is that when you hear "cutting edge 3D" you're thinking cutting edge software for artists, which I agree doesn't always use 100% GPU, while when I say "cutting edge 3D" I literally mean researching cutting edge techniques which almost by definition will be pushing the GPU to 100%.

I still develop on Mac, so obviously I don't think everybody should be using a 4090. My only point was that the people who work on the kind of algorithms you see on Two Minute Papers or at SIGGRAPH, or the people at Unreal working on cutting-edge stuff like Lumen + Nanite, generally need the fastest and most feature-rich GPUs and will be pushing them to 100% most of the time, so for them more hardware == better.

I think we both agree that if you aren't doing that, and most people aren't, then you don't need a 4090.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
I still do not agree that cutting edge means exclusively GPU ;) Graphics cards may do many fancy new things, but in the end today's software still relies massively on CPU code. I may sound sceptical, but (just as an example) I still clearly remember, 10 years ago, GPU enthusiasts declaring CPU engines doomed because graphics-card propaganda promised 10, 100, 1000x faster renders, etc.
After 10 years:
- CPU rendering is nowhere near dead and is the tool of choice for any serious production
- GPU engines were never that much faster on average (some scenes may benefit a lot from a GPU, others not so much, and others will be faster on the CPU)
- Coding for GPU engines is still more problematic because of APIs, drivers, etc. (I'm not a coder, as I said; this comes from a coder who won an Academy Award for rendering tech, so I tend to believe him)
Again, this doesn't mean that GPUs are not important, but they are only one part of the hardware, just as rendering is only one part of any 3D workflow.
We live in a world where many young people (not talking about you) grow up playing video games, and marketing propaganda has convinced them to always buy the biggest and fastest GPU, so I'm not that surprised that after years of brainwashing they think the GPU is all that matters.
Take this forum, for example: some “3D expert” happens to have a gaming rig, downloads Blender (not because it's the best production tool, but because it's free), performs a couple of tasks and a few benchmarks, and ends up thinking you strictly need an Nvidia GPU. Well, the professional 3D world is quite a bit different.
 
Last edited:
  • Like
Reactions: tomO2013 and aytan

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
I’m not in the graphics or movie industry, but I think almost all production renders are done using CPUs instead of GPUs. The main reason is that although GPUs are massively parallel and fast, they are hobbled by limited VRAM that is fed via a tiny pipe (PCIe), which makes large scene renders with massive assets impractical or impossible. This is why final renders are mainly done on large CPU clusters.

GPUs are still mainly used for games.

If the upcoming Mac Pro allows terabytes of UMA memory, it will be a game changer. Imagine 1TB of GPU memory. Even if it is 3 times slower than a 4090, it will still massively accelerate large-scene, massive-asset renders compared to a CPU.
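
Just to put hypothetical numbers on the memory argument (all figures below are invented for illustration, not measurements):

```python
# Hypothetical back-of-envelope: does a heavy production scene fit in GPU memory?
scene_assets_gib = 300   # geometry + textures + volumes for a big shot (made up)
dgpu_vram_gib    = 24    # a high-end discrete card
uma_pool_gib     = 192   # a large unified-memory configuration

def where_it_renders(pool_gib: float) -> str:
    if scene_assets_gib <= pool_gib:
        return "fits entirely in GPU-visible memory"
    return "must be streamed over PCIe or rendered out-of-core"

print(f"Discrete GPU ({dgpu_vram_gib} GiB VRAM): {where_it_renders(dgpu_vram_gib)}")
print(f"UMA machine  ({uma_pool_gib} GiB shared): {where_it_renders(uma_pool_gib)}")
```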
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
When I was most active in programming graphics (when OpenGL was still hot, 10 years ago) I used a decently specced Mac Pro with an Nvidia card. It was fast enough for work, and the code was generally portable to high-end Quadro cards. It lacked some features (like quad-buffer stereo rendering) but was all in all quite all right. That Mac Pro cost $2,500, and was maybe $500 more expensive than a home-built PC. At that time it was a hard sell to my superior. $500 extra? For what?
Today I am amazed at the progress, and that you can actually have about 200 TF of compute in a desktop machine for under $10,000. But if you are a Mac user you can have at best around 20 TF these days, and that gap is now 10x instead of the maybe 2x at worst it used to be. Sure, we can get by for now, but it's not a good change, and it makes me worried. How would anybody justify buying a system that is an order of magnitude worse in perf for similar money? No sane business-minded person would, at least. There must be systems that beat PCs if anyone on the other side of the fence is ever going to move; some kind of value proposition. It could be that the software is super stable, the VRAM is high enough to solve some problems, etc., like we like to think, but the perf needs to be in the right ballpark. We are clearly not there yet, but I still hold some hope for the combo of SW optimization and new HW. That's why it is so important that the new Mac Pro really kicks butt.
Now, time for some Xmas. (And maybe test that Metal viewport 😂)
 
  • Like
Reactions: jmho

leman

macrumors Core
Oct 14, 2008
19,521
19,678
When I was most active in programming graphics (when OpenGL was still hot, 10 years ago) I used a decently specced Mac Pro with an Nvidia card. It was fast enough for work, and the code was generally portable to high-end Quadro cards. It lacked some features (like quad-buffer stereo rendering) but was all in all quite all right. That Mac Pro cost $2,500, and was maybe $500 more expensive than a home-built PC. At that time it was a hard sell to my superior. $500 extra? For what?
Today I am amazed at the progress, and that you can actually have about 200 TF of compute in a desktop machine for under $10,000. But if you are a Mac user you can have at best around 20 TF these days, and that gap is now 10x instead of the maybe 2x at worst it used to be. Sure, we can get by for now, but it's not a good change, and it makes me worried. How would anybody justify buying a system that is an order of magnitude worse in perf for similar money? No sane business-minded person would, at least. There must be systems that beat PCs if anyone on the other side of the fence is ever going to move; some kind of value proposition. It could be that the software is super stable, the VRAM is high enough to solve some problems, etc., like we like to think, but the perf needs to be in the right ballpark. We are clearly not there yet, but I still hold some hope for the combo of SW optimization and new HW. That's why it is so important that the new Mac Pro really kicks butt.
Now, time for some Xmas. (And maybe test that Metal viewport 😂)

It’s a bit more complicated than that, though. First, GPUs have experienced massive power inflation in recent years: we went from 200 watts for the ultimate high-end GPU to 400-450 watts. Second, the peak FLOPS figures have also been inflated. For example, new AMD GPUs have doubled the number of compute units, doubling the theoretical performance, but achieving that performance in practice has become much more difficult because there are limitations on which operations can execute simultaneously.

Having a 10 TFLOPS GPU in a compact laptop (M1 Max) is not too bad. So no, I can’t agree that the situation has gotten worse over time. Desktop, yes, to a certain degree, but not mobile.

P.S. And no, you are not getting 200 TFLOPS of compute in a sub-$10k machine.
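
To put rough numbers on the FLOPS point: the "paper" figure is just ALUs × 2 FLOPs (one FMA) × clock. A minimal sketch with commonly cited, approximate (not official) numbers:

```python
# "Paper" peak FP32 throughput: ALUs * FLOPs-per-ALU-per-clock * clock (GHz) -> TFLOPS.
def peak_tflops(alus: int, clock_ghz: float, flops_per_alu_per_clock: int = 2) -> float:
    return alus * flops_per_alu_per_clock * clock_ghz / 1000.0

# M1 Max: 32 GPU cores * 128 ALUs each at roughly 1.3 GHz -> about 10.6 TFLOPS
print(f"M1 Max (approx.): {peak_tflops(32 * 128, 1.3):.1f} TFLOPS")

# A dual-issue design doubles the paper figure for the same ALU count and clock,
# but real shaders rarely keep both issue slots busy every cycle.
print(f"Same ALUs, dual-issue paper figure: {peak_tflops(32 * 128, 1.3, 4):.1f} TFLOPS")
```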
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
It’s a bit more complicated than that, though. First, GPUs have experienced massive power inflation in recent years: we went from 200 watts for the ultimate high-end GPU to 400-450 watts. Second, the peak FLOPS figures have also been inflated. For example, new AMD GPUs have doubled the number of compute units, doubling the theoretical performance, but achieving that performance in practice has become much more difficult because there are limitations on which operations can execute simultaneously.

Having a 10 TFLOPS GPU in a compact laptop (M1 Max) is not too bad. So no, I can’t agree that the situation has gotten worse over time. Desktop, yes, to a certain degree, but not mobile.

P.S. And no, you are not getting 200 TFLOPS of compute in a sub-$10k machine.
Please, it’s not about the exact details but ballpark numbers. And I think no one has suggested that Apple's mobile parts are bad. And inflated FLOPS numbers go both ways. My point is that in 2010 a Mac desktop with similar perf to a PC (desktop, GPU perf) was at most 2x as expensive. These days the gap is bigger, and that worries me for the future of the platform. Apple has made nice strides in supporting devs who support the Mac, so things are moving in the right direction in some ways at least. (Edit: removed some frustrated finger pointing)
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Please, it’s not about the exact details but ballpark numbers. And I think no one has suggested that Apple's mobile parts are bad. And inflated FLOPS numbers go both ways. My point is that in 2010 a Mac desktop with similar perf to a PC (desktop, GPU perf) was at most 2x as expensive. These days the gap is bigger, and that worries me for the future of the platform.

When you talk about desktops, yes. Then again, Apple hasn’t yet released what can be considered a classical desktop on Apple Silicon (and maybe they never will, who knows). The Mac Studio is half the size of the smaller mini-ITX cases, for example, and draws less power than an average gaming laptop. Of course, this doesn’t invalidate your argument.

Re “And inflated FLOPS numbers go both ways”: hardly. Apple GPUs are much simpler and execute a single SIMD operation per clock. That is not the case for Nvidia or AMD, which use more complex execution schemes and offer fewer opportunities to saturate the hardware.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
I’m not in the graphics or movie industry, but I think almost all production renders are done using CPUs instead of GPUs. The main reason is that although GPUs are massively parallel and fast, they are hobbled by limited VRAM that is fed via a tiny pipe (PCIe), which makes large scene renders with massive assets impractical or impossible. This is why final renders are mainly done on large CPU clusters.

GPUs are still mainly used for games.

If the upcoming Mac Pro allows terabytes of UMA memory, it will be a game changer. Imagine 1TB of GPU memory. Even if it is 3 times slower than a 4090, it will still massively accelerate large-scene, massive-asset renders compared to a CPU.
While this is all very true, the gap in quality between real-time and offline is closing so quickly that, imo, a lot of the most exciting stuff is happening in the real-time domain.

Yes, Unreal is technically a game engine, but it's also being used more and more in film and TV, archviz, and a huge number of creative fields.

Obviously you're not going to make the next Pixar movie in Unreal, but for lower-budget work Unreal is incredibly exciting and is democratising creativity for people who might not be able to afford huge workstations and expensive software licenses.

Again this isn't to imply anything about Macs etc. but more to point out that it's more than just gamers who can benefit from good GPUs.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
FYI: the Blender 3.5 alpha with the Metal viewport is right now (in my extremely limited testing) about 40% faster in general, and in some cases (more complex ones?) up to 100% faster. Cool.
 
  • Like
Reactions: Xiao_Xi

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I’m not in the graphics or movie industry, but I think almost all production renders are done using CPUs instead of GPUs. The main reason is that although GPUs are massively parallel and fast, they are hobbled by limited VRAM that is fed via a tiny pipe (PCIe), which makes large scene renders with massive assets impractical or impossible. This is why final renders are mainly done on large CPU clusters.

And the other reason that I can imagine is that CPUs are much easier to program and debug, so you can implement things faster and have more flexibility.

That said, RT frameworks like OptiX and Metal have the potential to revolutionise production renderers, and in combination with large UMA setups they might even disrupt the industry. But this won't be a fast process.
 