
sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
The Apple Silicon GPUs are in fact very powerful; you're confusing GPU power with software support. Even a really powerful GPU can run a graphics app slowly if the app is poorly written for that GPU. Developers have spent decades following the Nvidia way of graphics. You just need patience to wait for developers to slowly get used to Apple Silicon and Metal, and eventually apps and games will run very well.
Even so, it's still slower. Besides, GPU-intensive software doesn't really support the Mac anyway.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Are there any technical, commercial or marketing hurdles preventing Apple from achieving the performance of nVidia GPUs?

Good question! At a fundamental level, an Nvidia SM is fairly similar to an Apple GPU core. Both feature four execution partitions (each executing a 32-wide group of threads, with 32 ALUs per partition), even though the execution model itself is subtly different.

There are two main reasons why Nvidia GPUs are faster. First is that Nvidia targets higher power consumption levels and higher clocks. The M2 family GPU runs at a very conservative 1.4 GHz, while Nvidia runs its GPUs at over 2 GHz. Second is that Nvidia GPUs have more SMs than Apple has GPU cores. A 4090 has a whopping 128 SMs, which means 512 32-wide execution units (partitions). An M2 Ultra (if it ever arrives) would instead have 76 cores, or 304 32-wide execution units. And that's just the difference in execution units, not taking into account the massive difference in clocks.
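To put rough numbers on it, here is a back-of-the-envelope sketch. I'm assuming 128 FP32 ALUs per SM/core (the 4 partitions × 32 lanes above), counting an FMA as 2 FLOPs, and using ballpark clocks of ~2.5 GHz for the 4090 and ~1.4 GHz for the M2 family:

```python
# Back-of-the-envelope peak FP32 throughput from the figures above.
# Assumptions: 128 FP32 ALUs per SM/core (4 partitions x 32 lanes),
# an FMA counted as 2 FLOPs, and approximate clocks.

def peak_tflops(units: int, lanes_per_unit: int, clock_ghz: float) -> float:
    """Peak TFLOPs = units x lanes x 2 FLOPs (FMA) x clock (GHz) / 1000."""
    return units * lanes_per_unit * 2 * clock_ghz / 1000

print(f"RTX 4090 (128 SMs @ ~2.5 GHz):  {peak_tflops(128, 128, 2.5):.1f} TFLOPs")
print(f"M2 Ultra (76 cores @ ~1.4 GHz): {peak_tflops(76, 128, 1.4):.1f} TFLOPs")
print(f"M2 Max (38 cores @ ~1.4 GHz):   {peak_tflops(38, 128, 1.4):.1f} TFLOPs")
```

That works out to roughly 82, 27 and 13.6 TFLOPs respectively, which lines up with the ~13 TFLOPs figure quoted for the 38-core part further down the thread.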

If Apple builds a larger GPU and clocks it higher, there is no reason why it wouldn't be able to match Nvidia. This is of course where the question of money comes into play. Apple's tech is substantially more expensive, as they build SoCs and rely on wide, energy-efficient RAM interfaces. Nvidia also has the advantage in marketing, as they can afford to make moves that Apple can't. For example, the 4090 with its massive amount of shader units only has a RAM bandwidth of 1 TB/s, which is not that far off the M1 Ultra's 800 GB/s. The bandwidth per SM is pretty much terrible. Now, that's OK for games, since they often have more predictable cache behaviour (and to be frank, that GPU is overkill anyway, so hardware utilisation will be poor, reducing both the power consumption and the bandwidth needs). But many professional workloads need more bandwidth. Funnily enough, the professional Nvidia GPU (Hopper) ships with lower clocks and nominally lower compute performance than the 4090 (at least according to the specs I've seen), but they give it much more RAM bandwidth via HBM2. Of course, that GPU costs between $30k and $40k. Hardly the niche Apple should go after.
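And to quantify the bandwidth-per-SM point (same caveats as my sketch above; I'm taking 64 GPU cores for the M1 Ultra, with the ~1 TB/s and 800 GB/s figures from the post):

```python
# Rough bandwidth available per execution unit, using the figures above.
# 4090: ~1008 GB/s across 128 SMs; M1 Ultra: ~800 GB/s across 64 cores.

def gbps_per_unit(bandwidth_gbps: float, units: int) -> float:
    return bandwidth_gbps / units

print(f"RTX 4090: {gbps_per_unit(1008, 128):.1f} GB/s per SM")   # ~7.9
print(f"M1 Ultra: {gbps_per_unit(800, 64):.1f} GB/s per core")   # ~12.5
```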

What could be Apple's next step? Hardware-based ray tracing? Hardware-based path tracing?

I'm sure we will see hardware RT with hardware ray compaction in the next generation of Apple GPUs. The relevant patents have been published since late 2022. As to the rest, well, it depends on what level of performance Apple is pursuing. On laptops, they are already in a fairly good spot. On the desktop, they need bigger or at least faster GPUs. If I were them, I would pursue a) bigger GPUs across the board, with b) a higher dynamic clock range so that they can be clocked faster on the desktop. But then again, I am a hobbyist posting nonsense on forums while Apple employs actual industry veterans :)
 

Rafterman

Contributor
Apr 23, 2010
7,267
8,809
The Apple Silicon GPUs are in fact very powerful; you're confusing GPU power with software support. Even a really powerful GPU can run a graphics app slowly if the app is poorly written for that GPU. Developers have spent decades following the Nvidia way of graphics. You just need patience to wait for developers to slowly get used to Apple Silicon and Metal, and eventually apps and games will run very well.

The 38-core Apple GPU is roughly equal to an Nvidia 4050, at around 13 teraflops. Not bad for a laptop, but the 4070 does 23 tflops and the high-end 4090 can do over 90.
 

Homy

macrumors 68030
Jan 14, 2006
2,509
2,459
Sweden
Interesting but not surprising (again) that sunny5 only talks about M1 and totally ignores M2, not to mention M3 coming this year. The M2 Max 38c is already faster in both Blender and games than the M1 Ultra 64c, so imagine how much faster M3 Max/Ultra will be this year. The M2 Max is almost as fast as the AMD Radeon PRO W6800X Duo in the Blender GPU test. Not bad for a mobile GPU compared to a dual workstation GPU with a TDP of 400W.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,677
The 38-core Apple GPU is roughly equal to an Nvidia 4050, at around 13 teraflops. Not bad for a laptop, but the 4070 does 23 tflops and the high-end 4090 can do over 90.

With the caveat that Nvidia relies on simultaneous issue of independent instructions to reach those 13 TFLOPs. As real-world code will contain dependent instructions, hardware utilization with this approach will tend to be lower. In fact, I am wondering why Nvidia decided to do things this way…
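To illustrate with a toy model (a deliberate simplification, not how the actual scheduler works): suppose a partition can issue two instructions per cycle, but only if they are independent of each other.

```python
# Toy issue model: a unit that can dual-issue independent instructions.
# A chain of dependent instructions can't be paired, so it occupies one
# issue slot per cycle and utilization drops to half of peak.

def cycles_needed(num_instructions: int, independent: bool) -> int:
    if independent:
        return (num_instructions + 1) // 2  # two per cycle
    return num_instructions                 # one per cycle, waiting on results

n = 1000
print(f"independent stream: {cycles_needed(n, True)} cycles  (100% of peak)")
print(f"dependent chain:    {cycles_needed(n, False)} cycles (50% of peak)")
```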
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
Interesting but not surprising (again) that OP only talks about M1 and totally ignores M2, not to mention M3 coming this year. The M2 Max 38c is already faster in both Blender and games than the M1 Ultra 64c, so imagine how much faster M3 Max/Ultra will be this year. The M2 Max is almost as fast as the AMD Radeon PRO W6800X Duo in the Blender GPU test. Not bad for a mobile GPU compared to a dual workstation GPU with a TDP of 400W.
Not really interested in AMD, because Nvidia dominates the graphics card market, and therefore we should compare with Nvidia. So how much faster is it compared to the RTX 30 series?
 

Rafterman

Contributor
Apr 23, 2010
7,267
8,809
Not really interested in AMD, because Nvidia dominates the graphics card market, and therefore we should compare with Nvidia. So how much faster is it compared to the RTX 30 series?

The 3050 Ti is popular in laptops. The 38-core M2 is 3 times faster.
 

StellarVixen

macrumors 68040
Mar 1, 2018
3,254
5,779
Somewhere between 0 and 1
The history of Apple desktop and laptop computers is a history of mediocre graphics. From Motorola, to PowerPC, to the x86 era (the Nvidia fiasco; not sure if Apple is to blame), and here we are today.

I don’t think they will care more than they do now. At least they used to give us an eGPU option (but then again, you could only run AMD cards); now we don’t even have that.
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
The history of Apple desktop and laptop computers is a history of mediocre graphics. From Motorola, to PowerPC, to the x86 era (the Nvidia fiasco; not sure if Apple is to blame), and here we are today.

I don’t think they will care more than they do now. At least they used to give us an eGPU option (but then again, you could only run AMD cards); now we don’t even have that.
I am aware of that, but at least they need to try again, especially since they are making their own CPUs and GPUs. Do or do not, there is no try. Besides, the GPU market is now more important than ever before, and I don't think Apple can ignore areas such as AR/VR, ray tracing/path tracing, AI/machine learning, 3D rendering, and more. Also, don't they have an NPU? Doing so would attract quite a lot of people to the Mac. But if people are serious about rendering or AI, they would probably use a render farm instead, which makes a workstation useless...

I really think Apple should at least stop making excuses and start focusing on GPU-intensive software. Unified memory is a great advantage that is hard to ignore.
 

Rafterman

Contributor
Apr 23, 2010
7,267
8,809
So is it now comparable to a mobile RTX 3080?

The 3080 is 29 tflops, the 3080 Ti is 34 tflops. The 38-core MacBook Pro is 13 tflops.

While teraflops are not the be-all and end-all of graphics performance (a lot of factors go into it), they are one of the more popular ways to gauge different GPUs.
 

unrigestered

Suspended
Jun 17, 2022
879
840
To me, Apple Silicon is the best thing that ever happened to portable computers, EVER!

Of course it can't compete with some of the high(er)-end Nvidias of this world, but then again: while I think I now even prefer some Linux distro over macOS (which is still great and a joy to use too), x86 IMO just sucks in laptops.
I reluctantly bought a new laptop recently, as I'd always hoped that in the meantime, by some miracle, the PC world could catch up.
I purposely bought "only" an i5 instead of an i7 or i9, with just integrated graphics (yeah, I know I'm one of those people who doesn't even remotely need those super-high-end 600+ watts of graphics performance 😜),
and while the system performance is good, the fan noise is just plain BAD even for very simple tasks.
My M1 MBA flies (naturally, without any noise at all), and even when I'm pushing my 16" MBP, the fan noise stays WAY lower (mostly barely audible, if at all) than on the 16" Dell, which turns into a jet engine when I just move some files around.
Admittedly, the fan curves on that Dell must have been programmed by some complete idiot.
Battery life can also only be described as pretty average compared to my M1s.
 

Homy

macrumors 68030
Jan 14, 2006
2,509
2,459
Sweden
Not really interested in AMD, because Nvidia dominates the graphics card market, and therefore we should compare with Nvidia. So how much faster is it compared to the RTX 30 series?

Why would you or anyone else care? You have already stated many times since the M1 was released that Apple's GPUs "suck" for gaming, 3D and more. So yeah, they suck and it is as it is. Nothing you or others write here is going to change Apple's strategy. You don't even own a Mac. If you think they suck, just choose another tool, like a PC with Nvidia, for your needs, because unless you weren't aware, it's just a tool like any other tool. I don't know what you are trying to prove here once again. It's like you are personally offended by people thinking differently and not agreeing with you, and once in a while feel the urge to start new, long, repetitive threads where you have to remind us of the following:

"It's very common and normal seeing posts about Mac GPU's poor performance from many other PC forums while Mac users clearly ignoring the fact. "..
"admit that Apple GPU is still poorer than what people are talking about"
"M1 Ultra is a joke to me"
"Apple GPU's performance isn't really great at all"
"Apple GPU is NOT great"
"Apple GPU's performance for me is very disappointing"
"Apple GPU itself is slow, optimization is very difficult because of Metal, time and budget issue, macOS itself, OS market share, rendering method, and more..."

At least you admit that "It's very common and normal seeing posts about Mac GPU's poor performance from many other PC forums". They seem to be more interested in Macs than in their own systems, so much so that, like yourself, they have to come here and point out their superiority instead of focusing on productivity and enjoying their PCs. It seems PC people are quite bored despite all that Intel/Nvidia horsepower.

And if you're interested in Nvidia GPUs, then what are you doing on a Mac forum? Apple stopped using Nvidia years ago and obviously has nothing that can interest you.
 
Last edited:
  • Like
Reactions: aytan and sirio76

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Nvidia also has the advantage in marketing, as they can afford to make moves that Apple can't.

???

...the professional Nvidia GPU (Hopper) ships with lower clocks and nominally lower compute performance than the 4090...

Are not virtually all the "Pro" cards (AMD & Nvidia) set up the same way, with lower clocks for ensured stability...?

On the desktop, they need bigger or at least faster GPUs. If I were them, I would pursue a) bigger GPUs across the board, with b) a higher dynamic clock range so that they can be clocked faster on the desktop. But then again, I am a hobbyist posting nonsense on forums while Apple employs actual industry veterans :)

Bigger & faster GPUs for the ASi Mac Pro, yes please...

Do or do not, there is no try.

Thanks Yoda...

Apple Silicon is not good at rendering 3D. All 3D rendering benchmarks indicate a big advantage for Intel CPUs.

Cinebench is known to heavily favor Intel CPUs over all others, and thus delivers skewed results...
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
Is there any chance that the M3 series will get ray tracing hardware or something else related to 3D rendering?
 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Interesting but not surprising (again) that sunny5 only talks about M1 and totally ignores M2, not to mention M3 coming this year. The M2 Max 38c is already faster in both Blender and games than the M1 Ultra 64c, so imagine how much faster M3 Max/Ultra will be this year. The M2 Max is almost as fast as the AMD Radeon PRO W6800X Duo in the Blender GPU test. Not bad for a mobile GPU compared to a dual workstation GPU with a TDP of 400W.
[Attached: three Blender GPU benchmark screenshots]


It is obvious from these numbers that only one of the W6800X Duo's GPUs is being used in Blender. Go get me some Octane benchmarks on the M2 Max. Pretty sure the W6800X Duo will be crushing it.

Which is my biggest gripe with pretty much all Apple graphics software: the inability to utilize multiple GPUs, when the top-tier machine that most people would use for said jobs can have 4 of them. Adobe is probably the biggest culprit. Although honestly, this should be handled by the Metal API.
 

olimerkido2

macrumors newbie
Feb 23, 2023
19
27
Even so, it's still slower. Besides, GPU-intensive software doesn't really support the Mac anyway.
Ha ha, nah. RE Village runs at 4K 60 fps with all graphics settings maxed and upscaling turned completely off, and with Apple's DLSS equivalent (MetalFX) it goes up to 130 fps.
 
  • Like
Reactions: Homy

olimerkido2

macrumors newbie
Feb 23, 2023
19
27
The 38-core Apple GPU is roughly equal to an Nvidia 4050, at around 13 teraflops. Not bad for a laptop, but the 4070 does 23 tflops and the high-end 4090 can do over 90.
Comparing teraflops between Apple Silicon or AMD and Nvidia doesn't work. I learnt that when Apple's old 16-inch MacBook Pro with AMD graphics ran 3x slower than Nvidia even when optimised, despite having only 1.5x fewer teraflops. Obviously the 4090 is faster, but the M2 Max was actually not far off the 4080 laptop in real-world Vulkan performance in properly optimised tests.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Is there any chance that the M3 series will get ray tracing hardware or something else related to 3D rendering?

Apple has had many hardware raytracing patents published over the last few years, pointing to an innovative, energy-efficient hardware RT implementation. It was probably supposed to land in the A16 but was delayed due to the 3nm delays.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Comparing teraflops between Apple Silicon or AMD and Nvidia doesn't work. I learnt that when Apple's old 16-inch MacBook Pro with AMD graphics ran 3x slower than Nvidia even when optimised, despite having only 1.5x fewer teraflops. Obviously the 4090 is faster, but the M2 Max was actually not far off the 4080 laptop in real-world Vulkan performance in properly optimised tests.

Depends on the workload. If you are compute-bound, TFLOPs throughput is a good predictor of the overall performance.
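A simple way to see it is a roofline-style estimate: runtime is bounded by whichever is slower, compute or memory traffic. A minimal sketch with made-up kernel sizes and illustrative peaks (not measured figures; I'm using ~13.6 TFLOPs and ~400 GB/s as a rough M2 Max-class device):

```python
# Minimal roofline-style estimate: runtime ~ max(compute time, memory time).

def runtime_seconds(flops: float, bytes_moved: float,
                    peak_tflops: float, bandwidth_gbps: float) -> float:
    compute_time = flops / (peak_tflops * 1e12)
    memory_time = bytes_moved / (bandwidth_gbps * 1e9)
    return max(compute_time, memory_time)

# Compute-bound kernel: lots of math per byte, so TFLOPs dominate.
print(runtime_seconds(flops=1e13, bytes_moved=1e9,
                      peak_tflops=13.6, bandwidth_gbps=400))  # ~0.74 s
# Bandwidth-bound kernel: little math per byte, TFLOPs barely matter.
print(runtime_seconds(flops=1e10, bytes_moved=1e11,
                      peak_tflops=13.6, bandwidth_gbps=400))  # ~0.25 s
```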
 

olimerkido2

macrumors newbie
Feb 23, 2023
19
27
Depends on the workload. If you are compute-bound, TFLOPs throughput is a good predictor of the overall performance.
I'm talking about gaming performance, because that's what almost everyone uses graphics for. TFLOPs predict compute throughput, yeah, but that also depends on the GPU actually being busy at all times, not waiting for the CPU to encode instructions or for transfers between RAM and VRAM, which Nvidia has a much bigger problem with than Apple Silicon.
 