
Damian83

macrumors 6502a
Jul 20, 2011
508
285
Efficiency. The RTX 3090 requires far too much power to do what it does. There are only a select few tasks where it outshines the M1 Ultra.
Yes, and one of these “few” tasks is a task that can let you earn money by doing nothing (mining). So what’s more efficient, then?
Anyway, please stop talking about efficiency. This is a debate between a Lambo (3090) and a Tesla Model S (M1 Ultra), and nobody who buys either of them cares about efficiency.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
At the risk of “whataboutism” I’d like to point out that this kind of benchmarking is endemic to the industry. NVidia and Intel in particular use benchmarks that run specific tests that leverage their own special hardware or software in their graphs.

A couple that come to mind:
NVidia using raytracing in their benchmarks to show performance improvement of the 20-series over the 10-series. (With “relative performance” as an axis I might add)

Intel using their own compiler in their Alder Lake vs. M1 graphs.

And many others I’m sure.
 
  • Like
Reactions: satcomer

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
At the risk of “whataboutism” I’d like to point out that this kind of benchmarking is endemic to the industry. NVidia and Intel in particular use benchmarks that run specific tests that leverage their own special hardware or software in their graphs.

A couple that come to mind:
NVidia using raytracing in their benchmarks to show performance improvement of the 20-series over the 10-series. (With “relative performance” as an axis I might add)

Intel using their own compiler in their Alder Lake vs. M1 graphs.

And many others I’m sure.
Yeah, and these are all valid, as people who have done those things with those products previously would like to have some idea of how much better the new things are than the old things at doing the same OLD things. I don’t think it’s whataboutism; it’s valuable information for anyone going from one version of a technology to the next.

The comparison of Apple Silicon to Nvidia or AMD or Intel is more about reassuring the comparers about their choice of platform. :) And, in that, you’ll find results that align with what the comparer wants to see!
 

Yebubbleman

macrumors 603
May 20, 2010
6,024
2,616
Los Angeles, CA
Of course they are. In fact, in comparative terms M1 does better against Nvidia in gaming than in pure compute.

Still, gaming is not the M1/M1 Pro/M1 Max/M1 Ultra/M2 GPU's flagship intended application. Even so, M1 Max 16" MacBook Pros are decent at gaming; but it's obvious to anyone out there that they're not the computer you get if gaming is a priority for you.

I'm not sure the comparison between Nvidia's workstation GPUs and Apple's Ultra GPU is fair because Nvidia's GPUs are more expensive.

The RTX A6000, Nvidia's most powerful workstation GPU, is as powerful as the RTX 3090 Ti and consumes 50% less power.

I'm not speaking of specific Quadro cards. I'm saying that the Quadro market is what Apple is trying to attack with the M1 Max/Ultra's GPUs.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Still, gaming is not the M1/M1 Pro/M1 Max/M1 Ultra/M2 GPU's flagship intended application. Even so, M1 Max 16" MacBook Pros are decent at gaming; but it's obvious to anyone out there that they're not the computer you get if gaming is a priority for you.

Apple definitely designs their GPUs, APIs and systems with gaming in mind. M-series machines are capable all-rounders that target all kinds of usage, gaming included. The question of "intended application" is entirely up to the user. The problem with gaming is first and foremost the lack of games, but if you want to buy an M2 Air exclusively to play something like Baldur's Gate 3, why not? It's certainly going to be a better laptop for this kind of use than many others in the same price category.
 
  • Like
Reactions: Xiao_Xi

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Quadro market is what Apple is trying to attack with the M1 Max/Ultra's GPUs.
What makes you think that? Apple compares its GPUs to Nvidia's gaming GPUs, not to the workstation GPUs.

What advantage does Apple's GPU have over Nvidia's workstation GPU? Efficiency? I doubt Apple's GPUs are more efficient than Nvidia's workstation GPUs.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
What makes you think that? Apple compares its GPUs to Nvidia's gaming GPUs, not to the workstation GPUs.

I think these questions are ultimately meaningless because they try to apply definitions outside of the relevant context. There is no formal set of criteria of what constitutes a "workstation GPU". In the end, it's just a fairly artificial concept that only exists in a specific market. Something like "workstation GPUs are a certain brand of GPU products offered by Nvidia and AMD that are marketed towards professionals and are priced considerably higher". Apple's marketing doesn't really work like that: they don't differentiate their GPUs by functionality or targeted market, so I don't think that describing them in these terms makes sense. In comparative terms, Apple GPUs have properties of both classical gaming and workstation GPUs, but what do we get from this kind of insight? Very little, I think.


I believe it makes much more sense to discuss these products in terms of their suitability to specific domains of interest rather than trying to pigeonhole them into a set of narrow preexisting categories. M-series chips don't cater to one specific niche. They are all-round products that are capable of fulfilling many different roles, even if some might consider these roles to be contradictory. It is entirely possible to build a GPU that is equally good for gaming, for rendering and for video editing. In particular, Apple achieves this by focusing on a compute-centric architecture with large caches, unified memory and bandwidth+compute efficient rasterisation.

What advantage does Apple's GPU have over Nvidia's workstation GPU?

For typical "workstation applications"? At this moment much larger memory pools as well as larger caches. Apple is likely to perform better on complex workloads that use huge datasets. For example, rendering of very large complex scenes.

I doubt Apple's GPUs are more efficient than Nvidia's workstation GPUs.

In terms of perf/watt, Apple is much more efficient (a factor of 4x-5x), unless you talk about ML applications, where Nvidia benefits from their large dedicated accelerators. For those things, the efficiency is comparable; it's just that Nvidia will be much faster. Nvidia gaming GPUs are likely to be even faster, btw, since they are usually clocked higher, but that will cost them some efficiency.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I can't find a good benchmark to compare M1 Ultra vs RTX A6000. Which benchmark did you consider?

I am looking at specs. The A6000 is just a slightly lower-clocked 3090 Ti with slower RAM. It delivers peak ~40 TFLOPS at 300W, while the M1 Ultra delivers peak ~20 TFLOPS at 80W. Both GPUs have the same RAM bandwidth of ~800GB/s. So on pure compute workloads, with all other things being equal and assuming similar levels of software optimisation on similar workloads (and ignoring Nvidia's advantage in raytracing etc.), I'd expect the A6000 to be around 2x faster while consuming around 3-4x more power.

So yeah, my 3-4x efficiency estimate is likely wrong here (that would rather apply to gaming models that are clocked more aggressively and have worse efficiency); it's more like 1.5x-2x.
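
For anyone who wants to sanity-check that arithmetic, here is a minimal sketch using only the spec figures quoted above (peak TFLOPS and rated power, nothing measured); the ~1.9x result is purely an on-paper perf/watt ratio:

```python
# Back-of-the-envelope perf/watt from the spec figures quoted above
# (peak FP32 TFLOPS and rated power only; not a benchmark result).
specs = {
    "RTX A6000": {"peak_tflops": 40.0, "power_w": 300.0},  # ~3090 Ti class, per the post
    "M1 Ultra":  {"peak_tflops": 20.0, "power_w": 80.0},   # GPU power figure from the post
}

for name, s in specs.items():
    s["tflops_per_w"] = s["peak_tflops"] / s["power_w"]
    print(f"{name:9s}: {s['tflops_per_w']:.3f} TFLOPS/W")

ratio = specs["M1 Ultra"]["tflops_per_w"] / specs["RTX A6000"]["tflops_per_w"]
print(f"On-paper M1 Ultra perf/W advantage: ~{ratio:.1f}x")  # ~1.9x, within the 1.5x-2x range
```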
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
I am looking at specs. The A6000 is just a slightly lower-clocked 3090 Ti with slower RAM. It delivers peak ~40 TFLOPS at 300W, while the M1 Ultra delivers peak ~20 TFLOPS at 80W. Both GPUs have the same RAM bandwidth of ~800GB/s. So on pure compute workloads, with all other things being equal and assuming similar levels of software optimisation on similar workloads (and ignoring Nvidia's advantage in raytracing etc.), I'd expect the A6000 to be around 2x faster while consuming around 3-4x more power.
It is unfair to leave Nvidia's tensor cores and RT cores out of the comparison but not the fact that Apple's GPU uses a much better node. We should compare them as they are now, using specific benchmarks. Good on paper is not the same as good in reality and AMD GPUs are proof of that. While AMD GPUs may go toe to toe against Nvidia GPUs in gaming, they are almost useless in anything else.

Phoronix has made a comparison between Nvidia OptiX, Nvidia CUDA and AMD HIP on Blender 3.2
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,677
It is unfair to leave Nvidia's tensor cores and RT cores out of the comparison but not the fact that Apple's GPU uses a much better node. We should compare them as they are now, using specific benchmarks.

Depends what you want to evaluate. We were talking about efficiency, so I focused on the basic compute architecture. And sure, Apple's node advantage plays a significant role in their GPUs being more efficient.

Good on paper is not the same as good in reality and AMD GPUs are proof of that. While AMD GPUs may go toe to toe against Nvidia GPUs in gaming, they are almost useless in anything else.

Quite the contrary. AMD GPUs are significantly weaker than top Nvidia GPUs in the compute department when you look at the specs (fewer shader ALUs), so it’s not surprising they are slower in compute workloads. Your example just shows that compute and gaming are two different workloads that do not have to correlate. When you want to look at gaming efficiency, well, Apple will be off the charts: an Ultra scores 35k in Wild Life Extreme, a 3090 around 43k, so the M1 is around 3.5x more efficient here.
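
The gaming-efficiency figure falls out of the same kind of arithmetic. A rough sketch below, where the scores are the Wild Life Extreme numbers quoted above, the ~80W GPU power for the Ultra is the figure used earlier in the thread, and ~350W is assumed here as a stock 3090's board power:

```python
# Rough points-per-watt arithmetic behind the "around 3.5x" gaming-efficiency claim.
# Scores are the Wild Life Extreme numbers quoted above; ~80 W is the Ultra GPU power
# figure used earlier in the thread, and ~350 W is assumed for a stock RTX 3090.
scores  = {"M1 Ultra": 35_000, "RTX 3090": 43_000}
power_w = {"M1 Ultra": 80.0,   "RTX 3090": 350.0}

points_per_watt = {name: scores[name] / power_w[name] for name in scores}
for name, eff in points_per_watt.items():
    print(f"{name:8s}: {eff:.0f} points/W")

ratio = points_per_watt["M1 Ultra"] / points_per_watt["RTX 3090"]
print(f"Gaming efficiency ratio: ~{ratio:.1f}x")  # ~3.6x, i.e. "around 3.5x"
```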


Phoronix has made a comparison between Nvidia OptiX, Nvidia CUDA and AMD HIP on Blender 3.2

Just as expected. The compute throughput of the 6800 XT is the same as the 3070’s, so you see these GPUs perform very similarly if you don’t take Nvidia’s RT technology into account. Again, no surprises here. The peak FLOPS metric is a good predictor of overall performance in these results.
 
  • Like
Reactions: Xiao_Xi

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
We were talking about efficiency, so I focused on the basic compute architecture.
Do RT and tensor cores increase the power consumption of Nvidia GPUs? If so, you can't ignore them when counting performance while still counting them in the power draw.

Depends what you want to evaluate.
GPUs do not perform consistently across all tasks; you need to choose a benchmark to compare power consumption and performance.

Apple has shown how it has improved rendering in Blender by 30%. Software optimizations play an important role in GPU performance.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Do RT and tensor cores increase the power consumption of Nvidia GPUs? If so, you can't ignore them when counting performance while still counting them in the power draw. [...]

GPUs do not perform consistently across all tasks; you need to choose a benchmark to compare power consumption and performance.

I am not aware of any detailed breakdown of the power consumption of Nvidia GPUs by the hardware units involved. I know the detailed power consumption of Apple GPUs, since I have tested them; for Nvidia I just take the TDP, since it's a good enough proxy. I am also not neglecting Nvidia's specialist hardware, I just don't think it is relevant when discussing the general-purpose compute performance of the GPU.

Apple has shown how it has improved rendering in Blender by 30%. Software optimizations play an important role in GPU performance.

Of course. Which is why I don't talk about performance in specific software but rather about expected performance across a range of workloads in optimal conditions.

As far as I am concerned, there are two relevant metrics when evaluating a product. One (which you seem to be focusing on) is the actual effective performance in real existing software. This is arguably the most relevant metric since it directly affects your usability of the machine. But for the purpose of the current discussion, which is about the technology stack in itself, I was looking at the second metric, which is the ideal performance under optimal circumstances. The thing is, while I understand that an artist might want to know how good M1 is for rendering with Blender today, for me the more interesting thing is how good M1 can be for rendering with Blender one day, after it is sufficiently optimised. The software ecosystem around M1 is going to mature, and Apple is not going to stop innovating. If one wants to understand the state of technology, I think it is shortsighted to say something like "M1 Ultra is 3 times slower than a 3090, so it's 3 times worse" (like many people do, just look at the ridiculous chess threads). The much more interesting questions are "why is it three times worse?" and "is it the best it can do?". Understanding these things can help us better predict where this entire thing is going.
 

Yebubbleman

macrumors 603
May 20, 2010
6,024
2,616
Los Angeles, CA
What makes you think that? Apple compares its GPUs to Nvidia's gaming GPUs, not to the workstation GPUs.

Apple is marketing an advantage over gaming GPUs even though everything they market their graphics for isn't gaming, but rather the kinds of things you'd prefer a workstation GPU over a gaming GPU for if you were configuring a PC for those tasks.

What advantage does Apple's GPU have over Nvidia's workstation GPU? Efficiency? I doubt Apple's GPUs are more efficient than Nvidia's workstation GPUs.
Power efficiency, maybe. Again, their marketing is silly because they are comparing to gaming GPUs, which are inefficient compared to workstation GPUs for things that aren't gaming. Even sillier, in the PowerPC and Intel eras of Macintosh, Apple preferred supplying gaming GPUs over workstation GPUs while marketing their machines more towards workstation tasks. At this point in time, you're buying anything beefier than an M1 or M2 (i.e. M1 Pro or M1 Max) for things that aren't gaming, because it would make far more sense to buy a PC for gaming, especially now that you can't run x86-64 versions of Windows on these new Macs anymore.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Apple is marketing an advantage over gaming GPUs even though everything they market their graphics for isn't gaming, but rather the kinds of things you'd prefer a workstation GPU over a gaming GPU for if you were configuring a PC for those tasks.

Power efficiency, maybe. Again, their marketing is silly because they are comparing to gaming GPUs, which are inefficient compared to workstation GPUs for things that aren't gaming. Even sillier, in the PowerPC and Intel eras of Macintosh, Apple preferred supplying gaming GPUs over workstation GPUs while marketing their machines more towards workstation tasks. At this point in time, you're buying anything beefier than an M1 or M2 (i.e. M1 Pro or M1 Max) for things that aren't gaming, because it would make far more sense to buy a PC for gaming, especially now that you can't run x86-64 versions of Windows on these new Macs anymore.
Back in that time there was rarely any real difference between workstation and consumer branded GPUs. Just markup, how close they ran them to the edge of practical voltage/frequency/power limits (workstation models were generally downclocked a bit to reduce power and make them more reliable), and drivers. Since Apple always opted to keep clocks and power on the low side, and never had special drivers just for workstation apps, there wasn't much point in Apple using so-called workstation GPUs.

The thing you have to understand here is that Nvidia's traditional workstation GPU dominance was built on the back of software - their driver stack - not hardware. Not that the hardware was bad, but NVidia's crown jewels were the fastest and best tested OpenGL driver when it came to certain legacy GL features seldom used outside of dusty CAD software originally written when those GL features weren't essentially obsolete. NVidia tied these drivers to higher margin hardware by making them insist on detecting a GPU with a special fuse bit set identifying it as a workstation (Quadro) GPU model. There was seldom any real difference outside that fuse bit, they just wanted to make higher profit margins off deep-pocket CAD workstation customers. People looking to do CAD on the cheap used to hack the drivers to bypass this check so they could use consumer cards. Worked fine.

AMD did much the same thing with FireGL, just much less successfully.
 
  • Like
Reactions: altaic and leman

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Power efficiency, maybe. Again, their marketing is silly because they are comparing to gaming GPUs, which are inefficient compared to workstation GPUs for things that aren't gaming.

Where do you get this notion from? Gaming and workstation GPUs use the same exact chip, with some minor differences in clock and memory speeds (+ things like FP64 throughput that is artificially limited in the consumer chip). According to benchmarks, the gaming 3090 outperforms the workstation A6000 in rendering by 30%. The A6000 might be faster for some scientific code that heavily relies on FP64 but that's about it.

And sure, workstation GPUs are more efficient — since they are often designed for stability, which means lower clocks. Guess what, that also makes them more efficient for gaming ;)
 

altaic

Suspended
Jan 26, 2004
712
484
The concept of a “workstation GPU” is from the late 90s, when eye-bleedingly expensive programs (that no one in their right mind would buy unless they were using it for profitable work) had special support for certain GPUs. Shortly thereafter, OpenGL was a revolution for having a common API that even games could take advantage of. Needless to say, times have changed and the term is meaningless.

Likewise, “gaming PC” does currently mean something, because there are games that require a Windows box with certain hardware. If that changes, then that term ought to sunset as well.

Marketing will never let the terms die, though, so expect vacuous linguistic discussions ad infinitum (or ad nauseam).
 
Last edited:
  • Like
Reactions: leman

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Back in that time there was rarely any real difference between workstation and consumer branded GPUs. Just markup, how close they ran them to the edge of practical voltage/frequency/power limits (workstation models were generally downclocked a bit to reduce power and make them more reliable), and drivers. Since Apple always opted to keep clocks and power on the low side, and never had special drivers just for workstation apps, there wasn't much point in Apple using so-called workstation GPUs.

What's interesting is that in recent years we did get some hardware differentiation between consumer and professional models (e.g. Turing vs. Volta in the previous generation). And Nvidia again has two lines of upcoming chips: consumer Lovelace and professional Hopper. But I doubt that "regular" workstation GPUs will be Hopper-based; that will probably be reserved for high-end datacenter and supercomputer applications.
 
  • Like
Reactions: JMacHack

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
The concept of a “workstation GPU” is from the late 90s, when eye-bleedingly expensive programs (that no one in their right mind would buy unless they were using it for profitable work) had special support for certain GPUs. Shortly thereafter, OpenGL was a revolution for having a common API that even games could take advantage of. Needless to say, times have changed and the term is meaningless.
Hardly; there are other factors in workstation GPU circuitry, like dedicated FP64 and ECC.


The Titan Black, for example, still has better FP64 performance than current-gen Nvidia GPUs.
Likewise, “gaming PC” does currently mean something, because there are games that require a Windows box with certain hardware. If that changes, then that term ought to sunset as well.
“Gaming PC” is a dubious term I’d argue. Any PC can play games (although at many different levels of performance), and even PCs marketed as “gaming” have wildly varying specs.

The only commonality I can think of is marketing “performance per dollar”, which usually involves cost cutting elsewhere in the PC (in things which the gaming market doesn’t value).
Marketing will never let the terms die, though, so expect vacuous linguistic discussions ad infinitum (or ad nauseam).
Okay I feel called out here.
 

fuchsdh

macrumors 68020
Jun 19, 2014
2,028
1,831
I think Apple's comparison to 3090s and similar is dumb, in that those high-end GPUs are used mostly by people who want bleeding-edge raytracing or 4K or 144+Hz gaming on the most demanding titles, and Apple is not interested in competing in that AAA gaming category (where the most basic requirement is probably removable/upgradable GPUs, followed by customizable rigs so you can show off your lack of aesthetic taste with RGB lighting.) Professional use cases often heavily leverage GPUs too, but there are actually useful real-world and synthetic benchmarks for directly comparing power there.

The point they're making, that their GPUs punch above their weight class while consuming much less power (which is the right tradeoff to make in almost all situations besides stuff like high-end gaming), can probably be better made by comparing to products their customers would be more likely to buy than some gaming PC (because if they really care about that stuff, they're buying an Apple laptop and then still having a game console or gaming PC on the side expressly for those purposes. Doing that is a much more sane and cost-effective strategy than hoping Apple might get its act together with gaming one of these decades.)
 
  • Like
Reactions: JMacHack

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
I think Apple's comparison to 3090s and similar is dumb, in that those high-end GPUs are used mostly by people who want bleeding-edge raytracing or 4K or 144+Hz gaming on the most demanding titles
An "affordable" PC workstation is a gaming PC without tacky lighting. If you wanted to buy a custom PC for 3D rendering, you would buy the same components you would buy for a gaming PC. Who is going to buy an RTX A6000 ($4,000) instead of an RTX 3090 ($2,000) if they perform about the same?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Hardly; there are other factors in workstation GPU circuitry, like dedicated FP64 and ECC.

Sure, but are those really necessary criteria to define a “workstation” GPU? FP64 is pretty much irrelevant to the majority of GPU applications (which is why Apple skipped it altogether) and ECC is nice but that’s about it.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Sure, but are those really necessary criteria to define a “workstation” GPU? FP64 is pretty much irrelevant to the majority of GPU applications (which is why Apple skipped it altogether) and ECC is nice but that’s about it.
I have no real answer, but they clearly have some use to justify their existence. Nobody would waste expensive development time on features that have zero use.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I have no real answer, but they clearly have some use to justify their existence. Nobody would waste expensive development time on features that have zero use.

The real answer is "legacy code support". Attaching an FP64 ALU to a special-ops unit doesn't cost much, and old code that relies on double precision can still run. But it's clear that nobody cares about the performance of such code nowadays.
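
As a small aside, here is a tiny illustration (an editor's sketch using NumPy, not anything from the thread) of why that legacy numerical code leaned on double precision in the first place: increments that fall below FP32's precision are silently lost, while FP64 keeps them.

```python
# Illustrative sketch: why legacy numerical code leaned on double precision.
# Adding 1e-8 to 1.0 repeatedly does nothing in FP32, because 1e-8 is below
# FP32's precision at that magnitude (~1.2e-7); FP64 accumulates it correctly.
import numpy as np

small = np.float32(1e-8)
total32 = np.float32(1.0)
for _ in range(100_000):
    total32 += small  # each add rounds back to 1.0 in FP32

total64 = np.float64(1.0) + 100_000 * np.float64(1e-8)

print(f"FP32 accumulation: {total32:.8f}")  # ~1.00000000 (increments lost)
print(f"FP64 reference:    {total64:.8f}")  # ~1.00100000
```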
 

LinkRS

macrumors 6502
Oct 16, 2014
402
331
Texas, USA
I think these questions are ultimately meaningless because they try to apply definitions outside of the relevant context. There is no formal set of criteria of what constitutes a "workstation GPU". In the end, it's just a fairly artificial concept that only exists in a specific market. Something like "workstation GPUs are a certain brand of GPU products offered by Nvidia and AMD that are marketed towards professionals and are priced considerably higher". Apple's marketing doesn't really work like that: they don't differentiate their GPUs by functionality or targeted market, so I don't think that describing them in these terms makes sense. In comparative terms, Apple GPUs have properties of both classical gaming and workstation GPUs, but what do we get from this kind of insight? Very little, I think.
In the PC/Windows/*nix realm, a "Workstation" vs. a "Consumer" (or Gaming, if you prefer) GPU has nothing to do with the GPU itself, and everything to do with the packaging (of the components), drivers, and support. Workstation-class GPUs often have more RAM, and use specific drivers. The drivers used for workstation GPUs tend to be slower for games than the consumer ones, but are faster and, most importantly, more accurate for use in apps like CAD. Both nVidia and AMD optimize the drivers differently between "Workstation" and "Gaming." I would expect that Apple leans more towards 'Workstation' tuning in this context than 'Gaming.' As far as I know, Apple does not have different drivers based on intended usage, just one universal driver.

Additional Info:

Incidentally, Apple often used AMD's workstation-class GPUs in their products (versus the gaming/consumer variants), but since Apple writes the drivers, I am not sure what differences exist between, say, a Radeon Pro in a Mac Pro versus a Radeon in an Alienware.

:)

Rich S.
 