
deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I've seen the same 4070 laptop chips run at 90W and 140W; there's a 1FPS difference. I can't imagine the M3 Max suddenly becoming a monster when pumped with power.

An M3 Max combined in-package with another M3 Max does not have to get "pumped with power". Because the power envelope is substantially smaller, you can throw 2-3 of them at the same envelope as a single 4070 FE, 4080, or 4090. That is the primary point. It is about using efficiency better, not 'draining the local power grid'. Two M3 Max dies obviously consume more power in the aggregate, but there is no need to grossly throw more power at either of them individually.
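
As a rough back-of-the-envelope sketch of that point (the wattages below are assumed round numbers for illustration, not measured specs):

```python
# Back-of-the-envelope power-envelope comparison.
# All wattages are assumptions for illustration, not measured specs.
M3_MAX_PACKAGE_W = 90  # assumed full-package power of one M3 Max under load
nvidia_boards_w = {"4070 FE": 200, "4080": 320, "4090": 450}  # assumed board power

for card, watts in nvidia_boards_w.items():
    print(f"{card}: ~{watts // M3_MAX_PACKAGE_W} M3 Max packages in the same power budget")
# -> roughly 2, 3 and 5 on these assumptions: the idea is to spend the budget
#    on more silicon running efficiently, not on pushing individual dies harder.
```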

Fundamentally, it isn't a 'bragging rights' contest. I'm very doubtful there is a "have to beat <external competitor X> at any cost" primary objective. More likely it is to be as fast/powerful as they can within the constraints they choose (i.e., constraints not solely dictated by an external entity).
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Efficiency is a noble goal for sure, but people in the desktop space do not care about it at all. As said earlier in the thread, it's probably near the bottom of the list of considerations. Efficiency may enable more mobility and innovation in smaller form factors, but then you recall that the M1/M2 Ultra die is... gargantuan. So you already need to cut performance way down for size considerations.

My hope is that future Apple designs will allow operating at higher clocks without sacrificing the low-power efficiency too much, but I’m not sure how feasible it is.
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
I've seen the same 4070 laptop chips run at 90W and 140W; there's a 1FPS difference. I can't imagine the M3 Max suddenly becoming a monster when pumped with power.

Yeah, there is no linear scalability when feeding it more power. Yet. Even their hardware doubling when going from Pro to Max to Ultra to whatever doesn't scale linearly.
 
  • Like
Reactions: turbineseaplane

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Even their hardware doubling when going from Pro to Max to Ultra to whatever doesn't scale linearly.

M3 scales pretty much linearly between Pro and Max in Blender. M1 appears to have a much simpler shader scheduler and has some serious scaling issues (I also observe this in microbenchmarks: M1 is rather sensitive to how you feed it shaders and needs a lot of work to effectively hide latencies, while M3 is much more flexible).
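
A quick way to sanity-check "scales linearly" from any two Blender Open Data scores; the scores below are placeholders, only the 18-core (Pro) and 40-core (Max) GPU configurations are real:

```python
# Scaling-efficiency check. The scores are placeholders, not real Blender results.
def scaling_efficiency(score_small, cores_small, score_big, cores_big):
    """1.0 means perfectly linear scaling with GPU core count."""
    return (score_big / score_small) / (cores_big / cores_small)

# Hypothetical example: an 18-core M3 Pro scoring 1500 vs a 40-core M3 Max scoring 3200
print(round(scaling_efficiency(1500, 18, 3200, 40), 2))  # -> 0.96, i.e. near-linear
```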
 

Regulus67

macrumors 6502a
Aug 9, 2023
531
501
Värmland, Sweden
Specifics are few and far between, but I believe that R1 is the component responsible for managing the spatial aspects of Vision Pro in conjunction with the cameras and sensors built into the device. The graphics side is still left to the M2.
I see. One reason I asked was the way app windows are anchored in 3D space, as demonstrated in YouTube videos showing users walking around, both indoors and outdoors, while the apps stay in fixed positions.
I thought it might be offloading some work from the graphics cores in the M processors, kind of like the Afterburner card did for ProRes video. As if it were a specialised CPU for 3D.

So you explain that aspect quite well, thank you. And as leman said, this is not a graphical ability.
Now I think of it more in line with the T2 chip. Not that they have anything in common, but in terms of functionality that is added which would not otherwise be present in the system.
 

Regulus67

macrumors 6502a
Aug 9, 2023
531
501
Värmland, Sweden
The talk of Apple’s first spatial computer is what made me link this to spatial cognition skills.
As a tower crane operator, I have developed this skill set to a very high degree. But not everyone will be able to reach the same level, even with several years of practice, which very much determines how good a crane operator can become.

In my mind, spatial cognition and graphical skill are very much linked, so I can draw in 2D or 3D using the same skill set. Even on paper.
 
  • Like
Reactions: heretiq

Zest28

macrumors 68030
Original poster
Jul 11, 2022
2,581
3,931
Nvidia’s recent surge is in the LLM world, which is 99 44/100 % rack-mount servers in data centers.

If you’re doing stuff in that area, you really do need massive arrays of GPUs to function. And Apple is not making hardware for that market — the same way that, for example, Ford doesn’t make jet engines for aircraft (even though Rolls Royce does).

With a footnote for the PC gaming world, the types of things you’d use GPUs for on a single-person computer work (within rounding) equally well on any of the GPU platforms out there.

And, if you need more than you can readily get in a single-person computer … these days, you’re using a cloud provider to do the number crunching for you. Far cheaper and easier to work with.

There might be another footnote in there for developers who create toy models on their workstations before deploying them to the cloud … but even that’s hard to justify these days. Do your development in the cloud with a small data set and deploy with the full set; much easier and — again — cheaper than buying a massively-specced computer to do stuff locally.

It's still good to do your prototyping locally / on-prem as it costs less.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
So you explain that aspect quite well, thank you. And as leman said, this is not a graphical ability.
Now I think of it more in line with the T2 chip. Not that they have anything in common, but in terms of functionality that is added which would not otherwise be present in the system.

From what I understand, the primary purpose of R1 is to do eye, head, etc. tracking with ultra-low latency. You could do these things on the CPU/GPU, but there would be a short delay, which could lead to motion sickness. I’ve also seen mentions that R1 is responsible for at least part of the video output processing.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Seeing the latest earnings report from NVIDIA, it just shows how dominant NVIDIA currently is, nobody even comes close. Why not embrace the best technologies the industry has to offer for the Mac Pro?

Nvidia's breakout earnings aren't based on selling single-user workstation GPU cards. They are about managing the current relative scarcity of data center cards. Several billion-dollar companies are trying to fill up racks and racks of space in their data centers with GPU 'cards' which in very substantial numbers are NOT standard PCI-e form factor cards at all.
H100 SXM5

[Image: Nvidia H100 SXM5 module]


https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/

That doesn't even fit in a Mac Pro (or a mainstream Dell/HP/Lenovo/etc. Windows PC workstation).
The massive NVLink switches in an Nvidia DGX system ... again, don't fit in a Mac Pro.
The InfiniBand links of DGX ... again, not matched by the Mac Pro either.

The PCI-e version of the card is the 'trimmed down' version, not the maximum-revenue-generating version.

All of this is seriously disconnected from the gamer-oriented add-in cards that folks are going to buy 'off the shelf' at Newegg/Microcenter/'retail tech shop 42'/etc.




Even Google, who build their own custom GPUs, have embraced NVIDIA in the end. Amazon also builds custom ARM chips, but they also heavily rely on NVIDIA; there is no way around it.

Amazon AWS has hosted Mac Minis if customers want them. Any 'full service' remote data center service provider is going to slap whatever equipment folks want to 'rent' into their facility. AWS has four different flavors of CPU they will rent you (Graviton, Intel, AMD, Apple M).

Apple is not in the 'full service' remote data center business. And the Mac Pro isn't a natural fit for that kind of business either. Places like MacStadium/Maccolo/AWS/etc. tend to have an order of magnitude more Minis providing services than Mac Pros. The rack version of the Mac Pro 2019/2023 isn't for mega datacenters; it is just as much for the local, single-rack video/music studio as for any 'datacenter'.

This mania that Apple has 'gotta have' Nvidia GPU cards is mainly spinning a "Fear Of Missing Out" (FOMO) story. The Mac Pro very likely makes money without Nvidia. The whole Mac product line-up and ecosystem 100% makes money without Nvidia (there are years of proven track record at this point; it was well past delusional to try to spin a story otherwise even in the later Intel era, and it is in the ludicrous zone now in the M-series era). There is no FOMO factor here for Apple.



Where there is a bigger chance that Apple is messing up here is that there is no alternative, additive AI/ML inference/training option. AI/ML has two major components: training and inference. Nvidia has a decent grip on training. Inference is not a sure thing. Much of the hype in Nvidia's stock price is riding on those two being tightly coupled when technically they are not. You don't necessarily need a GPU to do very good inference. And Windows/Android/macOS/Linux are going to enhance local inference libraries more and more (likely some of it in an open, less 'moat-lock-in' fashion).
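
To illustrate the "you don't need a GPU for inference" point, here is a minimal sketch that runs a model entirely on the CPU with onnxruntime; the model path and the 1x3x224x224 input shape are placeholders:

```python
# Minimal local-inference sketch: runs an ONNX model entirely on the CPU.
# "model.onnx" and the input shape are placeholders for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)  # no discrete GPU involved at any point
```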

Apple's problem is that they are on a path to making their software stack too narrow. Single-vendor hardware isn't the root-cause problem.
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Found the guy who thought NVDA was too expensive at 300, now he's doubling down.

"Nvidia's largest US customers are Amazon, Microsoft, Google, Meta, and Dell."

You see they are selling to real companies, not SPAC funded AI companies.
That’s a complete misreading of my post. I don’t have a dog in this fight, nor do I own any stocks that aren’t part of my 401k.

The topic of the thread is video cards on the Mac Pro. People are pointing to Nvidia’s rapid adoption in the data center space as if that has anything to do with a workstation computer.

You quoted a post about the AI bubble, in which a huge number of start-up companies are funded through SPACs. Most will fold; the others will be acquired.

I couldn’t give less of a **** about the stock price of Nvidia, or Apple for that matter.
 
  • Like
Reactions: Timpetus

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
It's still good to do your prototyping locally / on-prem as it costs less.

The local prototyping going on is NOT what is spiking Nvidia's revenue numbers. Prototyping is also primarily a 'cost center', not a revenue-generating center. Mac Pros are expensive, which means they primarily need to be used in a fashion that makes the money back per seat deployed to cover the costs.

There is also a rather tenuous connection to enhancing synergy with the rest of the Mac ecosystem. Prototyping just so you can push the workload to another, non-Mac platform isn't doing much. Most of the disconnect here is whether the Mac Pro is a Mac first and a box-with-slots container second (or third), or a box-with-slots container first with being a Mac in a secondary role.

Apple views it as the first, not the second. Compounding the disconnect, Nvidia basically has the same type of viewpoint: Nvidia ecosystem first, being a good broad-spectrum Mac partner second. The revenue spike very likely makes Nvidia even more disinterested in being a very good Mac partner. They were already a bad partner (the kneecapping of OpenCL and other moves). There was a fluke period in the first half of the x86 transition where Apple was trying to push a transition to a more open 'EFI' and OpenGL/OpenCL stack. Apple is off UEFI now and has deprecated the 'open' compute and graphics stack path.


Two things that would help are:

i. PCI-e card pass-through to virtual machines. That one is pretty much Apple's job, since they don't allow any hypervisors on the platform except theirs. If there is a huge strategic disconnect on objectives and implementation that macOS doesn't want to handle, then it should minimally enable handing the device off to another OS stack that does want to do it.


ii. A more open AI/ML compute stack that is not entangled with the core graphics stack (Metal is entangled with graphics). The Coral AI USB inference device largely works with macOS.

https://coral.ai/products/accelerator

There is a MacPorts/Homebrew stack aligned with the standard Unix, no-GUI (i.e., no graphics) side of macOS. Avoid the "battle on monster island" match between CUDA and Metal, just be a detached inference/training tool, and it probably would work with some reasonable, limited help from both sides.
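
For what it's worth, the Coral accelerator can already be driven that way from a plain command-line Python environment. A minimal sketch; the delegate library name and the model file are assumptions for a MacPorts/Homebrew-style macOS install, not verified paths:

```python
# Minimal Edge TPU inference sketch; no GUI/graphics stack involved.
# The delegate library name ("libedgetpu.1.dylib") and the model file are
# assumptions for a typical macOS install, not verified paths.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.1.dylib")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]).shape)
```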
 

Pet3rK

macrumors member
May 7, 2023
57
34
Apple Silicon itself has proven that it's not good for professionals who need high specs, like a workstation. The Mac Pro will die and I don't think Apple is interested in professional markets at all. Sadly, it will affect the Mac's major markets such as video and music due to the hardware limitations.
My field uses CPU and RAM more than GPU. In fact, our specs are typically measured in GB per core. But suuuure. If it's not good lol.
 

Zdigital2015

macrumors 601
Jul 14, 2015
4,143
5,622
East Coast, United States
"at the same power" is great if worried about laptops

When one wants to get big things done with GPUs, usually it's a desktop and the power efficiency is way down the list. I'm not saying it means nothing, but it's not near the top of the list.

I get that Apple wants something to tout - good for them - but they are getting killed on the desktop side when it comes to just having "the best"

(we are already 4 years into ASi .. how many years do we have to wait for them to "win"?)
Getting killed on the desktop side? The whole market at this point is geared towards mobile, not desktop. I can’t take an ATX-chassis PC with me on the road or on an airplane. Besides gaming and higher-end computing, NVIDIA is not a great choice for day-to-day or mobile anything. We live in a post-PC world at this point. Besides, NVIDIA’s innovations right now are simply raising the wattage consumed and the prices of their GPUs; beyond that I’m not sure exactly why there’s so much excitement around them. They’re a horrible business partner and they suffer from the same arrogance that Apple suffers from at various times.
 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Measuring a few metrics and claiming they are getting killed is disingenuous.

Currently the only thing Apple Silicon is getting beaten at is the GPU, plus people complaining when 96-core CPUs win out on multi-threaded tasks like 3D rendering and such.

For fun again, since people keep saying Apple is going to lose the content creation market... and yes, we all know benchmarks mean squat; but isn't it kind of embarrassing for AMD and Intel to be beaten by Apple Silicon in the Puget Systems Photoshop and After Effects benchmarks? I just checked again today and the M3 Max is on top in both of those benchmark tests. I mean, Apple has only been at the CPU game for 3-4 years, ignoring anything they are doing on phones and tablets. (Of course, I imagine Apple has been churning on this for a lot longer in R&D.)

This is a wholly refreshing change, since Apple systems used to be dog-ass slow in After Effects compared to the rest of the market.

If you need CUDA libraries for whatever you are doing, stop complaining already and just get a Windows or Linux system and get on with your life.
 

heretiq

Contributor
Jan 31, 2014
1,021
1,654
Denver, CO
I am the one trolling? This is a Mac forum, last I checked. Y'all are beating a dead horse that has been beaten to death so many times here, even by me.
Bingo @avkills. In virtually every thread, it’s always the same three-member death cult deriding everything Apple does and arguing against any pro-Apple comment — often on topics they don’t understand and products they‘ve never used nor intend to use. It’s clear they’re here to troll, but I don’t understand what value they derive from trolling.
 

Yebubbleman

macrumors 603
May 20, 2010
6,024
2,616
Los Angeles, CA
Seeing the latest earnings report from NVIDIA, it just shows how dominant NVIDIA currently is, nobody even comes close. Why not embrace the best technologies the industry has to offer for the Mac Pro?

Even Google, who build their own custom GPUs, have embraced NVIDIA in the end. Amazon also builds custom ARM chips, but they also heavily rely on NVIDIA; there is no way around it.
Because Apple is only interested in making Apple GPUs for Apple hardware that can only run Apple operating systems...? Not sure what about this doesn't make sense.

I'm not saying NVIDIA doesn't have the superior graphics technology. I'm pretty sure that's the main reason why other tech companies even entertain NVIDIA's nonsense on the regular to begin with. But that's not what Apple Silicon Macs are aiming to do here.
 

Allen_Wentz

macrumors 68040
Dec 3, 2016
3,329
3,763
USA
"at the same power" is great if worried about laptops

When one wants to get big things done with GPUs, usually it's a desktop and the power efficiency is way down the list. I'm not saying it means nothing, but it's not near the top of the list.

I get that Apple wants something to tout - good for them - but they are getting killed on the desktop side when it comes to just having "the best"

(we are already 4 years into ASi .. how many years do we have to wait for them to "win"?)
Sure "When one wants to get big things done with GPUs, usually it's a desktop and the power efficiency is way down the list." But that is just a small subset of all the computing that gets done. Burning watts for marginal gains is a bad thing on all kinds of levels, which is part of why Apple's M series chips are doing so well.
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
Measuring a few metrics and claiming they are getting killed is disingenuous.

Currently the only thing Apple Silicon is getting beaten at is the GPU, plus people complaining when 96-core CPUs win out on multi-threaded tasks like 3D rendering and such.

For fun again, since people keep saying Apple is going to lose the content creation market... and yes, we all know benchmarks mean squat; but isn't it kind of embarrassing for AMD and Intel to be beaten by Apple Silicon in the Puget Systems Photoshop and After Effects benchmarks? I just checked again today and the M3 Max is on top in both of those benchmark tests. I mean, Apple has only been at the CPU game for 3-4 years, ignoring anything they are doing on phones and tablets. (Of course, I imagine Apple has been churning on this for a lot longer in R&D.)

This is a wholly refreshing change, since Apple systems used to be dog-ass slow in After Effects compared to the rest of the market.

If you need CUDA libraries for whatever you are doing, stop complaining already and just get a Windows or Linux system and get on with your life.

No one uses AE on the CPU any more. Now that Adobe has finally figured out that the GPU is the future and has optimized most functionality to squeeze your GPU, no Apple computer is faster than the latest desktop GPUs. And I am not talking about putting titles over a video, doing some one-point tracking and other basic stuff.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
I think it is interesting to compare technologies. Let's have a quick look. Where are Apple and Nvidia right now in relation to each other in terms of raw performance? From a technology standpoint, the M3 GPU is roughly on par with 2018 Nvidia Turing — both architectures feature concurrent FP32+INT32 execution on the same compute partition using independent FP/INT pipes. However, Apple's partitions are 2x wider than Turing's, making Apple's GPU core more similar to an Nvidia Ampere/Ada core. An Apple M3 GPU core can do 128 FP32 + 128 INT32 operations per clock, while Nvidia Ampere/Ada can do 128 FP32 or 64 FP32 + 64 INT32 operations per clock. When we look at the Blender benchmark database, we can see that the M3 Max (40 compute partitions running at 1.33 GHz) performs identically to a mobile RTX 3080 Ti (58 compute partitions running at somewhere between 1.1 and 1.26 GHz). Based on the configuration alone, the 3080 Ti should be around 20-40% faster, and I think this is where the design philosophy differences between Apple and Nvidia become apparent (Apple seems to maximize utilization of each available compute unit, while Nvidia sacrifices utilization to increase the number of compute units and lets the massive parallelism sort things out).

Ada is significantly faster yet — the 4090 mobile features a whopping 76 SMs running at 1.4-1.7 GHz — that's around 2x-2.4x more theoretical compute than the M3 Max! Ada also features larger caches and faster RT — all this allows it to achieve a commanding 2x lead over the M3 Max and the 3080 Ti mobile in Blender.
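
Plugging the unit counts and clocks quoted above into a quick calculation (counting one FP32 operation per lane per clock; only the ratios matter here):

```python
# Theoretical FP32 throughput from the unit counts and clocks quoted above.
def tflops(units, fp32_lanes_per_unit, clock_ghz):
    return units * fp32_lanes_per_unit * clock_ghz / 1000.0

m3_max   = tflops(40, 128, 1.33)                              # ~6.8
rtx3080m = (tflops(58, 128, 1.10), tflops(58, 128, 1.26))     # ~8.2 to ~9.4
rtx4090m = (tflops(76, 128, 1.40), tflops(76, 128, 1.70))     # ~13.6 to ~16.5

print(f"3080 Ti mobile vs M3 Max: {rtx3080m[0]/m3_max:.2f}x - {rtx3080m[1]/m3_max:.2f}x")
print(f"4090 mobile    vs M3 Max: {rtx4090m[0]/m3_max:.2f}x - {rtx4090m[1]/m3_max:.2f}x")
# -> roughly 1.2x-1.4x and 2.0x-2.4x, matching the 20-40% and 2x-2.4x figures above
```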

This is the situation we have today. What about tomorrow though?

Well, the obvious next step for Apple is to "pull an Ampere" and add FP32 capability to either the FP16 or the INT32 pipe (I think the FP16 pipe is the more likely candidate, as this would retain useful concurrent FP+INT execution). If Apple goes this route, each of their cores would be capable of 256 FP32 + 128 INT32 per clock, making them 2x more capable than Nvidia's SMs. This should instantly boost Apple's performance on FP-heavy code by 30-60%, without increasing clocks. And this should be fairly easy for Apple to do, as their register files and caches already support the operand pressure. In theory, they might go even further: make all pipes symmetrical and do 3x dispatch per cycle, but that would likely be very expensive.
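
In per-clock terms the hypothetical change works out like this (the "hypothetical" Apple core is speculation from this post, not an announced design):

```python
# Per-clock FP32 lanes per Apple GPU core vs an Nvidia Ampere/Ada SM,
# using the figures from this post. The hypothetical core assumes the
# FP16 pipe gains FP32 capability; this is speculation, not a product.
apple_core_today = 128        # FP32 pipe only
apple_core_hypo  = 128 + 128  # FP32 pipe + FP16 pipe reused for FP32
ada_sm           = 128        # 128 FP32, or 64 FP32 + 64 INT32

print(apple_core_hypo / ada_sm)            # -> 2.0, the "2x more capable than an SM" figure
print(apple_core_hypo / apple_core_today)  # -> 2.0 in theory; the real-world FP-heavy
                                           #    gain is guessed above at 30-60%
```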

At any rate, if we look at Apple's progress with GPUs, I think one can see a long-term plan. Each generation incrementally adds features that are used to unlock new capabilities in the next generation. This is a well-executed multi-year plan that has been delivering consistent performance and capability improvements every single release. I doubt that the current Apple GPU architecture is close to its plateau, simply because there are still obvious things they can do to get healthy performance boosts. The same isn't really true for Nvidia or AMD. I just don't see how Nvidia can further improve the Ada SM design to make it significantly faster — they can either continue increasing the SM count and raising the clocks, or they have to design a fundamentally new SM architecture that boosts compute density. Definitely looking forward to seeing what Blackwell will bring — does Nvidia intend to continue pushing their successful SM model with bigger and bigger designs, or will they do something new? For Nvidia's sake, I hope it is the latter.
 