What about a Mac Pro with MPX Hidras, where each Hidra is something like an M4 Ultra+? Energy efficient, silent, connected via something in the 400 Gb range instead of just Thunderbolt. Will it beat Nvidia? Probably not, but it could reduce costs in the end.

Aye, people have mused about the possibility of different multi-chip solutions for the Mac Pro, including SoC-over-board. There are no rumors that I know of that Apple has anything like that planned, but you never know.

Oh, and btw, yes, AI/ML is a vast field, as mentioned. As a small company working in CV, we get very far on 4090s and similar, and we mostly added a single H100 for the memory increase when working on large images. 4090s in gamer rigs are silent; the H100 needs its own room. A Mac Studio would be lovely for experimenting and adding in as a "worker" if it just fully supported its hardware when used with torch or MLX.
Tbh, most smaller companies I've worked with have used gamer cards for a long time, both for AI and viz. Quadros and Radeon Pros seldom made sense even in visualization unless you had some very niche need like stereo rendering on a cluster with sync requirements.
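(Aside: that torch support gap is easy to probe. Below is a minimal, hedged sketch using PyTorch's MPS backend check to see whether work will actually land on the Apple GPU; nothing here is specific to any one Mac.)

```python
# Minimal check of whether PyTorch will use the Apple GPU (MPS backend).
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # matmul executed on the Apple GPU
    print("MPS works:", y.shape, y.device)
else:
    # Missing/unimplemented MPS ops are the usual "not fully supported" pain point.
    print("MPS not available; running on CPU")
```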
Who is training ML on a 4090 or 5090, unless it's for fine-tuning or for dev/test before moving to the cloud? As someone who uses both an Nvidia workstation and a Mac, they complement each other. Large unified memory is helpful to test and tweak before blowing money on high-end Nvidia GPUs in the cloud. Nvidia could potentially increase memory on a future 6090, but that would eat into their data-center revenue.

There are economies of scale in it for Nvidia. Apple's Mac Pro costs what it costs partially because it leans on the R&D from the millions and millions of base processors that sell in iPads and laptops. They stand on those shoulders and put a few more of those cores on the die. A completely separate parallel effort to make something that doesn't build on that tech? The prices can only go up from here. :)

People are still "trained" to think that the capability of high-end solutions shouldn't be available across the entire line. The fact that your average user, with their single-threaded tasks, wouldn't perceive a significant day-to-day difference between a Mac Pro and a MacBook Air (the single-threaded performance being so similar) still seems foreign and wrong to them. Apple has done what AMD/Intel will never be able to do, and instead of marveling at that fact, some point at Intel and ask "Why can't it be more like that?" It could; Apple could just do what AMD/Intel do and ship solutions with FAR more disabled cores than they do today.

To combine these posts together: this is one advantage Apple has over Nvidia in this space, at least when it comes to boutique-firm/prosumer hardware. Even Apple's base machines have decent if not good margins, and Apple doesn't have "big iron" device profits to protect. Nvidia's gaming chips have much lower profit margins than their professional devices, and Nvidia actively tries to dissuade people from using the gaming chips for professional purposes (e.g., in addition to lower VRAM, you'll note Blackwell 2 gaming chips have much lower matmul throughput than Blackwell 1 professional chips, and of course no FP64 acceleration; this has been true for many Nvidia product lines even when the two share silicon). This isn't to say that Apple wouldn't want high margins on a hypothetical Mac Pro Hidra, of course they would, but Apple is in a position to build something that delivers many, though not all, of the performance features Nvidia offers at the high end (the most obvious example is VRAM capacity, but increase ray tracing/FP32/matmul compute performance and suddenly ...) more economically than Nvidia can. Nvidia and AMD likely see the writing on the wall here too, and more SoCs are expected from both - even AMD is supposedly working on an ARM chip.

I think you can link 4 Mac Studios together, albeit only with TB5 bandwidth. It's the only solution for fine-tuning full models under probably $500k right now, and it works reasonably well for MoE if you only have a couple of people doing the work.
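(For anyone wanting to experiment: MLX already ships a distributed API that can run over exactly these kinds of links. Here's a hedged minimal sketch of averaging a locally computed gradient across linked machines, assuming a distributed backend such as MPI has been configured on each Mac; the tensor is just a stand-in.)

```python
# Hedged sketch: all-reduce a "gradient" across several linked Macs with MLX.
import mlx.core as mx

group = mx.distributed.init()              # one process per machine
grad = mx.ones((4, 4))                     # stand-in for a real local gradient
avg = mx.distributed.all_sum(grad) / group.size()  # sum across nodes, then average
mx.eval(avg)                               # force lazy evaluation
print(f"node {group.rank()} of {group.size()} synced")
```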

Once Apple addresses their slow GPU matrix performance, things are really going to get interesting. I'm going to be devastated if the Mac Pro gets the M3, because it means two years without those massive gains.

Nvidia has a huge lead with their interconnect technologies. Apple really should put something special in the Mac Pro if they can swing it, and bring it to the next Studio.

My guess is their internal PCC servers will indeed have fancier interconnects, if the rumors are anything to go on, but what Apple can (economically) put in the Mac Pro/Studio is a different matter.
 
Is anything known about the $51 "connecting cable" that links the two DGX Sparks in the bundle? …

 
What I really want to see is the interconnect between two Ultra Studios; then, the interconnect between four Ultra Mac Pros!

Is it going to be exo, or something else....
 
Probably QSFP56 200GbE DAC (RDMA)

Four DGXs interconnected is a very real prospect...
I heard NVIDIA bought up Apple's overstock of Lightning cables for this. :D

You may know this, but NVIDIA previously said the entry-level unit (otherwise identical to the above, but with a 1 TB SSD) would be $3k ⇒ ≈$12k for a 4x bundle. But to start, they're apparently only offering the upper-end 4 TB, $4k "Founders Edition."
 
I heard NVIDIA bought up Apple's overstock of Lightning cables for this. :D

It blows my mind that USB-C (with its failure-prone center-bar connector) gained the upper hand over Lightning; but, well: here we are 🤷‍♂️

Can 10GbE+TB5 actually compete with QSFP56/-DD?
 
Can 10GbE+TB5 actually compete with QSFP56/-DD?
From what I read, 10GbE+TB5 (90 Gbps) would be 2-4x slower in terms of raw bandwidth compared to QSFP56/-DD (200/400 Gbps).

As far as connectivity goes, I do wonder if all 4 TB5 ports in the Mac Studio could be aggregated to act as a single virtual pipe. That would result in a theoretical 320 Gbps of bidirectional bandwidth. I would think it's just a matter of writing the driver with the proper communication protocols?

Edit:
The M3 Ultra Mac Studio has up to six TB5 ports, so that gives a theoretical bandwidth of 480 Gbps.
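(The arithmetic behind those figures, assuming ideal bonding with zero overhead, which is a big assumption:)

```python
# Theoretical aggregate bandwidth if TB5 ports could be bonded into one pipe.
TB5_GBPS = 80  # Thunderbolt 5, per port, each direction

for ports in (4, 6):  # four-port config vs. six-port M3 Ultra Studio
    print(f"{ports} x TB5 = {ports * TB5_GBPS} Gbps theoretical")
# For comparison: QSFP56 = 200 Gbps, QSFP56-DD = 400 Gbps per link.
```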
 
Apple really should put something special in the Mac Pro if they can swing it

Mac Pro is dead I think - they may as well let it fizzle out.

The people who used to buy them for the ability to add RAM as needed or change GPUs (or to swap between native Windows and macOS) have now moved to PC workstations, which offer much greater capability (dual 60-core Xeons, 4x Nvidia 6000 GPUs, etc.).

Even consumer grade PCs are really fast.
 
It blows my mind that USB-C (with its failure-prone center-bar connector) gained the upper hand over Lightning; but, well: here we are 🤷‍♂️
Yeah, I've found USB-C isn't a physically robust connection. It's the first connector type for which I've ever encountered ports loosening over time, even though they're only occasionally plugged and unplugged. And they're often not even that tight to start with.

That's probably why OWC has equipped its USB-C docks with screw holes for attaching connector stabilization devices to those ports—AFAIK, it's the only port type for which they've deemed these necessary (see below).

Meanwhile, the Lightning port on my decade-old iPod Nano, which endured thousands of plug/unplug cycles, remained tight.

We know the USB-C physical connection, with its 24 contacts, can support at least 80 Gb/s bidirectionally (TB5). And I'm guessing it will support TB6 as well, which could be, say, 160 Gb/s bidirectionally.

It seems a modified version of Lightning (with its number of contacts increased from 16 to 24) could have done the same—they could have made Lightning 50% wider, giving it 24 contacts at the current contact pitch, and it would still be about the same total width as USB-C.


[Attached: image of an OWC USB-C dock with screw holes for connector-stabilization devices]
 
A slightly different perspective: I've been using Apple Pro notebooks since the PowerPC days, and I am over the moon with how far Apple laptop graphics performance has come. With Crossover, I am playing Hogwarts Legacy and Cyberpunk 2077 on my Mac! Compared to what I used to put up with, it's truly incredible.
 
When the M1 Max launched, Apple said it rivaled the flagship NVIDIA laptop GPU at the time, the RTX 3080.

However, in 2025, when comparing the M4 Max MacBook Pro to PC laptops equipped with the flagship NVIDIA laptop GPU, the RTX 5090 (which cost about the same as an M4 Max MacBook Pro), the MacBook Pro gets destroyed; it is not even close.

And this is true even on battery power.

[Attached: two benchmark charts comparing the M4 Max MacBook Pro and RTX 5090 laptops]
This is faster than that, but that is faster than this... The reason I never use (or look at) tests is simple: before I buy a Mac, I ask myself what I'm going to do with it. Usually 6 things. And those 6 things are done perfectly by the Macs I bought, so I am happy. What's the rush with "faster and faster"? Do you need to be somewhere else? :)
Overall I am very satisfied with all my Apple stuff! Except one thing... I had to drive 850 km to get my Apple Vision Pro, because the local Apple importer refuses to sell them (same as in the beginning with all Apple speakers). And on top of THAT, I can't get any service! In the country where I live it isn't sold, so "Sorry, we can't help you," and in the country where they DID sell me one it's "Sorry, you're not living in Germany!" So there I am, after spending nearly 5000.-! Now THAT p@@@@s me off.
 
Mac Pro is dead I think - they may as well let it fizzle out.

The people who used to buy them for the ability to add RAM as needed or change GPUs (or to swap between native Windows and macOS) have now moved to PC workstations, which offer much greater capability (dual 60-core Xeons, 4x Nvidia 6000 GPUs, etc.).

Even consumer grade PCs are really fast.
There are other uses for PCIe slots than just GPUs; there's still a market for the current Mac Pro, small as it is. And there are still rumors swirling about an "Extreme" chip slotted above the Ultra for a future Mac Pro.
 
From what I read, 10GbE+TB5 (90 Gbps) would be 2-4x slower in terms of raw bandwidth compared to QSFP56/-DD (200/400 Gbps).

As far as connectivity goes, I do wonder if all 4 TB5 ports in the Mac Studio could be aggregated to act as a single virtual pipe. That would result in a theoretical 320 Gbps of bidirectional bandwidth. I would think it's just a matter of writing the driver with the proper communication protocols?

Edit:
The M3 Ultra Mac Studio has up to six TB5 ports, so that gives a theoretical bandwidth of 480 Gbps.

AAPL has always been conservatively sparse with their connectivity; if the 25/26 MP allows the addition of 200/400GbE, maybe we can see comparisons.

I reserve judgement until then.
 
As far as connectivity goes, I do wonder if all 4 TB5 ports in the Mac Studio could be aggregated to act as a single virtual pipe. That would result in a theoretical 320 Gbps of bidirectional bandwidth. I would think it's just a matter of writing the driver with the proper communication protocols?

Edit:
The M3 Ultra Mac Studio has up to six TB5 ports, so that gives a theoretical bandwidth of 480 Gbps.
From what I've read, it's not possible to do that with TB (though I've not found an authoritative source).

We know it is possible to aggregate I/O bandwidth upstream of the TB controllers, which is presumably what Apple does when it connects its SoC to the PCIe slots on the Mac Pro.

As I wrote above, you could get 60.5 GB/s = 484 Gb/s of bidirectional bandwidth with a single x16 PCIe 5.0 slot—and twice that with x16 PCIe 6.0.
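(A quick raw line-rate sanity check on that figure; the gap between ~63 GB/s raw and the 60.5 GB/s quoted above is presumably packet/protocol overhead.)

```python
# PCIe 5.0 x16 raw line rate, before packet/protocol overhead.
GT_PER_LANE = 32        # GT/s per lane for PCIe 5.0
LANES = 16
ENCODING = 128 / 130    # 128b/130b line code

gbps = GT_PER_LANE * LANES * ENCODING
print(f"x16 PCIe 5.0: {gbps:.0f} Gb/s = {gbps / 8:.1f} GB/s per direction")
```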
 
Comparing them to Nvidia, who has been in the game for decades, is not even a good comparison.

This argument doesn't make sense when you consider that both Apple and other people have been comparing the M chips to Intel, which is probably an older company than Nvidia.
 
I am sorry, but considering Apple only released their first Apple Silicon Macs less than 5 years ago, they are doing pretty well. Comparing them to Nvidia, who has been in the game for decades, is not even a good comparison.

They don't even target the same markets or consumers.
1) Apple is not as new to this as you portray. The CPU and GPU cores Apple uses on AS Macs share the same basic architecture as the CPU and GPU cores in their iPhones, and Apple has been designing those in-house since the A4, which was released 15 years ago, in 2010.

2) Having said that, I think it is fair to note that Apple is new to designing PC-scale CPUs and GPUs, and that we can thus expect them to be on the steep part of the improvement curve.

But that doesn't mean it's not fair game to compare them to existing tech! That's fully fair game for any new tech, and Apple itself realizes this, as evidenced by its own NVIDIA vs AS comparison slides.

3) Of course they target overlapping markets. At the low end, Apple is trying to get PC users to switch to Macs. And the higher up the range you go, the more likely you are to find multi-platform users.

Here on MR I've seen several posts from video pros who own a Mac Studio or MP because they prefer working in macOS, but who also have a home-built NVIDIA-based PC they use when they have a lot of rendering to get done quickly. If Apple GPUs were sufficiently powerful they wouldn't need the latter (or maybe they'd still keep building separate PC rendering boxes, if their performance/cost is significantly higher than what they can get from Apple).
 
Here on MR I've seen several posts from video pros who own a Mac Studio or MP because they prefer working in macOS, but who also have a home-built NVIDIA-based PC they use when they have a lot of rendering to get done quickly. If Apple GPUs were sufficiently powerful they wouldn't need the latter (or maybe they'd still keep building separate PC rendering boxes, if their performance/cost is significantly higher than what they can get from Apple).

I think Apple should make PCIe-based ASi compute/render cards; use the SoC (SoIC) in the Mac Pro for real-time work & the compute/render card(s) for queued jobs...
 
I think Apple should make PCIe-based ASi compute/render cards; use the SoC (SoIC) in the Mac Pro for real-time work & the compute/render card(s) for queued jobs...
IIUC what you have in mind, a PCIe render card sounds like it would essentially be a slotted eGPU. And, as you know, Apple hasn't shown an interest in supporting those.

I'm no better at reading Apple's tea leaves than anyone else. But if I were to guess, I'd expect that if Apple wanted to increase the rendering/GPU-compute power of its MP, it would add a separate GPU die to the SoC (perhaps in a stacked configuration on top of the base die), giving it access to its UMA. That would be consistent with its unified architecture in a way that an eGPU would not.
 
Not sure if this was mentioned already, but another advantage the Mac Pro can have over the Mac Studio is a better cooling system for longer sustained peak performance. (I'm assuming there are extreme workloads for which thermals become a bottleneck in the Mac Studio.)
 
IIUC what you have in mind, a PCIe render card sounds like it would essentially be a slotted eGPU. And, as you know, Apple hasn't shown an interest in supporting those.

I'm no better at reading Apple's tea leaves than anyone else. But if I were to guess, I'd expect that if Apple wanted to increase the rendering/GPU-compute power of its MP, it would add a separate GPU die to the SoC (perhaps in a stacked configuration on top of the base die), giving it access to its UMA. That would be consistent with its unified architecture in a way that an eGPU would not.

As I said, a compute/render card is something you send compute/render jobs to; the SoC (SoIC) handles whatever work one is doing in assorted apps, while the compute/render card(s) handle all compute/render jobs sent to them for processing...

So these theoretical compute/render cards are NOT linked into the UMA, but receive (and send back) compute/render jobs via the PCIe slot(s)...

NO display output from these compute/render cards...!

Although, I WOULD like to see Apple create a class of desktop/personal workstation chips that GREATLY increase GPU horsepower for the end user...!
 
Not sure if this was mentioned already, but another advantage the Mac Pro can have over the Mac Studio is a better cooling system for longer sustained peak performance. (I'm assuming there are extreme workloads for which thermals become a bottleneck in the Mac Studio.)
I'm not sure the difference would be significant. If you're curious, with some searching you can probably find extended high-load performance comparisons of the top-end M2 Ultra Studio and top-end M2 Ultra Mac Pro to check this.

And if the M2 Ultra Studio doesn't suffer by comparison, the M3 Ultra Studio wouldn't either, as the latter has a lower max TDP than the former:

[Attached: chart comparing the max TDP of the M2 Ultra and M3 Ultra Mac Studio]
 