I'm seeing quite a few developers take advantage of the fact that their common codebase lets them ship the Mac, iOS and iPad versions of an app as separate products with bundle pricing - e.g. Mona for Mastodon; I think I paid $25 for all my devices, which is a fair bit more than a lot of the super-cheap pricing the App Stores have seen. I suspect sensible software pricing may be coming back for non-subscription products.
It would be a welcome gift :)
 
So…who’s going to pick up the new Mac Pro? Is anybody seeing value in it?

I know there were multiple people here who don't care enough about graphics horsepower for the lack of GPU support to be a dealbreaker, so now that we know what it actually is, who's on board?
I think I will buy the M3 Ultra Mac Pro, hoping for 384 GB of RAM.
 
I'm a VFX guy and have been exclusively a Mac user since 2003. I really should've bought a PC a long time ago for the sort of work I do. I was learning Redshift on my 15" MacBook Pro using an Nvidia RTX 2070S in an eGPU on Windows via Boot Camp. I was pretty set on building a PC around that RTX card when I was awarded some EU Covid business funding specifically for purchasing equipment, and I got a 2019 Mac Pro.

Apple Silicon had already been announced, so I knew it wasn't going to be my 10-year computer, but given it cost me nothing, I thought why the heck not.

A couple of years later and I'm really used to using Redshift with Cinema 4D on my Mac (hallelujah). I must say I'm pretty cheesed off that Apple:

1. aren't (yet?) supporting 3rd-party GPUs with Apple Silicon;
2. have completely killed off MPX modules.

I would be a natural purchaser of this Mac Pro - or whatever Mac Pro is around in a couple of years - because I would carry over my GPUs and boost whatever AS SoC is in there.

But now I can't. If nothing changes with GPU support, every single bit of kit in my Mac Pro will be redundant.

I don't need SDI cards, I just need render performance. Perhaps the GPU renderers will get better at supporting CPU rendering as well, but at the moment that feels like several years off.

Disappointed, but still happy with my setup for now.
I have difficulty understanding the issue with using different tools for such specialised work. It's not as though Apple and Windows machines can't communicate with each other, so it would be easy to set up two different machines for different purposes. Two software licences for the same package might be problematic, and that's a valid objection for a pro; two different OSes, in my opinion, are not.
 
So the reviews are in on the M2 Ultra Mac Studio and as we all suspected...it's awesome. Is it what we wanted to hear? No. Were we all hoping for a serious Mac Pro and not a gimped Mac Pro to force people to focus on the Mac Studio while they continue to R&D the REAL Mac Pro? Yes. But alas, here is what we have for now and for now...it's an excellent system for the overwhelming majority of folks that have been in this thread, can we agree on that?

I'm not happy with it because of obvious reasons, but it does run 8K displays now externally and it is the most powerful Mac ever made "if we only factor in CPU" and it is blazing fast and a fantastic workhorse of a machine that will satisfy 90% of Apple's user base which is, after all, who they are clearly focused on...

but...

...I have to wonder...

...if they're not planning on releasing a completely revolutionized powerhouse of a Mac Pro, and they truly are just doing this current Mac Pro from now on...

...then what's up with the overkill on the power source? Why use a power source that can power...I don't know, extremely powerful GPUs?...

...just wondering...
 
So the reviews are in on the M2 Ultra Mac Studio and as we all suspected...it's awesome. Is it what we wanted to hear? No. Were we all hoping for a serious Mac Pro and not a gimped Mac Pro to force people to focus on the Mac Studio while they continue to R&D the REAL Mac Pro? Yes. But alas, here is what we have for now and for now...it's an excellent system for the overwhelming majority of folks that have been in this thread, can we agree on that?

I'm not happy with it because of obvious reasons, but it does run 8K displays now externally and it is the most powerful Mac ever made "if we only factor in CPU" and it is blazing fast and a fantastic workhorse of a machine that will satisfy 90% of Apple's user base which is, after all, who they are clearly focused on...

but...

...I have to wonder...

...if they're not planning on releasing a completely revolutionized powerhouse of a Mac Pro, and they truly are just doing this current Mac Pro from now on...

...then what's up with the overkill on the power source? Why use a power source that can power...I don't know, extremely powerful GPUs?...

...just wondering...

They're just reusing parts, man.

Also, there's this. I think it's time to just throw in the towel and accept things for what they are:
https://www.macrumors.com/2023/06/11/apple-exec-discusses-mac-pro-lack-of-egpu-support/
Apple's hardware engineering chief John Ternus briefly touched on the matter in an interview with Daring Fireball's John Gruber last week, explaining that expandable GPU support for Apple silicon is not something that the company has pursued.

"Fundamentally, we've built our architecture around this shared memory model and that optimization, and so it's not entirely clear to me how you'd bring in another GPU and do so in a way that is optimized for our systems," Ternus told Gruber. "It hasn't been a direction that we wanted to pursue."
 
I'm not happy with it because of obvious reasons, but it does run 8K displays now externally and it is the most powerful Mac ever made "if we only factor in CPU" and it is blazing fast and a fantastic workhorse of a machine that will satisfy 90% of Apple's user base which is, after all, who they are clearly focused on...

I'm entertaining fantasies of my next big evolution in workstations being something like a drumkit, with a 27" 4k Wacom Cintiq front & centre, two 24"-ish touchscreens out at 45 degrees to each side for file browsers & palettes, and a superwide (curved) display higher in front for large format stuff like spreadsheets, code lines etc.

A more arm-active, less wrist-centric setup.

No real need for that to be powered by macOS, and it could do proper tethered VR as well. The real kicker, though, is that while I can go onto HP's American site and fully customise a Z6 or better, shipping in 2 weeks, in Australia they've only ever offered stock configs, and the G5 series is "we'll call you when they're in... sometime later this year".

*sigh*
 
They're just reusing parts, man.

Also, there's this. I think it's time to just throw in the towel and accept things for what they are:
https://www.macrumors.com/2023/06/11/apple-exec-discusses-mac-pro-lack-of-egpu-support/
Hmmm, I hadn't seen that article. And while he ruled out EXTERNAL GPUs, he never ruled out or even mentioned card-based GPUs...something made by Apple for Apple Silicon. I know it seems like a reach, and perhaps it is, but we know those cards exist, and they exist for something...
 
I'm entertaining fantasies of my next big evolution in workstations being something like a drumkit, with a 27" 4k Wacom Cintiq front & centre, two 24"-ish touchscreens out at 45 degrees to each side for file browsers & palettes, and a superwide (curved) display higher in front for large format stuff like spreadsheets, code lines etc.

A more arm-active, less wrist-centric setup.

No real need for that to be powered by macOS, and it could do proper tethered VR as well. The real kicker, though, is that while I can go onto HP's American site and fully customise a Z6 or better, shipping in 2 weeks, in Australia they've only ever offered stock configs, and the G5 series is "we'll call you when they're in... sometime later this year".

*sigh*
I'm sorry to hear you'll be on a waiting list for the one you want, that sucks :/ But it sounds awesome! I just unboxed my Puget finally, literally over the past hour...been shooting the unboxing of it...here's a couple of pix of the internals, including those two RTX 4090's sitting in there just waiting to rip my electric bill to shreds lolol. It's super clean and extremely well laid out. I know you want that G5, but I'd be curious how long it would take PUGET to put one together and ship it over to you...
 
And to be clear, I am of the opinion that we haven't seen the last of the Mac Pro. I honestly believe this thing is a stopgap meant to complete the transition, but nowhere near what they actually have cooking in the lab.

We will see their Frankenstein soon enough...
 
Hmmm, I hadn't seen that article. And while he ruled out EXTERNAL GPUs, he never ruled out or even mentioned card-based GPUs...something made by Apple for Apple Silicon. I know it seems like a reach, and perhaps it is, but we know those cards exist, and they exist for something...

That "eGPU" thing is something that Macrumors slapped on top. It technically isn't what he was saying.

" ... chief John Ternus briefly touched on the matter ....
Fundamentally, we've built our architecture around this shared memory model and that optimization, and so it's not entirely clear to me how you'd bring in another GPU and do so in a way that is optimized for our system
..."

He isn't talking about external GPUs. He is talking about the lack of ability to have optimized shared memory. A separate card (some distance away, at substantially higher latencies) probably doesn't qualify as optimized whether Apple builds it or not. The speed of light means the farther away you are, the higher the latencies go; it doesn't matter whether you have an Apple logo on your work badge or not.

Lots of folks keep hand-waving about the fact that a non-uniform memory architecture (NUMA) is still technically shared in the broadest sense of the term. It is shared, but not optimized. Accepting NUMA is trading away optimization for more capacity, and it is not a transparent issue for the software layered on top. (Yet another optimization dimension: not having to write different classes of GPU stacks for different products across the line-up.)

When Apple says "Unified Memory" it strongly implies uniform, unified, homogeneous memory - not "if you take one or three hops of different lengths you eventually get there" memory. Pragmatically, UltraFusion isn't any more "multiple hops" than the internal mesh is anyway; it is a bigger uniform mesh that gets around fab reticle limits. Its main purpose is to make the NUMA aspects stop being relevant.
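
To make the "optimized shared memory" point concrete, here's a minimal Metal sketch (purely illustrative - the buffer size and values are arbitrary, and it assumes any Mac that returns a default Metal device). On Apple Silicon a .storageModeShared buffer is a single allocation that both the CPU and GPU address directly: no staging copy, no PCIe hop, no second copy of the data anywhere.

```swift
import Metal

// Minimal unified-memory sketch: one allocation, visible to CPU and GPU alike.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }
print("Unified memory:", device.hasUnifiedMemory)   // true on Apple Silicon

let input: [Float] = (0..<1024).map(Float.init)
let buffer = device.makeBuffer(bytes: input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU can mutate the very same pages the GPU will read - no copy, no flush.
buffer.contents().storeBytes(of: Float(42), toByteOffset: 0, as: Float.self)
```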
 
" ... chief John Ternus briefly touched on the matter ....
Fundamentally, we've built our architecture around this shared memory model and that optimization, and so it's not entirely clear to me how you'd bring in another GPU and do so in a way that is optimized for our system
..."

I raised this in the pre-release threads but I'll bring it up here again (and this is more of a general response and not directed right at deconstruct60):

Apple has already told developers to assume unified memory on Apple Silicon. Apple Silicon Metal apps *only work with unified memory.*

Even if Apple management backtracked and decided they would give the option of discrete cards - it's too late. The software choices have already been made. Metal apps on Apple Silicon are fundamentally incompatible with discrete cards.

That's why discrete cards aren't coming back. Maybe eventually some sort of crazy mesh thing. But the decisions were already made years ago that prevent traditional third party discrete cards from returning.

Third-party card-based GPUs are not coming back. It's over. Stuff like Resident Evil Village isn't Apple Silicon-only because of some weird Intel CPU compatibility issue; it's Apple Silicon-only because it only works with Unified Memory and Apple GPUs. It doesn't work at a software level with third-party discrete cards.
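
As a rough sketch of what "the software choices have already been made" looks like in practice (makeUploadBuffer is a made-up helper, not anything from Apple's SDK): Intel-era Metal code had to branch on storage modes and explicitly flush CPU writes across PCIe for a discrete card, while Apple Silicon-only code assumes shared memory and simply never writes that second branch.

```swift
import Metal

// Hypothetical helper showing the fork that existed on Intel Macs with discrete GPUs,
// and that Apple Silicon-only Metal code no longer bothers to write.
func makeUploadBuffer(device: MTLDevice, values: [Float]) -> MTLBuffer {
    let length = values.count * MemoryLayout<Float>.stride
    if device.hasUnifiedMemory {
        // Apple Silicon path: one allocation visible to CPU and GPU; nothing else to do.
        return device.makeBuffer(bytes: values, length: length, options: .storageModeShared)!
    } else {
        // Discrete-GPU path (Intel-era Macs): a managed buffer keeps separate CPU and GPU
        // copies, so any later CPU-side write has to be flagged for Metal to mirror across.
        let buffer = device.makeBuffer(bytes: values, length: length, options: .storageModeManaged)!
        buffer.contents().storeBytes(of: Float(0), toByteOffset: 0, as: Float.self)
        buffer.didModifyRange(0..<MemoryLayout<Float>.stride)
        return buffer
    }
}
```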
 
I'm sorry to hear you'll be on a waiting list for the one you want, that sucks :/ But it sounds awesome! I just unboxed my Puget finally, literally over the past hour...been shooting the unboxing of it...here's a couple of pix of the internals, including those two RTX 4090's sitting in there just waiting to rip my electric bill to shreds lolol. It's super clean and extremely well laid out. I know you want that G5, but I'd be curious how long it would take PUGET to put one together and ship it over to you...

It's not even a waiting list - HP have never offered build-to-order here, you get 3 stock configs for each system, and that's about it. Apple did more config options for the 2019 than HP Australia offers for the Z.

Sadly, Puget doesn't ship here - US & Canada Only. Boxx don't even have a local distributor any more. Granted, Singularity are in my home state if I wanted to go crazy watercooled, though geographically they're about as far away as Manhattan to Miami, so not somewhere I'll pop in for a service dropoff *lol*.

I will never get over the Cronenberg-esque bodyhorror of Noctua's flesh-coloured fans 🤣
 
I'm sorry to hear you'll be on a waiting list for the one you want, that sucks :/ But it sounds awesome! I just unboxed my Puget finally, literally over the past hour...been shooting the unboxing of it...here's a couple of pix of the internals, including those two RTX 4090's sitting in there just waiting to rip my electric bill to shreds lolol. It's super clean and extremely well laid out. I know you want that G5, but I'd be curious how long it would take PUGET to put one together and ship it over to you...

It’s definitely not as beautiful inside or out but it should be one heck of a machine in performance.

Puget supposedly doesn’t ship to Australia yet but I think with enough interest they may reverse that. They will have customers ready and waiting.

I have friends in USA west coast who could get the machine to me and I might end up doing that if I don’t get tempted to build something better myself.
 
I will never get over the Cronenberg-esque bodyhorror of Noctua's flesh-coloured fans 🤣

They have black fans you can order - although they're a different SKU and sometimes harder to find. Might be why they didn't use them.

Could also be for branding. Some people love everyone knowing they have Noctua fans.
 
I raised this in the pre-release threads but I'll bring it up here again (and this is more of a general response and not directed right at deconstruct60):

Apple has already told developers to assume unified memory on Apple Silicon. Apple Silicon Metal apps *only work with unified memory.*

Even if Apple management backtracked and decided they would give the option of discrete cards - it's too late. The software choices have already been made. Metal apps on Apple Silicon are fundamentally incompatible with discrete cards.

That's why discrete cards aren't coming back. Maybe eventually some sort of crazy mesh thing. But the decisions were already made years ago that prevent traditional third party discrete cards from returning.

Moore's Law sagging kind of leaves the door open several years down the road. But yes, until Apple squeezes almost all possible uplift out of focusing developers on optimizing for the Metal Apple GPU, there is tons of 'greenfield' opportunity out there. And it has impact on the WHOLE line-up, not just the Mac Pro and a narrow handful of way-out-on-the-fringe eGPU users.

The recent counter-example of pursuing every possible dGPU option in software is Intel's dGPU rollout. They tried to chase after every oddball discrete graphics quirk that Nvidia/AMD had been juggling for years, in a much shorter amount of time. And what did that buy them?

Apple beat the 4070 Ti on some benchmarks.



Did any of the Intel cards do that? Nope.
Make it work, then make it fast. Intel is finally getting their software stack out of 'driver SNAFU' land, but they have blown through tons of reputation credit. Intel's stuff worked a lot better with ReBAR on (closer to uniform shared memory) than with it off.

And surprise, surprise: Intel is now shooting to solidify mid-range coverage first before doing anything to move higher (build a more complicated system out of a working one, instead of making the most complicated one the first step).


Ultra 1 -> Ultra 2 was roughly a 20% increase. If Apple can go 15%, 15%, 15% over M3, M4, M5 - compounding to roughly a 50% gain over three generations - they are nowhere near bad shape, but that will take major focus. (And there will be bigger corner-case wins in there, e.g. some targeted HW ray tracing.)

The whole "it don't kill the 4090 right now it is a complete fail" is a nutty approach to building a solid technology graphics foundation.


Third party card based GPUs are not coming back. It's over. Stuff like Resident Evil Village isn't Apple Silicon only because of some weird Intel CPU compatibility. It's Apple Silicon only because it only works with Unified Memory and Apple GPUs. It doesn't work at a software level with third party discrete cards.

I suspect the reason Apple didn't push into this "Windows game porting kit" stuff earlier is, again, that they want to build off a more solid foundation. Waiting until the M2/M3 generation to do it is likely to be far, far more effective than trying to do it on day zero back in 2020.

And there is only one GPU to port to. That makes the porting process easier. As opposed to a balkanized port. Make it work, then make it fast.
 
The whole "it don't kill the 4090 right now it is a complete fail" is a nutty approach to building a solid technology graphics foundation.

Maybe. I think this is being driven by:
- People have jobs to do and don't want to be a technical experiment for Apple
- The price of the Mac Pro. The $/performance ratio is extremely poor except for some specialized workloads (should be great for video transcoding or anything touching ProRes.)
- Feeling like Apple is kind of half-assing it. Technical issues aside - an M2 Extreme or whatever would have changed the performance story a lot.

I don't think they had to kill the 4090 right out the door. But shipping a $7000 machine that can only be configured up to GeForce 4070 performance is a problem. And it didn't help that Apple is cutting off every other path - including upgrades to current-gen GPUs for the 2019 Mac Pro.

And there is only one GPU to port to. That makes the porting process easier. As opposed to a balkanized port. Make it work, then make it fast.

Yeah. I don't feel like this angle is talked about enough.

Apple feels more comfortable going after game developers because they reduced the possible configurations to one. It's an easier story to sell. It also helps that Apple Silicon doesn't work well with generic graphics APIs because of its unique design - finally justifying Metal. So Apple can also now competently push native ports instead of translated ports.

The opposing case to this would be that maybe developers who already have Vulkan or OpenGL optimized for Radeon cards get left behind. But OpenGL and Vulkan don't make any sense in the Apple Silicon era.

But again - that means that third party cards probably aren't coming back.
 
It's not even a waiting list - HP have never offered build-to-order here, you get 3 stock configs for each system, and that's about it. Apple did more config options for the 2019 than HP Australia offers for the Z.

Sadly, Puget doesn't ship here - US & Canada Only. Boxx don't even have a local distributor any more. Granted, Singularity are in my home state if I wanted to go crazy watercooled, though geographically they're about as far away as Manhattan to Miami, so not somewhere I'll pop in for a service dropoff *lol*.

I will never get over the Cronenberg-esque bodyhorror of Noctua's flesh-coloured fans 🤣

Any PC store can build a system just as good as the Puget, and there are system builders that do ultra-high-end PCs for creative work (I'm sure they can slap in some high-end Nvidia cards if that's your thing). I've read in editors' chats about Puget systems not turning on right out of the box in America, and support is not in every state, so it makes zero sense doing it here.
 
Maybe. I think this is being driven by:
- People have jobs to do and don't want to be a technical experiment for Apple
- The price of the Mac Pro. The $/performance ratio is extremely poor except for some specialized workloads (should be great for video transcoding or anything touching ProRes.)
- Feeling like Apple is kind of half-assing it. Technical issues aside - an M2 Extreme or whatever would have changed the performance story a lot.

Going by the points above:

1. Apple's unified memory graphics architecture is deployed on about a billion devices. This isn't some quirky fluke variation that has never been tried anywhere or doesn't have a track record.

If Apple had tried to switch Macs over to Apple Silicon in the 2017-18 era, when they first started talking smack about how the A series was getting close to laptop/desktop performance, then yeah. But the rollout here hasn't been super fast.

Also just drives home the importance of moving folks to a solid hw/sw stack.

What Apple is doing isn't some bleeding edge experiment with no track record to gauge the risk against. Folks can still use their W6800X-Duo for years to come.


2. Price. The MP 2019 BTO 'max everything' price was up near $50K. The MP 2023 maxes out around $12K. There is a real move this iteration to put more value into the $6-12K range than the MP 2019 did. Is the $/performance easy to swallow now? No, but it is incremental progress.

3. Would the M1 Extreme really have solved the problem? Go back to the previous point: "expensive". Some Rube Goldberg contraption that tried to force four dies into working together would not have been cheap. If Apple were still trying to fill that $30-50K price zone it would do that... but why does Apple want to be in the $30-50K range?

The M1 Ultra to M2 Ultra was at least a 20% increase in performance (higher in some narrow areas) using the same base 800GB/s memory foundation; they fixed some major bottlenecks. Which was more important: more expensive and slower, or a better $/perf value proposition?




I'm still a bit dubious that there ever was an M2 Extreme - maybe an M1 attempt that didn't ship in volume. The M3 generation seems a better place to merge in the added dies; they pretty likely need chiplets to make a ">2 die" SoC economically effective (e.g. something more like the MI300, but not quite that huge - three compute dies could work without driving costs too high).


I don't think they had to kill the 4090 right out the door. But shipping a $7000 machine that can only be configured up to GeForce 4070 performance is a problem. And it didn't help that Apple is cutting off every other path - including upgrades to current-gen GPUs for the 2019 Mac Pro.


Yes, that cap at 4070-class performance is a problem. I think that is a glass-half-empty / half-full thing. Folks are chirping about the empty 'half', only it really isn't half: Apple is covering over half of Nvidia's product range with a 2-3 year old fab process. There are worse problems to have.

As long as Apple is competing there are always going to be competitive alternatives coming down the road, so there is always a "problem" gap in the line-up. It never goes away. Hamster on a wheel: you can run as fast as you want, but the other side of the cage isn't going to get any closer.

Apple said that the 16-core / W5700 combination was their best-selling CPU/GPU option. Covering the xx70-class GPU range is the basic foundation for a Mac Pro customer base, not the hyper outliers.


As for W7000-series variants for the MP 2019... if the base system were still shipping, maybe. But Apple doing new work for folks who mostly aren't paying? Probably not going to happen. If folks were going to buy 90K units of 7800/7900 MPX modules, maybe Apple would bite. But it's probably fewer than 5K, and the vast majority of folks bypass them and buy off-the-shelf Windows cards. That is basically a Ponzi scheme. That is a "problem" too.

Apple is going to have to iterate at a decent pace, not go into a hole for three years and sleep on the Mac Pro. With the glut of cheaper 6800/6900 cards out there, lots of MP 2019 folks can move up to that generation in their boxes and bridge the gap for another 1-2 years relatively easily.



The opposing case to this would be that maybe developers who already have Vulkan or OpenGL optimized for Radeon cards get left behind. But OpenGL and Vulkan don't make any sense in the Apple Silicon era.

Vulkan is where I think Apple has taken it to the fanatical stage, though. MoltenVK is layered on top of Metal. Apple snarfed an open-source layer (CrossOver) to do that game porting kit thing, and that is layered on top of Metal too. Some apps have porting-cost issues that trump performance issues. There are game engine layers on top of Metal also.

Apple could do with MoltenVK the same sort of effort they did with Blender: just put resources there to iron out bugs and performance bottlenecks that probably hit other frameworks layered on top of Metal. They don't have to elevate it to 'equal first-class' status.

In AMD's MI300 presentation today they were all over "open" APIs for getting work done. Intel has oneAPI. Apple is like "open anything Khronos... drop dead." Even Nvidia has broader support than that while still primarily digging a bigger moat around their stuff.

SYCL is another one (a key foundational element of Intel's oneAPI).

Similarly, Apple is in a chronic "playing catch-up" role with respect to PyTorch.



But again - that means that third party cards probably aren't coming back.

If Apple doesn't ignore CXL forever there is a possible window several years out, but it would only apply going forward from those future systems. If Apple halts at PCIe v4 and doesn't do much to leverage v5/v6 as a NUMA bus for secondary systems, then that would be the end.
 
Going by the points above:

1. Apple's unified memory graphics architecture is deployed on about a billion devices. This isn't some quirky fluke variation that has never been tried anywhere or doesn't have a track record.

If Apple had tried to switch Macs over to Apple Silicon in the 2017-18 era, when they first started talking smack about how the A series was getting close to laptop/desktop performance, then yeah. But the rollout here hasn't been super fast.

Also just drives home the importance of moving folks to a solid hw/sw stack.

What Apple is doing isn't some bleeding edge experiment with no track record to gauge the risk against. Folks can still use their W6800X-Duo for years to come.

Sure - I didn't mean the architecture itself was an experiment. What I meant is the concept of "we're going to focus on specialized accelerators for specific use cases instead of general compute power" is an experiment. The 2019 Mac Pro was basically a solid all-rounder depending on how it was configured.

With the new Mac Pro - if your use case is within one of its specialized accelerators (like ProRes), the system will perform well. If your use case is not one Apple deems important - well you're kind of screwed right now.

That's the experiment - machines with specific acceleration targets instead of brute force. Apple has stopped selling all-rounders, and that's going to leave some people out in the cold.

The promise is that Apple will keep targeting specific use cases and fold those people back in with accelerators. But how long will that take? Will Mac Pro owners with use cases that don't fit right now even come back to the platform?

2. Price. The MP 2019 BTO 'max everything' price was up near $50K. The MP 2023 maxes out around $12K. There is a real move this iteration to put more value into the $6-12K range than the MP 2019 did. Is the $/performance easy to swallow now? No, but it is incremental progress.

That's kind of misleading. No one was buying the $50k Mac Pro. The customizability was so that users could buy a Mac Pro optimized for their use case. Users would upgrade the Mac Pro along one or two performance paths - but very few were upgrading along all paths at once.

The reason the new Mac Pro is so cheap is because Apple cut a bunch of use cases. The Mac Pro with 1.5 terabytes of memory or 4 GPUs didn't suddenly get cheaper - it got cheaper because those configs no longer exist. Apple didn't make the Mac Pro cheaper through innovation - they just cut off all the higher-end configs and all the use cases they decided they didn't want to support anymore. The $50k Mac Pro doesn't exist anymore because you can't buy those specs - not because Apple got that much more efficient.

3. Would the M1 Extreme really have solved the problem? Go back to the previous point: "expensive". Some Rube Goldberg contraption that tried to force four dies into working together would not have been cheap. If Apple were still trying to fill that $30-50K price zone it would do that... but why does Apple want to be in the $30-50K range?

$30k-$50k? This feels kind of like a straw man argument. Doubling the number of cores would not cost another $20k.

As for W7000-series variants for the MP 2019... if the base system were still shipping, maybe. But Apple doing new work for folks who mostly aren't paying? Probably not going to happen. If folks were going to buy 90K units of 7800/7900 MPX modules, maybe Apple would bite. But it's probably fewer than 5K, and the vast majority of folks bypass them and buy off-the-shelf Windows cards. That is basically a Ponzi scheme. That is a "problem" too.
Who said anything about an MPX module?

They could do drivers for generic 7900s.

Vulkan is where I think Apple has taken it to the fanatical stage, though. MoltenVK is layered on top of Metal. Apple snarfed an open-source layer (CrossOver) to do that game porting kit thing, and that is layered on top of Metal too. Some apps have porting-cost issues that trump performance issues. There are game engine layers on top of Metal also.

Apple could do with MoltenVK the same sort of effort they did with Blender: just put resources there to iron out bugs and performance bottlenecks that probably hit other frameworks layered on top of Metal. They don't have to elevate it to 'equal first-class' status.

There isn't really much of a point. Vulkan shaders and pipelines that you'd write for a Radeon or GeForce card are different than what you'd write for an Apple Silicon GPU.

Stuff like tiled memory does exist in Vulkan - but it basically only works on Android. And porting Android apps isn't the big thing people are looking for with Vulkan on Mac. People want the Windows apps - but those aren't written for architectures like Apple Silicon.

Apple hardware also works in ways Vulkan doesn't expect. On Apple Silicon, tessellation is a compute-stage operation that runs before vertex shading; Vulkan-oriented hardware folds it into the graphics pipeline after the vertex shader. MoltenVK does a lot of fancy stuff to work around that. (And what MoltenVK does works - but isn't necessarily good for performance. It opens a new separate compute pass - and prevents the developer from taking the opportunity to do the tessellation work in an existing compute pass.)

Vulkan on Apple Silicon really doesn't make much sense when you look deeper into it. At best - you're going to end up with a bunch of apps that are badly optimized for Apple Silicon because they weren't written with TBDR in mind. Vulkan made more sense when you could take Vulkan code written for a discrete GPU with discrete memory and run it on a Mac with a discrete GPU and discrete memory. No point anymore with Apple Silicon.

Apple Silicon is a specialized GPU with specialized requirements. The era of generic graphics APIs is over. Even within the Vulkan ecosystem, the answer for targeting Apple Silicon-like hardware is "write new shaders, write new pipelines."
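
To make the "written with TBDR in mind" point concrete, here's a minimal sketch (the format and dimensions are arbitrary): on Apple GPUs an intermediate attachment that is produced and consumed inside a single render pass can be declared memoryless, so it lives entirely in on-chip tile memory and is never written out to DRAM. Vulkan code written for desktop discrete GPUs typically allocates and stores a full texture for the same job, which is exactly the kind of thing that ports over badly.

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

// Intermediate G-buffer-style attachment that only ever lives inside one render pass.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                    width: 1920, height: 1080,
                                                    mipmapped: false)
desc.usage = .renderTarget
desc.storageMode = .memoryless            // tile memory only; no backing DRAM allocation
let intermediate = device.makeTexture(descriptor: desc)!

let pass = MTLRenderPassDescriptor()
pass.colorAttachments[1].texture = intermediate
pass.colorAttachments[1].loadAction = .clear
pass.colorAttachments[1].storeAction = .dontCare   // never written back to memory
```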
 
$30k-$50k? This feels kind of like a straw man argument. Doubling the number of cores would not cost another $20k.

The $50K part, yes. A huge chunk of that is Apple RAM mark-up that they can never get to on Apple Silicon, because the capacity is chopped down so far. But it's denial to claim Apple wasn't salivating over the lottery-ticket revenues from the small number of customers who selected it. Apple could have cut costs on the MP 2019 by just skipping Intel's ">1 TB RAM tax" altogether, with zero hit on max CPU core count. They didn't; Apple blocked that cheaper option. So yeah, I left that on the table. Apple's appetite for the $50K margins is one of the problems the MP 2019 had that they are backing away from on the MP 2023.

If they need actual chiplets to get to that doubling, it would cost more - using those chiplets only on the Extreme dramatically impacts economies of scale (custom chips for a very small volume). "Too expensive" was the rumored major contributing cause of the Extreme being abandoned; it wasn't because they couldn't get it to work. Throw in that they would pragmatically have to raise the floor of the RAM capacity, at Apple's prices.



Who said anything about an MPX module?

They could do drivers for generic 7900s.

So cards that aren't sold on the Apple Store, and that Apple collects no revenue from, are going to get drivers? I don't see where Apple, AMD, or any actual card vendor collects the revenue to pay for this. The nominal Windows cards are contributing to paying for Windows drivers, not Mac drivers.

If Apple were to sell a card only for the Intel Mac Pro, getting the Apple design team to put AUX power wires on it would be a challenge. [A new Apple product needing a GPU is what drives new drivers being done; revenues from those products go into paying for those drivers. That is standard modus operandi at Apple. The margins are so 'fat' on MPX modules that getting to breakeven in a year is probably doable - it won't make money, but it wouldn't lose it either.]

If you're trying to get ASUS, MSI, or Sapphire to make a "Mac-marketed" card, then the MP 2019 is dead as far as unit-volume growth goes. Sign up for a suicide mission? Probably not. Margins are already pretty poor in the general GPU card business, and a card with negative target-market growth is just a way to lose money faster.

If you skip both of the above, you're basically backpedaling to pure reference-design cards, not really looping in the card vendors at all - and not really looping in Apple synergies much either. AMD would throw minimal effort at it, just to get something out the door at the lowest possible cost, because they'd be given the smallest possible budget to complete the task. And then they'd do 2-3 years of bug fixes with an even more meager maintenance budget, because there is no revenue source here at all.


If Apple were still selling the MP 2019, and/or the cards could work in the MP 2023, there would be some possible target-market growth. I could see the point then, because Apple would pick a card (certify it) and sell it on the Apple Store (like they sell the J2i outside the MP BTO page).

AMD's 7700 and 7800 GPUs are stalled on release, in part due to the large glut of 6800 cards out there in inventory. The "buy upgrades much later because they are cheaper" crowd is mainly going to get the 6800s. And by the time 7900s get 'cheaper', the MP 2019 will be really close to being de-supported.

In the general market, the 7900 is mainly aimed at the 5000/Vega, 4000, and 3000 folks, not the 6000 folks. There is a 6900->7900 upgrade group that moves quicker, but it is substantively smaller. So you'd really be doing 'for free' drivers for an even smaller group than the MP 2019 user base (quite similar to the "hardly anybody is up there" $30-50K group).

Vulkan on Apple Silicon really doesn't make much sense when you look deeper into it. At best - you're going to end up with a bunch of apps that are badly optimized for Apple Silicon because they weren't written with TBDR in mind.

Rosetta apps run sub-optimally and yet they are present on Apple Silicon. There are headless but highly useful apps that could run on macOS. If you're just looking at the maximum number of apps, then yeah, if Apple could suck in all of the other app-store apps they would increase the total aggregate quantity. Some applications are willing to trade a small amount of performance to lower total aggregate porting costs. There is a huge number of Electron apps floating around on Macs - do they squeeze maximum performance out of Apple GPUs? Nope.

There really isn't a huge impedance mismatch between Vulkan and Metal. Apple had to jump into Blender because the "throw everything Khronos in the trash can" move blew up Blender's limited-budget, long-term porting plan. If Apple wants to jump in and cover every limited-budget open-source app that was planning to use Vulkan to lower porting costs, then fine - but I wouldn't hold my breath on them spending that kind of money.

I'm not saying that Apple has to take control of the whole MoltenVK source code base. Just that interface bottlenecks, or bugs whose root cause lies on the Metal "half" of the interface, shouldn't get lost in a "pffft, doesn't matter" bug queue where Apple just blows them off forever. It should be better than the minimalistic tier-1 support that Apple generally provides.

The more Apple is hugely anti-open-everything on graphics stack code, the more Nvidia/AMD/Intel are going to point at Apple and say "they're a bigger 'embrace, extend, extinguish' player than Nvidia", and that isn't going to help Apple over the long term.



Vulkan made more sense when you could take Vulkan code written for a discrete GPU with discrete memory and run it on a Mac with a discrete GPU and discrete memory. No point anymore with Apple Silicon.

Vulkan makes more sense if trying to control portability costs. That is mainly it. Imagination Tech had Vulkan working on their iGPU. Other people's porting budgets don't grow on Apple's whims. Pushing folks up onto 'taller/thicker' porting layers only increases their distance from Metal, and that isn't necessarily going to improve performance either. The commercial, "for pay" frameworks/engines will probably suffer less; the frameworks on a budget, probably not.

Apple likes to point at iOS as being the big elephant in the room with Metal. Well, Android has what alternative to Vulkan? And it is also a big elephant in the larger room. If Apple is just 'wishing' Vulkan into oblivion, that is probably drinking Cupertino kool-aid.
 
It’s definitely not as beautiful inside or out but it should be one heck of a machine in performance.

Puget supposedly doesn’t ship to Australia yet but I think with enough interest they may reverse that. They will have customers ready and waiting.

I have friends in USA west coast who could get the machine to me and I might end up doing that if I don’t get tempted to build something better myself.
I can definitely understand the position of just Thanosing the machine you want - "Fine, I'll do it myself" - but you may be on to something... if they see the interest is there (which, with what Apple just did, I think it may be), they may reverse that.

I have to admit, I like how regal it looks - the sturdiness of the brown tones - and I kind of like how clean and straight-to-the-point it is. Two RTX 4090s, a 64-core Threadripper, 30TB of storage, a power brick, a heat sink, and enough fans to make sure all this power and rage doesn't blow me and my neighbors up LOL. And coming from Apple, this looks extremely clean to me. Perhaps I'll customize it a little bit at some point :)
 
It's not even a waiting list - HP have never offered build-to-order here, you get 3 stock configs for each system, and that's about it. Apple did more config options for the 2019 than HP Australia offers for the Z.

Sadly, Puget doesn't ship here - US & Canada Only. Boxx don't even have a local distributor any more. Granted, Singularity are in my home state if I wanted to go crazy watercooled, though geographically they're about as far away as Manhattan to Miami, so not somewhere I'll pop in for a service dropoff *lol*.

I will never get over the Cronenberg-esque bodyhorror of Noctua's flesh-coloured fans 🤣
LMFAO!!! You and I got very different feelings from those colors LOL. It looks very regal to me - like autumn in Boston, Massachusetts - and I love it :)
 
So cards that aren't sold on the Apple Store, and that Apple collects no revenue from, are going to get drivers? I don't see where Apple, AMD, or any actual card vendor collects the revenue to pay for this. The nominal Windows cards are contributing to paying for Windows drivers, not Mac drivers.

Sure. They could release new GPU drivers just to be nice, after convincing everyone to buy Mac Pros in 2019. It doesn't net them any more money, but it keeps their customers happy.

I know it's not likely to happen, but it's kind of ridiculous the 2019 Mac Pro gets orphaned after two GPU generations - with just one upgrade generation that will ever be available.

Rosetta apps run sub-optimally and yet they are present on Apple Silicon.

For now.

I'm not saying that Apple has to take control of the whole MoltenVK source code base. Just that interface bottlenecks, or bugs whose root cause lies on the Metal "half" of the interface, shouldn't get lost in a "pffft, doesn't matter" bug queue where Apple just blows them off forever. It should be better than the minimalistic tier-1 support that Apple generally provides.

Vulkan is the bottleneck to Vulkan on macOS. There's no fixing it. The spec isn't designed in alignment with Apple hardware. The fix is Metal.

The interface is a reflection of the hardware. What you're actually asking for is hardware changes to make the hardware more aligned with Vulkan.

The more Apple is hugely anti-open-everything on graphics stack code, the more Nvidia/AMD/Intel are going to point at Apple and say "they're a bigger 'embrace, extend, extinguish' player than Nvidia", and that isn't going to help Apple over the long term.

How is Vulkan open? Where do I find the source to Nvidia's Vulkan driver on Windows?

Vulkan has some open source implementations. By and large - the most popular Vulkan implementations are closed.

Vulkan makes more sense if trying to control portability costs. That is mainly it. Imagination Tech had Vulkan working on their iGPU.

Imagination Tech has Vulkan working on their GPUs by basically forking Vulkan (or at least having a subset of Vulkan aimed at their exact use case). Imagination GPU Vulkan and Radeon/GeForce Vulkan are not interchangeable.

That's the problem with Vulkan as a "standard": Vulkan isn't standard. Imagination has their own entire pipeline and shader specialization within Vulkan.

Most people consider Vulkan standardized because they only support Radeon/GeForce GPUs and never have to deal with optimizing for TBDR GPUs. To them - because they only support one style of GPU - it feels like a standard.

Which brings me back to my original point - even if Apple supported Vulkan, they'd be down the Imagination branch of Vulkan, which the packages you're probably looking for don't support anyway.

Even then - I'm not convinced Vulkan is actually compatible with Apple hardware anyway. The Asahi Linux folks seem like they're having to do a lot of emulation of certain functions so far.

And FWIW - Apple is dealing with the same thing even inside the Metal ecosystem. Radeon Metal and Apple Silicon Metal aren't really the same thing and have different optimization paths usually requiring different sets of shaders and pipeline specializations.
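
A small sketch of how that split shows up in practice - the GPU-family check below is the standard way to pick between variants, while the branch bodies here are just placeholders:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

if device.supportsFamily(.apple7) {
    // Apple Silicon (M1-class and later): TBDR, unified memory, tile shaders available.
    print("Build the tile-based / unified-memory pipeline variants")
} else if device.supportsFamily(.mac2) {
    // Intel-era Macs with AMD or Intel GPUs: immediate-mode rendering, discrete VRAM, managed buffers.
    print("Build the immediate-mode / managed-memory pipeline variants")
}
```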

Apple likes to point at iOS as being the big elephant in the room with Metal. Well, Android has what alternative to Vulkan? And it is also a big elephant in the larger room. If Apple is just 'wishing' Vulkan into oblivion, that is probably drinking Cupertino kool-aid.

iOS isn't the reason for Metal. Apple Silicon is the reason for Metal. Again - Vulkan is not really compatible with Apple Silicon hardware. MoltenVK came out of the era of Radeon cards; last I checked, they were still trying to figure out how to get performant on Apple Silicon. And they still have the root problem that some parts of the Vulkan spec are out of alignment with Apple Silicon.
 
I raised this in the pre-release threads but I'll bring it up here again (and this is more of a general response and not directed right at deconstruct60):

Apple has already told developers to assume unified memory on Apple Silicon. Apple Silicon Metal apps *only work with unified memory.*

Even if Apple management backtracked and decided they would give the option of discrete cards - it's too late. The software choices have already been made. Metal apps on Apple Silicon are fundamentally incompatible with discrete cards.

That's why discrete cards aren't coming back. Maybe eventually some sort of crazy mesh thing. But the decisions were already made years ago that prevent traditional third party discrete cards from returning.

Third-party card-based GPUs are not coming back. It's over. Stuff like Resident Evil Village isn't Apple Silicon-only because of some weird Intel CPU compatibility issue; it's Apple Silicon-only because it only works with Unified Memory and Apple GPUs. It doesn't work at a software level with third-party discrete cards.
Apple GPU performance seems to be losing, not gaining, ground on discrete GPUs. The inability to finish and roll out Extreme-class chips is not helping.

How many cycles before this is so painfully obvious that even Mac apologists cannot cover for it, and Apple is embarrassed enough to do another apology tour?
 