It's an HID device; the only accelerators I know of in macOS are the Afterburner card and the T2's transcoding acceleration feature.

CUDA is completely capable of running in user space.
It can run in user space, but it can't render anything to frame buffers, nor does it get faster unsupervised access to main memory/devices, so random memory access will be taxed meaningfully; the same goes for tandem GPUs (not a big deal if you have NVLink).
 
I'm slowly starting to wonder whether the Mac Pro isn't some sort of unicorn. What's holding Apple back from releasing the bloody thing?

They've got a lot of plates spinning, from the necessary point updates to iOS and Catalina, to the new hardware, to the new services dropping (and Apple TV+ is probably more important to their future than the Mac Pro.)

[Which is probably one reason they should have just made Mojave a long-term support update or something and waited on Catalina, so you could still have run your legacy 32-bit stuff on your Mac Pro and they could iron out a lot more of these issues before jumping to the next big compatibility break. But c'est la vie.]
 
No. FYI, kernel-mode drivers have to be digitally signed by Apple to be loaded; once the new DriverKit framework replaces the old kexts, only non-kernel-mode (user space) device drivers will work unsigned.


This topic is properly discussed here: https://forums.macrumors.com/threads/driverkit-api-works-for-gpus.2183918/

Unsigned drivers? No. The drivers are now called System Extensions (as opposed to kexts). Just look at the System Extensions link on the Apple doc page you linked to.

"... To successfully activate your extension, you must adhere to the following rules:
....
....You must use the same Team ID when signing the extension that you use for signing your app, unless the extension has the com.apple.developer.system-extension.redistributable entitlement. ..."

System Extensions only partially run in user space. They have elevated access compared to normal user space, both in terms of address-space access and run-time priority levels. These drivers are not normal apps.
They are given explicit, narrow access to a subset of kernel space so they can reach only the specific low-level devices they are supposed to be coupled to (adding some pages to that user mapping, but not the whole kernel address space). They also run at higher priorities than normal apps (they still have to handle interrupts that are time sensitive, even if not at human-response timescales). Those are special privileges, and no, Apple isn't going to hand those privileges out to anonymous, random folks in the increased-security kernels of the future.

System Extensions are still signed (and will be notarized), just not with the same set of signed privileges. And the new, perhaps quirky, part is that they are distributed bundled with applications (presumably to provide GUI settings, updates, etc.).


System Extension X doesn't necessarily get access to what Extension Y gets access to, even inside the same class of extension. Kexts were much 'flatter' (less fine grained) and more uniform in terms of access, which creates potential security holes. But it isn't as if disruption is totally eliminated if a System Extension goes rogue.
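For concreteness, here is a minimal Swift sketch (my illustration, not Apple sample code) of how a host app asks macOS to activate the System Extension bundled inside it via the SystemExtensions framework. The bundle identifier is hypothetical; the point is that the OS still checks signing and entitlements before anything loads, so this is not a route to running unsigned drivers.

```swift
import Foundation
import SystemExtensions

// Minimal sketch: a host app requesting activation of its bundled extension.
final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    private let extensionID = "com.example.MyApp.MyDriverExtension" // hypothetical bundle ID

    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: extensionID,
            queue: .main
        )
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    // Called when an older copy of the extension is already installed.
    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        return .replace
    }

    // The user still has to approve the extension in Security & Privacy.
    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {
        print("Waiting for user approval in System Preferences")
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        print("Activation finished: \(result)")
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFailWithError error: Error) {
        print("Activation failed: \(error)")
    }
}
```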
 
Does anyone know the maximum number of supported GPUs in macOS?

It's just a silly thought really, but I'm imagining the Modular Mac Pro could have 4 x Vega II (two Vega II Duos) plus 12 Radeon VIIs as eGPUs = 16 GPUs.

Imagine the rendering power of that, and obvs the energy bill.
 
Does anyone know the maximum number of supported GPUs in macOS?

It's just a silly thought really, but I'm imagining the Modular Mac Pro could have 4 x Vega II (two Vega II Duos) plus 12 Radeon VIIs as eGPUs = 16 GPUs.

Imagine the rendering power of that, and obvs the energy bill.
You may run into power supply limitations beyond two of those cards?
 
Does anyone know the maximum number of supported GPUs in macOS?

My understanding is there is no software/OS limit, but the API for GPU has never been officially opened for Mojave+. Unsure where that rumored developer GPU API stands and have not seen any updates provided in several months.

I've seen machines in client offices working with PCIe expansion boxes with 4+ GPUs at a time on older OS versions. Now that Mojave+ officially works with eGPU and multiple-eGPU setups, you can theoretically daisy-chain in the right setup. You will reach an x16 limit at some point and there will be some latency introduced, but all reports show it's little to no impact for multiple-GPU scenarios.
 
Does anyone know the maximum number of supported GPUs in macOS?

It's just a silly thought really, but I'm imagining the Modular Mac Pro could have 4 x Vega II (two Vega II Duos) plus 12 Radeon VIIs as eGPUs = 16 GPUs.

Imagine the rendering power of that, and obvs the energy bill.


Not clear how you are getting to 12 eGPUs, but if it is daisy-chaining multiples on a single Thunderbolt "bus", then it probably does not work as well as you'd think it might.


https://forum.blackmagicdesign.com/viewtopic.php?f=3&t=76425


Conceptually, two on a bus get less than x2 each; four would get less than x1 each. You'd have to have computations that take a very substantial amount of run time relative to the much more prolonged load/save times they'd be incurring.


With two Vega II Duos there would be four buses each fed by x4, and depending upon how the standard two x4 buses feed the standard TB ports, you'd only be able to healthily feed 6 eGPUs (not 12).
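As a rough sketch of that bus-sharing arithmetic (my assumption, not a spec quote: treat one Thunderbolt 3 controller as exposing roughly PCIe 3.0 x4 of upstream bandwidth that every eGPU behind it has to share):

```swift
// Rough sketch of the lane-sharing argument above. Assumes one TB3 "bus"
// carries roughly PCIe 3.0 x4 worth of bandwidth shared by all eGPUs behind it.
let lanesPerTB3Bus = 4.0      // assumed effective PCIe 3.0 lanes per TB3 controller
let gbPerLane = 0.985         // ~0.985 GB/s per PCIe 3.0 lane

// Effective lanes per eGPU when several share one bus (ignores protocol
// overhead, so real numbers would be a bit worse).
func effectiveLanesPerGPU(sharing gpuCount: Int) -> Double {
    return lanesPerTB3Bus / Double(gpuCount)
}

for n in [1, 2, 4] {
    let lanes = effectiveLanesPerGPU(sharing: n)
    let bandwidth = lanes * gbPerLane
    print("\(n) eGPU(s) on one bus: at most ~x\(lanes) each (~\(bandwidth) GB/s)")
}
```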

MPX Bay 2 is supposed to have two x16 slots. If those are independently provisioned, then you could use three or four x16 PCIe expansion chassis with four slots each. That would be 12 or 16 cards. But even if stuck with just two 4-bay expanders coming out of MPX Bay 2 (an 8-card count), that is a bigger "bang per slot" than just 2 GPUs for a Vega II Duo. If really chasing max card count, then the 4-bay expanders cover much more ground than a single MPX bay.

Most likely you'd be sucking bandwidth out of several of the other slots (5-8) also.
.....

However, I am not sure where @shuto got his 12 eGPU number from. Link good sir?

I suspect he is simply counting total Thunderbolt 3 ports (4 on each Vega II and 4 standard on the system; 3 * 4 = 12). That isn't a good idea. It is the TB controllers (not ports) that pragmatically count.

Plus, the 4 Vega GPUs are going to have substantially different latencies than the others, so how the data is partitioned and delegated probably won't scale linearly with GPU count. You can blow lots of money doing this. The $/performance isn't going to be good on the vast majority of workloads.
 
My understanding is there is no software/OS limit, but the API for GPU has never been officially opened for Mojave+. Unsure where that rumored developer GPU API stands and have not seen any updates provided in several months.

It wouldn't be surprising if GPU count were tracked with a 4- or 8-bit field packed into a 32- (or 64-) bit word along with some other attributes of the card. 2^4 would be 16 and 2^8 would be 256 (the latter being such an impractically large number as to be "good enough for everybody").
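Purely as an illustration of that kind of packing (not Apple's actual data structure), a 4-bit field tops out at 2^4 = 16 values:

```swift
// Illustrative only: packing a 4-bit GPU count into a 32-bit word alongside
// another hypothetical attribute field. This is NOT how macOS tracks GPUs;
// it just shows why a 4-bit field caps out at 16 values (0-15).
struct GPUInfoWord {
    var raw: UInt32 = 0

    // Bits 0-3: GPU count (4 bits -> 16 possible values).
    var gpuCount: UInt32 {
        get { raw & 0xF }
        set { raw = (raw & ~0xF) | (newValue & 0xF) }
    }

    // Bits 4-11: some other per-card attribute, e.g. a vendor code (8 bits).
    var vendorCode: UInt32 {
        get { (raw >> 4) & 0xFF }
        set { raw = (raw & ~(0xFF << 4)) | ((newValue & 0xFF) << 4) }
    }
}

var info = GPUInfoWord()
info.gpuCount = 16        // overflows the 4-bit field...
print(info.gpuCount)      // ...and wraps to 0, illustrating the 2^4 ceiling
```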

You will reach an x16 limit at some point and there will be some latency introduced, but all reports show it's little to no impact for multiple-GPU scenarios.

Trimming down to x8 or x4 doesn't, but x1 does show impacts for more than a few scenarios. Four different games using the same "cut scene filler" to hide latency isn't really multiple scenarios. The bigger problem is more likely that you'd be taking a big hit on the 4 Vega IIs that don't have the latency problem, having had to shift to a slower data-distribution algorithm to loop in the other cards without Infinity Fabric.
 
I edit in FCPX; does anyone know how the 7,1's base GPU fares against my 6,1's dual AMD D700s? I would like to know how much of an impact I will have on editing if I go base.

I tried to compare benchmarks, but Geekbench only shows one D700, not both. Perhaps my understanding is limited, but if someone could shed some light, that would be great.
 
I edit in FCPX; does anyone know how the 7,1's base GPU fares against my 6,1's dual AMD D700s? I would like to know how much of an impact I will have on editing if I go base.

I tried to compare benchmarks, but Geekbench only shows one D700, not both. Perhaps my understanding is limited, but if someone could shed some light, that would be great.
You should see around a 40% jump in performance going from dual D700 to the base in the 7,1.
 
You should see around a 40% jump in performance going from dual D700 to the base in the 7,1.

Yeah, it's so easy to lose sight of just how crazy old and out-dated the 6,1 Mac Pro is. That base config video card in the 7,1 that half the thread here has been decrying as insultingly low end? It'll blow away a trash can's dual D700s.

In any normal universe I'd have updated twice now since January 2014. But with Apple I'm still running a top-of-the-line Mac Pro six years down the road, three years after it's been fully depreciated, and four years after the warranty expired on it.
 
I suspect he is simply counting total Thunderbolt 3 ports (4 on each Vega II and 4 standard on the system; 3 * 4 = 12). That isn't a good idea. It is the TB controllers (not ports) that pragmatically count.
Thanks for your reply, deconstruct60. Yeah, I was just counting the ports. That's great that you know about the buses. So yeah, max six eGPUs sounds right; I wouldn't want the eGPUs running at less than x4.

I think my plan at the moment is to buy a Radeon VII card to use with the Modular Mac Pro, as it will be cheaper than a Vega II and hopefully not much slower for GPU rendering. It all depends on the unknown pricing though! Then over time slowly upgrade the system.
 
Yeah, it's so easy to lose sight of just how crazy old and out-dated the 6,1 Mac Pro is. That base config video card in the 7,1 that half the thread here has been decrying as insultingly low end? It'll blow away a trash can's dual D700s.

In any normal universe I'd have updated twice now since January 2014. But with Apple I'm still running a top-of-the-line Mac Pro six years down the road, three years after it's been fully depreciated, and four years after the warranty expired on it.

Thermal corner or not, the fact that they didn't even bother updating the 6,1 to Broadwell and putting in a minimal amount of effort to at least give you better bang for your buck, even if stuck with the same GPUs, is still so incredibly dumb.
 
You should see around a 40% jump in performance going from dual D700 to the base in the 7,1.

A 40% gap between a 580X and a computation stuck on a single D700 is probably in the ballpark. A computation balanced over both D700s versus a 580X probably won't show a gap that large; it would be far closer to simple parity with the D700 combo.

If you go to Apple's overview marketing page for the Mac Pro, there is a chart for Final Cut Pro. The iMac Pro with Vega is 1.5x (50%) over a dual D700. A Vega II Duo is 2.9x. If the single 580X could leave the dual D700s in the dust, then Apple would be bragging about it. They aren't. The iMac Pro's Vega 64X is significantly faster than an up-clocked 580 baseline (it isn't a 10% difference between those two; more like a 40-50% difference).

In the subset of FCPX where you can't split the computation duties over both GPUs (and largely have a boat-anchor second GPU), then yes, there would be around a 40% gap.

The Vega II "solo" should also put a margin in that ballpark on a fully engaged dual D700 configuration.

Any one of the MPX modules is probably better cooled than the dual D700s, so it is a better place to be for extended, taxing workloads. But the 580X isn't a huge leap in combined max TFLOPS. Its upsides are being a single card (for where the computation can't go dual) and being more affordable.

P.S. It will be interesting to see where the Afterburner card comes out on pricing. For a subset of workloads, a 580X + Afterburner might be a bigger bang for the buck than trying to throw all the budget at the GPU(s).
 
Thermal corner or not, the fact that they didn't even bother updating the 6,1 to Broadwell and putting in a minimal amount of effort to at least give you better bang for your buck, even if stuck with the same GPUs, is still so incredibly dumb.

It was, but I don't think it was intentional. Assuming a two-year upgrade cycle, that would have been Late 2015, which meant engineering in 2014 and 2015, and Apple was too consumed with the iPhone 6 and 6 Plus, the Apple Watch, the iPad Air 2, the 12.9" iPad Pro, and all the sales and support needed to keep up a torrential pace of updates and grow those markets. No one in their right mind was worrying about the Mac Pro then; the revenue just wasn't there.
 
It was, but I don't think it was intentional. Assuming a two-year upgrade cycle, that would have been Late 2015, which meant engineering in 2014 and 2015, and Apple was too consumed with the iPhone 6 and 6 Plus, the Apple Watch, the iPad Air 2, the 12.9" iPad Pro, and all the sales and support needed to keep up a torrential pace of updates and grow those markets. No one in their right mind was worrying about the Mac Pro then; the revenue just wasn't there.
They didn't give it the chance. Making a proprietary box like that needs real support or the users will abandon it. That is what happened. People wouldn't have minded so much if they could have swapped in their own components.
 
I read somewhere that the AMD Pro Vega II should be much cheaper than expected, at about $1,200 per GPU. Two Vega II Duos should raise the cgmMP's price by about $4,800, so assuming a $6,000 base, adding 4 GPUs, a 16-core CPU, and a 1 TB SSD should come in below or close to $13k without RAM upgrades; a 1.5 TB RAM upgrade alone should add $30k.
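A quick back-of-the-envelope version of that estimate, using the figures above plus placeholder guesses for the CPU and SSD upgrade prices (real Apple pricing wasn't known), so treat the total as rough:

```swift
// Rough sum of the rumoured configuration cost described above.
let basePrice        = 6_000.0   // assumed base Mac Pro price
let vegaIIPerGPU     = 1_200.0   // rumoured per-GPU price
let gpuCount         = 4.0       // two Vega II Duo MPX modules
let cpuUpgrade16Core = 1_200.0   // placeholder guess, not a known price
let ssdUpgrade1TB    = 400.0     // placeholder guess, not a known price

let total = basePrice + vegaIIPerGPU * gpuCount + cpuUpgrade16Core + ssdUpgrade1TB
print("Estimated configuration: $\(Int(total))")  // ~$12,400, i.e. "below or close to 13k"
```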

For those comfortable with AMD ROCm, this is good news for a cheap Linux workstation (as long as you can install Linux): a single Vega II roughly equals an RTX 2080 Ti in performance, with twice the RAM (both non-ECC).
 