JesterJJZ

So, if the Vega II Duo ends up being crazy expensive, like it might be, I've been considering going with four AMD Radeon VII GPUs instead. There's room in the tower for them, but what about power? From what I can tell, there are two pairs of 8-pin connectors, with each pair supplying 300W, which is the power requirement of one Radeon VII.

With 75W from each of four slots, plus 2x300W, plus a 75W aux, that gives us 975W of usable power.
So are we limited to 3 powered GPUs?
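Quick sanity check on that math (just my own back-of-the-envelope tally in Python; the per-source wattages are my reading of the labels, not confirmed specs):

# Rough power-budget tally for the quad Radeon VII idea (assumed numbers from above)
slot_power = 4 * 75        # 75 W from each of the four populated slots
eight_pin_pairs = 2 * 300  # two pairs of 8-pin connectors at 300 W per pair
aux = 75                   # the extra 6/8-pin header
budget = slot_power + eight_pin_pairs + aux
print(budget)              # 975 W available
print(4 * 300)             # 1200 W needed for four Radeon VIIs, so it doesn't fit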
 
So are we limited to 3 powered GPUs?

[Screenshot attachment: Mac Pro power connectors]

From the pictures it certainly SEEMS like there are four 8-pin power connectors labeled for slots 1, 2, 3, and 4, and one 6-pin or 8-pin for slots 5-8. Theoretically, this should be able to power 5+ GPUs, depending on your GPUs' power requirements. With four double-wide cards in the straight x16 slots, slot 6 may end up blocked when they're in use.

It's not 100% clear whether it's going to be limited to ONE 8-PIN PER PCIe SLOT (1, 2, 3, 4) based on the labels in the images. I'm also not 100% positive whether some "activation" is required to enable a connector, or whether a card has to be physically present, e.g. a card in slot 2, in order to use the allotted slot-2 8-pin. Those are the types of details we're likely going to need to wait for answers on if you're not planning to use the MPX modules.
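To get a rough feel for the ceiling under that reading, here's a quick sketch (assumed numbers only: 75W per slot, 150W per 8-pin per the PCIe 3.0 spec, and four aux 8-pins usable by regular PCIe cards; Apple may wire it differently):

# Hypothetical helper: how many identical cards fit the assumed aux-power setup?
def cards_supported(card_watts, eight_pins=4, slot_watts=75, pin_watts=150):
    aux_needed = max(card_watts - slot_watts, 0)  # power the slot itself can't supply
    pins_per_card = -(-aux_needed // pin_watts)   # ceiling division
    return eight_pins if pins_per_card == 0 else eight_pins // pins_per_card

print(cards_supported(300))  # Radeon VII class (needs two 8-pins) -> 2
print(cards_supported(225))  # a card that gets by on one 8-pin    -> 4

Which would line up with two Radeon VIIs now, or four single-8-pin cards later, if those four labeled connectors really are independently usable.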
 
That's from the MPX slot. An off-the-shelf GPU won't connect to that unless someone makes an adapter from the MPX slot to dual 8-pin or something.
They said there are 4x 8-pin PCIe power plugs available (PCIe 3.0 spec, not the higher-wattage PCIe 4.0 one).
 
They said there are 4x 8-pin PCIe power plugs available (PCIe 3.0 spec, not the higher-wattage PCIe 4.0 one).

Looks like two sets of 8-pin connectors, 300W per set. The most powerful GPUs use two 8-pin connectors each.
 
That extra one at the top is maybe a 6-pin for powering the Afterburner ASIC?

RED Rocket has a 6-pin.
 
That extra one at the top is maybe a 6-pin for powering the Afterburner ASIC?

Unless Apple's film animation and pictures of the Afterburner card are deceptive, there is no aux power. It is pretty clear there is no substantive fan/blower on the card, nor does it have a "tall fin" system like the MPX modules. It is a simple, single-wide card. It only does one thing, so 75W is probably pretty good. It may be a bleeding-edge FPGA from Intel/Altera, since Apple bet on them for major portions of this system. That wouldn't be cheap, but it doesn't need tons of thermal headroom.

For example:
".. The FPGA is rated at 70 watts, and it is a full-blown Arria 10 GX 1150. This FPGA has its own cache memory and does not include ARM cores as etched next to the FPGA gates like some Arria and Stratix chips from the Altera line do. ..."
https://www.nextplatform.com/2018/05/24/a-peek-inside-that-intel-xeon-fpga-hybrid-chip/

Intel has "stuffed" an Arria 10 into a Xeon package at just another 70W. Similar thing here: they can toss out the ARM cores or any kind of base OS processor and basically consume and return streams on very simple commands. (Arria 10 is a 20nm implementation; Agilex is 10nm and up to 40% less power, but not cheap. :) )


RED Rocket has a 6-pin.

That is a better candidate. There are probably also some other I/O cards and more modest GPUs that could need one (if both MPX bays are allocated to non-GPU hardware).
 
So, if the Vega II Duo ends up being crazy expensive ...

If it's crazy expensive?
The Radeon VII is based on the Instinct MI50, and it is ~$700 (and a decent chance that's close to at-cost).
The Pro Vega II is based on the Instinct MI60 (the full 64 CUs). So the base cost is higher, and Apple (and/or AMD) is likely to put a 30% markup on it. At least $999 wouldn't be surprising (unless Apple is completely bending the volume cost curve for AMD, it could easily crack $1K; the whole Mac Pro system is priced with a "got plenty of money to spend" mentality). The dual card is probably over $2K (more MI60 features turned on; can't find a public price for an MI60).


With 75W from each of four slots, plus 2x300W, plus a 75W aux, that gives us 975W of usable power.
So are we limited to 3 powered GPUs?

I know "wait for Navi" is a bit thin, but there could be some cards that take just one 8-pin. That would mean 4 GPUs. Apple may not deliver drivers for those until 2020, but eventually.
 
I know "wait for Navi" is a bit thin, but there could be some cards that take just one 8-pin. That would mean 4 GPUs. Apple may not deliver drivers for those until 2020, but eventually.

Two Radeon VIIs now and an upgrade to four Navi cards later would probably be OK.
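Against the ~975W figure from earlier, both setups look fine on paper (back-of-the-envelope only; the single-8-pin Navi wattage is a guess):

budget_w = 975  # total from the earlier tally
configs = {
    "2x Radeon VII (two 8-pin each)": 2 * 300,
    "4x single-8-pin Navi (guess)": 4 * 225,
}
for name, watts in configs.items():
    print(f"{name}: {watts} W, fits: {watts <= budget_w}")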
 
Looks like two sets of 8-pin connectors, 300W per set. The most powerful GPUs use two 8-pin connectors each.
Alternatively, each MPX bay can support:

One full-length, double-wide x16 gen 3 slot and one full-length, double-wide x8 gen 3 slot (MPX bay 1)

Or two full-length, double-wide x16 gen 3 slots (MPX bay 2)

Up to 300W auxiliary power via two 8-pin connectors

Guess I interpreted it the wrong way.
 
If it's crazy expensive?
The Radeon VII is based on the Instinct MI50, and it is ~$700 (and a decent chance that's close to at-cost).
The Pro Vega II is based on the Instinct MI60 (the full 64 CUs). So the base cost is higher, and Apple (and/or AMD) is likely to put a 30% markup on it. At least $999 wouldn't be surprising (unless Apple is completely bending the volume cost curve for AMD, it could easily crack $1K; the whole Mac Pro system is priced with a "got plenty of money to spend" mentality). The dual card is probably over $2K (more MI60 features turned on; can't find a public price for an MI60).

Your estimate is actually quite encouraging - from the Mac Pro press release page:

• Maxon’s Cinema 4D is seeing 20 percent faster GPU render performance when compared to a Windows workstation maxed out with three NVIDIA Quadro RTX 8000 graphics cards.

These cards are $10k. So I was expecting the price of the GPU in the Mac Pro to be at least equal to the Quadro.
 
I know "wait for Navi" is a bit thin, but there could be some cards that take just one 8-pin. That would mean 4 GPUs. Apple may not deliver drivers for those until 2020, but eventually.

In that case, maybe installing only two dual 8-pin Navi cards makes more sense. There should be some more powerful cards that also pull lots of power, not just cards with the same performance at lower power consumption, especially since AMD is still behind in the performance game.
 
Your estimate is actually quite encouraging - from the Mac Pro press release page:

• Maxon’s Cinema 4D is seeing 20 percent faster GPU render performance when compared to a Windows workstation maxed out with three NVIDIA Quadro RTX 8000 graphics cards.

These cards are $10k. So I was expecting the price of the GPU in the Mac Pro to be at least equal to the Quadro.

The Quadro RTX 8000 is around $5K.
 
Can Nvidia be supported if a driver is available, or are they "locked out" by Apple?

NVIDIA "simply" needs a macOS driver for 10.14/10.15 to work with whatever GPUs or series of GPUs they determine appropriate or certified. I haven't seen any reports of the graphics API available in/for 10.15 DP1 yet. This was "supposed to" make creating certified/compatible GPU drivers for macOS a lot easier. We'll see, I guess.
 
If it's crazy expensive?
The Radeon VII is based on the Instinct MI50, and it is ~$700 (and a decent chance that's close to at-cost).
The Pro Vega II is based on the Instinct MI60 (the full 64 CUs). So the base cost is higher, and Apple (and/or AMD) is likely to put a 30% markup on it. At least $999 wouldn't be surprising (unless Apple is completely bending the volume cost curve for AMD, it could easily crack $1K; the whole Mac Pro system is priced with a "got plenty of money to spend" mentality). The dual card is probably over $2K (more MI60 features turned on; can't find a public price for an MI60).

The full Vega 20 chip probably has very low yields to begin with. Add 32GB HBM2 and the prices will be through the roof. I’d be surprised if a single card is under 5k.
 
The full Vega 20 chip probably has very low yields to begin with. Add 32GB HBM2 and the prices will be through the roof. I’d be surprised if a single card is under 5k.

5K for the single card?! I certainly hope not! :(

Historically, the Pro AMD Vega cards have not really been that pricey on the Macs, but the Vega 20 with 32GB HBM2 will be another beast, I will agree to that. I'm thinking $1.5K for the upgrade to a single card, $3K+ for the Duo.
I'm also getting a feeling that Apple might look at that high base price as where they will earn most of their Mac Pro 2019 money, maybe less so on the CPU and GPU upgrades. Definitely looking forward to the GPU price reveal with anxiety and some stress! :eek:
 
NVIDIA "simply" needs a macOS driver for 10.14/10.15 to work with whatever GPUs or series of GPUs they determine appropriate or certified. I haven't seen any reports of the graphics API available in/for 10.15 DP1 yet. This was "supposed to" make creating certified/compatible GPU drivers for macOS a lot easier. We'll see, I guess.

I watched the DriverKit session at WWDC 2019, and the new stuff isn't a magical "get out of jail free" card for Nvidia at all. Developing system extensions is easier now because Apple is in the process of kicking all 3rd-party code out of the kernel and into "user space" mode on the processors. So a driver can die and the system takes no kernel-panic impact. The upside to being in user space is that driver developers can use normal LLDB to debug their drivers (as opposed to pragmatically needing two Macs to debug their work). So writing code in a safer manner should get easier for developers.

Eventually all current kexts will be banned in future versions of macOS. DriverKit only really covers HID (keyboard/mouse), USB, and one other area that slips my mind at the moment. It doesn't cover IOKit's "display" class yet. What DriverKit covers here in 10.15 will get banned in the next iteration after that; 10.16 will cover more legacy IOKit classes.
Basically, over the next couple of years everyone with a kext needs to do some substantive upgrades to their code. Pragmatically, IOKit is being deprecated. (I don't think Apple is explicitly using that term at the moment, but you have to be blind not to see the freight train coming at this point.)

The new "system extensions" will get direct access to special hardware at a kernel-like level of access, but it will all flow through the kernel (which only Apple is going to write).

The Nvidia problem seems more likely to be that what Nvidia wants to write is at variance with the general rules of the road that Apple is laying down. This increasingly looks like a dust-up over how to harmonize Metal and CUDA: Nvidia wanting CUDA first and Metal second, and Apple not going to take Metal being put in second-class status. How to make the two perhaps co-equal, first-class citizens is something they haven't had to do in the past, and it still doesn't seem to be worked out. The move to System Extensions and DriverKit is an opportunity to come up with a new, better shared solution, but if Nvidia is basically doing nothing and primarily just playing 'hardball' with Apple.... they are blowing it. The window to shape what the new graphics interface looks like is probably almost closed at this point.
 
I watched the DriverKit session at WWDC 2019, and the new stuff isn't a magical "get out of jail free" card for Nvidia at all. Developing system extensions is easier now because Apple is in the process of kicking all 3rd-party code out of the kernel and into "user space" mode on the processors. So a driver can die and the system takes no kernel-panic impact. The upside to being in user space is that driver developers can use normal LLDB to debug their drivers (as opposed to pragmatically needing two Macs to debug their work). So writing code in a safer manner should get easier for developers.

Eventually all current kexts will be banned in future versions of macOS. DriverKit only really covers HID (keyboard/mouse), USB, and one other area that slips my mind at the moment. It doesn't cover IOKit's "display" class yet. What DriverKit covers here in 10.15 will get banned in the next iteration after that; 10.16 will cover more legacy IOKit classes.
Basically, over the next couple of years everyone with a kext needs to do some substantive upgrades to their code. Pragmatically, IOKit is being deprecated. (I don't think Apple is explicitly using that term at the moment, but you have to be blind not to see the freight train coming at this point.)

The new "system extensions" will get direct access to special hardware at a kernel-like level of access, but it will all flow through the kernel (which only Apple is going to write).

The Nvidia problem seems more likely to be that what Nvidia wants to write is at variance with the general rules of the road that Apple is laying down. This increasingly looks like a dust-up over how to harmonize Metal and CUDA: Nvidia wanting CUDA first and Metal second, and Apple not going to take Metal being put in second-class status. How to make the two perhaps co-equal, first-class citizens is something they haven't had to do in the past, and it still doesn't seem to be worked out. The move to System Extensions and DriverKit is an opportunity to come up with a new, better shared solution, but if Nvidia is basically doing nothing and primarily just playing 'hardball' with Apple.... they are blowing it. The window to shape what the new graphics interface looks like is probably almost closed at this point.

I think NVIDIA views having any of its technologies take a back seat to Metal 2 and ML Kit (Core ML 3) as completely unacceptable. Imagine Apple allowing NVIDIA back in only to see them prioritize CUDA (NVIDIA writes the CUDA driver now; I don't see them letting Apple do it or sharing proprietary tech with them) while Core ML 3 lags behind. I suspect NVIDIA would also try hard to push AMD out any way they could, as they want to dominate the Deep Learning and Machine Learning fields even more than they seem to already.

I think NVIDIA is really more a competitor and Apple won't let them back in the door after whatever happened in the past between them.
 
Your estimate is actually quite encouraging - from the Mac Pro press release page:

I was mainly setting a lower bound (plus that's the BTO upgrade cost, which doesn't fold in the base GPU cost that Apple makes folks pay as part of the system price; Apple doesn't particularly give a "rebate" credit for the card you traded up over).


• Maxon’s Cinema 4D is seeing 20 percent faster GPU render performance when compared to a Windows workstation maxed out with three NVIDIA Quadro RTX 8000 graphics cards.

These cards are $10k. So I was expecting the price of the GPU in the Mac Pro to be at least equal to the Quadro.

I think the $10K is for the top-end ML cards, not the Quadro. But still, Apple is topping those three cards with four of theirs, with all four maybe adding up to the $10K range. (The three RTX 8000s are about $15K, so if Apple's quad fits into $15K they'd be doing pretty good. But the "we have Metal, not CUDA" factor means it's about more than this one benchmark where everything is tilted in their direction. It is not the Quadro that is going to 'murder' MPX kit (and Mac Pro) sales if they price these too high.)

Where the price ends up depends on whether these Apple MPX GPU kits are a moderate-to-high volume selection for the Mac Pro 2019 or relatively low volume. If 60-80% of folks are buying them, Apple probably could wrangle AMD out of the multiple triple-digit markups they slap on the basic card component. AMD will make more because Apple is going to sell many thousands of these, and having deployed GPUs is better than corner-case stuff. Apple did that with the MP 2013 GPUs, where the D500 and D700 were priced way below what AMD was charging for FirePro models.

If Apple is thinking that most folks are going to skip MPX (that would be a bit odd, but...), then the run rate would be low and Apple would be in the same boat as AMD in wanting to "print money" off the far fewer ones sold. So split the triple-digit markups. What is also probably missing are some more reasonable MPX GPUs in the middle, with a similar impact (number sold largely disconnected from overall Mac Pro sales, so prices higher). If there is still a "hole waiting to be filled" in the MPX lineup at launch, they'll need to mention that if they don't want more "$999 stand" bad PR.

I think NVIDIA views having any of its technologies take a back seat to Metal 2 and ML Kit (Core ML 3) as completely unacceptable.

That kind of "your tech has to lose for mine to win" mindset is probably the whole root issue, whether it is Apple, Nvidia, or most likely both (to slightly differing degrees) posturing with that attitude. Apple isn't going to 'cave' on this because Metal is directly tied into all the rest of the Apple OS instances. If Nvidia wants to pick a fight with iOS, it is going to lose.

It isn't necessarily about a "back seat"; it is about being a co-equal peer. If it can't achieve a 'tie', then Apple owning the OS means they own the tie-breaker. At the end of the day, Nvidia is just a subcomponent subcontractor in the ecosystem.


Imagine Apple allowing NVIDIA back in only to see them prioritize CUDA (NVIDIA writes the CUDA driver now, I don't see them letting Apple do it or sharing proprietary tech with them) while Core ML 3 lags behind.

Chuckle... like the "embrace, extend, extinguish" they did on OpenCL? Yeah, it wouldn't be surprising. But that tactic isn't necessary. When Jobs got back to Apple, he announced that Apple had to leave behind the "Microsoft has to lose for Apple to win" mindset. [Well, he also needed a giant load of MS money to keep the lights on at the time too.] Apple has done better since.


I suspect NVIDIA would also try hard to push AMD out any way they could as they want to dominate the Deep Learning and Machine Learning fields even more than they seem to already.

Trying hard to be an extremely good subcontractor involves going to the folks running the overall project, finding out what they want to do, aligning with their major strategic goals, and looking to weave your own goals in with theirs. A tactic that involves kicking AMD in the kneecaps (and perhaps Apple indirectly) to force AMD out of the picture is the mark of a poor partner. Nvidia GPUs getting along substantially better with Intel GPUs than AMD's do would be "out-competing" rather than "pushing out of the way".

Nvidia is heading for a similar kind of bubble on Deep Learning as they had on cryptocurrencies. Harvesting data from everywhere and hauling it back to mega cloud data repositories has limits. On-device DL/ML is coming up and will take over as the driving force of the overall opportunity. Nvidia is spending lots of energy digging a deeper moat to slow that down, but it is coming like a flood. It is more a matter of how much time they are buying until the levee is breached.


I think NVIDIA is really more a competitor and Apple won't let them back in the door after whatever happened in the past between them.

It isn't really about being a competitor. Apple gets along pretty well with Microsoft. Companies with broad footprints know they have "coopetition" challenges with other bigger companies.
 
That kind of "your tech has to lose for mine to win" mindset is probably the whole root issue, whether it is Apple, Nvidia, or most likely both (to slightly differing degrees) posturing with that attitude. Apple isn't going to 'cave' on this because Metal is directly tied into all the rest of the Apple OS instances. If Nvidia wants to pick a fight with iOS, it is going to lose.

"Apple isn't going to 'cave' on this because Metal is directly tied into all the rest of the Apple os instances." - That's the money quote...when Apple announced Metal, there was a collective "here we go again" from developers. We all know why, and I think it was pretty justifiable given Apple's history of introducing whiz bang tech and then setting it adrift after things don't go as they would like. But Metal 2 is now the underlying tech across iOS, iPadOS, macOS and tvOS, and things are turning out differently than I thought they would. NVIDIA is less likely to be deferential to Apple in this regard and we already know the winner of this competition. I think from an Apple Corporate point of view, anything, and I do mean anything, that an outside vendor could potentially do to derail the iPhone/iOS money train is going to hit a roadblock a mile high and a mile wide. Apple went to Intel for cellular modems after they hit an impasse with Qualcomm because they were going to do anything they could to not be held hostage by a company that is portrayed as a partner, but is at least one of their Tier 1 component subcontractors. Without them, no iPhone...no iPhone, there goes two-thirds of your revenue...but that is a different story for a different thread.


It isn't necessarily about a "back seat"; it is about being a co-equal peer. If it can't achieve a 'tie', then Apple owning the OS means they own the tie-breaker. At the end of the day, Nvidia is just a subcomponent subcontractor in the ecosystem.

I have read very little about Jensen Huang, co-founder and CEO of NVIDIA, but from what little I have gleaned, he seems affable enough, but also intensely driven to succeed and dominate the market that NVIDIA occupies. He strikes me as somewhat like Larry Ellison, who was a friend of Steve Jobs, but possibly his personality and demeanor and NVIDIA's corporate culture clashed with Jobs' to a point where Steve said, "**** it, I don't want to deal with them anymore. Tim, they're out as a supplier...go call AMD, tell me which contracts we still have to fulfill so I don't have to ****ing pay them any more ****ing money, and then **** them." Pure conjecture on my part, although I am pretty sure the dialogue is spot on.


Chuckle... like the "embrace, extend, extinguish" they did on OpenCL? Yeah, it wouldn't be surprising. But that tactic isn't necessary. When Jobs got back to Apple, he announced that Apple had to leave behind the "Microsoft has to lose for Apple to win" mindset. [Well, he also needed a giant load of MS money to keep the lights on at the time too.] Apple has done better since.

Steve Jobs had a rich and rather strange history with Bill Gates that I don't quite understand. I can't tell if they actually liked each other or not. Steve never seemed to have a problem criticizing MS, but I wonder some days if that was more directed at Steve Ballmer than Bill. Oddly enough, I think Steve was pragmatic enough to know that MS was important to Apple. I don't think it was because of the cash. I think it was because while MS owned the market, Steve always knew Apple's brand and cachet and the things it came up with could never be matched by MS. Which upon reading sounds a bit narcissistic and dysfunctional, but I digress.


Trying hard to be an extremely good subcontractor involves going to the folks running the overall project, finding out what they want to do, aligning with their major strategic goals, and looking to weave your own goals in with theirs. A tactic that involves kicking AMD in the kneecaps (and perhaps Apple indirectly) to force AMD out of the picture is the mark of a poor partner. Nvidia GPUs getting along substantially better with Intel GPUs than AMD's do would be "out-competing" rather than "pushing out of the way".

I would counter that with the infamous NVIDIA GPP program: "Nvidia kill GPP 'rather than battle misinformation' as AMD win battle" ... https://www.pcgamesn.com/nvidia-cancel-gpp


Nvidia is heading for a similar kind of bubble on Deep Learning as they had on cryptocurrencies. Harvesting data from everywhere and hauling it back to mega cloud data repositories has limits. On-device DL/ML is coming up and will take over as the driving force of the overall opportunity. Nvidia is spending lots of energy digging a deeper moat to slow that down, but it is coming like a flood. It is more a matter of how much time they are buying until the levee is breached.

"On device DL/ML is coming up and will take over as being a driving force of the overall opportunity." Would it be too boastful of me to say that NVIDIA is not leading in this arena and that it has no competition for the A11 and A12 Bionic that powers tens of millions of iOS devices and therefore give Apple a substantial lead over NVIDIA? Or am I misunderstanding the subject relative to this discussion? I have a weak grasp of DL/ML details as it relates to CUDA and Metal 2/OpenML.


It isn't really about being a competitor. Apple gets along pretty well with Microsoft. Companies with broad footprints know they have "coopetition" challenges with other bigger companies.

I have to admit that perhaps I am reading too much cloak and dagger into Apple and NVIDIA's relationship. However, I don't see it mending ever. I certainly could be wrong, but I don't think I am.
 
I never thought Apple would release a tower again and look what happened. While I don't think it's likely, I don't put Nvidia completely out of the picture. Never know.

I'd love to fill the new Mac Pro with 2080ti cards.
 
I never thought Apple would release a tower again and look what happened. While I don't think it's likely, I don't put Nvidia completely out of the picture. Never know.

It is not impossible, but if Apple sells a surprising number of these and eGPU on macOS grows at a healthy clip (and some of Nvidia's high-growth targets don't pan out), then we could see a change. It wouldn't be surprising for Nvidia to at least take a bit of a 'wait and see' position here.

Nvidia jockeying the RTX to sit in MP 2010-2012 machines stuck in time on 2016-2017 macOS versions only further enables the "wait and see" positioning. That too didn't really align with where Apple was strategically going.


I'd love to fill the new Mac Pro with 2080ti cards.

This system skews a substantive amount of the power toward the CPU socket provision. It isn't really built to absolutely max out on the largest possible number of standard cards. The thermal scope is a bit wider, but it still has that classic Mac Pro scope of "just two big cards".

This Mac Pro is similar to the Mac Pro 2013 in that it is aimed toward future GPU cards. This time they have lots more wiggle room if the roadmaps are off. Getting 2017's "four cards" worth of performance into two cards in 202x is what it is geared toward.
 