
Grumply

macrumors 6502
Original poster
Feb 24, 2017
285
194
Melbourne, Australia
Hi guys,

Having just had to retreat (for the time being) from a 7,1 purchase, I'm looking to do a new 5,1 build with a few of the parts I'd pre-emptively bought for the new machine, along with a couple of the cards I held on to from my previous 5,1 build (which I just sold). That's left me with the following components to stuff into the new 5,1 (12-core 3.46GHz, 96GB RAM):

- 2x Radeon VII 16GB
- 2x 2TB Samsung 960 Pro NVMe on a Lycom DT-130 x8 PCIe Card (Cache Drive)
- 4x 6TB 3.5” SATA HDDs in RAID 10 (Media Drive)
- 512GB 2.5” SATA SSD installed in optical bay (Boot Drive)
- Sonnet Allegro Pro 4x Type-A USB 3.2 (10Gbps) PCIe Card
- Blackmagic Design Decklink Mini Monitor 4k PCIe Card

This would all be simple enough were it not for the second Radeon VII, which takes me from four cards occupying five PCI slots (which fits naturally in the 5,1) to five cards occupying seven slots (which most certainly does not).

So I’m trying to figure out a configuration that’s going to allow me to put all of these components to use, whilst preferably not breaking the bank, or requiring me to build one of the open-air/cables-everywhere Frankenstein rigs that our more adventurous MacRumorians sometimes attempt.

Now with so many PCIe slots required, something is going to have to end up outside. So the big question here is “what”?

As far as I can tell, I basically have two choices. I can either mount the two Radeon VII GPUs inside the 5,1 (in slots 1 and 3), and then use slot 2 to feed something like a NetStor NA211A-G3 TurboBox v3 external ePCIe expansion box, and put the Decklink card, the 4x USB3.2 card, and the dual M.2 Lycom card in that.

Or, I put the two Radeon VIIs into something like a repurposed old Nvidia Quadro Plex box, which I feed into slot 1. And then put the Lycom M.2 card in slot 2, and the others in the remaining slots.

The Netstor route is probably the safest, as it will definitely work, and the box itself will be more useful/current/easier-to-sell when I eventually upgrade. However it is considerably more expensive at $1,200 AUD (which almost matches what I’ve just paid for the new 8-core 5,1 body + 12-core 3.46Ghz CPU upgrades!) compared to $500-850 AUD for the various Nvidia Quadro Plexes that I’ve been able to track down.

However, for the Netstor option to work, I need to be able to power the 5,1 with both Radeon VIIs, 12 cores, 4x 3.5” spinning HDDs and a 2.5” SATA SSD from its internal PSU. And even if I do the Pixlas mod and downvolt the two Radeon VIIs (as many on here recommend), I don't know whether the 5,1 will have enough juice to cover all that.
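For what it's worth, here's the back-of-the-envelope power budget I've been staring at. All of the figures are nominal ratings pulled from spec sheets and forum posts (not measurements from this machine), so treat it as a rough sketch only:

[CODE]
# Rough power budget against the 5,1's 980 W factory PSU.
# Every figure below is a nominal/typical rating (spec sheets and forum
# reports), not a measurement from this particular build.
loads_watts = {
    "2x Radeon VII (up to ~300 W each at stock)": 2 * 300,
    "2x Xeon X5690 3.46 GHz (130 W TDP each)":    2 * 130,
    "Board, 96 GB RAM, fans (estimate)":          80,
    "4x 3.5-inch SATA HDD (~8 W each)":           4 * 8,
    "2.5-inch SATA SSD (boot)":                   5,
}

psu_rating = 980  # W, factory 5,1 power supply
total = sum(loads_watts.values())

for name, watts in loads_watts.items():
    print(f"{name:46s} {watts:4d} W")
print(f"{'Total (worst case, stock voltages)':46s} {total:4d} W")
print(f"Headroom vs {psu_rating} W PSU: {psu_rating - total} W")
[/CODE]

On those numbers it only just squeaks in at stock settings, which is exactly why the downvolting advice matters here.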

The Nvidia Quadro Plex option is therefore a bit easier (in theory), as it includes a PSU that can power both Radeon VIIs, and would therefore simply need to be plugged in (no Pixlas modding required). However, although there are reports on here of people using Quadro Plexes with 5,1s in the past, I don't know if anyone has actually tried to repurpose one with AMD cards before, so there's no guarantee that it would even work.

I suspect it probably will, as (from what my research so far has suggested) PCIe expansion modules are generally fairly simple. The computers you plug them into normally just treat the additional cards as being attached directly. However there’s always a chance that Nvidia built some kind of proprietary protections into them, and Google offers no reports of anyone having tried this before.

I’ve also looked into the larger Netstor GPU expansion boxes (too expensive) and into building a box of my own with a used x16 backplane and host adapter (just add PC case and PSU) but that also works out to more money than the rest of the machine combined.

So if anyone has any advice or suggestions on how to approach this, I’d love to hear them.

If there are any alternatives to the Netstor NA211A-G3 I’d love to know what they are. And if anyone’s managed to power dual Radeon VIIs (plus 4x 3.5” HDDs) using only the internal PSU, I’d love to hear about that too.

Having both the Netstor externally and a 2nd PSU externally (to feed power into the two Radeon VIIs) is more gack than I really want to deal with - so I have a strong preference for keeping this build to a single external box/cable if at all possible.

Now ultimately, if there isn't a reasonable approach to making this work, I can simply sell the second Radeon VII (which is still brand new and hasn't been opened), but needing more grunt than my previous 5,1 (12-core 3.46GHz, 64GB RAM, 12GB Titan X, SATA SSD RAID) is the reason I've wasted the last month of my life trying to make the 7,1 purchase happen. So finding a way to get both Radeon VIIs running will certainly help cover things until I can move on to newer tech.

Any advice anyone can offer would be greatly appreciated.

Cheers
 

AlexMaximus

macrumors 65816
Aug 15, 2006
1,237
582
A400M Base
Interesting project, wow! Here is my advice and how I would approach it if I were in your shoes.

#1 Do a Pixlas mod for Vega VII (1)
#2 Remove all HDDs inside your MP and put Vega VII (2) in the top slot. You will need to remove the standard dual bracket and use a single-slot bracket for the Vega. -> This second Vega VII needs to be powered from an external source.
#3 Put all spinning HDDs in an external 4-bay HDD case
#4 Use the remaining slot between the two Vega cards for the USB-C adapter -> this one is used for the external 4-bay HDD case from #3
#5 Go for a single-slot PCIe case to put your Blackmagic Decklink card in, and connect it to the USB-C as well.
#6 Done. -> This way you don't need much $$. Use a power supply from your remaining parts. The only cost will be two external boxes (the 4-bay HDD case if you can't use the existing Quadro box, and the PCIe box for the BM card).
3-polaris-gpu-mac-pro.jpg
 

Grumply

macrumors 6502
Original poster
Feb 24, 2017
285
194
Melbourne, Australia
Thanks Alex,

If I have to get an external PSU to power one of the GPUs anyway, I'd probably just take the easy route, and get one that will power them both.

For #5 what PCIe cases can you get for the Decklink Card that will operate via USB-C? I haven't encountered anything like that before.

Regardless of that though, I think you may have just made the whole thing a LOT cheaper and easier for me to do. You made me take a look at Blackmagic's site, and I saw that they've still got the "Intensity Shuttle" capture/playback box (a product I'd completely forgotten about). If I can use that to send a clean 10-bit signal to my reference monitor in DaVinci Resolve, then I can simply replace my Decklink card with it, and everything else (excluding the 3.5" HDDs) will fit in the 5,1!

Is anyone here familiar with the Intensity Shuttle? Will that replace the Decklink Mini Monitor 4k for me? (my reference monitor is only HD, and can take HDMI as well as SDI)
Another interesting possibility I've just stumbled across, is a replacement for the discontinued (and much missed) Sonnet Tempo Pro: https://www.sonnettech.com/product/fusion-dual-ssd-raid/overview.html

They call it Fusion, and it pairs a dual 2.5" SATA SSD hardware RAID with a single 10Gbps USB-C output.

I've sent them an email to see whether performance on the RAID and/or the USB-C port drops when both are being used at the same time. If it doesn't, I could potentially use the SSD RAID as my media drive (with a couple of high-capacity SSDs) and the USB-C to get solid speeds from an external NVMe cache drive. I'd have to put the media drive internally in that configuration, because it would be the only way to get decent transfer speeds when copying new footage onto the media drive.
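While I wait to hear back from them, this is roughly the contention maths I'm worried about. The uplink figures are just placeholders (not specs from Sonnet); the question is simply whether the RAID and the USB-C port together would exceed whatever they share:

[CODE]
# Hypothetical contention check: if the Fusion's SSD RAID and its USB-C
# port share a single upstream link, how much does each side get when
# both are busy? The uplink values below are placeholders, not Sonnet specs.
def share_of_link(uplink_mbps, demands_mbps):
    """Split an uplink proportionally when total demand exceeds it."""
    total = sum(demands_mbps.values())
    scale = min(1.0, uplink_mbps / total)
    return {name: round(demand * scale) for name, demand in demands_mbps.items()}

demands = {
    "2x SATA SSD RAID 0 (media)": 1100,  # MB/s, ~550 MB/s per SSD
    "USB-C NVMe cache drive":     1000,  # MB/s, a 10Gbps port minus overhead
}

for uplink in (1250, 2000):  # e.g. one shared ~10Gbps link vs. something wider
    print(f"Uplink ~{uplink} MB/s ->", share_of_link(uplink, demands))
[/CODE]

If everything funnels through one ~10Gbps link, both sides get cut to roughly half speed when used together, which is why I'd want the media drive internal in that configuration.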

Another quick question. If you have the GPU in slot 4, obviously 3.5" drives won't fit - but is it possible to squeeze 2.5" SATA SSDs into the SATA ports still?
 

h9826790

macrumors P6
Apr 3, 2014
16,656
8,587
Hong Kong
For your info, the cMP can power 2x Radeon VII via the Pixlas mod, even without downvolting. Not from my own cMP, but I know a guy who keeps building that config of cMP for sale.

Of course, I still recommend downvolting the cards to make them run faster and cooler.
 

Ludacrisvp

macrumors 6502a
May 14, 2008
797
363
Another idea would be to get a 4-port NVMe riser card (one that has a PCIe switch) and then make your own extra PCIe 3.0 x4 slots using this:
Then you can run your NVMe drives and a couple of other things off a single slot.
You'd have to figure out the physical locations, but it's an inexpensive way to get more PCIe slots.
 

itdk92

macrumors 6502a
Nov 14, 2016
504
180
Copenhagen, Denmark
I suggest a cubix or netstor with both GPUs, connected to slot 1.

We have a three-RVII setup for some of our customers, and in those cases we use those PCIe external enclosures.

We tend to want to keep one card in the system, so often it ends up being in slot 4.

Hope it helps.
 

Snow Tiger

macrumors 6502a
Dec 18, 2019
854
634
Modified Pixlas mod with quad 8-pin (150 W) PCIe booster power cables to the dual 300 W-class GPUs.

Dual Vega Frontier Editions, all internally mounted and powered. Yes, it's stable. Excess cables need a little trimming... but it's amazing what the factory PSU is capable of.

The dual Vegas pulled around 660 W all by themselves at load. Bandwidth is better than using an external expansion chassis with the same cards.

Nice thing about this mod is the side access panel can be properly secured and the Mac looks quite spiffy and professional. Unless you looked closely, you'd never guess the power under the hood.

P6135727.jpg


Something just occurred to me... what do you call a modded mod?
Or, if you're dynamite with a power drill, aluminum side access doors just love this mod.

The external PSU is a 1000 W EVGA ATX with lots of 8-pin PCIe booster power connections.

This does the same thing as the above, but takes less skill, should last longer, is guaranteed to bedazzle clients in attendance, and is not that expensive.

Just make certain that the external PSU always provides juice to the booster cables before your Mac starts up.

P3255680-450x600.jpg
 

AlexMaximus

macrumors 65816
Aug 15, 2006
1,237
582
A400M Base
@Grumply

Two errors on my side. I thought you could use an external TB3 case and be backward compatible with USB-C. That's not correct. To go this route, you would need a TB3 Titan Ridge controller card (or newer), which - although it can work on a cMP - is a big PITA.


Also, if you really go with the second Vega on the top slot, you should actually have two full slots available. You really don't need any PCIe external box.
 

Grumply

macrumors 6502
Original poster
Feb 24, 2017
285
194
Melbourne, Australia
Also, if you really go with the second Vega on the top slot, you should actually have two full slots available. You really don't need any PCIe external box.

Thanks Alex. I could potentially make do with only two slots: if the Blackmagic Intensity Shuttle USB3 will work with the 5,1 via the Sonnet Allegro Pro USB 3.2 Type-A adapter card, that would allow me to replace the Decklink card with the Intensity (which is an external unit), and then the USB 3.2 and Lycom NVMe cards could take the two remaining slots.

Alternatively, I could pull the NVMe card and run those M.2 drives off the USB 3.2 card as my cache drive (which is overall a slightly cheaper option).

However, that leaves me having to run SATA SSDs internally (in RAID 0) for my media drive, which would be an added expense. And if I had to run both the media drive and the cache drive through the USB 3.2 card, the overall speeds of both would drop considerably.

However, in a new development, someone may have just come to my rescue. I've been offered a Cubix expander at a fair price, and that basically solves all of my issues, with the added benefit that it'll be much easier to sell when I upgrade than an Nvidia Quadro Plex. Potentially, I could even just keep it and use it as a mega NVMe storage box with the new machine.
 

Snow Tiger

macrumors 6502a
Dec 18, 2019
854
634
I finally found a few pictures of the quad 8-pin PCIe booster power mod of a Nehalem cMP PSU that I made for the dual Vega F.E. configuration (post number 8 of this thread). This mod will push two 300 W-class GPUs in a cMP. Build this at your own risk, as it really will provide 600 W (4 x 150 W) worth of juice. I was always worried it'd burn out or catch fire, but it was always reliable and safe during my tests.
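For anyone running the numbers on their own version of this, the headroom works out roughly as below. The 150 W per 8-pin and 75 W per slot figures are the PCIe spec allowances, not measurements from this machine:

[CODE]
# Rough headroom check for the quad 8-pin booster mod.
# 150 W per 8-pin cable and 75 W per x16 slot are PCIe spec allowances,
# not measurements from this particular Mac Pro.
booster_watts = 4 * 150   # four 8-pin PCIe booster cables
slot_watts    = 2 * 75    # two GPUs also drawing from their slots
available     = booster_watts + slot_watts

measured_gpu_load = 660   # W, what the dual Vega FEs pulled at load

print(f"Available to the GPUs : {available} W")
print(f"Measured at load      : {measured_gpu_load} W")
print(f"Headroom              : {available - measured_gpu_load} W")
[/CODE]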

Attached images: P6095714.JPG, P6095715.JPG, P6095716.JPG, P6095717.JPG
 

zedex

macrumors 6502
Oct 21, 2018
312
134
Perth, WA
Hi @Grumply - I've solved it!

You've been advised by some of the best modders and contributors on this forum - all of whom provided thought-provoking and workable suggestions - BUT your original 'vision' is achievable AND should deliver some extra benefits not mentioned in your dream system setup.

Your detailed assessment of minimum acceptable hardware requirements points to a 'path of least resistance' config accommodating (A) five PCIe cards (seven slots high) installed concurrently, (B) closed-loop (fully contained) expansion hardware, and (C) a stable, fully integrated system power source that meets 100% of the dream setup's power demands with the lowest risk of / possibility for system damage.

BASED ON THESE FACTORS (A, B and C):
(1) You need a Mac Pro 5,1 (big green tick)
(2) You need a Netstor NA211A-G3 Turbobox PCIe Expansion Chassis* (I'll send you one from Perth, WA for AUD 800-900 or a swap - I'm very envious of your DT130 with 2x 960 PROs.. I'm sure we can work something out ;) )

*with integrated Graphics PSU - 2x8-pin (300W)

With 'minimum acceptable hardware requirements' met - you can have everything you want configured as follows:

cMP Slot 1: Radeon VII 16GB
cMP Slot 2: Lycom DT-130 x8 PCIe Dual m2 NVMe Host Adapter Card (ASM2824 Switch)
-- Port 1: 2TB Samsung 960 Pro NVMe
-- Port 2: 2TB Samsung 960 Pro NVMe
cMP Slot 3: Netstor NA211A PCIe HOST Card
cMP Slot 4: Blackmagic Design Decklink Mini Monitor 4k PCIe Card

NA211A Slot 5: Netstor NA211A PCIe TARGET Card
NA211A Slot 6: Radeon VII 16GB
NA211A Slot 7: [NONE - Not Accessible due to Double Width GPU]
NA211A Slot 8: Sonnet Allegro Pro 4x Type-A USB 3.2 (10Gbps) PCIe Card

This config is PERFECT (a word I use judiciously ..and not just when I'm describing the phone calls I've made to politely ask for favours from foreign dignitaries in Eastern Europe).


BENEFITS BEYOND THE OBVIOUS
(1) You can now power your 'Dual Radeon VII' GPUs with enough headroom to power a 3rd GPU (because.. cMP 5,1 PSU + NA211A PSU = cMP 7,1 PSU)

(2) In the strictly defined "5 PCIe cards (7 high)" config, my 'recommended' PCIe card slot placement is deliberate and improves upon your initial thinking in subtle but important ways. The DT130 *must* be in SLOT 2 - it will use (saturate) 8 of the 16 lanes at every opportunity it gets. The GPU will typically use (dynamically negotiate a link speed requiring) between 1 and 4 lanes of bandwidth 364 days a year**. Similarly, the Netstor HOST card (PCIe 2.0 x4) would never use more than 4 of the 16 available lanes if it were put in SLOT 2 (rough numbers in the lane-budget sketch below).

(3) PCI-e cards installed at positions 3 and 4 are both single-slot form factor.
By installing the single-slot 'Netstor NA211A PCIe HOST Card', the second double-width Radeon VII GPU can be safely installed in the Turbobox (NA211A), and the potential problem of installing the "4x 6TB 3.5in SATA HDDs in RAID 10" in SATA bays 1-4 is magically resolved.
This approach also means that creatively "retro-fitting" messy external devices in order to compensate for weak spots in the build becomes WHOLLY unnecessary (i.e. an extra GPU power supply, or a bulky 4-bay storage enclosure for 3.5in HDDs).

(4) The Sonnet USB3 card hosted in the NA211A Turbobox provides 4 additional [extra-fast] USB ports up to 1.5m from where the cMP is positioned.

** @IanK MacPro recently posted the GPU benchmark score for a single (x1) Radeon VII in slot 1, and then the results for dual (x2) Radeon VIIs in slot 1 and slot 4. The score from the second test was double that of the first, providing clear evidence that nothing is gained by installing GPUs in the valuable 16-lane slots.
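To put rough numbers behind the lane argument in point (2): the cMP's slots are all PCIe 2.0, so the per-slot bandwidth versus what each card can realistically move looks something like this (theoretical link rates; real-world throughput lands a bit lower, and the per-card figures are estimates, not benchmarks):

[CODE]
# cMP 5,1 slot bandwidth vs. what each card can realistically push.
# PCIe 2.0 is ~500 MB/s usable per lane; all per-card figures are
# rough estimates for illustration, not benchmarks.
PER_LANE = 500  # MB/s, PCIe 2.0 after 8b/10b encoding

slots = {"Slot 1 (x16)": 16, "Slot 2 (x16)": 16, "Slot 3 (x4)": 4, "Slot 4 (x4)": 4}
cards = {
    "Radeon VII (grading/compute)":     4,   # lanes it typically keeps busy
    "Lycom DT-130 + 2x 960 Pro (x8)":   8,   # two ~3 GB/s NVMe drives can saturate this
    "Netstor NA211A host card (x4)":    4,
    "Decklink Mini Monitor 4K (x4)":    4,
}

for slot, lanes in slots.items():
    print(f"{slot:13s} offers {lanes * PER_LANE:5d} MB/s")
for card, lanes in cards.items():
    print(f"{card:34s} uses up to ~{lanes * PER_LANE} MB/s")
[/CODE]

Hence the DT130 gets a x16 slot to itself, and everything that only ever needs x4 lives in the x4 slots or out in the Turbobox.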
 

Grumply

macrumors 6502
Original poster
Feb 24, 2017
285
194
Melbourne, Australia
Thanks Zedex, appreciate the offer (I'd have taken you up on it), but I got a good deal on a Cubix Xpander chassis, and went with that.

Just added a Highpoint 7101A to take over as my primary Media drive, and the DT-130 will handle the cache.

This thing is a beast. With the latest version of DaVinci offering Metal decoding for Redcode, I can now get usable playback of 8K Redcode RAW on this 10-year-old machine(!). It's kinda crazy.

I've also added a Windows 10 install, and that's letting me use the dual Radeon VIIs (via DirectX 12) to game smoothly with maxed-out 4K graphics in modern titles like Red Dead Redemption and Tomb Raider (whilst COVID has made it illegal for me to work). Pretty wild.
 

Snow Tiger

macrumors 6502a
Dec 18, 2019
854
634
It's been long known that GPU-based crypto-mining needs just one PCIe lane, GPU compute needs just four PCIe lanes, and video gaming and quad-slot RAID / drive interface cards need 16 PCIe lanes. Direct host-bus connections are a requirement for the highest performance from each device. If you use an external expansion chassis that has more than one GPU being fed by a data cable from the chassis' controller card in a single host-bus PCIe connection, the maximum available bandwidth of that slot (I don't think you'll get access to all 16 lanes) will simply be spread across all the GPUs in that chassis. You'll be able to connect lots of devices, but at a reduced performance level.
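As a rough illustration of that sharing effect (theoretical PCIe 2.0 numbers, and it assumes the chassis switch simply splits its single upstream link evenly whenever all of the cards are transferring at once):

[CODE]
# Rough model of the bandwidth each GPU sees behind an expansion chassis,
# assuming the chassis switch splits its single upstream PCIe 2.0 link
# evenly whenever all cards are transferring at once (theoretical rates).
PCIE2_PER_LANE = 500  # MB/s usable per PCIe 2.0 lane

def per_gpu_bandwidth(upstream_lanes, gpus_sharing):
    return upstream_lanes * PCIE2_PER_LANE / gpus_sharing

print("One GPU direct in a x16 slot :", per_gpu_bandwidth(16, 1), "MB/s")
print("Two GPUs behind one x16 link :", per_gpu_bandwidth(16, 2), "MB/s each")
print("Two GPUs behind one x4 link  :", per_gpu_bandwidth(4, 2), "MB/s each")
[/CODE]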
 

Grumply

macrumors 6502
Original poster
Feb 24, 2017
285
194
Melbourne, Australia
I looked into this quite a bit to determine where I'd get the best bang for buck from the dual GPUs.

Should I mount them in the two internal x16 slots (and give up one x4 slot)? Or mount one in slot 4 (the x4 slot at the top) and one in slot 1 (the double-width x16 slot at the bottom) and sacrifice three of the SATA HDD bays? OR mount both of them in the expansion chassis (fed into a single x16 slot)?

For colour grading/video editing work, the performance difference appears to be extremely negligible. It's a couple of percentage points at most between having both cards in their own x16 slots, versus having one in x16 and one in x4, or having both on a single x16 slot. So not worth sacrificing the 2nd x16 slot for (which is better put to use for NVME storage).

Through using the machine for a bit of gaming in Windows these past few weeks, playing DirectX 12 games (which allow for multi-GPU support), there is a tonne of added performance from having the two cards (in the expansion chassis) feeding into a single x16 slot, compared to using just one of them at a time. With both paired together, I can play in 4K at maxed-out video settings (excluding anti-aliasing) perfectly smoothly. With just a single card being used, the framerate becomes too stuttery at the same settings.

I can't be bothered sorting out a power solution to mount both cards to their own x16 slots internally, to see how gaming performance compares in that (hypothetically) superior setup. But it would be interesting to compare.
 

Snow Tiger

macrumors 6502a
Dec 18, 2019
854
634
PCIe add-on card placement is a judgement call only you can make. But there are extension cables you can use to attach to signal and power connectors whose slots/bays are physically obstructed by other components, so that you can add additional gear to your system.

3M makes some shielded PCIe extension cables that can be snuck under add-on cards and attached to unused interfaces. There are SATA power cable extensions that can be attached to the backplanes of the HDD bays, even if that area is obstructed by a PCIe add-on card. I'm using two of those right now with a build to feed some USB devices.
 