
Just asking: say you want to run a power-hungry video card like the Radeon VII or a Vega, you don't want to mod the power supply, and of course we don't have Thunderbolt 3 (nor would we want to use it for a video card). Does an external PCIe box, connected via a PCIe x16 card and with its own 300 W power supply, provide a solution?

This one even looks like a cheese grater. Has anyone tried a Vega card in one, and are there any drawbacks or speed bumps?

https://www.span.com/product/NetSto...-x8-+-One-x4-inc-Desktop-Card-amp-Cable~63658
 

As far as I know, stuff like this works fine; however, I worry that 300 W would not be sufficient for that card under load.
The only reason gear like this isn't more common is that the costs are usually astronomical. I looked into it once for an unrelated project, to split an x16 slot out into four x4s, and even that was close to $1K just for the splitter card, let alone a separate chassis.
 

Looks like it's a bit too close for comfort if you are spending big money on the enclosure...
https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-7nm,5977-5.html

"AMD extracts as much performance out of Radeon VII's power budget as possible. Through our three-run recording, the card averages almost 298W with spikes that approach 322W.

Very little power is delivered over the PCI Express slot. Rather, it's fairly evenly balanced between both eight-pin auxiliary connectors."
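A quick back-of-the-envelope check in Python against a 300 W enclosure PSU (the power figures are from the review quoted above; the 80% sustained-load factor is a rule of thumb, not a vendor spec):

# Back-of-the-envelope check: can a 300 W enclosure PSU feed a Radeon VII?
# Power figures are from the Tom's Hardware measurements quoted above;
# the 0.8 sustained-load factor is a rule of thumb, not a vendor spec.

PSU_RATED_W = 300
SAFE_LOAD_FACTOR = 0.8

card_avg_w = 298   # measured three-run average
card_peak_w = 322  # measured spikes

budget = PSU_RATED_W * SAFE_LOAD_FACTOR                         # 240 W sustained budget
print(f"average headroom: {budget - card_avg_w:+.0f} W")        # -58 W: overloaded
print(f"peak vs rating:   {PSU_RATED_W - card_peak_w:+.0f} W")  # -22 W: spikes exceed even the label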
 
On those chassis, probably only the expansion slots are something unique. I bet you can replace the power supply with a bigger one. Plus, I have seen lower prices for this box; try B&H Photo.
 
But the problem is that the power supply output has to be connected to a motherboard or some sort of switch, as the plug is not the same. I have a Dell DA 18 A power supply, but to connect it straight to the card I need a jumper or a switch between two pins, and I lose the sensing input. There is a good four-lane-to-16-lane PCIe converter with dual power input that I found for $180, but the supplier is out of stock.
 
No. It only allows a second PSU to be switched on along with the motherboard. If you look at the pin configuration that they posted, you will see that the second PSU only has the power-switch inputs connected.

In short: no jumper or fancy switch needed. Just connect it, and your second PSU will work as the sole PSU for the external GPU.
 
OK, so you are mentioning a motherboard. Can you elaborate on this? So apart from the external GPU, the extension cable, and this connector, I would need a motherboard too? Because this connector surely cannot attach to the logic board. I don't get it. Sorry.
 
Make sure you pay attention to both lane speeds and power. Many of these "cheaper" expansion boxes do not provide enough lane bandwidth for a primary GPU via a single x16-to-x16 link. Several are x16 to a few x4s with poor switching-style controllers. They are probably fine for secondary or non-display GPUs (render farm).

The Cubix Xpander was the solution that many with a decent budget used for this purpose for years. They are around $3K and have 1200 W or greater PSUs. If you see one cheaper used, make sure it includes the HIC (host interface card).
 

You yourself mentioned the motherboard. The motherboard and the logic board are practically the same thing for all intents and purposes.

However, if you meant to say that you can't connect that thing to the Mac Pro's logic board, then yes, I concur; that's not possible in that form.

For the Mac Pro, what you'll have to do, essentially, is splice two of the wires of the stock Mac Pro PSU in order to get a power-on signal to the external PSU. The principle is the same as the adapter I linked; it's just that I don't think there's anything ready-made for the Mac side.

Here's a pinout: http://pinoutguide.com/Power/apple_mac_pro_psuj3_pinout.shtml

You can thank Apple for not using the common connectors that the rest of the industry uses. Typically, this sort of external GPU expansion work is only done on the PC side and not so much on the Mac side.

I also don't know if you will even get modern GPUs to work with the Mac Pro after going through all these hoops. The biggest hurdle is always the drivers, and Apple doesn't seem too keen on Nvidia ever since 2014.
 
I've built my own custom one in an old G5 chassis (photo attached).
 
What is the mobo? How do you connect it to the PCIe on the cMP?
Cyclone Microsystems PCIe expander, PCIe 2.0. There is some overlapping discussion in the Blade SSDs - NVMe & AHCI thread, posts #836 and #841.

There are also PCIe 3.0 solutions so you could add PCIe 3.0 slots. https://www.bhphotovideo.com/c/product/980261-REG/dynapower_usa_na255a_xgpu_netstor_6_slot_pcie.html

Before there was Thunderbolt, there were external PCIe adapters and cables. I don't know if these are hot-pluggable; hot-plugging is something that existed for PCIe before Thunderbolt, but it depends on both software and hardware.

The PCIe external cables can be x1, x4, x8, or x16. x1 is common with the PCIe risers used for cryptocurrency miners, or eGPUs run from an mPCIe slot (these usually use a USB 3.0 cable as the physical link).

https://www.onestopsystems.com/blog-post/pcie-over-cable-goes-mainstream
https://www.onestopsystems.com/pcie-expansion
 
I had the Netstor, but the Cyclone is better.
With the Netstor you get x16 to 16/8/8/8;
the Cyclone is x16 to 8/8/8/16/16/8/8/8.

The Cyclone units are extremely rare, and when one is available I buy it.
I have bought every single host card I could find.

None of these are hot-pluggable; you have to shut off the computer to plug/unplug anything.

But honestly, this is the best investment I've made. So versatile.
I basically have all the cards I need in that chassis and plug it into whichever machine I need. It is silent, and from the outside it just looks like I have two Mac Pros sitting next to each other.

The other benefit is that my Mac Pro PSU doesn't work as hard, because there is only the CPU, memory, and the Amfeltec card inside the case.
 
The cMP is not good enough at this point to justify sinking another £700+ into obscure hardware like the extender box linked in the OP. Just my opinion.
 
Just bought one of these expanders, a five-PCIe-slot version with the host card and the cable. This one is one of the two options in the DaVinci Resolve Mac configuration guide. Now I need to buy a power supply and a chassis; I think a 7-slot rack mount will be the best fit.
 
This is the one I have in my Supermicro SC848 chassis, but any chassis will do.
Just keep in mind that not all PSUs are equal noise-wise, and go straight to 1200/1500 W, because all modern GPUs can peak at 300 W (see the rough sizing sketch below).
Also make sure your 426 host card is set up properly, because it plays a role in the equalization of the signal relative to the cable length.
Also, the fan needs to move a lot of air, because with 2/3/4 GPUs you need to remove a lot of heat from the chassis.
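A rough sizing sketch in Python (the 100 W chassis overhead and 20% headroom figures are my own assumptions, not measurements):

# Rough PSU sizing for a multi-GPU expansion chassis: assume each modern
# GPU can peak around 300 W (per the advice above). The 100 W chassis
# overhead and 20% headroom are assumptions, not measurements.

GPU_PEAK_W = 300
BASE_W = 100  # backplane, fans, misc cards

def psu_size_w(n_gpus: int, headroom: float = 0.2) -> float:
    """Minimum PSU rating in watts for n_gpus plus chassis overhead."""
    return (n_gpus * GPU_PEAK_W + BASE_W) * (1 + headroom)

for n in (2, 3, 4):
    print(f"{n} GPUs -> at least {psu_size_w(n):.0f} W")
# 840 / 1200 / 1560 W -- hence "go straight to 1200/1500 W".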
The cMP is not good enough at this point to justify sinking another £700+ into obscure hardware like the extender box linked in the OP. Just my opinion.
This is what you don't really get:
it is system- and machine-agnostic,
so you are not buying it for this particular machine; it will work with ANY machine using PCIe or Thunderbolt.

Keep in mind that PCIe x16 Gen 2 is still 80 Gb/s when TB3 tops out at 40 Gb/s...

Show me any Mac that can handle four PCIe SSDs in RAID 0, three GPUs, a 24-bay RAID array, and a 10 GbE card...
This is true modularity.
 
I got the Rosewill HERCULES-1600S PSU for $150 on their eBay store. I salvaged an old IBM Pentium II case, but I definitely need to replace the fan inside. Also, the cable is only 0.3 m long, so I will see how close the two cases need to be.
 
Keep in mind that PCIe x16 Gen 2 is still 80 Gb/s when TB3 tops out at 40 Gb/s...
The difference is more than that:
PCIe 2.0 x16 is 64 Gb/s usable (don't forget the 8b/10b encoding).
You also need to subtract PCIe protocol overhead, so maybe it's more like 48 Gb/s.

Thunderbolt 3 is 22 Gb/s for PCIe traffic (the 40 Gb/s figure includes DisplayPort traffic).
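Worked out in Python (the ~25% protocol overhead figure is a rough assumption, not a spec value):

# Worked version of the correction above: usable bandwidth is
# raw transfer rate x lanes x encoding efficiency; protocol overhead
# (TLP headers, flow control) cuts it further (~25% is a rough figure).

LINKS = {
    # name: (GT/s per lane, encoding efficiency)
    "PCIe 1.0": (2.5, 8 / 10),     # 8b/10b encoding
    "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def usable_gbps(gen: str, lanes: int) -> float:
    rate, eff = LINKS[gen]
    return rate * lanes * eff

raw = 5.0 * 16                     # 80 Gb/s "on the wire"
enc = usable_gbps("PCIe 2.0", 16)  # 64 Gb/s after 8b/10b
print(f"PCIe 2.0 x16: {raw:.0f} Gb/s raw, {enc:.0f} Gb/s after encoding,")
print(f"~{enc * 0.75:.0f} Gb/s after protocol overhead vs ~22 Gb/s for TB3 PCIe traffic")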
 
The shorter the better...
The 1 m ones are a nightmare; they are super stiff and exert a lot of pressure on the card.
 
With the Netstor, I can use PCIe 3.0 x1, x2, x4, or x8 devices at full speed.

Below is a picture of a Netstor NA255A in front of a MacPro3,1. The Netstor has long 1.5 m cables, so I could have moved it elsewhere. The Netstor has four Thunderbolt 3 cards (two GC-TITAN RIDGE and two GC-ALPINE RIDGE; only the GC-TITAN RIDGE currently work, using a warm boot from Windows 10). Thunderbolt traffic to a Samsung 950 Pro SSD gives these AJA System Test Lite write/read (MB/s) results:
PCIe 1.0 x4: 768/743
PCIe 2.0 x4: 1018/1482
PCIe 3.0 x4: 1071/2423

You can see PCIe 3.0 x4 gives about 1000 MB/s more for Thunderbolt devices. Of course, without Thunderbolt in the path, you could get up to 3500 MB/s from PCIe 3.0 x4 depending on the device, while PCIe 2.0 x4 would remain around the 1500 MB/s range. Changing the x16 Mac Pro slot between PCIe 1.0 and PCIe 2.0 has no effect, because PCIe 1.0 x16 ≥ PCIe 3.0 x4. I should try this in a G5, but there's no NVMe driver for it; maybe an AHCI device could work.
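For reference, the theoretical x4 ceilings behind those numbers, worked out in Python (decimal units; real devices land below these because of protocol and device overhead):

# Theoretical x4 payload ceilings per PCIe generation, in MB/s
# (decimal: 1 GB/s = 1000 MB/s). Real devices land below these.

GENS = {
    "PCIe 1.0": (2.5, 8 / 10),     # GT/s per lane, encoding efficiency
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
}

for gen, (gtps, eff) in GENS.items():
    mb_s = gtps * 4 * eff * 1000 / 8  # 4 lanes; bits -> bytes
    print(f"{gen} x4: {mb_s:.0f} MB/s ceiling")
# 1000 / 2000 / ~3938 MB/s -- consistent with ~1500 MB/s real-world
# on PCIe 2.0 x4 and ~3500 MB/s on PCIe 3.0 x4.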

I use the "fast.sh" script to change the slot speeds and the "pcitree.sh" script to see the speed of all devices. I made a version of pciutils that works with both old and new macOS versions. fast.sh with pciutils is the workaround I use on a MacPro3,1 to enable PCIe 2.0 for a PCIe 3.0 device that starts up at PCIe 1.0.
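For anyone curious what the reporting side looks like, here's a rough Python sketch of the same idea (not the actual pcitree.sh, just an illustration assuming the lspci binary from pciutils is installed and on the PATH):

# Hypothetical sketch (not the actual pcitree.sh): report the negotiated
# link speed/width of each PCIe device by parsing `lspci -vv` output.
# Reading LnkSta typically requires root.
import re
import subprocess

def link_status() -> None:
    # `lspci -vv` prints one block per device; the header line starts at
    # column 0 ("05:00.0 VGA compatible controller: ..."), and the link
    # status line is indented ("LnkSta: Speed 5GT/s (ok), Width x16 (ok)").
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
    device = None
    for line in out.splitlines():
        if line and not line[0].isspace():
            device = line.split(" ", 1)[0]
        m = re.search(r"LnkSta:\s*Speed ([\d.]+GT/s).*Width (x\d+)", line)
        if m and device:
            print(f"{device}: {m.group(1)}, {m.group(2)}")

if __name__ == "__main__":
    link_status()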


[Photo: Netstor NA255A in front of a MacPro3,1]
The slot spacing in the Netstor is a little weird. The four slots accommodate double-wide cards (graphics cards that use two PCIe backplates at 0.8" or 20.32 mm spacing).

The space between slot 1 (the target x16 adapter) and slot 2 is also a standard 1.6" (two slots).

The space between slot 2 and slot 3 (and also between slot 4 and slot 5) is slightly larger (by 4 mm?). The space between slot 3 and slot 4 is larger still (another 5 mm?).
 

Attachments:
  • pciutils_joevt.zip (1.2 MB)
  • scripts_joevt.zip (6.7 KB)