
startergo

Original poster
Sep 20, 2018
Just wondering what will be the real benefit of:
Amfeltec SKU-075-02
or
Amfeltec SKU-075-06

According to the manual:
The “PCI / PCIe Expansion Backplane” (backplane) is a four 32-bit PCI and two x16 PCI express slot backplane (each x16 PCI Express connector has one PCI Express lane).

So let's think about two scenarios:
Amfeltec SKU-075-02
1. Using PCIe slot 2 for the expansion link:
What would be the speed difference between two identical video cards, one installed in PCIe slot 1 on the Mac Pro's backplane board and the other in the x16 slot on the expansion chassis?
2. Two HighPoint 7101A cards installed in the same manner: what would be the speed and throughput difference with four NVMe drives installed?

Amfeltec SKU-075-06
Speed and throughput difference for the video cards and NVMe drives installed as above, but connected through the mini PCIe x1 host interface (one lane). The mini PCIe x1 link has the same lane count as each x16 connector on the expansion board (one lane each).
 

Attachments

  • BG 5,1.jpg
  • PCI_PCIE_Expansion_Backplane_hwmanual_v1.1.pdf
Just wondering what will be the real benefit of:
Amfeltec SKU-075-02
or
Amfeltec SKU-075-06

According to the manual:
The “PCI / PCIe Expansion Backplane” (backplane) is a four 32-bit PCI and two x16 PCI express slot backplane (each x16 PCI Express connector has one PCI Express lane).

So let's think about two scenarios:
Amfeltec SKU-075-02
1. Using PCIe slot 2 for the expansion link:
What would be the speed difference between two identical video cards, one installed in PCIe slot 1 on the Mac Pro's backplane board and the other in the x16 slot on the expansion chassis?
2. Two HighPoint 7101A cards installed in the same manner: what would be the speed and throughput difference with four NVMe drives installed?

Amfeltec SKU-075-06
Speed and throughput difference for the video cards and NVMe drives installed as above, but connected through the mini PCIe x1 host interface (one lane). The mini PCIe x1 link has the same lane count as each x16 connector on the expansion board (one lane each).

Well, it's older tech that got used a lot in crypto mining for whatever reason, but it could be repurposed as a modern NVMe drive array. Buying one new is probably too pricey to make that worthwhile, but if you already had one collecting dust, maybe. The best part of risers and the like is putting distance between your parts to limit thermal damage. Pass-through cards are all you'd want to use, though.

It would all be limited to x1 speeds due to their use of an x1 host card, so 5 Gbps. It's splitting that one lane between all of the downstream slots.
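Rough back-of-the-envelope numbers for what sharing that single lane means (a quick Python sketch on my part; the per-lane rates are the usual nominal PCIe figures, and the two-cards-busy case is just an assumption for illustration):

# Approximate usable per-lane PCIe throughput after 8b/10b encoding.
PER_LANE_MBPS = {"PCIe 1.x x1": 250, "PCIe 2.0 x1": 500}

# The Amfeltec backplane feeds every downstream slot from one x1 uplink,
# so all installed cards share that single lane's bandwidth.
cards_busy = 2  # e.g. two video cards or two NVMe adapters active at once (assumption)

for link, mbps in PER_LANE_MBPS.items():
    print(f"{link}: ~{mbps} MB/s total, roughly {mbps // cards_busy} MB/s "
          f"per card with {cards_busy} cards busy")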

They have a newer GPU-centered model with an x4 host card and two x16 slots that might suit your intentions better. Again, pass-through cards would max out that throughput.

If you spend the money on an x16 NVMe card like that, you'll want dedicated bandwidth, since it's already working an x16 connection to its max with four NVMe drives feeding it (in RAID). I could see the point if you are only using it as a host for four separate drives.

*Side note: do you know where I can get a higher-res copy of the Mac Pro block diagram?
 
Yes, I contacted the company and they confirmed the speed is limited to x1 bandwidth, so I see no purpose either. After I posted this I also saw the newer version with four PCIe slots, but it has the same limitation. I wish there were something like this but x16 instead of x1.

I got the block diagram image from the MacRumors forum. I don't know why it came out so crappy, but it is somewhere here.
 
This is more about additional slots, and for a very limited audience. It's good for multiple I/O controllers in low-speed applications. The option for older PCI cards is perfect for supporting products that haven't been updated in a decade.
 
The majority of people who need speedy PCIe expansion units are likely better off searching for a used Cubix Xpander or similar. These were basically "made for" use with the Mac Pro 4,1 and 5,1 back in the days when the cheese-grater style Mac Pro was still being sold.
 
Just found this:
https://www.dolphinics.com/products/PXH832.html


Would you share your opinions?
Features
  • PCI Express® 3.0 compliant - 8.0 Gbps per lane
  • Link compliant with Gen1 and Gen2 PCI Express
  • Quad SFF-8644 Cable connector
  • PCIe 3.0 cables or MiniSAS-HD cables
  • Four x4 Gen3 PCI Express cable ports that can be configured as:
    • One - x16 PCI Express port
    • Two - x8 PCI Express ports
    • Four - x4 PCI Express ports
  • Copper and fiber-optic cable connectors
  • Clock isolation support
  • Low profile PCI Express form factor
  • EEPROM for custom system configuration
  • Link status LEDs through face plate
Link Speeds: 32 Gb/s per port / 128 Gb/s total
Application Performance: <130 nanoseconds cut-through latency port to port
Active Components: Broadcom/PLX Gen 3 PCIe switch
PCI Express Base Specification: 3.0
Topologies: Transparent host/target, up to 4 devices
Cable Connections: Four x4 iPass®+ HD / MiniSAS-HD / SFF-8644 copper cables, 0.5 - 9 meters
Power Consumption: 10 W typical (14 W worst case) + 800 mW (typical) per connected x4 AOC
 

Attachments

  • PXH832_Product_Brief.pdf
Just found this:
https://www.dolphinics.com/products/PXH832.html

Would you share your opinions?

Link Speeds: 32 Gb/s per port / 128 Gb/s total

I like it, especially if you wanted to do a quad-GPU enclosure (dreamer coming out...). LOL.

But seriously, it's a chunk of change, I'm sure, and I hope the application can justify whatever price you pay. Just as a reminder: the PCIe 3.0-to-2.0 step-down will put you at roughly 6,200-6,400 MB/s of simultaneous bandwidth. That's a little above my ability to comment on beyond a theoretical application, though I do think it would be a great addition. I'm contemplating moving my single GPU externally to keep things cool, since that may be the way it's all going.
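For what it's worth, here is the arithmetic I'd use to sanity-check that ballpark (a hedged Python sketch; the ~20% protocol-overhead factor is my own assumption, the per-lane encoding math is standard PCIe 2.0):

# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 4 Gbit/s = 500 MB/s per lane.
lanes = 16
per_lane_mbps = 5e9 * (8 / 10) / 8 / 1e6      # 500 MB/s
raw_x16_mbps = lanes * per_lane_mbps          # 8000 MB/s theoretical for a 2.0 x16 slot

protocol_overhead = 0.20                      # assumption: packet/flow-control overhead
usable_mbps = raw_x16_mbps * (1 - protocol_overhead)

print(f"PCIe 2.0 x16: {raw_x16_mbps:.0f} MB/s theoretical, "
      f"~{usable_mbps:.0f} MB/s usable estimate")   # lands near 6,400 MB/s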

The majority of people who need speedy PCIe expansion units are likely better off searching for a used Cubix Xpander or similar. These were basically "made for" use with the Mac Pro 4,1 and 5,1 back in the days when the cheese-grater style Mac Pro was still being sold.

I think both are pricier than the 48" x4 extension and recycled PC tower I had pictured in my mind, LOL. But I do like how clean those pre-made units make it.
 
It's not clear to me what this (Dolphinics card) connects to. The product brief doesn't show any enclosure pictures.

Also, how much is it? Those Cubix things are lovely (as are the OSS enclosures), but they are mind-blowingly expensive.
 
It's not clear to me what this (Dolphinics card) connects to. The product brief doesn't show any enclosure pictures.

Also, how much is it? Those Cubix things are lovely (as are the OSS enclosures), but they are mind-blowingly expensive.
Still waiting for pricing, but here is some feedback from the support team:

"If your Mac Pro has a free x16 slot, it can most likely support the card.

We do stock some simple 2-slot backplanes (which, after adding a target card, another PXH832 for instance, will give you one free PCIe x16 slot). As an option, we also have a specialized target card for Onestop Systems (and compatible) backplanes and chassis, the MXH833, which also works well with a PXH832 host adapter."
 
It's not clear to me what this (Dolphinics card) connects to. The product brief doesn't show any enclosure pictures.

It looks like it uses an identical or similarly modeled card as a host controller in each of your chassis. I think those model variants are the ones startergo listed in the manufacturer's quote.
 
Nearly all of them use some version of a PCIe card and push/pull-lock extension cables of some kind to connect to an external box or tower. There are messy solutions that basically look like cobbled-together magazine racks, and there are others that are clean towers. Back when the MP4,1/MP5,1 were still being sold, there were several options to choose from. It's harder to find NEW units these days with the transition to Thunderbolt, but they do exist, and they've never been cheap. It's a niche market usually tapped by professionals for specific needs.

Cables are usually some combination of these:
https://www.digikey.com/product-detail/en/molex-llc/0745460403/WM1147-ND/1278256
https://www.celco.com.tw/prodetails.php?pid=13&cid=5

The screw-in ones flat out will not carry the bandwidth for x16. If they claim it and it's basically a ribbon cable with screws, proceed with caution. I've seen those setups work, but they are generally cobbled-together or piecemeal solutions made by users with sourced/found parts rather than a purchased solution.

If you're legitimately looking for a modern solution, at least contact these people for a price quote:
https://www.onestopsystems.com/pcie-expansion

Some of their units are "upgradeable" to TB connections and theoretically could be repurposed in the future.
 
Here is more feedback from the Dolphinics support:

"For these backplanes, you could use either the PXH832 or the MXH832 as target cards - the MXH833 is not recommended here (the difference between the MXH832 and MXH833 is how they generate reset- and clocking signals - the MXH833 is only recommended with OneStop Systems backplanes). You'd get PCIe Gen3 speed (8GT) and x4 width against each of the 4 backplanes, as the host- and target-cards would link up at Gen3, x4. Assuming a x16, Gen3 host slot (on the host adapter), this means you'd have the total bandwidth that the slot provides, against the devices in the expansion backplanes.

Also note that the target card will support up to Gen3, x16 on its slot interface, so if you have say a Gen1 x16 device hosted, you'd get at Gen1, x16 connection to that. The Gen3, x4 bandwidth over the cable is effectively the same as Gen1, x16 bandwidth, so you would have close to full bandwidth to this device.

We sell the IBP-G3X16-2 for USD 299 each, and the PXH832/MXH832 for USD 995 each (prices are net, so shipping and possible local taxes would be in addition).
You'll also need 4 cables (MiniSAS HD or PCIe) for this setup. Our transparent host-target link fully supports MiniSAS HD, so those would be recommended."

He also mentioned some discount.
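A quick sanity check on the "Gen3 x4 over the cable is effectively the same as Gen1 x16" point from that reply (my own arithmetic, using nominal per-lane rates):

# Approximate usable per-lane throughput after encoding overhead.
GEN1_PER_LANE = 250   # MB/s (2.5 GT/s, 8b/10b)
GEN3_PER_LANE = 985   # MB/s (8 GT/s, 128b/130b)

gen1_x16 = 16 * GEN1_PER_LANE   # ~4000 MB/s
gen3_x4 = 4 * GEN3_PER_LANE     # ~3940 MB/s

print(f"Gen1 x16 ~{gen1_x16} MB/s vs Gen3 x4 ~{gen3_x4} MB/s")  # within a few percent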
 
Here is more feedback from the Dolphinics support: The Gen3, x4 bandwidth over the cable is effectively the same as Gen1, x16 bandwidth, so you would have close to full bandwidth to this device.

Are you referring to using dual PCIe 3.0 x4 upstream cards to a single PCIe downstream host?

In theory:

1× x16 PCIe 2.0 = 1× x8 PCIe 3.0 = 2× x4 PCIe 3.0.

There is a slight backwards-compatibility penalty of 5% or so, but you wouldn't notice it, and it passes the bandwidth math check.
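Spelling that equivalence out with nominal numbers (a small sketch; protocol overhead ignored):

GEN2_PER_LANE = 500   # MB/s per lane (5 GT/s, 8b/10b)
GEN3_PER_LANE = 985   # MB/s per lane (8 GT/s, 128b/130b)

x16_gen2 = 16 * GEN2_PER_LANE        # ~8000 MB/s
x8_gen3 = 8 * GEN3_PER_LANE          # ~7880 MB/s
two_x4_gen3 = 2 * 4 * GEN3_PER_LANE  # ~7880 MB/s

# 8000 vs ~7880 MB/s: close enough that the small penalty
# mentioned above would not be noticeable in practice.
print(x16_gen2, x8_gen3, two_x4_gen3)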
 
I was referring to this:
upload_2019-2-10_17-44-10.png


One host card to two backplanes would be Gen3 x8 per backplane (about the same bandwidth as Gen2 x16), or one host card to four backplanes would be Gen3 x4 each.
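The possible cable-port splits on the host card, and the rough per-link bandwidth they imply (my numbers, based on the configurations listed in the product brief):

GEN3_PER_LANE = 985  # MB/s, nominal

# Port configurations from the PXH832 product brief: 1 x16, 2 x8, or 4 x4.
configs = {"one x16 link": (1, 16), "two x8 links": (2, 8), "four x4 links": (4, 4)}

for name, (links, lanes) in configs.items():
    per_link = lanes * GEN3_PER_LANE
    print(f"{name}: ~{per_link} MB/s per link, ~{links * per_link} MB/s aggregate")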
 
OK, I see what you are saying, but my only point is that you are effectively creating an x16 switch, not additional bandwidth. It looks nice.
 
OK, I see what you are saying, but my only point is that you are effectively creating an x16 switch, not additional bandwidth. It looks nice.
Well, if you use one host card plus a target card on a One Stop Systems backplane similar to the one pictured, you get the x16 multiplied. Even with two target cards and two One Stop Systems backplanes you can get around 10 extra PCIe slots running at x16 width.
 
But in order to create data flow, it must still go in and out of that x16 slot, which means both input and output, or effectively x8 speeds. An x16 throughput can only be created via PLX if it is going to another x16 PLX host, and there is only one other x16 slot in a 4,1/5,1 Mac Pro.
 
But in order to create data flow, it must still go in and out of that x16 slot, which means both input and output, or effectively x8 speeds. An x16 throughput can only be created via PLX if it is going to another x16 PLX host, and there is only one other x16 slot in a 4,1/5,1 Mac Pro.

Link Speeds: 32 Gb/s per port / 128 Gb/s total
Application Performance: <130 nanoseconds cut-through latency port to port
Active Components: Broadcom/PLX Gen 3 PCIe switch
 
Link Speeds: 32 Gb/s per port / 128 Gb/s total
Application Performance: <130 nanoseconds cut-through latency port to port
Active Components: Broadcom/PLX Gen 3 PCIe switch

Do a copy and paste between two drives, even in RAID, and you'll top out at 32 Gbps. It's a switch, not an amplifier. It's the same thing as the native switch on our logic boards.
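To put that 32 Gb/s per-port figure into copy-speed terms (rough arithmetic on my part, using the spec-sheet number):

# Per the PXH832 spec sheet: each x4 Gen3 cable port carries 32 Gb/s.
port_gbps = 32
port_mbytes_per_s = port_gbps * 1000 / 8   # ~4000 MB/s per port

# The switch only routes traffic; it does not add bandwidth, so a copy
# between two drives behind it is still bounded by the link speed.
print(f"~{port_mbytes_per_s:.0f} MB/s ceiling per x4 cable port")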
 
You said:
"An x16 throughput can only be created via PLX if it is going to another x16 PLX host, and there is only one other x16 slot in a 4,1/5,1 Mac Pro"
In this case you would have both the host card and the target card with PLX switches, right?
If you have any doubts, let's discuss them and I will pass your concerns on to the support team.
 
Short story: the only application I see is multiple GPUs to spread out the bandwidth. Otherwise you can get the same speeds internally. Everyone's needs are different, but that's overkill unless you need every bit of power you can throw at it. Hope it's worth it!
 
Short story: the only application I see is multiple GPUs to spread out the bandwidth. Otherwise you can get the same speeds internally. Everyone's needs are different, but that's overkill unless you need every bit of power you can throw at it. Hope it's worth it!
I was hoping for a solution to run 2-3 GPUs at native speed, maybe one in the cMP chassis and two outside. It is quite expensive, I agree, but is there a cheaper solution?
 
I would suggest you really evaluate which of your applications can utilize multiple GPUs before proceeding with that spend. In the past, multiple stacked GPUs were helpful for CUDA processing IF and ONLY IF the application could utilize them, and there was a falloff in impact for each one added. (I do not remember the numbers offhand, but I recall around three max being "ideal".)

There are limited applications on macOS that can utilize the GPU properly, let alone multiple GPUs. Most are not designed for this type of processing, especially with the latest AMD GPUs on macOS. macOS could barely handle GPU switching on a native MacBookPro11,3 with its built-in NVIDIA GPU.
 
I would suggest you really evaluate which of your applications can utilize multiple GPUs before proceeding with that spend. In the past, multiple stacked GPUs were helpful for CUDA processing IF and ONLY IF the application could utilize them, and there was a falloff in impact for each one added. (I do not remember the numbers offhand, but I recall around three max being "ideal".)

There are limited applications on macOS that can utilize the GPU properly, let alone multiple GPUs. Most are not designed for this type of processing, especially with the latest AMD GPUs on macOS. macOS could barely handle GPU switching on a native MacBookPro11,3 with its built-in NVIDIA GPU.
Sage advice here. Whatever your app, check for multi-GPU benchmarks and do a cost/benefit analysis. If your app is https://hashcat.net/hashcat/, go for it: linear scaling, so throw all the GPUs you can handle at it.
 