My reason for asking is whether it's compatible. It could be a nice way to add PCIe lanes. I have seen similar units built on the same idea, but they generally go for a few thousand dollars.
 
Check out barefeats.com. Reviews there.
 
I found this site the other day. Looks like something that could work for cMP users. I'm not sure.

What do you guys think?
It looks like it takes one PCIe lane, splits it four ways with a switch, and gives you four PCIe x16 slots, each with one oversubscribed lane.

That's one 500 MB/sec lane split four ways.
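
Back-of-the-envelope in Python, assuming a PCIe 2.0 x1 uplink at the nominal 500 MB/s (real throughput is lower after protocol overhead):

```python
# Worst-case per-slot bandwidth when a single PCIe 2.0 x1 uplink
# is shared by a switch across four downstream slots.
# Nominal figures; real-world throughput is lower after overhead.
PCIE2_LANE_MBPS = 500   # nominal PCIe 2.0 bandwidth per lane, MB/s
UPLINK_LANES = 1        # the backplane's single-lane uplink
SLOTS = 4               # four downstream x16-sized slots

uplink = PCIE2_LANE_MBPS * UPLINK_LANES
print(f"Total uplink:        {uplink} MB/s")          # 500 MB/s
print(f"Worst case per slot: {uplink // SLOTS} MB/s") # 125 MB/s
```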
 
Should work, it's just PCIe :D

No. Just no.

Most PCIe splitters do not work in any cMP.

The Amfeltec x4-to-4x-x1 and the Squid x16-to-4x-x4 kits work - I use one - but the price (over 200€) is likely not worth it. The linked backplane uses switching rather than pure lane splitting, so you lose performance/lanes, if it works at all.

https://forums.macrumors.com/posts/24975285/

https://forums.macrumors.com/thread...ts-in-pre-2013-mac-pro.1772259/#post-23135925

https://m.imgur.com/a/Uz6cj
 
No. Just no.

Most PCIe splitters do not work in any cMP.
Guess the :D was completely missed. Mr. Payne isn't that new around here and has already navigated the minefield of buying USB cards. I won't argue with you on "Most PCIe splitters do not work in any cMP," though. I've found solutions that work for me and stick with them.

My official stance is Buyer Beware!

OP, I imagine you already know your intended application for this, and whether or not the splitter's restricted bandwidth will be a consideration?
 
My logic behind this really isn't to do with bandwidth. I want to keep my stock GT 120 in my machine. However, by the time I put in another GPU and maybe some SSDs, I'll suddenly be short on room.

I would like to buy the Sonnet Allegro Pro USB 3.0 card as well as the CalDigit USB 3.1/USB-C card. The day will also come when the internal drive bays become a storage limit. I'm happy to use RAID 1 for smaller drives, but going to massive-capacity drives kind of scares me, as it's like having all your eggs in one basket, and mighty expensive to replace if one fails (losing data is always bad, but better to have lots of low-capacity drives and lose a little than a few high-capacity drives and lose everything).

So one day I'm going to need some kind of external setup, and will need to be able to connect to it.

I just thought this might be a usable way to have USB cards, or other cards like Fibre Channel, in an external enclosure. Sort of like using it as a hub or dock setup.
 
My logic behind this really isn't to do with bandwidth. I want to keep my stock GT 120 in my machine. However, by the time I put in another GPU and maybe some SSDs, I'll suddenly be short on room.
Your logic is quite reasonable. This device has 500 MB/sec of bandwidth, which it can share according to load. If only one device is active, it gets the full 500 MB/sec. If four devices try to run at full speed, the PCIe switch gives each 125 MB/sec. For lower-bandwidth devices, or devices that never run simultaneously, it's a good choice.

(And although I couldn't find any reference showing that this device has a PLX or other PCIe switch - it really needs a switch to function. And a switch can dynamically allocate bandwidth according to demand - each of the four slots can get the full 500 MB/s if the other three are idle. "Splitters" are for 12-volt DC power cords - a serial packet-based network like PCIe needs more intelligent switches.)

Using PCIe switches to over-subscribe bandwidth is good - especially since disk drives with PCIe x4 connections are common. Many of my servers have a PCIe x8 card that fans out to six PCIe x4 drive connections. Even in RAID-0, that gives me almost 8 GB/sec of disk bandwidth (PCIe 3.0).

My concern with this switch is that someone might think that it can magically run four PCIe x16 devices from a single PCIe x1 connection. It can - but one needs to realize that you don't get 32 GB/sec throughput.
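
If it helps, here's a toy Python sketch of the switch-vs-splitter point - idle slots hand their share back to the busy ones, which a dumb splitter can't do. Nominal rates, no protocol overhead, and the equal-share logic is a simplified model, not how any particular PLX chip actually arbitrates:

```python
# Toy model of a PCIe switch sharing one uplink by demand.
# Nominal link rates; ignores packet overhead and real arbitration.
def share_uplink(uplink_mbps, demands_mbps):
    """Give each active slot an equal share of the uplink, capped at
    its own demand; idle slots (demand 0) free their share for others."""
    active = sum(1 for d in demands_mbps if d > 0)
    if active == 0:
        return [0.0] * len(demands_mbps)
    fair = uplink_mbps / active
    return [min(float(d), fair) if d > 0 else 0.0 for d in demands_mbps]

# One busy device gets the whole 500 MB/s uplink:
print(share_uplink(500, [500, 0, 0, 0]))        # [500.0, 0.0, 0.0, 0.0]
# Four busy devices get 125 MB/s each:
print(share_uplink(500, [500, 500, 500, 500]))  # [125.0, 125.0, 125.0, 125.0]

# Same idea on my server HBAs: a PCIe 3.0 x8 uplink (~985 MB/s per lane)
# feeding six x4 drives - 3:1 oversubscribed, but the uplink ceiling
# is still almost 8 GB/s.
print(f"{8 * 985 / 1000:.1f} GB/s uplink ceiling")  # 7.9 GB/s
```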
 
We discussed all these things two years ago, but after that it became clear that trying to modify the cMP to keep it relevant was expensive, unstable, and bug-filled. It's cheaper and safer to build a Hackintosh or use Thunderbolt devices connected to a MBP.
 
I agree - don't waste your time and money. It is junk. Build a hackintosh or get a MP2013/MBP + TB boxes.
 
It looks like it takes one PCIe lane, splits it four ways with a switch, and gives you four PCIe x16 slots, each with one oversubscribed lane.

That's one 500 MB/sec lane split four ways.

Correct me if I am wrong, but a single 4-lane PCIe slot on a 2010-2012 Mac Pro has a throughput of roughly 2,000 MB/s, not 500, right? (500 MB/sec per lane, x 4.)
 
(And although I couldn't find any reference showing that this device has a PLX or other PCIe switch - it really needs a switch to function.)

It has one, likely a Pericom, as these are cheaper for multi-node designs that are based on 'sharing' rather than 'splitting'.

Same stuff you get for miners on eBay nowadays; as I noted, they often have issues in Macs.

Correct me if I am wrong, but a single 4-lane PCIe slot on a 2010-2012 Mac Pro has a throughput of roughly 2,000 MB/s, not 500, right? (500 MB/sec per lane, x 4.)

Correct. The SLOT has x4, however the Amfeltec GPU switch uses only an x1 uplink.

My splitter uses x4 and splits to 4x dedicated x1.

The Squid board uses x16 and splits it into 4x x4, but it can also run from an x4 uplink and still share that x4 among the ports.
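
To put rough numbers on those three layouts (nominal PCIe 2.0 rates; my quick tally rather than Amfeltec's specs):

```python
# Worst-case per-port bandwidth for the three layouts above,
# at the nominal PCIe 2.0 rate of 500 MB/s per lane. Illustrative only.
PCIE2_LANE = 500  # MB/s

layouts = [
    # (name, uplink lanes, ports, lanes per port)
    ("Amfeltec GPU switch, x1 uplink -> 4 slots", 1,  4, 1),
    ("x4 splitter -> 4x dedicated x1",            4,  4, 1),
    ("Squid, x16 uplink -> 4x x4",                16, 4, 4),
]

for name, up, ports, lanes in layouts:
    shared    = up * PCIE2_LANE / ports   # uplink split across busy ports
    dedicated = lanes * PCIE2_LANE        # the port's own link width
    print(f"{name}: {min(shared, dedicated):.0f} MB/s per port worst case")
```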

We discussed all these things two years ago, but after that it became clear that trying to modify the cMP to keep it relevant was expensive, unstable, and bug-filled. It's cheaper and safer to build a Hackintosh or use Thunderbolt devices connected to a MBP.

Depends, I guess? If you have fun with it, why not. This is an expansion kit on top of a 5,1 - it uses the x16 slot to power 2 x4 SSDs and 2 GPUs at x4 each via a Squid board, and splits slot 3 into 4x x1 for USB and other cards.

This totals to:

- 1 x1 1.1 expanded from Airport
- 1 x4 2.0 full height in tower usable (slot 4)
- 1 x16 full height in tower usable (slot 1)
- 2 M.2 x4 spots external (Squid connected to slot 2)
- 2 x4 slots external (Squid)
- 4 x1 slots external (Splitter connected to slot 3)

https://prnt.li/f/82298dc9c4fc1dcc8fa5c0e688c17ecb-mu7dohd0ei.jpg
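
Tallied up in Python (counts and widths as listed above; nominal rates, and assuming the external x1 ports are dedicated rather than shared, since my splitter gives each port its own x1):

```python
# Tally of the expanded 5,1 layout listed above. Nominal per-lane
# rates: PCIe 1.1 = 250 MB/s, PCIe 2.0 = 500 MB/s. Illustrative only.
slots = [
    # (description, count, lanes each, MB/s per lane)
    ("x1 1.1 expanded from Airport",     1, 1,  250),
    ("x4 2.0 in tower (slot 4)",         1, 4,  500),
    ("x16 2.0 in tower (slot 1)",        1, 16, 500),
    ("M.2 x4 external (Squid, slot 2)",  2, 4,  500),
    ("x4 external (Squid)",              2, 4,  500),
    ("x1 external (splitter, slot 3)",   4, 1,  500),
]

print(f"Usable connectors: {sum(c for _, c, _, _ in slots)}")  # 11
for desc, count, lanes, rate in slots:
    print(f"  {count}x {desc}: up to {lanes * rate} MB/s each")
```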
 
Hey guys, I just ordered one of these for my hackintosh. HighPoint doesn't have RAID drivers/software for the Mac, but from what I've read they aren't that great anyway. They have also only officially tested the Samsung 960 EVO and Pro NVMe SSDs.

http://www.highpoint-tech.com/USA_new/series-ssd7101a-specification.htm


Are there any cheaper NVMe drives that would give me negligible downsides in speed? What does everyone think of the Samsung EVO vs. Pro?

I'm thinking of just running four of these in RAID 0. Should I install SoftRAID Lite and run them in RAID 0? (I'm planning on having backups of all my video media on separate drives, so redundancy is not needed on the HighPoint RAID solution.)

Stripe size?
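
For what it's worth, a rough ceiling check in Python, assuming ~3.2 GB/s sequential reads per 960-class blade (a ballpark vendor-class figure, not a measurement):

```python
# Ideal-scaling check for four NVMe blades in RAID 0 behind one slot.
# Per-drive speed is an assumed ballpark; nominal PCIe rates ignore
# protocol overhead, so treat everything here as an upper bound.
DRIVE_SEQ_READ = 3.2   # assumed GB/s per 960-class NVMe blade
DRIVES = 4

raid0 = DRIVE_SEQ_READ * DRIVES        # ideal linear scaling
pcie3_x16 = 16 * 0.985                 # GB/s, hackintosh slot
pcie2_x16 = 16 * 0.500                 # GB/s, cMP slot 1

print(f"Ideal RAID 0 read:  {raid0:.1f} GB/s")      # 12.8
print(f"PCIe 3.0 x16 limit: {pcie3_x16:.1f} GB/s")  # ~15.8, drives fit
print(f"PCIe 2.0 x16 limit: {pcie2_x16:.1f} GB/s")  # 8.0, slot-bound on a cMP
```

On stripe size: for large sequential video files, bigger stripes are the usual rule of thumb; small stripes mainly help small random I/O.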
 
Has anyone tried an NVMe drive with High Sierra yet? I want to confirm that someone has tried it on a non-hackintosh machine.

http://barefeats.com/hard225.html

 
Hello,
William_si, thanks a lot for your informative posts. ;)
Your custom cMP is what I'm looking to do with my cMP 4,1 flashed to 5,1.

Currently I have an Inno3D GTX 1050 single-slot card installed in slot 1 (x16). Maybe I'll buy another one plus a more powerful GTX for slot 1, and put the two single-slot GTX 1050s in slots 3 & 4, to use the 75 W of each slot.

I want to keep a 2.0 x16 slot free for more PCIe cards, so your solution looks great.

To sum up: I would buy an Amfeltec Squid PCI Express Gen 2 Carrier Board for M.2 SSDs. If I want an AHCI SSD blade (like the HyperX Predator 480 GB model, for boot), I can add another one in RAID 0 and keep the two x4 ports for a SAS PCIe card (like an ATTO or Areca model, to expand the SATA disks) and a USB 3.1 or 10GbE PCIe card.

My questions are:
Are the custom M.2-to-x16 PCIe cables homemade, or can we find them online, or do we have to order them directly from Amfeltec as a special order?

I'm also looking for a mini PCIe 1.1 to x16 solution (since I don't own a WiFi card), to use my old Radeon 4870 512 MB card for a boot screen. Do you have a model to recommend?

Thanks in advance, you've been helpful with your tests. ;)
Guillaume
 
They are available cheaper (still not cheap, though) from a public reseller; ask sales for longer lengths:

http://www.era-adapter.com/m2-keym-cables-c-93.html

Base price for 6 in/15 cm in any angle config is ~$56; anything useful is probably no less than 30 cm. Shielding-wise, I've ordered longer and it still works fine. 30 cm probably comes out to ~$75 each there. Most of the cost is soldering/small-quantity ordering, not the cable itself.

The pictures are mostly renders, but the quality is very good; I also have some M.2 E-key and PCIe things from the same factory.

For PCIe -> x1 open-ended (an x16 slot is useless overall), I used these:

https://www.amazon.com/Mini-Express...539&sr=8-3&keywords=mini+pcie+to+pcie+adapter

There is also this, in high quality, but it's probably impossible to route from the WiFi slot:

http://www.era-adapter.com/mini-pcie-to-pcie-x1-extender-cable-r6s-r1f-p-321.html


The Squid should work fine with 2 SSDs and 2 expanded slots - with 3 expanded, I had issues getting 3 GPUs to detect behind it. It also will not work with most PLX-based cards, like the more expensive (dedicated 5 Gbps per port) USB 3.0 cards (the PLX for slots 3/4 does work, however; this is a setting/workaround, but not field-reprogrammable on the Squid, for us at least).
 