What about the Dell 4x M.2 NVMe Drive PCIe Card?
It seems to have similar functionality and costs only $89.




https://www.servethehome.com/the-dell-4x-m-2-pcie-x16-version-of-the-hp-z-turbo-quad-pro/
https://hardforum.com/threads/dell-quad-m-2-pcie-card.1894309/

P/N:
Dell PCIe x16 M.2 SSD card – PN: 414-BBBJ
Dell PCIe x8 M.2 SSD card – PN: 400-AKSO
 
No PCI switch onboard. No go.
Yes, I realize this. But the board somehow works on Windows (with SM951 NVMe blades).

P.S. Hi-res images of the board. There is some kind of controller on the back side.

http://www.sl-digital.com/forums/cardfront.jpg
http://www.sl-digital.com/forums/cardback.jpg
P.S.2: I read the forum link more carefully and understood that this board presents four independent drives to the system (not one RAID 0 like the Amfeltec).

Can we combine these drives into one via SoftRAID?
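For reference, macOS can also stripe independent drives without SoftRAID, using the built-in AppleRAID support in `diskutil`. A sketch only; the disk identifiers below are hypothetical, so verify them with `diskutil list` first, and note that creating the set erases the drives:

```shell
# Find the four blade identifiers first (disk2..disk5 below are hypothetical)
diskutil list

# Stripe the four blades into one JHFS+ volume named "Squid" (destroys existing data!)
diskutil appleRAID create stripe Squid JHFS+ disk2 disk3 disk4 disk5
```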
 


You are totally not getting it. Read the article again; it explains why it would not work in the cMP.

Also review what RAID is, how it works, and how it relates to the Amfeltec solution. You do not understand those things either.
 
My three Samsung XP941s in an Amfeltec were throttling quite severely when running AJA benchmark on loop.
Thermal imaging of both the XP941 and SM951 SSDs makes it very apparent that the culprit is the controller on the M.2 board. They get white hot.
So I bought some low-profile Raspberry Pi copper heatsinks and stuck them on all three XP941 controllers, and I'm pleased to report that the throttling is gone!
Thought this might help others.
https://www.amazon.co.uk/gp/product/B00IUEEMWA/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
 
Squuiid, have you seen the kryoM.2 micro heatsink? Looks nice, and costs only 10 euros:

https://shop.aquacomputer.de/product_info.php?language=en&products_id=3660

Too big unfortunately, definitely not low profile enough. There is barely enough room between the GPU in Slot 1 and the M.2 cards on the Amfeltec for the slim copper heat sinks I listed above.

The KryoM.2 would simply not allow a loaded Amfeltec to be next to a GPU.
Thanks anyway though. Good to know these exist.
 
On a MBP, not on a cMP. Way, WAY different EFI and overall generations ahead.

For other uses, I now split my Squid via M.2 extender cables into four x4 slots and use PCIe cards in them. Works great as well, as expected with a PLX chip.

It powers a 10G NIC (x4), a quad-port USB 3.0 card (x4, dedicated 5 Gbit per port), and a SATA M.2 RAID card (x2, for testing only), and has one HyperX as my boot drive.

https://prnt.li/f/719e105de856afb5368680c65d6e4928-wom6gahrie.html (Warning: large images; 50MB+)
 

It looks like a "back to the future" special edition cMP to me :D
 
I just did this Amfeltec upgrade. First I got the 2-slot card and two SM951s. But I was so happy with the performance that I wanted more (isn't that how it always goes?). I wanted to dump the rest of my SATA SSDs and spinners, so I bought the 4-slot card and two more SM951s. The whole thing was pretty pricey, and Amfeltec gives no choice on shipping. So on top of the initial price it cost $44.00 for Priority FedEx shipping and another $16.50 for a PayPal processing fee.

The pain of the purchase will fade eventually =).

But I now have (4) 512GB smoking fast drives that can boot to macOS Sierra or Win 10 with no hacks or issues.

(1) SQUID PCIe Gen2 Carrier Board for up to 4 M.2 PCIe SSD modules - $427.00

(4) Samsung SM951 512GB PCIe AHCI M.2 SSD MZHPV512HDGL - $300.00 each = $1,200.00

Total Cost: $1,627.00 (OUCH!)
 
Because one lane of PCIe 2.0 (5 Gbit) is rather useless, even more so if split in two at just 2.5 Gbit each, and that's before overhead of around 10%. And you waste 3 lanes in cMP slot 3/4 (or 15 in slot 1/2). Alternatively, you get only 2.5 Gbit and waste no lanes by using the AirPort mini-PCIe slot, which however cuts even that in half because it is PCIe 1.1 (2.5 Gbit per lane, thus 1.25 Gbit per port).
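The lane arithmetic above is easy to sketch; the ~10% overhead figure is the post's own estimate, applied here purely for illustration:

```shell
# Per-port throughput when one PCIe lane is split two ways,
# minus the ~10% protocol overhead estimated in the post
awk 'BEGIN {
  oh = 0.10                                            # assumed overhead
  printf "PCIe 2.0 lane, 2-way split: %.3f Gbit/port\n", 5.0/2*(1-oh)
  printf "PCIe 1.1 lane, 2-way split: %.3f Gbit/port\n", 2.5/2*(1-oh)
}'
```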

Further, most Pericom switches, notably nearly all (if not all) PCIe 1.1 and most 2.0 switches like the two I have here (see attached pic #1), are not detected in a cMP at all (nor are any devices behind them).

The device you linked uses the unsupported PI7C9X2G (to be exact the PI7C9X2G304SLBFDE, the 3 being the port count), the same chip as the dual-port one in my pic: a 4-lane, 3-port PCIe 2.0 switch. Datasheet here.

The other one you see in pic #1 uses a rather rare/custom-order 4-port, 4-lane edition of the same series (PI7C9X2G404SLAFDE) and does not work in a cMP either.

Switches that are used only in splitting mode, like the Squid board or the Amfeltec x4 to 4x x1 GPU mounts, work if they use newer chips; e.g. the Amfeltec GPU splitter using a PLX PEX 8608 works fine in a cMP (datasheet here; 8 lanes of PCIe 2.0 with an 8-port limit, 5 ports used). See pic #2 attached; I removed the heatsink so one can read the part number.

The last one is a PLX RDK based on the PCIe 1.1 PLX PEX 8508 (datasheet here; 8 lanes of PCIe 1.1 with an 8-port limit, 5 ports used), which does not work in a cMP.

IMG_20170907_205944482 (1).jpg IMG_20170907_212353636 (1).jpg IMG_20170907_212517120 (1).jpg
 
I would like to understand the difference in performance between the
Squid PCI Express Gen 2 Carrier Board for M.2 SSD modules
and the
Squid PCI Express Gen 3 Carrier Board for 4 M.2 SSD modules
in the classic Mac Pro.

The barefeats.com article at http://barefeats.com/hard220.html concluded that the Gen 3 version performs worse than the Gen 2 version.


For the Gen 2 version:

1) The single flash drive result of 1494 MB/s requires a 5 GT/s x4 link (2 GB/s).
2) The four flash drives result of 5461 MB/s requires a 5 GT/s x16 link (8 GB/s).
Nothing unexpected with those results.


For the Gen 3 version:

1) The single flash drive result of 2093 MB/s suggests that each flash drive has an 8 GT/s x4 link (the four downstream ports of the PCIe bridge). This means that the 5 GT/s x4 link of the Gen 2 card is a bottleneck for the M.2 SM951 AHCI blades.

2) The two-, three-, and four-flash-drive result of 3069 MB/s suggests that the upstream port of the PCIe bridge is not 5 GT/s x16 (8 GB/s). I think it must be 2.5 GT/s x16 (4 GB/s). For some reason, the link negotiation skips 5 GT/s and falls back to 2.5 GT/s. Either the bridge chip only supports 8 GT/s and 2.5 GT/s, or the link negotiation fails at 5 GT/s.
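For what it's worth, the link-rate arithmetic above can be reproduced with the standard PCIe encoding efficiencies (8b/10b for 2.5 and 5 GT/s, 128b/130b for 8 GT/s); this is a generic calculation, not measured data from either card:

```shell
# GB/s = (GT/s per lane) x lanes x encoding efficiency / 8 bits
bw() { awk -v gt="$1" -v w="$2" -v eff="$3" 'BEGIN { printf "%.2f GB/s\n", gt*w*eff/8 }'; }
bw 5   4  0.8     # Gen2 x4:  2.00 GB/s  (caps the 1494 MB/s single-drive Gen 2 result)
bw 5   16 0.8     # Gen2 x16: 8.00 GB/s  (room for the 5461 MB/s four-drive result)
bw 8   4  0.985   # Gen3 x4:  3.94 GB/s  (per downstream blade on the Gen 3 card)
bw 2.5 16 0.8     # Gen1 x16: 4.00 GB/s  (consistent with the 3069 MB/s plateau)
```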


There are two positive things about the Gen 3 version in a classic Mac Pro:

1) It has a fan.
2) You can get full PCIe 3.0 x4 bandwidth in a PCIe 1.0 x16 or PCIe 2.0 x16 slot from any one of the four NVMe ports. You can use an NVMe to PCIe 3.0 x4 adapter to get max performance from a PCIe 3.0 card in a computer that doesn't have PCIe 3.0 slots.


Does anyone disagree with those findings? Can someone post the output of the following commands to confirm this? First install lspci V1.1.pkg, then run "update-pciids".
Code:
lspci -tvnn
lspci -vvnnxxx

The output will identify what bridge chip is used, the maximum link speed and link width, and the negotiated link speed and link width. The PCI Express Capability (ID 10) has a Link Capabilities 2 register at offset 2C containing a Supported Link Speed Vector that should list the supported link speeds.
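Once someone posts that output, the relevant lines are easy to pull out. The sample below is merely illustrative of lspci's LnkCap/LnkSta format with hypothetical values, not output from a real cMP:

```shell
# Illustrative `lspci -vv` fragment (hypothetical values, not from a real machine)
sample='LnkCap: Port #0, Speed 5GT/s, Width x16
LnkSta: Speed 2.5GT/s (downgraded), Width x16 (ok)'

# Maximum (LnkCap) vs. negotiated (LnkSta) link speed
printf '%s\n' "$sample" | grep -Eo 'Speed [0-9.]+GT/s'
```

A mismatch between the two `Speed` values (as in this sample) would confirm a downgraded negotiation.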
 
I doubt 2) works, PLX charges a premium for link upgrade capability.

Amfeltec support will tell you what chip it uses, but it is likely a PLX 8732 (the old Squid is an 8632).
 
Looks like you did not read anything, or you are not willing to.

Hi Carpsafari,

Thank you for your response.

I think there's a lot of confusion about what devices work and how well they work, in part because this threaded forum layout is not a good organization for a body of knowledge but also there's misleading or contradictory information across multiple threads.

I can see how you might have thought that I had not read the last 194 messages in this thread, because I asked a question about compatibility with the Aplicata adapter. But I assure you, I had already read through this entire thread, as well as much of the 'SATA Express meets the '09 Mac Pro - Bootable NGFF PCIE SSD' thread, and reading them only served to increase my confusion.

Let me ask a different, more definitive question that, once answered, may also clear up the confusion for other users: does the Mac Pro (4,1 and 5,1) support PCIe bifurcation, and if so, in passive form or only active?

Thank you for your thoughtful reply,
Your most humble servant,
Eksu.
 
I doubt 2) works, PLX charges a premium for link upgrade capability.
What's a link upgrade and are those prices documented somewhere? The barefeats results show that the PCIe 3.0 x4 device bandwidth is being transmitted over the PCIe 2.0 x16 slot and NVMe to PCIe adapters are passive.

Amfeltec support tells you what chip it uses, but it likely is a PLX 8732 (The old Squid is a 8632).
The product overview doesn't explicitly say that the PLX 8732 supports 5.0 GT/s and I don't know where you would get a real data sheet from. Are you saying that 8732 has a link upgrade version and a non-link upgrade version?

Would this Aplicata adapter work? http://www.tomshardware.com/reviews/aplicata-m.2-nvme-ssd-adapter,5201.html

There's an x16 passive and an x8 active version. Which version would work with a classic Mac Pro for having separate bootable SSDs?
I think other people in the thread have stated that bifurcation doesn't exist on the classic Mac Pro, so you would need the x8 version. However, if the upstream port of the bridge chip on the x8 only works at 2.5 GT/s (like the Amfeltec), then it would only be capable of up to 2 GB/s, half the speed of the Amfeltec Gen 3 x16 (or the same speed as the Amfeltec Gen 2 with one drive).
 
Just as a note: I have had problems with some PCIe M.2 SSD adapters (HyperX Predator) in slot #2 of some Mac Pro 5,1s, but Amfeltec cards have always worked 100% fine in our Mac Pro builds.
 

You mean the whole "HyperX Predator with the stock/native Kingston PCIe card" combo has problems?
 

Probably, but I wouldn't rule out that my troubles were just down to slot 2 on those Mac Pros themselves.

I had half speed on more than one of them, but other cards worked fine.

It might be something going wrong in the lane negotiation? I'm no expert in this regard, though.
 