
w1z

macrumors 6502a
Aug 20, 2013
692
481
I thought the same, I’ll run it as soon as I’m back home from work and report back. Thanks for the support.

You mentioned that you bought it off eBay as a used item... so unless you've tested it before with 2 or 3 drives (in RAID 0) and got higher speeds than your current ones, it might be a dud.
 

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
You mentioned that you bought it off eBay as a used item... so unless you've tested it before with 2 or 3 drives (in RAID 0) and got higher speeds than your current ones, it might be a dud.

I've tested it with 2 drives in RAID 0 and got significantly faster speeds. I'll report both tests later.
 
  • Like
Reactions: w1z

MarkSt

macrumors newbie
Nov 2, 2018
11
0
Seattle
The new fan, BFB0512MA-C, arrived this morning and I just finished installing it. I also replaced the copper heatsink with a new smaller copper heatsink which I used in my initial mod as the larger heatsink was collecting dust around the edges. I hate dust.

Previous results with the 32.5 dB(A) fan, part no. BFB0512HA-C (3.5 CFM, 5100 RPM):

- PCIe ambient @ 29C and NVMe drive @ 36~38C peaking at 41C under heavy write with the PCIe fan spinning at 800rpm (default SMC control) and room ambient temp at 28C (APC Environment Temp Sensor)

Current results with the 28 dB(A) fan, part no. BFB0512MA-C (2.9 CFM, 4300 RPM):

- PCIe ambient @ 31C and NVMe drive @ 37~39C peaking at 41~42C under heavy write with the PCIe fan spinning at 980rpm (default SMC control) and room ambient temp at 30C (APC Environment Temp Sensor)

The PLX switch's copper heatsink temperature is 39~41.5C (measured by infrared thermometer)

Photos:

View attachment 831250

View attachment 831253 View attachment 831252

The 28 dB(A) fan is so much quieter than the previous 32.5 dB(A) one that I actually can't hear it over the Mac Pro fans' noise.

I am happy with these results and with my once again near-silent Mac Pro. I highly recommend this mod as a good solution to the HighPoint 7101-A's loud stock fan.

w1z I'm sorry to dig such a long way back in the thread - but I had wanted to ask you what app you are using for your temperature monitoring in post #389. I had seen earlier that you were using iStat Pro but I don't think this is that :)

I assume the only way to get the PLX switch temp is by physically measuring it?

Thanks
 

tsialex

Contributor
Jun 13, 2016
13,454
13,601
w1z I'm sorry to dig such a long way back in the thread - but I had wanted to ask you what app you are using for your temperature monitoring in post #389. I had seen earlier that you were using iStat Pro but I don't think this is that :)
Bjango iStat Menus

I assume the only way to get the PLX switch temp is by physically measuring it?

Thanks
Yes, it can only be measured with a probe; there's no known way to access it via software.
 

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
That's very strange. Can you also try benchmarking using AmorphousDiskMark? link: https://apps.apple.com/app/amorphousdiskmark/id1168254295

Attached are my test results
Screen Shot 2020-07-14 at 8.51.26 PM.png
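If you want a crude cross-check outside of any benchmarking app, here is a rough sketch that just times a large sequential write from Python. It is purely illustrative - the target path and test size are placeholders, and it won't match AmorphousDiskMark's methodology or queue depths.

```python
#!/usr/bin/env python3
"""Very rough sequential-write sanity check.

Not a substitute for AmorphousDiskMark -- just a ballpark throughput figure.
TARGET and SIZE_GIB are placeholders; point TARGET at the RAID volume.
"""
import os
import time

TARGET = "/Volumes/NVMeRAID/throughput_test.bin"  # hypothetical mount point
SIZE_GIB = 8
CHUNK = b"\0" * (8 * 1024 * 1024)                 # write in 8 MiB chunks

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_GIB * 1024 // 8):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                          # make sure it hit the drives
elapsed = time.time() - start

print(f"Wrote {SIZE_GIB} GiB in {elapsed:.1f} s -> {SIZE_GIB * 1024 / elapsed:.0f} MiB/s")
os.remove(TARGET)
```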
 

w1z

macrumors 6502a
Aug 20, 2013
692
481
w1z I'm sorry to dig such a long way back in the thread - but I had wanted to ask you what app you are using for your temperature monitoring in post #389. I had seen earlier that you were using iStat Pro but I don't think this is that :)

I assume the only way to get the PLX switch temp is by physically measuring it?

Thanks

I use both iStat Menus and TG Pro - the former for monitoring electricals and the latter for temps and refined fan control.

As for the PLX temp reading, that can only be done with an infrared thermometer or wired temp probes.
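If you'd rather poll the blade temperatures from a script instead of a menu-bar app, here's a minimal sketch using smartmontools - assuming it's installed (e.g. via Homebrew) and that your smartctl build can see the NVMe blades on your macOS version; the disk identifier is a placeholder, check `diskutil list`. The PLX switch itself still has no software-readable sensor, so that part stays infrared/probe only.

```python
#!/usr/bin/env python3
"""Minimal sketch: read an NVMe blade's temperature via smartctl.

Assumes smartmontools is installed (e.g. `brew install smartmontools`)
and that it can talk to the blade on your macOS version. The device
identifier below is hypothetical -- check `diskutil list` for yours.
May need to be run with sudo to access the device.
"""
import json
import subprocess

DEVICE = "/dev/disk2"  # hypothetical NVMe blade identifier

# -j asks smartctl for JSON output; -a dumps all SMART info, including temps.
result = subprocess.run(
    ["smartctl", "-j", "-a", DEVICE],
    capture_output=True, text=True, check=False,
)
data = json.loads(result.stdout)
temp = data.get("temperature", {}).get("current")
print(f"{DEVICE}: {temp if temp is not None else 'n/a'} °C")
```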
 

w1z

macrumors 6502a
Aug 20, 2013
692
481
So I uninstalled the HP drivers and here are the results. There doesn't seem to be any difference.

View attachment 934058

There's a difference for sure but not the 6000MB/s+ you're looking for.

The card is populated with 4 NVMe blades, each blade negotiating a PCIe 2.0 x4 link external to the PLX switch. The switch reports internally negotiated link speeds, i.e. PCIe 3.0 x4.

The cMP's slot 1 or slot 2 has a maximum of 16 lanes (4 lanes per blade), so for the 3-drive RAID 0 setup you should expect to get 1500 MB/s (PCIe 2.0 x4 link) x 3 when a fourth drive negotiates another PCIe 2.0 x4 link - that's a total of 4,500 MB/s. If ALL drives were set up in RAID 0, you could easily hit 6,000 MB/s (the theoretical max is 8,000 MB/s).

I mentioned the speed drop with 4 blades in previous posts. If you want to achieve maximum speed, set up and use only a 3-drive RAID 0, as the PLX switch will utilize the remaining link bandwidth to push your RAID speeds to 6,000 MB/s.

It's a limitation of the cMP and not the card. If you install the card in a PCIe 3.0 x16 slot with 4 blades in RAID 0 then, depending on the blades, you would easily see 12,000 MB/s+ speeds.

Hope this helps.
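For anyone following the math, here's a back-of-the-envelope sketch of the estimate above - purely illustrative, assuming roughly 1500 MB/s of usable bandwidth per blade's external PCIe 2.0 x4 link and a cMP x16 slot ceiling of about 6000 MB/s usable (8000 MB/s theoretical).

```python
# Back-of-the-envelope RAID 0 estimate for the cMP x16 slot.
# Figures are the rough ones quoted above, not measured values.

PER_BLADE_MBS = 1500        # usable per external PCIe 2.0 x4 link (estimate)
SLOT_USABLE_MBS = 6000      # usable PCIe 2.0 x16 ceiling after overhead
SLOT_RAW_MBS = 8000         # theoretical PCIe 2.0 x16 bandwidth

for blades in (1, 2, 3, 4):
    link_sum = PER_BLADE_MBS * blades
    expected = min(link_sum, SLOT_USABLE_MBS)
    print(f"{blades} blade(s) in RAID 0: ~{expected} MB/s "
          f"(links {link_sum} MB/s, slot cap {SLOT_USABLE_MBS}/{SLOT_RAW_MBS} MB/s)")
```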
 
  • Like
Reactions: MP39

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
There's a difference for sure but not the 6000MB/s+ you're looking for.

The card is populated with 4 NVMe blades, each blade negotiating a PCIe 2.0 x4 link external to the PLX switch. The switch reports internally negotiated link speeds, i.e. PCIe 3.0 x4.

The cMP's slot 1 or slot 2 has a maximum of 16 lanes (4 lanes per blade), so for the 3-drive RAID 0 setup you should expect to get 1500 MB/s (PCIe 2.0 x4 link) x 3 when a fourth drive negotiates another PCIe 2.0 x4 link - that's a total of 4,500 MB/s. If ALL drives were set up in RAID 0, you could easily hit 6,000 MB/s (the theoretical max is 8,000 MB/s).

I mentioned the speed drop with 4 blades in previous posts. If you want to achieve maximum speed, set up and use only a 3-drive RAID 0, as the PLX switch will utilize the remaining link bandwidth to push your RAID speeds near to 6,000 MB/s.

It's a limitation of the cMP and not the card. If you install the card in a PCIe 3.0 x16 slot with 4 blades in RAID 0 then, depending on the blades, you would easily see 12,000 MB/s+ speeds.

Hope this helps.

This definitely helps and thank you for being so thorough and clear with your explanation.

Moving forward, should I not use the drivers from HighPoint? I noticed that once I uninstalled them, I can no longer use their RAID Management in Safari. Is this expected? Should I be concerned?

a5167d8967d6b95bcb716c7b7675ec3f.jpg
 

w1z

macrumors 6502a
Aug 20, 2013
692
481
This definitely helps and thank you for being so thorough and clear with your explanation.

Moving forward, should I not use the drivers from HighPoint? I noticed that once I uninstalled them, I can no longer use their RAID Management in Safari. Is this expected? Should I be concerned?

a5167d8967d6b95bcb716c7b7675ec3f.jpg

Yes, it's expected: you uninstalled the drivers, and the WebGUI depends on them to work properly.

Uninstall the HPT WebGUI too; you don't need it since you're using Apple RAID to set up the RAID volume.

I have stayed away from the HPT drivers - Apple's are better.
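If you'd rather do it from Terminal than Disk Utility, the same Apple software RAID 0 set can be created with `diskutil appleRAID create`. A minimal sketch is below - the disk identifiers and set name are placeholders (run `diskutil list` first), and creating the set ERASES those blades.

```python
#!/usr/bin/env python3
"""Minimal sketch: build the Apple software RAID 0 (stripe) from Terminal.

The identifiers below are placeholders -- run `diskutil list` and put in
your own blade identifiers. Creating the set ERASES the listed devices.
"""
import subprocess

BLADES = ["disk2", "disk3", "disk4"]  # hypothetical NVMe blade identifiers
SET_NAME = "NVMeRAID"
FILESYSTEM = "JHFS+"                  # filesystem to put on the new set

# diskutil appleRAID create stripe <name> <filesystem> <devices...>
# is the same operation Disk Utility performs when building a RAID 0 set.
subprocess.run(
    ["diskutil", "appleRAID", "create", "stripe", SET_NAME, FILESYSTEM, *BLADES],
    check=True,
)
```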
 

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
Yes, it's expected: you uninstalled the drivers, and the WebGUI depends on them to work properly.

Uninstall the HPT WebGUI too; you don't need it since you're using Apple RAID to set up the RAID volume.

I have stayed away from the HPT drivers, as Apple's are better.

Ok awesome. Thanks so much man. Really appreciate the help and speedy responses.
 

w1z

macrumors 6502a
Aug 20, 2013
692
481
Ok awesome. Thanks so much man. Really appreciate the help and speedy responses.

You're welcome.

Other setup options include:

- 4 blades set up in RAID 0 on the HighPoint, with macOS installed on an SSD or a 5th NVMe blade in slot 3 or 4 (if you want the extra storage in the array)

- 3 blades set up in RAID 0 on the HighPoint, with macOS installed on the 4th NVMe blade connected to a regular NVMe PCIe card in slot 3 or 4 of the cMP (if you still want to use the 4th blade, for macOS)
 

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
You're welcome.

Other setup options include:

- 4 blades set up in RAID 0 on the HighPoint, with macOS installed on an SSD or a 5th NVMe blade in slot 3 or 4 (if you want the extra storage in the array)

- 3 blades set up in RAID 0 on the HighPoint, with macOS installed on the 4th NVMe blade connected to a regular NVMe PCIe card in slot 3 or 4 of the cMP (if you still want to use the 4th blade, for macOS)

I'll definitely consider these options, as I'll be using this machine for video editing.

It'll probably be the former option. I have an RX 580 that I put in slot 2 because of its size, so I've lost slot 3 already. I can't seem to get the HighPoint into slot 2 like I wanted with the 580 in slot 1 - the little rivets on the bottom of the card are way too close to the GPU.
 

zachek

macrumors member
Jun 11, 2020
42
7
Los Angeles
Received the SSD7101A-1 yesterday (along with 3 new 970 Evo Plus 1tb blades to go along with the one I already have.) It appears my card is a v3.10; I haven't seen that posted here before. The metal cover came with two thick thermal pads, one for the area where the blades install and one for over the PLX chip. Not sure if the previous versions included those.

I immediately ran into the problem @dorianclive mentions in the post above, (previously unforeseen for me) when I realized my Sapphire RX 580 Pulse (in slot 1) was a bit too thick and the protrusions from the bottom of the HP card (where the set screws lock down the blades on the other side) in slot 2 just above were contacting the fans on the RX 580. Swapping the 580 to slot 2 would mean I would have to bump my Inateck USB 3.1 AIC which was a non-starter for me. I ended up removing the two bottom screws on the 580 backplate and adding a plastic washer (~2mm thickness) I had leftover from a recent new TV installation mounted with one layer of 1.5mm thermal pad on the bottom left (if you're looking at the back of the 7101A-1 card with the pins pointing down) protrusion. This washer fit snugly over the round protrusion so it stayed in place as I was inserting the HP into slot 2.

This was enough to prevent the fan blades on the RX 580 from contacting the HP card on its own, but I also added two of the aforementioned washers with a thermal pad sandwich to the outside edge of the cards (see photo) just to be safe.

This will likely be a temporary fix until I find a GPU that isn't so thick. Saw someone mention that the Radeon 5700 XT FE (I'm assuming 50th anniversary edition?) was thinner than the RX 580. I don't do A LOT of GPU heavy tasks at this time, so I'm not terribly worried about overheating, but the problem is on my radar and if I have to get a different GPU I'd rather it be an upgrade over the 580.

Ultimately, I'm probably going to ditch the Sonnet Allegro Tempo I have in slot 4 for the Titan Ridge card; at that point I would be able to move a GPU to slot 2, but the idea of wasting a slot makes my head hurt...

Anyhow, back to the HighPoint. My plan is to run 1 blade as my boot drive and the other 3 in a RAID 0. If I'm reading what @w1z is saying correctly, since I have a 5,1 the highest speed I can expect with the 3 Evo Plus blades in RAID 0 is ~4500 MB/s, and in that case, since I'm speed-limited by the architecture of the 5,1, I would be better off NOT installing the HighPoint drivers and NOT using the WebGUI to set up the RAID (just using Apple RAID via Disk Utility instead) - is that correct?
 

Attachments

  • 1.jpg
  • 3.jpg
  • 6.jpg
  • 7.jpg
  • 8.jpg
  • 9.jpg

tsialex

Contributor
Jun 13, 2016
13,454
13,601
The card is populated with 4 NVMe blades with each blade negotiating a PCIe 2.0 x4 link external to the PLX switch .... The switch reports internally negotiated link speeds ie. PCIe 3.0 x4.

The cMP slot1 or slot2's max number of lanes is 16 (16 lanes with 4 lanes per blade) so for the raid-0 3-drive setup you should expect to get 1500MB/s (PCIe 2.0 x4 link) x 3 when a fourth drive negotiates another PCIe 2.0 x4 link - that's a total of 4,500MB/s. If ALL drives were setup in raid-0 then you can easily hit 6,000 MB/s (theoretical max speed is 8000MB/s)
What you wrote applies to cards that rely on Intel bifurcation support in a newer PC that supports it. With PCIe-switched cards the arrangement works totally differently; if it worked the way you wrote, you would be limited to 1500 MB/s with one blade.

The PCIe switch used on the HPT SSD7101A-1 has an x16 connection to the PCIe slot, and internally it has four x4 PCIe v3.0 connections to the blades - the 8 GT/s that System Report shows. Each blade connects to the PCIe switch at PCIe v3.0 x4, then the PCIe switch aggregates everything internally (the magic of PCIe switches) and connects to the PCIe slot at an x16 PCIe v2.0 connection. That's why an SSD7101A-1 can reach around 6000 MB/s in an MP5,1 x16 slot - the maximum usable throughput of a PCIe v2.0 x16 slot after all the overhead.

With just two 970 PROs in a RAID 0 arrangement you are already limited by the total throughput of the PCIe v2.0 slot (again, 5900~6000 MB/s). The poor throughput of the 3-blade array is a mystery for now; it should be around 5900 to 6000 MB/s.
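As a rough sanity check of those slot-level figures: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding and PCIe 3.0 at 8 GT/s with 128b/130b, so an x16 PCIe 2.0 link tops out around 8000 MB/s before protocol overhead - which is why ~6000 MB/s is about the practical ceiling here. A small illustrative calculation:

```python
# Per-lane PCIe line rates after encoding overhead (protocol overhead such
# as TLP headers and flow control reduces the usable figure further).

def lane_mbs(gt_per_s: float, payload_bits: int, total_bits: int) -> float:
    """Line rate per lane in MB/s after encoding overhead."""
    return gt_per_s * 1e9 * payload_bits / total_bits / 8 / 1e6

pcie2_lane = lane_mbs(5.0, 8, 10)      # PCIe 2.0, 8b/10b    -> ~500 MB/s
pcie3_lane = lane_mbs(8.0, 128, 130)   # PCIe 3.0, 128b/130b -> ~985 MB/s

print(f"PCIe 2.0 x16 (cMP slot 1/2): ~{pcie2_lane * 16:.0f} MB/s before protocol overhead")
print(f"PCIe 3.0 x4  (per blade)   : ~{pcie3_lane * 4:.0f} MB/s before protocol overhead")
```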
 
  • Like
Reactions: MP39

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
What you wrote applies to cards that rely on Intel bifurcation support in a newer PC that supports it. With PCIe-switched cards the arrangement works totally differently; if it worked the way you wrote, you would be limited to 1500 MB/s with one blade.

The PCIe switch used on the HPT SSD7101A-1 has an x16 connection to the PCIe slot, and internally it has four x4 PCIe v3.0 connections to the blades - the 8 GT/s that System Report shows. Each blade connects to the PCIe switch at PCIe v3.0 x4, then the PCIe switch aggregates everything internally (the magic of PCIe switches) and connects to the PCIe slot at an x16 PCIe v2.0 connection. That's why an SSD7101A-1 can reach around 6000 MB/s in an MP5,1 x16 slot - the maximum usable throughput of a PCIe v2.0 x16 slot after all the overhead.

With just two 970 PROs in a RAID 0 arrangement you are already limited by the total throughput of the PCIe v2.0 slot (again, 5900~6000 MB/s). The poor throughput of the 3-blade array is a mystery for now; it should be around 5900 to 6000 MB/s.

I’m currently moving over the OS from one of the blades to an SSD I have. I’ll RAID-0 all four and see what happens.
 

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
Received the SSD7101A-1 yesterday (along with 3 new 970 Evo Plus 1tb blades to go along with the one I already have.) It appears my card is a v3.10; I haven't seen that posted here before. The metal cover came with two thick thermal pads, one for the area where the blades install and one for over the PLX chip. Not sure if the previous versions included those.

I immediately ran into the problem @dorianclive mentions in the post above, (previously unforeseen for me) when I realized my Sapphire RX 580 Pulse (in slot 1) was a bit too thick and the protrusions from the bottom of the HP card (where the set screws lock down the blades on the other side) in slot 2 just above were contacting the fans on the RX 580. Swapping the 580 to slot 2 would mean I would have to bump my Inateck USB 3.1 AIC which was a non-starter for me. I ended up removing the two bottom screws on the 580 backplate and adding a plastic washer (~2mm thickness) I had leftover from a recent new TV installation mounted with one layer of 1.5mm thermal pad on the bottom left (if you're looking at the back of the 7101A-1 card with the pins pointing down) protrusion. This washer fit snugly over the round protrusion so it stayed in place as I was inserting the HP into slot 2.

This was enough to prevent the fan blades on the RX 580 from contacting the HP card on its own, but I also added two of the aforementioned washers with a thermal pad sandwich to the outside edge of the cards (see photo) just to be safe.

This will likely be a temporary fix until I find a GPU that isn't so thick. Saw someone mention that the Radeon 5700 XT FE (I'm assuming 50th anniversary edition?) was thinner than the RX 580. I don't do A LOT of GPU heavy tasks at this time, so I'm not terribly worried about overheating, but the problem is on my radar and if I have to get a different GPU I'd rather it be an upgrade over the 580.

Ultimately, I'm probably going to ditch the Sonnet Allegro I have in slot 4 for the Titan Ridge card; at that point I would be able to move a GPU to slot 2, but the idea of wasting a slot makes my head hurt...

Anyhow, back to the HighPoint. My plan is to run 1 blade as my boot drive and the other 3 in a RAID 0. If I'm reading what @w1z is saying correctly, since I have a 5,1 the highest speed I can expect with the 3 Evo Plus blades in RAID 0 is ~4500 MB/s, and in that case, since I'm speed-limited by the architecture of the 5,1, I would be better off NOT installing the HighPoint drivers and NOT using the WebGUI to set up the RAID (just using Apple RAID via Disk Utility instead) - is that correct?

Interesting. I've never seen that version number. I'd be curious to see what changes there are, if any.

I really want to get my RX 580 back into slot 1 so I can regain Slot 2. How’s your airflow?
 

dorianclive

macrumors member
Nov 27, 2018
83
9
Bronx, NY
What you wrote applies to cards that rely on Intel bifurcation support in a newer PC that supports it. With PCIe-switched cards the arrangement works totally differently; if it worked the way you wrote, you would be limited to 1500 MB/s with one blade.

The PCIe switch used on the HPT SSD7101A-1 has an x16 connection to the PCIe slot, and internally it has four x4 PCIe v3.0 connections to the blades - the 8 GT/s that System Report shows. Each blade connects to the PCIe switch at PCIe v3.0 x4, then the PCIe switch aggregates everything internally (the magic of PCIe switches) and connects to the PCIe slot at an x16 PCIe v2.0 connection. That's why an SSD7101A-1 can reach around 6000 MB/s in an MP5,1 x16 slot - the maximum usable throughput of a PCIe v2.0 x16 slot after all the overhead.

With just two 970 PROs in a RAID 0 arrangement you are already limited by the total throughput of the PCIe v2.0 slot (again, 5900~6000 MB/s). The poor throughput of the 3-blade array is a mystery for now; it should be around 5900 to 6000 MB/s.

So I migrated my OS to an 860 Pro and set up the four 970 Pros in RAID 0 on my 7101, but the results are still poor. I don't know what else to do. Any additional thoughts/insight as to why this is happening?
Screen Shot 2020-07-15 at 6.49.07 PM.png
 

w1z

macrumors 6502a
Aug 20, 2013
692
481
What you wrote applies to cards that rely on Intel bifurcation support in a newer PC that supports it. With PCIe-switched cards the arrangement works totally differently; if it worked the way you wrote, you would be limited to 1500 MB/s with one blade.

The PCIe switch used on the HPT SSD7101A-1 has an x16 connection to the PCIe slot, and internally it has four x4 PCIe v3.0 connections to the blades - the 8 GT/s that System Report shows. Each blade connects to the PCIe switch at PCIe v3.0 x4, then the PCIe switch aggregates everything internally (the magic of PCIe switches) and connects to the PCIe slot at an x16 PCIe v2.0 connection. That's why an SSD7101A-1 can reach around 6000 MB/s in an MP5,1 x16 slot - the maximum usable throughput of a PCIe v2.0 x16 slot after all the overhead.

With just two 970 PROs in a RAID 0 arrangement you are already limited by the total throughput of the PCIe v2.0 slot (again, 5900~6000 MB/s). The poor throughput of the 3-blade array is a mystery for now; it should be around 5900 to 6000 MB/s.

Nothing wrong with what I wrote, and at no point did I reference bifurcation or hint that this was the case with the HPT card. If 2 blades are used in RAID 0 on the HPT card in a cMP, they are externally connected at PCIe 2.0 x8 x 2 speeds and internally at PCIe 3.0 x4 x 2 speeds (switching). With 3 blades in a RAID 0 setup, the PLX does more switching work to saturate a PCIe 2.0 x16 slot than it does with a 2-blade RAID 0 setup. With 4 blades, the PLX does even more work, and it saturates a PCIe 2.0 x16 slot only if ALL 4 blades are used in the RAID 0.

Try copying files between blades when all 4 slots on the HighPoint are occupied as single/standalone drives and watch the performance drop.

This is why a 2 to 3 blade setup is recommended for the cMP.
 

tsialex

Contributor
Jun 13, 2016
13,454
13,601
Nothing wrong with what I wrote, and at no point did I reference bifurcation or hint that this was the case with the HPT card. If 2 blades are used in RAID 0 on the HPT card in a cMP, they are externally connected at PCIe 2.0 x8 x 2 speeds and internally at PCIe 3.0 x4 x 2 speeds (switching). With 3 blades in a RAID 0 setup, the PLX does more switching work to saturate a PCIe 2.0 x16 slot than it does with a 2-blade RAID 0 setup. With 4 blades, the PLX does even more work, and it saturates a PCIe 2.0 x16 slot only if ALL 4 blades are used in the RAID 0.

Try copying files between blades when all 4 slots on the HighPoint are occupied as single/standalone drives and watch the performance drop.

This is why a 2 to 3 blade setup is recommended for the cMP.
The output connection of the switch is always x16, independent of the number of blades - there is no splitting of the lanes at all. To be even more precise, the output connection of the switch will only use fewer lanes when you install the card in a slot with fewer lanes available, like slots 3 or 4 of the MP5,1 or the x8 slots of the MP7,1. Look at the PLX diagrams and you will see what I'm talking about. There are no 1500 MB/s connections at all; two 970 PROs can saturate the x16 PCIe v2.0 slot - see earlier benchmarks in the thread.

Sorry about the edits - the iOS 14 keyboard keeps disappearing and it drives me crazy.
 