If I get the V2 or V3.1 HP7101A - should I just leave any modding or only change the fan?
You can change the fan speed with the HPT management software first. If the fan's noise pitch still drives you crazy, swap in one of the two other compatible low-RPM fans.
 
You can change the fan speed with the HPT management software first. If the fan's noise pitch still drives you crazy, swap in one of the two other compatible low-RPM fans.

So the controller came much faster than I expected. I rushed out and got the Samsung blades. Production date 13 June 2020, and they had the latest firmware, confirmed by a mate with a Windows computer running the Samsung Magician software.

I believe I have followed all the instructions correctly for disabling T2, temporarily disabling SIP, and using the HighPoint RAID management to set up a 3×1TB EVO Plus RAID 0, leaving 1TB to be used for Windows under Parallels.

I think I'm getting the expected speeds for the configurations. (see below)

Fan noise... thankfully it's adjustable, and I'll play around with it once fully set up. The HP software lets you set it to off, low, medium, or high. On "low" it isn't noticeable, but I'm not sure what temps I'll be getting.

I have formatted all volumes using Apple's Disk Utility to Mac OS Extended (Journaled). (Is that correct for Parallels on the 1TB drive and the scratch volume of the RAID 0?)

One thing I'm a bit confused about is that both the HP software and DU say that the RAIDed drives are 3TB available - shouldn't it say 1.5TB since it's RAID 0?
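[Editorial aside, not from the original post: the 3TB figure is actually correct. RAID 0 stripes data across all members, so capacities add; it's RAID 1 (mirroring) that costs usable space. A quick sketch of the arithmetic:]

```python
def raid_capacity(level, drive_sizes_tb):
    """Usable capacity for simple RAID levels (sizes in TB).

    Each member contributes only as much as the smallest drive,
    which is how most RAID implementations behave.
    """
    smallest = min(drive_sizes_tb)
    if level == 0:  # striping: capacities add, no redundancy
        return len(drive_sizes_tb) * smallest
    if level == 1:  # mirroring: one drive's worth of usable space
        return smallest
    raise ValueError("unsupported RAID level")

print(raid_capacity(0, [1, 1, 1]))  # 3 -- what the HP software and DU report
print(raid_capacity(1, [1, 1, 1]))  # 1
```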

3 x 1TB RAID 0
HighPoint Technologies, Inc. HPT NVME Device.png

1 x 1TB
HighPoint Technologies, Inc. HPT NVME Device.png

1 x 250GB (Mac OS)
APPLE SSD AP0256M.png
 
These are better than my numbers. Maybe someone more knowledgeable can answer: does it matter that @ncc1701d's tests used 512 MiB and mine used 1 GiB?


My 3× 1TB Samsung EVO Plus in slots 2-4 on the HighPoint 7101A-1 in RAID 0 (HPT drivers + WebGUI RAID), HFS+ (Mac OS Journaled), installed in PCIe slot #2

Samsung 970 EVO Plus (in HPT 7101A) 3x 1TB (3TB) RAID 0 .png



My 1TB Samsung Evo Plus boot drive in slot 1 on the HPT card with APFS

Samsung 970 EVO Plus 1TB APFS (in HPT SSD7101A-1) Boot.png
 
I think I'm getting the expected speeds for the configurations.
Compare with "AJA System Test Lite.app" and "Blackmagic Disk Speed Test.app"? My Mac mini 2018 internal 128 GB drive has these sequential Read/Write results:
Amorphous Disk Mark: 1033/1639
AJA System Test Lite: 2030/911
Blackmagic Disk Speed Test: 2448/865
The numbers are all over the place...

Anyway, for your results, I would expect triple RAID to at least be better than twice the performance of a single drive. You are using a MacPro7,1 and not a classic Mac Pro so you should be able to get over 8 GB/s with some benchmark.
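[Editorial aside: the tool-to-tool spread above is typical. A minimal sequential-throughput sketch, illustrative only; real benchmarks differ in caching behavior, queue depth, and test size, which is largely why AJA, Blackmagic, and DiskMark disagree:]

```python
import os
import time

def seq_write_read(path, size_mib, chunk_mib=8):
    """Rough sequential write/read throughput in MB/s.

    Illustrative only: reads may be served from the page cache,
    so expect optimistic read numbers on small test sizes.
    """
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    n = max(1, size_mib // chunk_mib)

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the cache
    write_mbs = n * chunk_mib / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_mib * 1024 * 1024):
            pass
    read_mbs = n * chunk_mib / (time.perf_counter() - start)

    os.remove(path)
    return write_mbs, read_mbs
```

Running it at 512 MiB versus 1 GiB mostly changes how much caching skews the result; larger runs are usually more honest about sustained speed.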
 
These are better than my numbers. Maybe someone more knowledgeable can answer: does it matter that @ncc1701d's tests used 512 MiB and mine used 1 GiB?



My 3× 1TB Samsung EVO Plus in slots 2-4 on the HighPoint 7101A-1 in RAID 0 (HPT drivers + WebGUI RAID), HFS+ (Mac OS Journaled), installed in PCIe slot #2

View attachment 942528



My 1TB Samsung Evo Plus boot drive in slot 1 on the HPT card with APFS

View attachment 942529
It's not meaningful to compare MP7,1 results with a MP5,1. PCIe v3.0 vs PCIe v2.0.
 
Thoughts on this just released versus 7101A?

 
Thoughts on this just released versus 7101A?

Very nice. Can connect eight NVMe devices. It's like the LQD4500 but not as compact (uses U.2 instead of M.2) and uses PCIe 3.0 instead of PCIe 4.0.

Actually, HighPoint has other cards that support eight devices and/or PCIe Gen 4:
 
Hi everyone, I'm trying to dig through this thread to estimate what my read/write speeds should be in a 2019 7,1 Mac Pro. I just picked up this controller and 4×1TB EVO SSDs. Any guesses?

Also, would it be better to run Windows on 1TB by itself and then run extra Mac storage on 3×1TB? Or would it be better to RAID 0 all 4 and set up Windows on a small partition? I think I only need 256GB-500GB max. Wondering which setup is ideal?
 
Last edited:
Hi everyone, I'm trying to dig through this thread to estimate what my read/write speeds should be in a 2019 7,1 Mac Pro. I just picked up this controller and 4×1TB EVO SSDs. Any guesses?

Also, would it be better to run Windows on 1TB by itself and then run extra Mac storage on 3×1TB? Or would it be better to RAID 0 all 4 and set up Windows on a small partition? I think I only need 256GB-500GB max. Wondering which setup is ideal?

Run Windows by itself and the other 3 in RAID 0 (if you have another means of backup).

Can you boot off the single drive in the PCIe card?
 
Thoughts on this just released versus 7101A?


If you buy the PCIe 4 model, will it drop to a lower speed in the new Mac Pro, or just not run at all?
 
Run Windows by itself and the other 3 in RAID 0 (if you have another means of backup).
Right. Not easy to boot Windows or macOS in RAID unless it's a hardware RAID instead of a software RAID.

Can you boot off the single drive in the PCIe card?
Should not be a problem. I think you need to enable external booting in Startup Security Utility.


If you buy the PCIe 4 model, will it drop to a lower speed in the new Mac Pro, or just not run at all?
It should not drop speed on the card side. The downstream M.2 devices should run at PCIe 4.0 x4; the upstream link is limited to the link rate and link width of the PCIe slot.
For a MacPro7,1 with PCIe 3.0 x16 slots, you can probably fill the upstream bandwidth with RAID 0 of two (maybe three) gen 4 M.2 devices.
For a MacPro3,1, MacPro4,1, MacPro5,1 with PCIe 2.0 x16 slots, you can probably fill the upstream bandwidth with RAID 0 of one (or two) gen 4 M.2 devices.
For a MacPro2,1 with PCIe 1.0 x16 slots, a single gen 4 M.2 device will fill the upstream bandwidth.
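The lane math behind those estimates, as a rough sketch (the per-lane figures are approximate usable rates after encoding overhead, and real drives sustain less than the link maximum):

```python
import math

# Approximate usable bandwidth per PCIe lane in GB/s, after encoding
# overhead (8b/10b for gen 1-2, 128b/130b for gen 3-4). Rounded figures.
LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def slot_bandwidth(gen, lanes=16):
    """Upstream bandwidth of a slot at a given PCIe generation."""
    return LANE_GBPS[gen] * lanes

def drives_to_saturate(slot_gen, drive_gen=4, drive_lanes=4):
    """How many gen 4 x4 M.2 devices in RAID 0 it takes to fill the slot."""
    per_drive = LANE_GBPS[drive_gen] * drive_lanes
    return math.ceil(slot_bandwidth(slot_gen) / per_drive)

print(round(slot_bandwidth(3), 1))  # 15.8 -- MacPro7,1 gen 3 x16 slot
print(drives_to_saturate(3))        # 3 (two get close)
print(drives_to_saturate(2))        # 2
print(drives_to_saturate(1))        # 1
```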

In any case, there should be no problem connecting 8 M.2 devices if the card supports 8 devices.

I don't think anyone has reported testing of gen 4 in any Mac Pro yet.
 
Run Windows by itself and the other 3 in RAID 0 (if you have another means of backup).

Can you boot off the single drive in the PCIe card?

I was curious whether RAID 0 across 3 drives (with 1TB running Windows) would be slower than RAID 0 across 4 drives, too.

Ideally I'd like to use a cheaper single 512GB for Windows, since speed doesn't matter to me there.
 
I thought you couldn't use Boot Camp on an external drive(?). I'm looking at installing VMware or Parallels with all the files on the 1TB drive and keeping the 3×1TB drives as scratch.
 
I thought you couldn't use Boot Camp on an external drive(?).
Partition the external drive manually if you can't use Boot Camp Assistant to do it. SATA and NVMe should be a normal Windows install for Macs like the MacPro7,1 that can boot Windows using UEFI. There are special instructions for installing Windows to USB. Once Windows is installed, then you can install Boot Camp drivers to support Apple devices and displays.
 
Partition the external drive manually if you can't use Boot Camp Assistant to do it. SATA and NVMe should be a normal Windows install for Macs like the MacPro7,1 that can boot Windows using UEFI. There are special instructions for installing Windows to USB. Once Windows is installed, then you can install Boot Camp drivers to support Apple devices and displays.
I'm using one of the 1TB blades for Windows. Can I get someone with a PC to format it for Windows, install a copy of Windows 10 on it, then put it back into the HP7101A and use the Boot Camp drivers? (It was looking too hard, which is why I thought I'd just go with the virtualization apps.)
 
Just got my HighPoint 7101A-1 NVMe installed. I was really confused about BLOCK SIZE and SECTOR SIZE during the setup, so I set both to 512K. I have no idea what these settings are supposed to be.

Speeds are good! 6-7k reads and writes on plain EVOs.

Also, I'm going to keep watching, but it seems like the fans have been on high since creating this RAID, which is concerning.

Edit: I've read a lot here about fans. I have a 3 pin card with v3.1 on it. For some reason I'm not seeing any firmware version in the software, and I don't see anywhere to control the fans. Can anyone give me advice on how to keep this beast quiet?

Edit 2: Ahh. I DO have the fan control. I turned it to low and it's quiet. I'm hoping if it gets hot it will crank up. But I'm super happy. This is a good piece of hardware.
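[Editorial aside on the BLOCK SIZE / SECTOR SIZE question, as a rough mental model rather than HighPoint's documentation: the block size is usually the per-drive stripe chunk, while the sector size is the logical sector the array reports to the OS (typically 512 B or 4 KiB). A sketch of how block size maps offsets to members:]

```python
def stripe_member(offset_bytes, n_drives, block_kib=512):
    """Which RAID 0 member a byte offset lands on, given the array's
    block (stripe) size. A mental model, not HighPoint's actual layout."""
    stripe_bytes = block_kib * 1024
    return (offset_bytes // stripe_bytes) % n_drives

# With a 512 KiB block size across 4 drives, large sequential I/O
# rotates through every member once per 2 MiB:
print([stripe_member(i * 512 * 1024, 4) for i in range(8)])
# [0, 1, 2, 3, 0, 1, 2, 3]
```

Larger blocks favor big sequential files; smaller blocks spread small I/O across more drives.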
 
Last edited:
Just got my HighPoint 7101A-1 NVMe installed. I was really confused about BLOCK SIZE and SECTOR SIZE during the setup, so I set both to 512K. I have no idea what these settings are supposed to be.

Speeds are good! 6-7k reads and writes on plain EVOs.

Also, I'm going to keep watching, but it seems like the fans have been on high since creating this RAID, which is concerning.

Edit: I've read a lot here about fans. I have a 3 pin card with v3.1 on it. For some reason I'm not seeing any firmware version in the software, and I don't see anywhere to control the fans. Can anyone give me advice on how to keep this beast quiet?

Edit 2: Ahh. I DO have the fan control. I turned it to low and it's quiet. I'm hoping if it gets hot it will crank up. But I'm super happy. This is a good piece of hardware.
Thanks for sharing your recent experience with your 7101A-1. I have the same question regarding the BLOCK SIZE and SECTOR SIZE settings in the HighPoint GUI. Mine are currently at a 64K block size and a 4K sector size. See the attached photos of these settings and my speed results -- I am very curious whether my deviations from what you all posted earlier are due to these settings (and the non-official HP driver I'm currently using, which does work).

I posted previously in this forum, but that was about the HighPoint SSD7103. At the time, @tsialex was helping me troubleshoot. I have 4× Samsung 1TB 970s and was only able to format three of them using the HP GUI RAID utility. After contacting HighPoint, I found they did not technically have a macOS driver for it -- only Linux and Windows. So I left it as an HP RAID 0 of three drives, using the driver and GUI utility for a similar card that did support macOS, and used the other 1TB in APFS as a Catalina boot disk, with the HP RAID for the data partition. This has been working all right. I will also post my current Amorphous DiskMark results for the 3-drive RAID using the current HP drivers, and for the APFS boot drive with Catalina.

I plan to reformat all 4× 970s, download the new HP driver and GUI utility specifically released for my SSD7103 on macOS, and then use SuperDuper to restore back to RAID 0. Curious whether I will get the fan speed control setting now.

Also wondering whether I should do a 4-drive RAID 0 this way, or keep the same setup. I was reading earlier posts about saturating the performance of the x16 slot 2 of my 5,1. Thoughts, anyone?

Also curious to try out Big Sur and see any results there -- even though still Beta.

So yes, I would like to hear from anyone about the peak performance I should be getting (assuming the new macOS SSD7103 HP driver further optimizes my performance) in my cMP 5,1 2× 3.46 6-core.
 

Attachments

  • 3x1TB970s_RAID0_HPdriver.png
  • 1x1TB970_APFS_Nodriver_Apple.png
  • Screen Shot 2020-08-29 at 3.03.53 PM.png
  • Screen Shot 2020-08-29 at 3.04.30 PM.png
  • Screen Shot 2020-08-29 at 3.04.36 PM.png
  • Screen Shot 2020-08-29 at 3.04.57 PM.png
  • Screen Shot 2020-08-29 at 3.05.02 PM.png
  • Screen Shot 2020-08-29 at 3.05.09 PM.png
  • Screen Shot 2020-08-29 at 3.05.46 PM.png
Partition the external drive manually if you can't use Boot Camp Assistant to do it. SATA and NVMe should be a normal Windows install for Macs like the MacPro7,1 that can boot Windows using UEFI. There are special instructions for installing Windows to USB. Once Windows is installed, then you can install Boot Camp drivers to support Apple devices and displays.
I can't figure this out at all. The process to install to an external USB doesn't work, because the 7101A isn't a USB device and isn't detected as one by VMware during the install process. I've asked HighPoint, and they said they will refer it to their engineers. I'm going to try to install a dummy macOS on the 1×1TB blade and then use its Boot Camp to install Windows. Failing that, VMware is coming out with version 12, which is supposed to be free in October. I'll just use that.
 
I can't figure this out at all. The process to install to an external USB doesn't work, because the 7101A isn't a USB device and isn't detected as one by VMware during the install process. I've asked HighPoint, and they said they will refer it to their engineers. I'm going to try to install a dummy macOS on the 1×1TB blade and then use its Boot Camp to install Windows. Failing that, VMware is coming out with version 12, which is supposed to be free in October. I'll just use that.


Are you trying to install Windows on the drive directly and have VMware run from that drive while in macOS? For that you would need to enable passthrough for the whole card, so I don't think what you are trying to do will work. Your best bet would be to format the blade you want to use for your Windows VM and then put your virtual machine's files on it. If you are trying to dual-boot, then you will need to head into Recovery Mode and disable Secure Boot to be able to install Windows to the blade on the card properly.

I can help with either, just need a little clarification on your goal and use case.
 
Hi y'all. Returning to my venerable cheese grater after a year or so of focusing on other projects--cool to see all the progress being made with OpenCore and Titan Ridge. Looking forward to diving into some updates!

I managed to snag a 7101A from Amazon Warehouse for super cheap--and seems it was never actually used. It's the updated 2.0 PCB with de-lidded PLX chip and 3-wire fan. Haven't installed the HP drivers to check the firmware revision yet, but did load in two Samsung 970 Evo blades and tested that the card works properly. Not really planning on any RAID (though glad to have the option). Just nice to have up to four blades for JBOD and only needing one PCI slot.

I skimmed through the thread but am wondering if anyone was ultimately successful in putting this in slot 2 with the monster Sapphire RX580 in slot 1. I know I can/should reverse them but would really rather not lose the use of slot 3. And I'm aware that putting the 7101A in slot 2 may present some airflow concerns with the GPU. But I would at least like to try for a while and will just keep an eye on the temps--that is if I can manage to get the fans to clear the underside of the 7101A.

In my first attempt I was not able to get enough separation between the cards. I wonder if this will be harder with the Rev 2 card because it has all the additional standoffs for 42/60mm m.2 blades--guess I could try to remove them.

But before reinventing the wheel I just thought I would ask if anyone is successfully running with this configuration and has any tips--removing screws/brackets, adding spacers, etc.

Thanks!
 
Hi y'all. Returning to my venerable cheese grater after a year or so of focusing on other projects--cool to see all the progress being made with OpenCore and Titan Ridge. Looking forward to diving into some updates!

I managed to snag a 7101A from Amazon Warehouse for super cheap--and seems it was never actually used. It's the updated 2.0 PCB with de-lidded PLX chip and 3-wire fan. Haven't installed the HP drivers to check the firmware revision yet, but did load in two Samsung 970 Evo blades and tested that the card works properly. Not really planning on any RAID (though glad to have the option). Just nice to have up to four blades for JBOD and only needing one PCI slot.

I skimmed through the thread but am wondering if anyone was ultimately successful in putting this in slot 2 with the monster Sapphire RX580 in slot 1. I know I can/should reverse them but would really rather not lose the use of slot 3. And I'm aware that putting the 7101A in slot 2 may present some airflow concerns with the GPU. But I would at least like to try for a while and will just keep an eye on the temps--that is if I can manage to get the fans to clear the underside of the 7101A.

In my first attempt I was not able to get enough separation between the cards. I wonder if this will be harder with the Rev 2 card because it has all the additional standoffs for 42/60mm m.2 blades--guess I could try to remove them.

But before reinventing the wheel I just thought I would ask if anyone is successfully running with this configuration and has any tips--removing screws/brackets, adding spacers, etc.

Thanks!

Congrats on the find -- and thanks for the reminder that I need to sell the spare HighPoint 7101A and Syba/IOCrest SI-PEX 40129 adapters from my test bench.

From my experience and what I've read here ... The best combination for this approach is:
Slot #1 = Highpoint 7101
Slot #2 = Video card

Swapping that configuration around blocks too much of the video card's airflow while heating up the 7101.

There is also a thread somewhere in this forum where someone placed their video card in slot #4 with a little modification and little performance penalty.
 
Last edited: