Is the 6200+ achieved with a RAID? If so, can you share how you configured the RAID? I have a cMP5,1 with Mojave, boot ROM 144.0.0.0.0, and a 7101A in slot 1. One Samsung 970 Pro 512GB runs at 3200 MB/s (read), but two 970s in HFS+ RAID 0 (created with diskutil) run at only 3000 MB/s (read).

I created the RAID with `diskutil appleRAID create stripe RAID HFS+ disk0s2 disk1s2`
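
As a quick sanity check on the stripe set itself, diskutil can report the set's type, status, and members. A minimal sketch; the set name RAID follows the create command above, and disk2 is just the identifier the RAID volume gets on my machine, so adjust to whatever diskutil list reports:

Code:
# list AppleRAID sets: type (stripe), status, and member partitions
diskutil appleRAID list
# details of the resulting RAID volume (disk2 in my case)
diskutil info disk2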

Here's my System Info:

[screenshot: System Information]

I have a dual 970 Pro 1TB / HighPoint 7101A setup at the office with a spare partition on each drive that can be bound together into a software RAID 0 volume. I'll run some benchmarks in AmorphousDiskMark, QuickBench, and AJA System Test to see where performance sits with the latest version of Mojave on firmware 144.

This time around I'll be booting from one of the 970 Pros, so some performance will be lost to overhead.
 
Thanks handheldgames. I ran my benchmarks with Amorphous.

Single 970:

[screenshot: AmorphousDiskMark, single 970]


Two 970s in RAID 0:

[screenshot: AmorphousDiskMark, two 970s in RAID 0]
 
Hello all,

Due to some capacity constraints I ran into with the size of my FCP files... I "had to" add a third 970 EVO to my RAID 0 work drive.

Can you tell me how you configured your RAID? I am finding that a RAID 0 is slower than a single stick! I created my RAID with `diskutil appleRAID create stripe RAID HFS+ disk0s2 disk1s2`

Here is my RAID config:

[screenshot: RAID configuration]
 
Can you tell me how you configured your RAID? I am finding that a RAID 0 is slower than a single stick! I created my RAID with `diskutil appleRAID create stripe RAID HFS+ disk0s2 disk1s2`
You only showed the info for one of the NVMe drives. Show the info for both, also show PCIe info and diskutil. Use Terminal.app and copy text instead of screenshot to save space, make it easier to read, and easier to edit and quote. Wrap the text results in [ code] [ /code] tags (remove spaces from the tags).
Code:
diskutil list
system_profiler SPNVMeDataType SPPCIDataType
I am wondering if the card is running at PCIe 1.0 speed or PCIe 2.0 speed. If you have the latest firmware from Mojave, then it should be running at PCIe 2.0 speed. You can use "sudo lspci -vvnn" if you have pciutils installed.
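
If it helps, here is a rough way to pull just the link-status lines once pciutils is available. A sketch only: it assumes Homebrew provides a working pciutils build on your system, and the 144d vendor filter matches the Samsung NVMe controllers; note the card's own downstream links can report 8 GT/s even though the MacPro5,1 slot itself tops out at PCIe 2.0 (5 GT/s).

Code:
brew install pciutils                     # assumes Homebrew; pciutils is not part of macOS
sudo lspci -vvnn -d 144d: | grep -iE "non-volatile|lnksta:"
# LnkSta shows the negotiated speed/width for each Samsung controller behind the card's switch;
# check the upstream bridge's LnkSta as well to see what the slot itself negotiated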
 
Code:
$ diskutil list
/dev/disk0 (external):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                         512.1 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                 Apple_RAID                         511.8 GB   disk0s2
   3:                 Apple_Boot Boot OS X               134.2 MB   disk0s3

/dev/disk1 (external):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                         512.1 GB   disk1
   1:                        EFI EFI                     209.7 MB   disk1s1
   2:                 Apple_RAID                         511.8 GB   disk1s2
   3:                 Apple_Boot Boot OS X               134.2 MB   disk1s3

/dev/disk2 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS RAID                   +1.0 TB     disk2

/dev/disk3 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *250.1 GB   disk3
   1:                        EFI EFI                     209.7 MB   disk3s1
   2:                 Apple_APFS Container disk4         249.8 GB   disk3s2

/dev/disk4 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +249.8 GB   disk4
                                Physical Store disk3s2
   1:                APFS Volume server4 HD              13.3 GB    disk4s1
   2:                APFS Volume Preboot                 24.0 MB    disk4s2
   3:                APFS Volume Recovery                507.3 MB   disk4s3
   4:                APFS Volume VM                      20.5 KB    disk4s4

$ system_profiler SPNVMeDataType SPPCIDataType
NVMExpress:

    Generic SSD Controller:

        Samsung SSD 970 PRO 512GB:

          Capacity: 512.11 GB (512'110'190'592 bytes)
          TRIM Support: Yes
          Model: Samsung SSD 970 PRO 512GB
          Revision: 1B2QEXP7
          Serial Number: S463NF0M604377X
          Link Width: x4
          Link Speed: 8.0 GT/s
          Detachable Drive: No
          BSD Name: disk0
          Partition Map Type: GPT (GUID Partition Table)
          Removable Media: No
          Volumes:
            EFI:
              Capacity: 209.7 MB (209'715'200 bytes)
              File System: MS-DOS FAT32
              BSD Name: disk0s1
              Content: EFI
              Volume UUID: 0E239BC6-F960-3107-89CF-1C97F78BB46B
            disk0s2:
              Capacity: 511.77 GB (511'766'216'704 bytes)
              BSD Name: disk0s2
              Content: Apple_RAID
            Boot OS X:
              Capacity: 134.2 MB (134'217'728 bytes)
              File System: Journaled HFS+
              BSD Name: disk0s3
              Content: Apple_Boot
              Volume UUID: BE90AE98-061C-31B4-B71C-2F3EF602BAAE

    Generic SSD Controller:

        Samsung SSD 970 PRO 512GB:

          Capacity: 512.11 GB (512'110'190'592 bytes)
          TRIM Support: Yes
          Model: Samsung SSD 970 PRO 512GB
          Revision: 1B2QEXP7
          Serial Number: S463NF0M411349H
          Link Width: x4
          Link Speed: 8.0 GT/s
          Detachable Drive: No
          BSD Name: disk1
          Partition Map Type: GPT (GUID Partition Table)
          Removable Media: No
          Volumes:
            EFI:
              Capacity: 209.7 MB (209'715'200 bytes)
              File System: MS-DOS FAT32
              BSD Name: disk1s1
              Content: EFI
              Volume UUID: 0E239BC6-F960-3107-89CF-1C97F78BB46B
            disk1s2:
              Capacity: 511.77 GB (511'766'216'704 bytes)
              BSD Name: disk1s2
              Content: Apple_RAID
            Boot OS X:
              Capacity: 134.2 MB (134'217'728 bytes)
              File System: Journaled HFS+
              BSD Name: disk1s3
              Content: Apple_Boot
              Volume UUID: C98419D7-6DFF-30CF-950B-816F14DD7F8E

PCI:

    AMD Radeon HD 7xxx:

      Name: ATY,AMD,RadeonFramebuffer
      Type: Display Controller
      Driver Installed: Yes
      MSI: Yes
      Bus: PCI
      Slot: Slot-2
      Vendor ID: 0x1002
      Device ID: 0x679a
      Subsystem Vendor ID: 0x103c
      Subsystem ID: 0x6616
      Revision ID: 0x0000
      Link Width: x16
      Link Speed: 5.0 GT/s

    pci1002,aaa0:

      Type: Audio Device
      Driver Installed: No
      MSI: No
      Bus: PCI
      Slot: Slot-2
      Vendor ID: 0x1002
      Device ID: 0xaaa0
      Subsystem Vendor ID: 0x103c
      Subsystem ID: 0xaaa0
      Revision ID: 0x0000
      Link Width: x16
      Link Speed: 5.0 GT/s

    pci144d,a808:

      Type: NVM Express Controller
      Driver Installed: Yes
      MSI: Yes
      Bus: PCI
      Slot: Slot-1@8,0,0
      Vendor ID: 0x144d
      Device ID: 0xa808
      Subsystem Vendor ID: 0x144d
      Subsystem ID: 0xa801
      Revision ID: 0x0000
      Link Width: x4
      Link Speed: 8.0 GT/s

    pci144d,a808:

      Type: NVM Express Controller
      Driver Installed: Yes
      MSI: Yes
      Bus: PCI
      Slot: Slot-1@10,0,0
      Vendor ID: 0x144d
      Device ID: 0xa808
      Subsystem Vendor ID: 0x144d
      Subsystem ID: 0xa801
      Revision ID: 0x0000
      Link Width: x4
      Link Speed: 8.0 GT/s

And:

Code:
Model Name:    Mac Pro
  Model Identifier:    MacPro5,1
  Processor Name:    6-Core Intel Xeon
  Processor Speed:    3.46 GHz
  Number of Processors:    2
  Total Number of Cores:    12
  L2 Cache (per Core):    256 KB
  L3 Cache (per Processor):    12 MB
  Hyper-Threading Technology:    Enabled
  Memory:    96 GB
  Boot ROM Version:    144.0.0.0.0
  SMC Version (system):    1.39f11
  SMC Version (processor tray):    1.39f11
 
Thanks handheldgames. I ran my benchmarks with Amorphous.

Single 970:

[screenshot: AmorphousDiskMark, single 970]


Two 970s in RAID 0:

[screenshot: AmorphousDiskMark, two 970s in RAID 0]

I just ran a series of benchmarks on an HFS RAID 0 with a 32KB chunk size, and your initial observation was on target. What's going on? Comparing against legacy benchmarks, it looks like there has been a significant drop in write performance with firmware 144 and the latest version of Mojave. Hmmmm.

While read performance is still scaling to near the PCIe 2.0 limit of 6000 MB/s, the drop in write performance is the biggest surprise. It's not even hitting the speeds possible with a single 970 Pro, staying under 3000 MB/s in all cases.

[benchmark screenshots]
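
For a rough cross-check of the write numbers outside the GUI benchmarks, a plain dd run against the striped volume can help rule out a benchmark-specific quirk. A sketch only: the /Volumes/RAID path is assumed, and because macOS buffers writes the read pass should come after purging the cache.

Code:
# write ~8 GB of zeros to the striped volume; dd reports bytes/sec when it finishes
dd if=/dev/zero of=/Volumes/RAID/ddtest bs=1m count=8192
# drop the filesystem cache, then read the file back
sudo purge
dd if=/Volumes/RAID/ddtest of=/dev/null bs=1m
rm /Volumes/RAID/ddtest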
 
Your Amorphous run above says "RAID 0 APFS". Is it an HFS volume or APFS? (And was the QuickBench run HFS or APFS?) And if APFS, how did you create an APFS RAID volume?

My runs are with an HFS+ volume.
 
Code:
$ diskutil list
/dev/disk0 (external):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   2:                 Apple_RAID                         511.8 GB   disk0s2

/dev/disk1 (external):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   2:                 Apple_RAID                         511.8 GB   disk1s2

/dev/disk2 (external, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS RAID                   +1.0 TB     disk2

$ system_profiler SPNVMeDataType SPPCIDataType
NVMExpress:

    Generic SSD Controller:

        Samsung SSD 970 PRO 512GB:

          Link Width: x4
          Link Speed: 8.0 GT/s
          BSD Name: disk0

    Generic SSD Controller:

        Samsung SSD 970 PRO 512GB:

          Link Width: x4
          Link Speed: 8.0 GT/s
          BSD Name: disk1

PCI:

    pci144d,a808:

      Type: NVM Express Controller
      Slot: Slot-1@8,0,0
      Link Width: x4
      Link Speed: 8.0 GT/s

    pci144d,a808:

      Type: NVM Express Controller
      Slot: Slot-1@10,0,0
      Link Width: x4
      Link Speed: 8.0 GT/s
Need pciutils to verify upstream link speed of slot 1.
I just ran a series of benchmarks on an HFS RAID 0 with a 32KB chunk size, and your initial observation was on target. What's going on? Comparing against legacy benchmarks, it looks like there has been a significant drop in write performance with firmware 144 and the latest version of Mojave. Hmmmm.

While read performance is still scaling to near the PCIe 2.0 limit of 6000 MB/s, the drop in write performance is the biggest surprise. It's not even hitting the speeds possible with a single 970 Pro, staying under 3000 MB/s in all cases.
Are both of your benchmark results from your RAID? QuickBench shows proper read performance but AmorphousDiskMark does not?
 
Need pciutils to verify upstream link speed of slot 1.

Are both of your benchmark results from your RAID? QuickBench shows proper read performance but AmorphousDiskMark does not?

Amorphous is set to test with QD4, whereas QuickBench performs the test with what's equivalent to QD1. The difference in queue depth probably affected throughput in Amorphous.

Both tests were completed with HFS+ to deliver the fastest speeds possible. PCI tools eval will have to wait until tomorrow.
Your Amorphous run above says "RAID 0 APFS". Is it an HFS volume or APFS? (And was the QuickBench run HFS or APFS?) And if APFS, how did you create an APFS RAID volume?

My runs are with an HFS+ volume.

Amorphous takes the name of the disk; QuickBench takes the name of the drive.
I started with two HFS partitions that were striped together and formatted as HFS.
One more thing: it looks like I posted the wrong Amorphous test. I'll post the HFS version.

I have an HFS version of the Blackmagic test, not Amorphous. Since I attempted to reformat an APFS drive to HFS, Disk Utility is locking up, and I may need to pull an SSD out of the HighPoint to get it to stop freaking out about the drive. I'll be able to grab Amorphous tomorrow.
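
If Disk Utility keeps hanging, the same teardown and rebuild can usually be done from Terminal instead. A rough, destructive sketch; the disk0/disk1/disk2 identifiers are taken from the diskutil output earlier in the thread, so double-check yours first:

Code:
diskutil appleRAID list                   # find the stripe set; its volume is disk2 here
diskutil appleRAID delete disk2           # break the set - this erases the RAID volume
diskutil eraseDisk JHFS+ scratch0 disk0   # optional: give each SSD a fresh GPT + JHFS+ volume
diskutil eraseDisk JHFS+ scratch1 disk1
diskutil appleRAID create stripe RAID JHFS+ disk0 disk1   # restripe across the whole disks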

The second run of Amorphous on APFS was faster, but still too slow.
[benchmark screenshots]
 
OK, I had a couple of spare minutes before the end of my day... Reformatted to HFS RAID 0.


Note on the first test: read and write speeds flipped in performance...
[benchmark screenshots]
 

Amorphous is set to test with QD4, whereas QuickBench performs the test with what's equivalent to QD1. The difference in queue depth probably affected throughput in Amorphous.
I thought a higher queue depth is supposed to give higher results. Maybe it's behaving like a random test by not performing multiple I/Os sequentially. Maybe this is a bug that should be reported to the Amorphous developer. Or maybe it's not a bug - I did some tests with a Thunderbolt RAID (Mac Mini 2018, Disk Utility, HFS+ Journaled, striped 32K, one NVMe drive per Thunderbolt controller, 950 Pro and 960 Pro; the 960 Pro has poor writes over Thunderbolt - I think direct PCIe is much better), and I get the best sequential read results using queue depth 8.

Queue depth: read/write (MB/s)
1: 2111/1163
2: 4009/1162
4: 4745/1202
8: 5049/1191
16: 4816/1225
32: 3787/1207
64: 3637/1154
128: 3608/1189
256: 3536/1860
512: 3465/1846
1024: 3383/1718
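
For what it's worth, a sweep like that can also be reproduced outside of Amorphous with fio (installable via Homebrew). A minimal sketch, assuming the striped volume is mounted at /Volumes/RAID; it uses the posixaio engine since macOS has no O_DIRECT, so the purge between runs keeps the cache from inflating results:

Code:
brew install fio
for qd in 1 2 4 8 16 32; do
  sudo purge                                      # drop the filesystem cache between runs
  fio --name=seqread_qd$qd --filename=/Volumes/RAID/fiotest \
      --rw=read --bs=1m --size=8g --ioengine=posixaio --iodepth=$qd \
      | grep "READ:"
done
rm /Volumes/RAID/fiotest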

Both tests were completed with HFS+ to deliver the fastest speeds possible.
The SoftRAID driver may give better RAID results than Disk Utility's RAID. At least it did in the NVMe RAID tests I ran on my MacPro3,1 with an Amfeltec Gen 3 card.

PCI tools eval will have to wait until tomorrow.
Since you're getting more than 4000 MB/s, you must be getting the correct PCIe 2.0 speed. It's unlikely anyone with a MacPro5,1 and the 144 ROM is getting PCIe 1.0 speeds anymore. Maybe pciutils can show PCIe errors, but I don't know how to read that.
 
Looking for more consistent data, I reran the speed tests with Amorphous, AJA, and Blackmagic, all using 1GB test files. All volumes are HFS+. The RAID was created with diskutil. First is a run with a single Samsung 970 Pro 512GB for reference, then a run with two Samsung 970 PROs as a striped RAID.

Single stick:

[benchmark screenshot]


RAID:

[benchmark screenshot]


So now I'm curious what the difference is in how these tests are run. Amorphous shows the RAID faster than a single stick with its Seq test; with all the other Amorphous tests, the RAID performs worse than a single stick.

Also, since I'm not hitting the theoretical max of 6000+ with two sticks, I wonder if a 3-stick RAID would be faster than a 2-stick one.
 
And to make me even more curious about Amorphous, I reran the same tests using a SoftRAID JHFS+ volume:

[benchmark screenshot]


No change from AJA and Blackmagic, but suddenly Amorphous reports very different numbers for QD4 and QD32. I seem to have gained 2x-6x Read with a small loss of Write for the deep queue case.

Note that I tried varying the "Optimize for" setting in SoftRAID with no noticeable difference in results.

Anyone else want to give SoftRAID a try (there's a free trial) and see what you get?
 
A new driver has been released with fixes for 10.13/14 and support for Catalina.

5. Driver Revision History
--------------------

v1.1.3 07/31/2019
* Set ProbeScore to 300.(support macOS 10.14.6 and 10.15)

v1.1.2 05/06/2019
* Fix potential problem with multiple pci bridge devices.

v1.1.1 04/15/2019
* Support 4Kn disk.
* Support Synchronize cache.
* Support Intel 660P Series SSD.

v1.0.9 06/22/2018
* Improve read performance for 512K block size RAID5.

v1.0.8 06/08/2018
* First release.

Apple appears to have changed the hardware identification code, or is using additional identifiers to recognize "Apple SSD Controllers": the HighPoint 7101A is now showing as "Generic SSD Controller" after applying the recent supplemental update (or 10.14.6 - not sure exactly when that changed).

When I try to install on a cMP5,1 with 10.14.6 I get:

[screenshot: installation blocked error]

You need to go into "Security & Privacy" and click Allow next to the HighPoint security prompt.

Edit: Corrected Apple SSD Controller. The controller is still showing as 'Generic' with HighPoint drivers installed.

Edit 2: After properly installing and loading the driver, the controller disappears from under the Hardware/NVMExpress section. One other thing I noticed after installing the recent supplemental updates was that my system was lagging during login (around 15 to 20 extra seconds), but with the HighPoint driver installed, logging in is near instant.
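
A quick way to confirm the HighPoint driver is actually the one in use (rather than macOS quietly falling back to its built-in NVMe driver) is to check the loaded kernel extensions. A small check that assumes the kext's bundle name contains "highpoint":

Code:
kextstat | grep -i highpoint   # should list the HighPoint NVMe kext once it has been allowed and loaded
# if nothing shows up, re-check System Preferences > Security & Privacy > General for the Allow button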
 
I have a couple of questions about the 7101A. I'm planning to use two of them in Sonnet chassis connected via TB2 to two nMP 6,1s.
I know that TB2 will be the limiting factor, but I think the speed would suffice for my needs. It's more the amount of storage at still quite good speed that makes it interesting for me.
I read in the thread that it worked fine in the TB3 edition, so I assume that, since the TB2 chassis is identical in build, the card will fit and work. Has anyone here tried it?
I also want to share the two drives between the two Macs via 10GbE. I think this means another drop in speed, but if it's above 700 MB/s I don't mind.
I read online that the 7101A doesn't support TRIM in RAID 5. Is this problematic? Will speed drop over time because of this?

Thanks a lot in advance!
 
@flygbuss you may have better luck for specifics with TB2/TB3 and chassis within a dedicated MP6,1 thread. The majority of the information in this thread is geared toward MP5,1 (and MP4,1>MP5,1) with PCIe slots.
 
I have a couple of questions about the 7101A. I'm planning to use two of them in Sonnet chassis connected via TB2 to two nMP 6,1s.
I know that TB2 will be the limiting factor, but I think the speed would suffice for my needs. It's more the amount of storage at still quite good speed that makes it interesting for me.
I read in the thread that it worked fine in the TB3 edition, so I assume that, since the TB2 chassis is identical in build, the card will fit and work. Has anyone here tried it?
I also want to share the two drives between the two Macs via 10GbE. I think this means another drop in speed, but if it's above 700 MB/s I don't mind.
I read online that the 7101A doesn't support TRIM in RAID 5. Is this problematic? Will speed drop over time because of this?

Thanks a lot in advance!

So you have two Mac Pros that are accessing storage over 10GbE... What applications, disk bandwidth, and response times do you need? TB2 is ~1300 MB/s...

HighPoint SSD7101s can deliver up to 6000 MB/s at PCIe 2.x and 12000 MB/s at PCIe 3.x, but you only need 700 MB/s, roughly 55% of TB2 bandwidth...
 
I have a couple of questions about the 7101A. I'm planning to use two of them in Sonnet chassis connected via TB2 to two nMP 6,1s.
Which Sonnet chassis?

I know that TB2 will be the limiting factor, but I think the speed would suffice for my needs. It's more the amount of storage at still quite good speed that makes it interesting for me.
I read in the thread that it worked fine in the TB3 edition, so I assume that, since the TB2 chassis is identical in build, the card will fit and work. Has anyone here tried it?
It should work. They are just PCIe slots. The 7101A is an x16 card. Some Sonnet expansion boxes don't have x16 or only have one x16.

I also want to share the two drives between the two Macs via 10GbE. I think this means another drop in speed, but if it's above 700 MB/s I don't mind.
What is the host for the two drives that will share them over 10 GbE?
 
So you have two Mac Pros that are accessing storage over 10GbE... What applications, disk bandwidth, and response times do you need? TB2 is ~1300 MB/s...

HighPoint SSD7101s can deliver up to 6000 MB/s at PCIe 2.x and 12000 MB/s at PCIe 3.x, but you only need 700 MB/s, roughly 55% of TB2 bandwidth...

Thanks for your reply. It would be great to see more than 700 MB/s; the TB2 limit of around 1300 MB/s would be great.

I assume you think it's a waste of money since my planned setup contains those kinds of bottlenecks?


Which Sonnet chassis?


It should work. They are just PCIe slots. The 7101A is an x16 card. Some Sonnet expansion boxes don't have x16 or only have one x16.


What is the host for the two drives that will share them over 10 GbE?

I have two Sonnet Echo Express III R chassis. They contain a single Avid HDX card and a 10GbE card. I want to share the volumes between the two machines via a direct Ethernet connection (AFP or SMB, depending on what performs better) or a small network including a 10GbE switch.

This is just to be able to access projects across the two machines from time to time.
The HighPoint would be for storage only. I would continue using the internal SSUBX drives as boot drives.

Thanks again for your replies.
 
Thanks for your reply. It would be great to see more than 700 MB/s; the TB2 limit of around 1300 MB/s would be great.
TB2 will give 1300 MB/s.
10 GbE is limited to around 500 MB/s.

I have two Sonnet Echo Express III R. They contain a single Avid HDX card and a 10GbE card. I want to share the volumes between the two machines via a direct Ethernet connection (afp or smb, depending on what performs better) or a small network including a 10GbE switch.
The Sonnet has no CPU - it's just an external PCIe chassis, so it has to be connected via TB2 to a Mac Pro. Then that Mac Pro can enable file sharing using 10 GbE. It's also possible to do Thunderbolt networking, but I don't think the performance is much better than 10 GbE, or it might be worse; I'd have to run some benchmarks to be sure.
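
Before settling on AFP vs SMB it may be worth measuring the raw link first; a simple sketch with iperf3 (installable via Homebrew on both machines; the address is just an example) separates network throughput from file-sharing overhead:

Code:
# on the Mac Pro that hosts the RAID:
iperf3 -s
# on the other Mac, pointed at the host's 10 GbE (or Thunderbolt bridge) address:
iperf3 -c 10.0.0.2 -P 4 -t 30   # 4 parallel streams for 30 seconds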
 
TB2 will give 1300 MB/s.
10 GbE is limited to around 500 MB/s.


The Sonnet has no CPU - it's just an external PCIe chassis, so it has to be connected via TB2 to a Mac Pro. Then that Mac Pro can enable file sharing using 10 GbE. It's also possible to do Thunderbolt networking, but I don't think the performance is much better than 10 GbE, or it might be worse; I'd have to run some benchmarks to be sure.

Why do you say 10GbE is limited to 500 MB/s? I added a $99 Sonnet card to my 5,1 and regularly see 950 MB/s from my Synology box (it has 7 rusty spinners).
 
Why do you say 10GbE is limited to 500 MB/s? I added a $99 Sonnet card to my 5,1 and regularly see 950 MB/s from my Synology box (it has 7 rusty spinners).
I guess I must have read some unoptimized benchmark somewhere. 950 MB/s is just about as good as USB 3.1 Gen 2, which is way more than I expected from Ethernet. Very nice.
 