
amarmot

macrumors member
Original poster
Jul 1, 2009
Hey all. I'm planning to run an OWC Accelsior 4M2 in a Mac Pro 5,1 with software RAID 1+0. As I understand it, this card can split the x16 slot into 2x8, but not 4x4. My question is whether it matters which of the four NVMe blades are assigned to the mirror pair. Should each active blade (and its mirror) be assigned to one of the x8 branches, or should the active pair be assigned to one branch and the mirror pair to the other?
 
Hey all. I'm planning to run an OWC Accelsior 4M2 in a Mac Pro 5,1 with software RAID 1+0. As I understand it, this card can split the x16 slot into 2x8, but not 4x4.

This doesn't even make sense, for two reasons.

The first is that all M.2 NVMe devices are x4 or fewer lanes. The second is that the PCIe switch on the OWC 4M2, an ASMedia ASM2824, has an x8 upstream link, so the card will use 8 lanes and the other 8 lanes of the Mac Pro's x16 slot will sit completely unused. The switch internally shares the x8 upstream across the four x4 downstream ports, one for each blade.

There is no splitting with PCIe switches; lanes are split only with bifurcation, which no Mac supports, not even the 2019 Mac Pro or the 2023 Apple Silicon Mac Pro. While the 2019 Mac Pro has an Intel chipset whose hardware supports it, the Mac Pro lacks the firmware side required for it to work.
 
In the Accelsior 4M2, each SSD is given four lanes of PCIe 3.0. However, the upstream is x8, regardless of whether you stick it in an x16 slot on the Mac. If you are only accessing one SSD, you will get that drive's full four lanes. If you are accessing two, you will get the full performance of both drives. If you are accessing three or more, you will get an x8 maximum.

It should not matter which drive you use for the 1 vs 0 part of a RAID10, as long as all the drives are the same.
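As a rough sanity check, the lane arithmetic above can be sketched in a few lines of Python. The per-lane rates are approximate effective figures after encoding overhead, not exact numbers, and the x8 upstream limit is the ASM2824 behavior described above:

```python
# Approximate usable bandwidth per PCIe lane in MB/s, after encoding
# overhead (8b/10b for gen 1/2, 128b/130b for gen 3).
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def ceiling_mbps(active_drives, gen=3, lanes_per_drive=4, upstream_lanes=8):
    """Aggregate throughput ceiling through the card's x8 upstream switch."""
    downstream_lanes = active_drives * lanes_per_drive
    return min(downstream_lanes, upstream_lanes) * PER_LANE_MBPS[gen]

print(ceiling_mbps(1))          # 3940 -> one drive gets its full x4 link
print(ceiling_mbps(2))          # 7880 -> two drives already saturate the x8 upstream
print(ceiling_mbps(4))          # 7880 -> four drives are still capped at x8
print(ceiling_mbps(4, gen=2))   # 4000 -> same card linked at PCIe 2.0 speeds
```

Note the last line: in a slot that only negotiates PCIe 2.0, the whole card tops out around 4 GB/s no matter how many blades are active.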
 
In the Accelsior 4M2, each SSD is given four lanes of PCIe 3.0. However, the upstream is x8, regardless of whether you stick it in an x16 slot on the Mac. If you are only accessing one SSD, you will get that drive's full four lanes. If you are accessing two, you will get the full performance of both drives. If you are accessing three or more, you will get an x8 maximum.

It should not matter which drive you use for the 1 vs 0 part of a RAID10, as long as all the drives are the same.
Ah, OK - this makes a lot more sense now. I realized each blade is x4, but did not understand that the 4M2 is limited to x8 upstream. Does this imply the speed difference between a Mac Pro 2019 and a Mac Pro 5,1 is merely PCIe 3.0 vs. 2.0 (not link width)? In any case, I tested it today and found it makes no difference where the primary and mirror disks are (ABAB vs. AABB).

I did, however, find that this card has some very weird throttling behavior. In RAID 1+0 (with 5 GB files), it oscillates back and forth every few minutes between slowish performance (~600 MB/s write, ~600 MB/s read), and a more expected performance (~1300 MB/s write, ~2600 MB/s read). This pattern repeats indefinitely, jumping back and forth between higher and lower speed.

iStat Menus reports steady fans and temps of ~50°C at the blades (no obvious heating and cooling cycle at the SSDs). So is the throttling likely occurring as a result of switch temperature? The cooling solution for the switch is not very sophisticated: it just uses a thick thermal pad to connect the switch to the main heatsink covering the blades. So I could easily understand if this posed a thermal bottleneck. (Not sure why they didn't put a proper heatsink on it.)
 
I'll add that I had been running a Highpoint R1104, which was very fast, but burned out (hard failure) after 6 months. This is consistent with an unfavorable thermal environment...
 
You're using the main black heatsink that covers the entire card? Unlikely that the switch is overheating. There is plenty of thermal mass to dissipate heat.

It's hard to help diagnose what is going on with little information.

What drives are you using in it?
What macOS version are you running?
Are there other PCIe cards in the Mac Pro right now?
Are you doing a RAID10 through SoftRAID or AppleRAID?
What are you using to measure the performance?

A few tests that would help you narrow this down:
1. Try a RAID0. Just for testing. Does the same oscillation occur when you stress test the drive? If you are using SoftRAID, try in Apple RAID as well.

2. Use AJA to test: 4K full, 16-bit RGBA, 64 GB file size.

the speed difference between Mac Pro 2019 and Mac Pro 5,1 is merely PCIe 3.0 vs. 2.0
Of note: On a Mac Pro 5,1 when you use PCIe 4.0 drives, they link/revert down to PCIe 1.0 (I know, oof). PCIe 3.0 drives link at PCIe 2.0 speeds.
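The per-blade cost of that link downgrade is easy to put into numbers. A quick sketch (per-lane figures are approximate effective rates after encoding overhead, not exact):

```python
# Approximate usable MB/s per lane at each PCIe generation.
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985, 4: 1969}

def x4_link_mbps(gen):
    """Bandwidth of a single x4 NVMe blade at a given negotiated link speed."""
    return 4 * PER_LANE_MBPS[gen]

print(x4_link_mbps(4))  # 7876 -> a gen 4 blade at its native link speed
print(x4_link_mbps(1))  # 1000 -> the same blade linked at gen 1 in a Mac Pro 5,1
print(x4_link_mbps(2))  # 2000 -> a gen 3 blade linked at gen 2
```

So a PCIe 4.0 blade dropping to 1.0 loses roughly 87% of its link bandwidth, while a 3.0 blade at 2.0 keeps about half.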
 
The IHS makes contact with the big black heatsink through a fairly thick thermal pad over a small area. So I'm skeptical that the controller gets much of the available cooling.

The blades are Gen 3 Crucial 4TB.
OS is 12.7.4
There are cards in all the PCIe slots. But the graphics card is independently powered from the PSU (Pixlas).
Softraid 7.6.1 in Raid 10, optimized for video.
Speeds measured using ATTO and BlackMagic, 5 GB video files (same behavior, though numbers vary a bit).
I can't destroy the data on the disk, so would need to use alternate blades to do the tests you propose.
These are gen 3 blades, but in any case the speed would be consistently slow if the link were PCIe 1.0. That does not explain the oscillations.

What I might try is inserting a thermocouple under the pad and seeing if the oscillations correlate with temperature. More likely I will send it to OWC to do that, since it seems more like their job than mine...
 
Here's what it looks like in AJA. I had to try it a few times to catch the jump on both the read and write. Seems like an integer ratio of 2 on write, 4 on read. Not sure how thermal throttling works on this card, but this behavior is inconsistent with continuous variation in rate with temperature (like a CPU does). It's more like dropping to PCIe 1.0. But then wouldn't reads also be 2x slower, not 4x? It's like it drops to 1.0 and also stops reading from both primary and mirror. Weird, no?
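For reference, the ratios implied by the rough ATTO/BlackMagic numbers mentioned earlier (~600 vs. ~1300 MB/s write, ~600 vs. ~2600 MB/s read) do come out close to those integers:

```python
# Approximate fast-phase and slow-phase speeds reported above (MB/s).
fast = {"write": 1300, "read": 2600}
slow = {"write": 600, "read": 600}

for op in ("write", "read"):
    print(op, round(fast[op] / slow[op], 1))
# write 2.2, read 4.3 -> roughly the integer ratios of 2 and 4
```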
[Attachment: Screen Shot 2024-03-27 at 8.47.01 PM.png]
 
it oscillates back and forth every few minutes between slowish performance (~600 MB/s write, ~600 MB/s read), and a more expected performance (~1300 MB/s write, ~2600 MB/s read).

Could it have anything to do with the internal SSD cache filling up? See "Not great for large transfers" in the reviews.

 
Could it have anything to do with the internal SSD cache filling up? See "Not great for large transfers" in the reviews.
I don't think this could explain slow reads. Also, I have a 1M2 with the same type of blade in it, and it's fine (1500 MB/s write/read under the same test conditions). Keep in mind this is PCIe 2.0, so SSD speed is not the limiting factor.
 
Hmm... here are some AJA measurements with a Sonnet 4x4 using the same four blades in the same slot. For these measurements the WiFi was off. (This card is x16, so divide by 2 to compare to the Accelsior 4M2.)
[Attachment: Screen Shot 2024-03-29 at 9.01.07 PM.png]
 
So...I'm now wondering if all the slow-downs I've been seeing are caused by background tasks interrupting disk bandwidth with random seeks. Does anyone have suggestions on how to test disk speeds without background tasks interfering?
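One low-tech way to watch for this is a repeated sequential-read timer: run several passes and look at the spread, which makes an oscillation (or a one-off background interruption) visible. A minimal sketch, assuming a scratch file you can create; the file path, 64 MiB size, and pass count are placeholders, and a real run should use a much larger file. On macOS, `fcntl.F_NOCACHE` bypasses the buffer cache so you measure the device rather than RAM:

```python
import os
import time
import tempfile
import fcntl
import statistics

CHUNK = 1 << 20        # 1 MiB per read
SIZE = 64 * CHUNK      # 64 MiB test file (placeholder; use far more on a real disk test)

def write_test_file(path, size=SIZE):
    """Create an incompressible test file."""
    with open(path, "wb") as f:
        f.write(os.urandom(size))

def read_speed_mbps(path):
    """Time one sequential read pass and return MB/s."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # On macOS, F_NOCACHE makes reads bypass the unified buffer cache;
        # on platforms without it, this step is skipped.
        if hasattr(fcntl, "F_NOCACHE"):
            fcntl.fcntl(fd, fcntl.F_NOCACHE, 1)
        start = time.monotonic()
        total = 0
        while True:
            buf = os.read(fd, CHUNK)
            if not buf:
                break
            total += len(buf)
        return total / (1 << 20) / (time.monotonic() - start)
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "speedtest.bin")
write_test_file(path)
speeds = [read_speed_mbps(path) for _ in range(5)]
print(f"min {min(speeds):.0f}  median {statistics.median(speeds):.0f}  "
      f"max {max(speeds):.0f} MB/s")
```

A wide min/max spread across passes (with Spotlight, Time Machine, etc. paused) would point at the card rather than background tasks.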
 