Does it happen with both x16 slots? Do you have another Mac Pro to check the card in?

Yeah, I just tried slot 2 and these were the results. Don’t have another cMP to test with, unfortunately.

[attachment: benchmark screenshot]
 
Did you even read my post?
Yes, I read it fully; go back to my previous post, where I noted what is wrong with yours.
Did you install setpci to check whether the PCIe slot is initializing correctly? @joevt has posts showing in detail what you have to look for.
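
Roughly, the check looks like this - a sketch only, assuming pciutils is installed, and using a made-up bus address (find yours with the first command):

sudo lspci -nn                                        # list all devices; note the PLX switch and NVMe addresses
sudo lspci -s 05:00.0 -vv | grep -E 'LnkCap|LnkSta'   # 05:00.0 is hypothetical - compare capable vs negotiated link
sudo setpci -s 05:00.0 CAP_EXP+0x12.W                 # raw Link Status register: bits 3:0 = speed, 9:4 = width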
 
I tried, but kept getting errors in Terminal, so I was never able to run the pcitree.sh script (if that’s what it’s called). I’ll try again.
 
The output connection of the switch is always x16, independent of the number of blades - there is no splitting of the lanes at all. To be even more precise, the output ports of the switch will only use fewer lanes when you install the card in a slot with fewer lanes available, like slots 3 or 4 of a MP5,1 or the x8 slots of a MP7,1. See the PLX diagrams and you will see what I’m talking about. There are no 1500MB/s connections at all; two 970 PROs can saturate the x16 PCIe 2.0 link - see earlier benchmarks in the thread.
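
Rough numbers behind that, assuming Samsung's ~3500MB/s sequential-read spec per 970 PRO:

PCIe 2.0 x16 uplink: 16 lanes x 500MB/s = 8000MB/s raw, roughly 6000MB/s usable after protocol overhead
Two 970 PRO blades:  2 x ~3500MB/s      = ~7000MB/s aggregate - already more than the uplink can carry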

Sorry about the edits - the iOS 14 keyboard keeps disappearing and it drives me crazy.

I'm having the same issue with iOS 14 so it's ok.

Back to the subject at hand - we now have two users, @zachek and @dorianclive, with similar setups who are reporting the same speed drops.

I also had their setup when I first got the card, but decided to go with two single-blade setups - one for macOS and the other for Windows - after noticing the speed drops. My recommendation was, and still is, based on actual usage/experience.

The x16 output link, which I have already touched on several times, is shared by the 4 blades. When 1 blade is used standalone and the other 3 are in a RAID-0 setup, do you really think you will still be able to achieve 6000MB/s with 3 blades when the fourth is connected to / has negotiated a link with the switch?
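
To put numbers on that question - again assuming ~3500MB/s per blade:

3-blade RAID-0 potential demand: 3 x ~3500MB/s = ~10500MB/s
Shared x16 PCIe 2.0 uplink:                      ~6000MB/s usable ceiling

Anything the fourth, standalone blade transfers comes out of that same ~6000MB/s, so concurrent use necessarily cuts into the array's throughput.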
 
So I migrated my OS to an 860 Pro and set up the four 970 Pros in RAID-0 on my 7101, but the results are still poor. I don't know what else to do. Any additional thoughts/insight as to why this is happening? [attachment: benchmark screenshot]

That's not supposed to happen.

Can you try 2 more tests, with 3-disk and 2-disk RAID-0 setups?
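
If it helps, the test arrays can be built and torn down from Terminal - a sketch, assuming the blades enumerate as disk2/disk3/disk4 (verify with the first command; this destroys any data on the blades):

diskutil list                                                      # identify the NVMe blades first
diskutil appleRAID create stripe TestRAID JHFS+ disk2 disk3 disk4  # 3-disk RAID-0 test
diskutil appleRAID list                                            # note the set's UUID
diskutil appleRAID delete <setUUID>                                # tear down before the next run
diskutil appleRAID create stripe TestRAID JHFS+ disk2 disk3        # 2-disk RAID-0 test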
 
Unless they’re being used simultaneously, probably yes - the star topology of the switch makes this possible. If no indexing is being done (mds takes a lot of bandwidth) and there is no heavy access on the boot disk, you can probably saturate the x16 PCIe link with just an array of two 970 PROs.

Something is very wrong with @dorianclive’s hardware setup, and it’s not BootROM related - I reconstructed it recently with 144.0.0.0.0. Four 970 PRO blades in a RAID-0 array should get the full bandwidth of the x16 connection, but he is getting the same low results as with 3.

Right now I have an array of 4 SM951-AHCIs with my SSD7101A-1 v1.01; it’s almost full and I‘ll need to move my data to do benchmarks again, but my earlier results were very similar to @handheldgames’ earlier in the thread - a little better, if I remember correctly, with my 512GB blades.
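
One thing worth doing before any of these benchmarks - a sketch, assuming the array mounts at /Volumes/TestRAID - is making sure mds isn’t competing for the bandwidth:

sudo mdutil -s /Volumes/TestRAID      # is Spotlight currently indexing this volume?
sudo mdutil -i off /Volumes/TestRAID  # disable indexing for clean benchmark runs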
 

It’s super frustrating. I appreciate the collective effort in helping me out! Do you think the card itself could be the culprit?
 
Maybe your PCIe slots have some damage - that’s why I asked about setpci. It’s not uncommon to have backplanes with PCIe slots that are not fully working, damaged by RX 480s and other GPUs that didn’t follow the 75W spec limit. Maybe it’s the new HPT firmware…
 

Really? I have the Sapphire RX 580 that was recommended, though.
 
Yes, a reference RX 480 can draw up to 97W from the PCIe slot; several Mac Pros here now have PCIe slots that are not fully working. It was a scandal back in the day, and AMD had to totally redesign the Polaris power-plane circuit. Dell didn't redesign the cards, but instead redesigned their workstation motherboards to support the out-of-spec power draw - the Dell cards are MP5,1 killers, and they are very common on the used market.
 

Pardon me if I’m not understanding you but I have a 580, not a 480.
 
I'm just explaining why several MP5,1 here have damaged backplanes; these cards are the most common x16 PCIe slot killers, but other cards do the same. I'm not saying it's your problem, just showing that it can happen.
 

Some thoughts....

What does System Information say?
- for the NVMe disks
- for the HighPoint 7101A
What is the speed with one SSD formatted as HFS+ journaled?

With a 64K stripe size....
What is the speed with two SSDs formatted as HFS+ journaled?
What is the speed with three SSDs formatted as HFS+ journaled?

Also, what CPU do you have? NVMe performance is CPU related.
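
One way to pull those numbers from Terminal - a sketch, with /Volumes/TestRAID standing in for wherever the test array mounts:

system_profiler SPNVMeDataType   # per-blade link speed, link width and model (newer macOS versions)
system_profiler SPPCIDataType    # the 7101A itself: slot, link speed, link width
dd if=/dev/zero of=/Volumes/TestRAID/ddtest bs=1m count=8192   # crude ~8GB sequential-write test
rm /Volumes/TestRAID/ddtest                                    # clean up the test file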
 
Yep, don't enable any encryption at all - it will degrade your throughput even with a 2019 Mac Pro.
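
A quick way to confirm nothing is encrypted - fdesetup covers FileVault on the boot disk, and the grep is a rough check for any APFS encryption elsewhere:

fdesetup status                        # FileVault enabled or disabled?
diskutil apfs list | grep -i encrypt   # only relevant if any volumes are APFS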
 
Interesting. I’ve never seen that version number. Would be curious to see what changes, if any, there are.

I really want to get my RX 580 back into slot 1 so I can regain Slot 2. How’s your airflow?

Seems fine to me. However, your question made me realize I do so little GPU-intensive work at the moment that I rarely even notice the fans on the 580. I had to run a Geekbench GPU test to get them to spin up just now. I know that I can't read the RX 580 temps via software; is there another good way to gauge the situation there?

With due respect, I am not reporting the same speed drops as @dorianclive, as I haven't even set up the RAID 0 yet - so far I haven't been able to determine through my reading whether I should be using HighPoint drivers + WebGUI RAID management or system drivers + AppleRAID via Disk Utility (DU).

My apologies if my post was confusing; I was merely chiming in about @dorianclive's other issue with the HP not fitting in slot 2 above his RX 580 in slot 1, and my attempts at fixing that problem.

Re: drivers & RAID controller - up until today I was under the impression that, with my system, it was preferable to go with HP's drivers and RAID to get the full potential of the EVO Pluses and the SSD7101A-1, until I read your comments, and now I'm not sure.

I haven't had a lot of time to devote to it today, but I think I will try to build the RAID with DU first, do some speed tests, and then go from there.

I've been having some weird issues where my system hangs and freezes while using DU to erase and mount drives, so I was putting it off while I finished backing everything up.
 
@zachek Thanks for clearing that up. Would still be great to see your benchmark results.

As for the HPT drivers - I have stopped using them, as I noticed a small performance impact in both random 4K reads and writes on APFS-formatted drives. Haven't tested HFS+ ... Give them a try and see what's best suited for your setup.
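
An easy way to confirm which driver is actually in play - a sketch; the grep patterns are guesses at the bundle names, not exact IDs:

kextstat | grep -i highpoint   # anything here means the HPT kext is loaded
kextstat | grep -i nvme        # otherwise Apple's stock IONVMeFamily is driving the blades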
 

The NVMe disks all report 8.0 GT/s at x4 link width. All are formatted HFS+. As for the HighPoint card, I’m not using any HighPoint drivers. Running tests now and will let you know the rest.

Here are my results:

One SSD, HFS+ journaled: [attachment: one ssd.png]
Two SSDs, HFS+ journaled, 64K stripe size: [attachment: two ssd raid-0.png]
Three SSDs, HFS+ journaled, 64K stripe size: [attachment: three ssd raid-0.png]

My CPU setup is 2 x 3.46 GHz 6-core Xeon.
Understood. Thank you
 
The one-blade test is from a 970 PRO? That is far too low a result - get setpci working and let's see what is really wrong.
 
This is what I'm getting at this point. When I run pcitree.sh, it doesn't output any results.

[attachment: Terminal screenshot]
I'm at work and can't help you now - did you install pciutils? You need pciutils working to run the script.

Edited to remove the pciutils Homebrew tap link; it's not working anymore.
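
In the meantime, once pciutils does build, similar information can be pulled directly - a sketch:

sudo lspci -tv   # tree view: slot -> PLX switch -> the four NVMe endpoints
sudo lspci -nn   # vendor/device IDs, to confirm the switch and all blades enumerate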
 