Happens with both x16 slots? Do you have another Mac Pro to check the card with?
Yea I just tried with slot 2 and these were the results. Don’t have another cMP to test with unfortunately.
The output of the switch is always x16.
Did you even read my post?
Yes, I read it fully - go back to my previous post, where I added what is wrong with yours.
Did you install setpci to check whether the PCIe slot is initializing correctly? @joevt has posts showing in detail what you have to look for.
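For reference, a rough sketch of the kind of check meant here, assuming you already have a working pciutils/setpci build on the cMP as described in joevt's posts - the 05:00.0 address is just a placeholder, use whatever lspci reports for the HighPoint card:

  # Show advertised (LnkCap) vs. negotiated (LnkSta) link width and speed for the card
  lspci -vv -s 05:00.0 | grep -E 'LnkCap|LnkSta'
  # Same thing via setpci: read the Link Status register in the PCIe capability.
  # Bits 3:0 = current speed (1=2.5GT/s, 2=5GT/s, 3=8GT/s), bits 9:4 = negotiated width.
  setpci -s 05:00.0 CAP_EXP+0x12.W

If the slot itself is healthy, the switch's host-facing port should come up as x16 at 5GT/s in slot 1 or 2 of a MP5,1.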
The output connection of the switch is always x16, independent of the number of blades - there is no splitting of the lanes at all. To be more precise, the output (host-facing) port of the switch will only negotiate fewer lanes when you install the card in a slot with fewer lanes available, like slots 3 or 4 of a MP5,1 or the x8 slots of a MP7,1. Look at the PLX diagrams and you will see what I'm talking about. There are no 1500MB/s connections at all; two 970 PROs can saturate the x16 PCIe v2.0 link, see the earlier benchmarks in the thread.
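Rough numbers to back that up, assuming ~3500MB/s sequential reads per 970 PRO (roughly Samsung's rated figure):

  # PCIe 2.0 runs 5GT/s per lane with 8b/10b encoding, i.e. 500MB/s per lane
  echo "x16 ceiling: $((16 * 500)) MB/s"   # 8000MB/s raw, roughly 5500-6000MB/s after protocol overhead
  # Two blades already offer 2 x ~3500 = ~7000MB/s, more than the upstream link can carry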
Sorry about the edits, the iOS 14 keyboard keeps disappearing and it drives me crazy.
So I migrated my OS to an 860 Pro and set up the four 970 Pros in RAID-0 on my 7101A, but the results are still poor. I don't know what else to do. Any additional thoughts/insight as to why this is happening? View attachment 934223
I'm having the same issue with iOS 14 so it's ok.
Back to the subject at hand - we now have 2 users, @zachek and @dorianclive, who have similar setups and are reporting the same speed drops.
I also had their setup when I first got the card but, after noticing the speed drops, decided to go with 2 single-blade setups - 1 for macOS and the other for Windows. My recommendation was and still is based on actual usage/experience.
The x16 output link, which I have already touched on several times, is shared by the 4 blades. When 1 blade is used standalone and the other 3 are in a RAID-0 setup, do you really think you will still be able to achieve 6000MB/s on the 3 blades when the fourth has also negotiated a link with the switch?
That's not supposed to happen.
Can you try 2 more tests, with 3-disk and 2-disk RAID-0 setups?
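In case it helps, a minimal sketch of how those test arrays could be built from Terminal with the built-in AppleRAID striping - the disk identifiers are placeholders (check diskutil list first) and this erases the blades. As far as I know diskutil does not expose a chunk-size option here, so a specific 64k stripe size would have to come from Disk Utility's RAID Assistant instead.

  # 2-blade RAID-0, HFS+ Journaled (disk identifiers are placeholders)
  diskutil appleRAID create stripe Test2 JHFS+ disk4 disk5
  # 3-blade RAID-0, HFS+ Journaled
  diskutil appleRAID create stripe Test3 JHFS+ disk4 disk5 disk6
  # Tear a test set down when done
  diskutil appleRAID delete /Volumes/Test2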
Unless it's being used simultaneously, probably yes - the star topology of the switch makes this possible. If no indexing is being done (mds takes a lot of bandwidth) and there is no heavy access on the boot disk, you can probably saturate the x16 PCIe link with just an array of two 970 PROs.
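If you want to take Spotlight/mds out of the equation while benchmarking, something along these lines should do it (the volume name is a placeholder):

  # Check whether the test volume is being indexed
  mdutil -s /Volumes/Test3
  # Turn indexing off for the duration of the run, then back on afterwards
  sudo mdutil -i off /Volumes/Test3
  sudo mdutil -i on /Volumes/Test3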
Something is very wrong with @dorianclive's hardware setup and it's not BootROM related - I reconstructed his BootROM recently with 144.0.0.0.0. Four 970 PRO blades in a RAID-0 array should get the full bandwidth of the x16 connection, yet he is getting the same lowball results as with 3.
It’s super frustrating. I appreciate the collective effort in helping me out! Do you think the card itself could be the culprit?
Maybe your PCIe slots have some damage, that’s why I asked about setpci. It’s not uncommon to have backplanes with not fully working PCIe slots, damaged by RX 480 and other GPUs that didn’t follow the 75W spec limit. Maybe it’s the new HPT firmwares…
Really? I have the Sapphire RX 580 that was recommended, though.
Yes, the reference RX 480 can use up to 97W from the PCIe slot, and several Mac Pros here now have not fully working PCIe slots because of it. It was a scandal back in the day and AMD had to totally re-design the Polaris power plane circuit. Dell didn't redesign the cards, but redesigned their workstation motherboards to support the out-of-spec power draw - the Dell cards are MP5,1 killers and are very common on the used market.
Pardon me if I'm not understanding you, but I have a 580, not a 480.
I'm just explaining why several MP5,1 here have damaged backplanes - these cards are the most common x16 PCIe slot killer, but other cards do the same. I'm not saying that it's your problem, just showing that it can happen.
Understood. Thank you.
Yep, don't enable any encryption at all, it will degrade your throughput even with a 2019 Mac Pro.
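A quick way to double-check that nothing encrypted sits in the test path (a sketch; adjust names to your volumes):

  # FileVault status of the boot volume
  fdesetup status
  # Per-volume encryption status for APFS volumes
  diskutil apfs list | grep -i -E 'filevault|encrypt'
  # CoreStorage (encrypted HFS+) volumes, if any exist
  diskutil cs list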
Interesting. I've never seen that version number. Would be curious to see what changes there are, if any.
I really want to get my RX 580 back into slot 1 so I can regain Slot 2. How’s your airflow?
Some thoughts....
What does system information say (see the Terminal sketch below)?
- for the NVMe disks
- for the HighPoint 7101A
What is the speed with one SSD formatted as HFS Journaled?
With a 64k stripe size:
- what is the speed with two SSDs formatted as HFS Journaled?
- what is the speed with three SSDs formatted as HFS Journaled?
Also, what CPU do you have? NVMe performance is CPU related.
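The System Information and CPU items can also be pulled straight from Terminal; a sketch of the equivalent commands:

  # NVMe blades as macOS sees them (model, negotiated link width and link speed)
  system_profiler SPNVMeDataType
  # Every PCIe device, including the HighPoint card, with slot, link width and link speed
  system_profiler SPPCIDataType
  # CPU model - NVMe/RAID throughput is CPU-bound on a cMP
  sysctl -n machdep.cpu.brand_string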
NVMe disks all report 8.0GT/s at x4 Link Width. All formatted HFS+. As for the HighPoint card, I'm not using any HighPoint drivers. Running tests now and will let you know the rest.
Here's my results:
One SSD HFS Journaled
View attachment 934376
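For a second opinion next to the benchmark app, a crude sequential test from Terminal (a sketch - the path is a placeholder and the test file is 8GB, so make sure there's room; the write number is approximate since it goes through the filesystem cache):

  # Sequential write, 1MB blocks
  dd if=/dev/zero of=/Volumes/OneSSD/ddtest bs=1m count=8192
  # Flush caches, then sequential read
  sudo purge
  dd if=/Volumes/OneSSD/ddtest of=/dev/null bs=1m
  rm /Volumes/OneSSD/ddtest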
It's one 970 PRO? This is too low a result. Get setpci working and let's see what is really wrong.
Yea, all the blades are 970 Pro 512GB. Are the instructions for setpci earlier in the thread?
Search for setpci from user @joevt:
This is what I'm getting at this point. When I run pcitree.sh, it's not reading out any results.
View attachment 934394
I'm at work and can't help you now - did you install pciutils? You need a working pciutils to run the script.