
chrissomos

macrumors member
Original poster
Feb 16, 2018
57
29
Hello folks. I am having an issue with network speeds involving a cMP 5,1 and I'm hoping someone can help. I am sharing an NVMe volume over the network from my cMP, along with some other volumes. When sharing the NVMe drive I am expecting 800-1000 MB/s (similar to others on this forum with a fast NAS), but I am getting an average of ~350 MB/s. The drive is capable of 3500 MB/s write and 4500 MB/s read. I am testing transfers in both directions using a MacBook Pro, Blackmagic Disk Speed Test, and by copying a 10 GB file while monitoring the network transfer speed with various tools. Speed is the same in both directions. The MacBook's SSD is the faster one; it writes at about 1000 MB/s and reads at about 1200 MB/s.

The network adapters are Aquantia controllers with native macOS support (a PCIe card in the cMP, and a Sonnet Thunderbolt 2 10G adapter for the 2015 MacBook Pro). The switch is a dumb Netgear XS505M, which supports jumbo frames up to 9000. The network adapters on both Macs are set manually to full duplex (no flow control) with 9000-byte jumbo frames, per Sonnet's documentation for optimizing the adapter in macOS. The share is SMB, and I have disabled SMB signing on both machines. The cable is Cat 8 and the runs are only ~10 feet from the switch. The switch and the adapters both indicate a 10G link. Not sure what else to do here. How are you guys getting better speeds?
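(For reference, a minimal sketch of how those two settings are usually applied from Terminal on macOS; "en1" is just a placeholder for whatever device name the 10G adapter has on each machine:)

  # Disable SMB client signing (Apple's documented /etc/nsmb.conf method)
  printf "[default]\nsigning_required=no\n" | sudo tee /etc/nsmb.conf

  # Set 9000-byte jumbo frames on the 10G interface and confirm the value took
  sudo networksetup -setMTU en1 9000
  networksetup -getMTU en1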
 

AlexMaximus

macrumors 65816
Aug 15, 2006
1,233
577
A400M Base
There are a couple of issues you can check. #1: your SSD's PCIe slot bandwidth bottleneck. First make sure your PCIe adapter card is in slot 2, just above your GPU. That is the only fast slot you have apart from the GPU slot.
However, this will only get you up to a certain level (1350-1450 MB/s).
To get full NVMe speeds (2500+ MB/s) you need a different PCIe adapter card. A Sonnet, I/O Crest, or Amfeltech card will do. These are PCIe bridge cards that double the usable bandwidth of the slot. I have the I/O Crest card and it's the least expensive one that works; it's a dual card. Only at that level does the quality of your NVMe SSD come into the mix, and there are some speed differences there as well (a regular NVMe drive vs. a 970 Pro). So if your current NVMe is in slot 3 or 4, move it to slot 2 and test your system. Since your network adapter most likely needs a fast slot as well, you may consider moving your GPU to slot 3 (which will block slot 4) and putting the NVMe in slot 1 and the network adapter in slot 2. If that doesn't show higher speeds, consider the bridge card. If none of the above affects your speed, the problem could be somewhere else.
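(If it helps, a quick way to confirm which slot a card ended up in and what PCIe link it actually negotiated, rather than going by the slot labels, is the PCI section of System Information, or from Terminal:)

  # Lists each PCIe card with its slot, link width, and link speed
  system_profiler SPPCIDataType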
 

chrissomos

macrumors member
Original poster
Feb 16, 2018
57
29
Thanks for the reply. The speed of the SSD shouldn't be the bottleneck. The share is on a RAID 0 array of 3 NVMe SSDs on a PCIe bifurcation card (I/O Crest, I think), which can pull about 3500 MB/s write and 4500 MB/s read in the Blackmagic or AJA disk speed tests. The card is in slot 2, above the GPU. The SSDs are the cheap Intel ones, but since the computer is only PCIe 2.0, the SSDs are fast enough to saturate the connection.
 

riggieri

macrumors member
Apr 30, 2012
41
9
Install Homebrew ("brew") and get iperf3. Run iperf3 between the computers and see what your speeds are.
What OS version? High Sierra needs some tweaks to get full 10G speed.
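(A rough sketch of that sequence; the server address below is just an example:)

  # Install Homebrew, then iperf3, on both machines
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  brew install iperf3

  # On the Mac Pro: start an iperf3 server
  iperf3 -s

  # On the MacBook Pro: run the test toward the Mac Pro, then again in reverse (-R)
  iperf3 -c 192.168.1.10 -t 30
  iperf3 -c 192.168.1.10 -t 30 -R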
 

chrissomos

macrumors member
Original poster
Feb 16, 2018
57
29
My bad. I am running 10.14.6 on the Mac Pro and the most recent Big Sur (can't remember the build) on the MacBook. I have heard of iperf and I will try that. May I ask what Brew is?
 

Soba

macrumors 6502
May 28, 2003
451
702
Rochester, NY
Just to verify, what is the version of the Boot ROM in your Mac Pro? You can find this in the Hardware Overview screen within the System Information app.
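(It can also be read from Terminal, if that is quicker:)

  # Prints the Boot ROM / firmware version as part of the hardware overview
  system_profiler SPHardwareDataType | grep -i "rom"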
 

chrissomos

macrumors member
Original poster
Feb 16, 2018
57
29
Thanks for the reply. I am running Boot ROM 144.0.0.0.0. Isn't that the latest?
 

Soba

macrumors 6502
May 28, 2003
451
702
Rochester, NY
That is indeed the latest. I asked because version 138.0.0.0.0 introduced major PCIe performance improvements and I wanted to make sure your system was taking advantage of those. Since you're on 144.0.0.0.0, that is definitely not the source of your issue.
 

two-mac-jack

macrumors member
Mar 25, 2016
38
17
Part of your performance problem may be with your Netgear XS505M switch and your MTU settings.

Check out my post on experiences with a similar setup.

My initial problem was that some of those switches had a firmware bug and did not support jumbo frames -- check your serial number.

Netgear sent me a replacement, and it does support jumbo frames, but only up to an MTU of 8870 -- not the 9000 you are using. I did a lot of testing with iperf3, and 8870 was as high as I could go. It's possible that Netgear has made further firmware updates to fix this, but maybe not.

With El Capitan (10.11.6) I was seeing 700+ MB/sec throughput in a Blackmagic disk speed test using SMB without signing. I have since upgraded to Mojave (10.14.6) on 2 of the 3 machines, but I have not measured their 10g performance after the upgrade.
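(One way to check what jumbo frame size actually survives a given switch end to end is a don't-fragment ping; the ICMP payload is the MTU minus 28 bytes of IP/ICMP headers, so for an 8870 MTU something like the following, with the address being just an example:)

  # -D sets the don't-fragment bit, -s sets the payload size (8870 - 28 = 8842)
  ping -D -s 8842 192.168.1.10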
 

chrissomos

macrumors member
Original poster
Feb 16, 2018
57
29
Hey two-mac-jack, thanks for the tip. I checked my serial number and it is not affected. In your testing, if you set the frame size above 8870, did the performance degrade or just cap out?
 

two-mac-jack

macrumors member
Mar 25, 2016
38
17
My notes (from 2018) just say it "failed" -- don't remember where I saw the errors, but probably iperf3. I presume the failure was some dropped packets, which would mean a performance degradation due to re-transmission -- but I did not measure it. My notes say the test worked at 1500 (non-jumbo), so I just kept increasing MTU size until I narrowed down that it failed at 8872 but worked at 8870.

Then I found this in the Smalltree driver documentation:

NOTE: In order to work around a hardware bug with flow control, the adapter should be configured with a Maximum Packet Size (MTU) that is “2 modulus 4”, meaning some value that is two bytes more than an even multiple of four bytes. For example, use an MTU of 8998 bytes instead of 9000 bytes. Failure to do so could cause the adapter to drop a few packets every 1-2 minutes under heavy load.

And 8870 is 2 mod 4, so that's where I set my MTUs for my Intel NICs.
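(Put another way, pick an MTU that leaves a remainder of 2 when divided by 4; 8870 and 8998 qualify, while 9000 does not:)

  # 8870 = 4*2217 + 2 (OK), 8998 = 4*2249 + 2 (OK), 9000 = 4*2250 + 0 (not "2 mod 4")
  echo $((8870 % 4)) $((8998 % 4)) $((9000 % 4))   # prints: 2 2 0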
 

HDFan

Contributor
Jun 30, 2007
7,290
3,342
I am expecting 800-1000 MB/s (similar to others on this forum with a fast NAS) but I am getting an average of ~350 MB/s.

Don't know if this is related, but on Big Sur I am working a similar issue with Apple, Netgear and QNAP.

iperf3 running from my iMac Pro to my QNAP NAS shows only ~270 MB/s:

[ 5] 0.00-1.00 sec 258 MBytes 2.16 Gbits/sec

whereas going the other way, QNAP to iMac, it is closer to the theoretical limit of 1.2 GB/s:

[ 5] 0.00-10.00 sec 11.1 GBytes 9.56 Gbits/sec 256

We bypassed the switch, connecting the iMac directly to the QNAP. Same results: Mac-to-QNAP transfers are almost 4 times slower than the other direction.

This has been referred to QNAP development. Apple told me to get back to them once I get a QNAP analysis.
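(For anyone reproducing this kind of asymmetry, iperf3's reverse mode measures both directions from the Mac side without touching the NAS; the address below is a placeholder:)

  # Mac sends to the NAS
  iperf3 -c 192.168.1.20 -t 30
  # NAS sends to the Mac (-R reverses the direction)
  iperf3 -c 192.168.1.20 -t 30 -R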
 

two-mac-jack

macrumors member
Mar 25, 2016
38
17
When using unsigned SMB without jumbo frames (MTU 1500) I was getting 275 MB/s write and 280 MB/s read. Using jumbo frames with MTU 8870 I was getting 725 MB/s write and 712 MB/s read. This was with the Blackmagic disk test mounting a remote PCIe SSD.

Just using iperf3 to measure raw throughput, the difference was not that much -- at 1500 MTU I got 8.99 Gbits/sec, and with MTU at 8870 I got 9.46 Gbits/sec.

I have a Netgear XS505M switch connecting two 2010 cMP 5,1 machines and one 2006 cMP 1,1 using Intel X540-T2 cards and Cat 6 cable (with two long runs of 50 feet and 70 feet). When I ran the test, all machines were on El Capitan; the 2010 machines are now running Mojave.
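(As a sanity check on those numbers: 9.46 Gbit/s of raw TCP throughput is roughly 1.18 GB/s, so the ~712-725 MB/s seen through Blackmagic reflects SMB and filesystem overhead rather than the network itself:)

  # Convert Gbit/s to MB/s (decimal): multiply by 1000, divide by 8
  echo "scale=2; 9.46 * 1000 / 8" | bc   # 1182.50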
 
Last edited:

chrissomos

macrumors member
Original poster
Feb 16, 2018
57
29
So I installed iperf3 on a set of Macs in my house and I am only getting about 6 gigabits per second. Actual network SSD speeds are less than that. Looks like some further testing and tuning is needed.
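(The tunables most often suggested for squeezing more out of 10GbE on macOS are the TCP delayed-ACK setting and the socket buffer sizes; the values below are only a commonly cited starting point, not guaranteed fixes, and they reset on reboot. It is also worth re-running iperf3 with parallel streams, e.g. iperf3 -c <server-ip> -P 4, to see whether a single TCP stream is the limit.)

  # Disable delayed ACKs (frequently recommended for 10GbE on macOS)
  sudo sysctl -w net.inet.tcp.delayed_ack=0

  # Raise the maximum socket buffer and the default TCP send/receive buffers
  sudo sysctl -w kern.ipc.maxsockbuf=8388608
  sudo sysctl -w net.inet.tcp.sendspace=2097152
  sudo sysctl -w net.inet.tcp.recvspace=2097152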
 

walkin

macrumors newbie
Jun 7, 2021
17
25
Did you ever resolve this? I'm getting the same speeds in iperf3 with an OWC 10G PCIe card (Marvell AQC113CS) in a Thunderbolt 2 enclosure on my Mac Pro 6,1 under Monterey. When I put the card in my Intel NUC 12 Extreme, it works at nearly 10 gigabits. I also have another 10G card that only worked up to Mojave, and it also gets nearly 10 gigabits in the same enclosure under the older system, so I'm at a loss as to the cause of the slow speed with the new one.
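(Two things worth checking there: whether the Ethernet link itself negotiated 10G, and what PCIe link the card got inside the Thunderbolt enclosure. A sketch, with en7 as a placeholder for the adapter's device name:)

  # The "media" line should report a 10G full-duplex link
  ifconfig en7 | grep media

  # Shows the card's negotiated PCIe link width/speed and the Thunderbolt chain
  system_profiler SPPCIDataType SPThunderboltDataType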
 