I'm happy to report another success with Intel X540-T2 cards -- thank you, Squuiid!!
I searched carefully to find cards with the blue "YottaMark" hologram sticker on the back. I saw tons of fakes on Amazon and eBay -- if you see a card pictured with a fan mounted on the board, it is not a genuine Intel card (it may or may not work with this procedure).
I had 3 cards to convert, so I installed Ubuntu 18.04.1 LTS on a spare SSD, and it worked perfectly. My cards' interfaces had different names (ens4f0 and ens4f1), but other than that everything looked exactly the same as described by Squuiid.
My commands were:
sudo ethtool -E ens4f0 magic 0x15288086 offset 0x48e value 0x0a
sudo ethtool -E ens4f1 magic 0x15288086 offset 0x48e value 0x0a
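In case the magic number looks mysterious: if I understand the ixgbe driver right, ethtool's EEPROM-write magic is just the card's PCI device ID in the high 16 bits and the vendor ID in the low 16 (both visible in lspci -nn). A quick sketch:

```python
# ethtool -E "magic" for ixgbe-family cards: PCI device ID in the high
# 16 bits, PCI vendor ID in the low 16. 0x1528 is the X540, 0x8086 is Intel.
device_id = 0x1528
vendor_id = 0x8086
magic = (device_id << 16) | vendor_id
print(hex(magic))  # 0x15288086
```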
Tested with iperf3, and using standard 1500 MTU I got 8.99 Gbits/sec between two machines through a 10g switch, and with MTU set to 8870 I got 9.46 Gbits/sec.
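Those numbers line up pretty well with back-of-the-envelope TCP goodput. A rough sketch of the theoretical ceiling (ignoring TCP options and ACK traffic, so real numbers will come in a bit lower):

```python
# Rough theoretical TCP goodput over Ethernet: payload / (payload + overhead).
# Per frame: 40 B of IPv4+TCP headers (no options), plus 38 B of Ethernet
# framing (preamble 8 + header 14 + FCS 4 + inter-frame gap 12).
def goodput_gbps(mtu, line_rate_gbps=10.0):
    payload = mtu - 40            # TCP payload per frame
    wire = mtu + 38               # bytes actually on the wire per frame
    return line_rate_gbps * payload / wire

print(round(goodput_gbps(1500), 2))   # 9.49
print(round(goodput_gbps(8870), 2))   # 9.91
```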
Blackmagic test results mounting a 960 EVO blade remotely with AFP were 680 MB/sec write and 880 MB/sec read. With SMB (signing turned off), results were 725 MB/sec write and 712 MB/sec read. I did not try NFS.
===
I can also provide some info about the Akitio NBASETNC-A01, which I tried first. Their website said the driver would work with El Capitan 10.11.x, which I need to stay on until a project is finished. But it failed on my MacPro5,1 running 10.11.6 -- no PCI Ethernet port ever appeared in the Network tab of System Preferences. After some impressively quick investigation by Akitio tech support, I got a note from a developer saying that 10.11.6 uses an older version of the KPI library than the Tehuti driver requires for 5-speed operation. They have since updated the system requirements on their web page to say it needs at least 10.12.5 for use with the cMP. I had High Sierra 10.13.6 on a spare SSD, so I booted it up and can verify that the Akitio does work with Tehuti driver v1.52. I wasn't able to test performance between two machines, but the lights on the card and the switch both reported establishing a 10g link.
I'm networking together 3 cheesegraters (cMPs), with one of them acting as a file server. I already had cat6 riser cable (this stuff) connecting all 3 machines to a 1g switch through wall plates; the max cable run is about 70 feet. So I upgraded to 10g with the little Netgear XS505M switch, got 3 Intel NICs running under 10.11.6, and I'm seeing terrific speed between the boxes. Large-file performance is improved considerably by turning on jumbo frames, and dramatically by turning off SMB signing.
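For anyone wondering how to turn off client-side SMB signing on the Macs: the usual knob is an /etc/nsmb.conf entry (this is documented in Apple's support note on SMB packet signing -- double-check it against your OS version):

```
# /etc/nsmb.conf -- disable SMB client packet signing
[default]
signing_required=no
```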
The XS505M was plug-and-play, with one hiccup: after I enabled jumbo frames on the NICs I saw a DECREASE in speed! I contacted Netgear, and I was unlucky enough to have one from a batch that doesn't support jumbo frames. They RMA'ed me a replacement, and it works fine -- my ping tests show it passes packet sizes up to 8870.
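A note for anyone repeating the ping test: the -s payload has to leave room for the IP and ICMP headers, and you want the don't-fragment flag (-D on macOS ping) so oversized packets fail instead of being silently fragmented. The arithmetic:

```python
# Largest unfragmented ping payload for a given MTU: subtract the 20-byte
# IPv4 header and the 8-byte ICMP header from the packet size.
# On macOS:  ping -D -s <payload> <host>    (-D sets don't-fragment)
def ping_payload(mtu):
    return mtu - 20 - 8

print(ping_payload(8870))  # 8842
print(ping_payload(1500))  # 1472
```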
===
Wake-on-LAN does NOT appear to be supported by the Intel X540-T2 on the cMP. I could not wake a remote machine from sleep by invoking Go > Connect to Server to mount a drive, and I'm pretty sure that used to work over 1G with the built-in Ethernet adapter. I also tried the wakeonlan.py script with no luck.
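For anyone curious what such a script actually sends: a Wake-on-LAN "magic packet" is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, typically UDP-broadcast to port 9. A minimal sketch (the MAC below is a made-up placeholder -- use the sleeping machine's real MAC):

```python
import socket

def magic_packet(mac):
    # 6 bytes of 0xFF, then the 6-byte MAC repeated 16 times (102 bytes total).
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:11:22:33:44:55")  # placeholder MAC
print(len(pkt))  # 102
```

Of course, no packet format helps if the NIC's EEPROM/driver combination never arms wake support in the first place, which seems to be the situation here.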