That's a known issue with the Alpine Ridge and Titan Ridge Thunderbolt chips; the older Intel chips were limited to about 22 Gbit/s. There is a new USB4 chip by ASMedia, the ASM2464PD, used in the Zike drive, which supports up to 32 Gbit/s, but that is only achievable on newer Intel chipsets; on Apple it's limited to about 25 Gbit/s. There are no PCIe enclosures that use the new ASMedia chip yet, but you could make one using a riser on the Zike drive.

Great info perplx. Out of interest, how do you know Apple limits it to 25 Gbps? Have you tested on an M1/M2?
 
Great info perplx. Out of interest, how do you know Apple limits it to 25 Gbps? Have you tested on an M1/M2?
That was the limit of the SSD test in the Reddit link. I have an ASMedia enclosure but haven't had time to test it.
 


Thanks to this thread I was emboldened to import a Mellanox ConnectX-4 (MCX4121A-ACAT) from China for cheap. Happy to report that my experiment was a success! It's plugged into a JHL6340-based PCIe-to-Thunderbolt adapter, also imported from China.

The card initially identified itself as a Huawei card and came with Huawei firmware. Using a Linux machine and the Mellanox firmware tools downloaded from NVIDIA's website, I was able to re-flash the card to the latest version of the Mellanox firmware. Interestingly, the card worked on my Macs with the mlx5 driver even with the Huawei firmware.

I've tested the card on my 16" M1 Pro, 16" M2 Pro and 15" Intel MacBook Pro from 2019. On my M2 machine running Ventura 13.5, running iPerf from my Mac to my server (connected to the network via a 10 Gb SFP+ card), I get a perfect, consistent 9.41 Gb/s. Running in reverse mode, I get the same. However, on my M1 machine running Sonoma 14.2, I only get 6-7 Gb/s. When running iPerf indefinitely, it can occasionally rise to full speed, only to quickly fall back down to 6-7 Gb/s. I can't figure out why. Increasing the MTU on my Mac and server did not help. In reverse mode, the M1 machine can run at full speed no problem, pulling 9.41 Gb/s from my server consistently.

Anyone know why this might be?

I also have an Intel X710-DA2 in my server, and out of curiosity I also tried that on my Macs, swapping the Mellanox into my server. The Intel card iPerf'd at 9.41 Gb/s in both directions on both the M1 and M2 machines, but was a little unstable. I was seeing occasional drops to ~3 Gb/s and sometimes even below 1 Gb/s on both Macs. I don't see this issue using the card in my server, so I'm guessing the Mac drivers for the Intel cards are a little shaky.
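In case anyone wants to reproduce the test, here is a rough sketch of the iPerf runs described above (the server address 192.168.1.10 is a placeholder, and iperf3 is assumed to be installed on both ends, e.g. via Homebrew):

Code:
# normal mode: the Mac sends, the server receives (upload from the Mac's point of view)
iperf3 -c 192.168.1.10 -t 30

# reverse mode: the server sends, the Mac receives (download)
iperf3 -c 192.168.1.10 -t 30 -R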
 

Thanks to this thread I was emboldened to import a Mellanox ConnectX-4 (MCX4121A-ACAT) from China for cheap. Happy to report that my experiment was a success! It's plugged into a JHL6340-based PCIe-to-Thunderbolt adapter, also imported from China.
Interesting. Why aren't you getting 25 Gbps though? The adapter supports 40 Gbps (about 25 Gb/s for data) and the card is 25 GbE.
 
Interesting. Why aren't you getting 25 Gbps though? The adapter supports 40 Gbps (about 25 Gb/s for data) and the card is 25 GbE.
I'm using my home server as the host for the iPerf test. Both devices are connected via a 10 Gb/s switch, and the server has a 10 Gb/s SFP+ card (Intel X710-DA2), hence the 10 Gb/s maximum transfer speed between the two machines.
 
Thunderbolt is 40 Gb/s total. 25 GbE is 25 Gb/s each way, i.e. 50 Gb/s total. You can only get (a little under) 20 Gb/s from a 25 GbE NIC in a Thunderbolt enclosure.
 
Thunderbolt is 40 Gb/s total. 25 GbE is 25 Gb/s each way, i.e. 50 Gb/s total. You can only get (a little under) 20 Gb/s from a 25 GbE NIC in a Thunderbolt enclosure.
Understood. The issue is that I'm not even getting 10 Gb/s one way (upload) on my M1 Pro machine. It can do 10 Gb/s download, as well as simultaneous 7 Gb/s upload and 10 Gb/s download, so it's not a Thunderbolt bandwidth issue.

My M2 Pro can do 10 Gb/s upload and simultaneous 10 Gb/s upload/download no problem using the same setup.
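For comparing notes, the simultaneous upload/download figures above can be reproduced with iperf3's bidirectional mode (a sketch; requires iperf3 3.7 or newer, and the server address is a placeholder):

Code:
# run upload and download streams at the same time
iperf3 -c 192.168.1.10 -t 30 --bidir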
 
I'm having a similar problem on Sonoma 14.2.1 with an M1 mini, using an Intel 540…

I also have an Aquantia to test with but that's gonna be a pain :(

I’ll have to test with iperf
 
So I'm using IXGBE and I can't set the MTU above 2034; it's hard-coded...
iperf performance is 6.79 Gb/s max
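For what it's worth, the MTU ceiling can be confirmed from the terminal (a sketch; the interface name en5 is an assumption, check ifconfig for the real one):

Code:
# show the current MTU for the NIC
ifconfig en5 | grep mtu

# try to raise it; per the limit described above, values over 2034 should be rejected
sudo ifconfig en5 mtu 9000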
 
I obtained this kext from Ventura beta1. At that time, the driver was still a kext instead of a dext. You can adjust the MTU using the command ‘networksetup -setMTU en4 9000’ if the NIC is driven by the kext.

If you have a Hackintosh, follow these steps:

  1. Disable SIP and add the boot-arg dk.mlx5=0 to block the mlx dext driver.
  2. Load the attached mlx kext through OpenCore, reboot and use the command to adjust the MTU; it should work.
  3. Use ‘ifconfig’ to check.
The kext might work on Apple Silicon as well, since it still has Apple's signature. I have a rough idea, but I haven't tried it (a consolidated command sketch follows the list):

  1. Disable SIP.
  2. Enter ‘sudo nvram boot-args=dk.mlx5=0’ to block the mlx dext.
  3. Place the attached kext in /Library/Extensions, but it may not work; reboot the Mac and give it a try.
  4. If step 3 fails, use `kmutil` to load the kext directly.
  5. Verify the kext is loaded using `kextstat | grep "AppleEthernet"` and ensure mlx is displayed.
  6. Use the command to adjust the MTU.
  7. In the terminal, enter ‘ifconfig’ to check the MTU.
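Putting those steps together, a rough, untried command sketch for Apple Silicon (the kext name and the interface en4 are taken from this thread; the exact kmutil invocation may need adjusting):

Code:
# block the DriverKit (dext) mlx5 driver
sudo nvram boot-args="dk.mlx5=0"

# after copying the kext into /Library/Extensions and rebooting,
# try loading it directly if it did not load automatically
sudo kmutil load -p /Library/Extensions/AppleEthernetMLX5.kext

# confirm the kext is loaded
kextstat | grep AppleEthernet

# raise the MTU and verify
sudo networksetup -setMTU en4 9000
ifconfig en4 | grep mtu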
Hi xjn819,

Very exciting to see this thread. I own a 40G NIC (MCX4131A-BCAT) but failed to load it using your method. Just wondering, should I use an external dock like a JHL7440 or ATTO, or just insert it into the Hackintosh's PCIe slot?

My macOS has been upgraded to Sonoma 14.3 but it feels unstable; maybe I will reinstall, downgrade to Ventura and have a try.

Your comment is appreciated, thank you.
 
Finally I got the MCX4131A-BCAT and it worked one time on a Hackintosh. But then I rebooted the computer and it showed as disconnected. I can't find the issue, since I had done nothing to it. It looks like it has not loaded the drivers. Does anyone know what the issue is and how to solve it? View attachment 2205109
Hi markfrog,

Did you insert the MCX4131A-BCAT into the Hackintosh's PCIe slot, and are you using the Ventura version of macOS? Just wondering, have you upgraded to Sonoma yet, and does the NIC still work? Thanks.
 
40G CX4131 tested on macOS Ventura; the native driver just works.

Hi daemonpw,

May I know which Thunderbolt dock you use? Just to double-confirm: you are using the MCX4131 with a Thunderbolt dock on Ventura macOS, is that correct?

My situation is: I inserted the MCX4131A directly into the Hackintosh's PCIe slot and loaded AppleEthernetMLX5.kext (the one xjn819 found), but no luck. I have seen many discussions mentioning that Apple has native support for the CX4 from Ventura onwards, but I still can't get it working, so I am wondering whether it needs a Thunderbolt dock to drive it. If yes (Thunderbolt OK, PCIe no good), I'd like to buy a JHL7440 and have a try.

Just to clarify, thanks. Any comment is appreciated.
 
@bangkuo Ventura has support for the CX4 from DriverKit, which requires Apple VT-d. So the key point for a Hackintosh is to have VT-d enabled (DisableIoMapper: false).
 
@bangkuo Ventura has support for the CX4 from DriverKit, which requires Apple VT-d. So the key point for a Hackintosh is to have VT-d enabled (DisableIoMapper: false).

Finally it works! Thanks etorix.

It still failed after I enabled VT-d in the BIOS, but I found something different when checking with ifconfig and IORegistryExplorer. I tried many different combinations of settings; here is the result:

1) BIOS - Enable VT-d.
2) OpenCore - Uncheck DisableIoMapper; Check DisableIoMapperMapping; Remove AppleEthernetMLX5.kext; Remove boot-args=dk.mlx5=0.

The result is simple but it took me a lot of time. Thank you all, and thanks to this thread; it really helped me very much!
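For reference, here is a sketch of how those two OpenCore quirks could be set from the terminal with PlistBuddy (the config.plist path is an assumption; many people edit the file in a plist editor instead):

Code:
# DriverKit mlx5 needs Apple VT-d: DisableIoMapper false, DisableIoMapperMapping true
/usr/libexec/PlistBuddy -c "Set :Kernel:Quirks:DisableIoMapper false" /Volumes/EFI/EFI/OC/config.plist
/usr/libexec/PlistBuddy -c "Set :Kernel:Quirks:DisableIoMapperMapping true" /Volumes/EFI/EFI/OC/config.plist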

 
Congratulations! You now have native support from DriverKit.
DisableIoMapperMapping:true is not "something different"; it is a complement to DisableIoMapper:false in some cases. (The name made sense to developers, but I fear it is rather obscure to end users.)
 
I would like to try to get the mlx5 driver to work and could use some help. I haven't used a Mac since about... 1985... so I'm extremely inexperienced with macOS.

I'm running Ventura in a VM (qemu/libvirt) and am using the virtio driver for the network interface. Works fine. The VM host has a ConnectX-4 in SR-IOV mode and I'm using the virtual function (VF) interfaces in my other VMs (linux/windows). A VF interface appears like a regular PCI device with the same Mellanox vendor id (0x15b3) but with a different product id (0x1014) instead of the primary (physical function / PF) id (0x1013). My other VMs all detect it as a Mellanox nic but they are aware that it's a VF and not a PF.
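In case it helps anyone compare notes, here is a rough sketch of how the PF/VF PCI IDs can be checked and VFs created on the Linux VM host (the interface name enp3s0f0 and the VF count are placeholders):

Code:
# list Mellanox PCI functions with [vendor:device] IDs; the PF shows 15b3:1013, VFs show 15b3:1014
lspci -nn | grep -i mellanox

# create two virtual functions on the physical function
echo 2 | sudo tee /sys/class/net/enp3s0f0/device/sriov_numvfs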

Ventura doesn't seem to be loading the driver. Is there something I need to do to enable the MLX5 driver? Or is it automatic?

Here's what 'System Information' shows me:

View attachment 2202399

I'm wondering if perhaps the driver isn't loading because it doesn't know how to init VFs... or if it has something to do with the fact that it's being detected as 'ExpressCard'.

I saw mention of the class code but don't see where I can see that.

Any assistance would be greatly appreciated. Thanks!
Were you able to sort it out? Did anyone succeed in running their Mellanox as a VF inside a VM? Or an Intel, perhaps?

EDIT: I tested passing through a PF and it also doesn't work on my end. There's something more serious going on here.
 
Last edited:
But no network iface :(. Checking the kernel logs:

Code:
(com.apple.DriverKit-AppleEthernetMLX5.dext) mlx5: QUERY_HCA_CAP : type(0) opmode(1) Failed(-536870212)
(com.apple.DriverKit-AppleEthernetMLX5.dext) mlx5: failed to handleHCACap in load:111
I don't know if this is the driver though or a problem with the card itself. I'll post again when I know more.
Did you investigate this any further? I am also on a Hackintosh with a ConnectX-4 Lx and seeing the same exact issue. The card works just fine with other VMs. It's also supposed to work natively with mlx5 under macOS. I did not need to spoof the device ID. So if anything, the fact that we're both having the same issue would indicate it's something to do with the Hackintosh-ing.

Do you run your Hackintosh on bare metal or in a hypervisor? I am running mine in Proxmox, passing through the PF (physical function) in this case.
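To dig further on the macOS side, the dext's messages can be pulled from the unified log (a sketch; the mlx5 filter string matches the log lines quoted above):

Code:
# show recent mlx5 driver messages from the unified log
log show --last 30m --predicate 'eventMessage CONTAINS "mlx5"'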
 
Does anyone know if the CX4111A-ACAT can work? I don't need dual ports and the CX4111A is cheaper.
 
Hi, I have just ordered a Mellanox MCX354A-QCBT card, but then I noticed that where you mention the QCBT model the text is struck through. Does this mean that I would not be able to get this card working?
 
Just to confirm... plain old Intel cards are working with no added drivers. Specifically, I'm running a Mac Studio (M1 Ultra) with the Sonoma 14.0 beta (build 23A5337a, public beta 6). I put an Intel XL710 (40 GbE, full model number XL710-QSR1) in a Sonnet Echo Express SE IIIe Thunderbolt chassis. The MTU looks to be limited to a range of 1280-2034. It's going to be talking to a FreeBSD server with an identical card, direct-connected with fiber. All I've done so far is ping, so I have no idea of performance. I know Thunderbolt is not going to let the 40 GbE card get anywhere near full performance, but I found these cards and the fiber on eBay for pretty cheap ($150/ea for the cards including Intel transceivers, so I just went ahead and didn't bother with 25 GbE cards).
Is this still working on Sonoma? I have a 7,1 with an XL710 and I'm getting this:
(screenshots attached)
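For the XL710 setup quoted above, a quick way to sanity-check the link and that 2034-byte MTU ceiling is a don't-fragment ping sized just under and just over it (a sketch; the peer address is a placeholder, and payload size = MTU minus 28 bytes of IP/ICMP headers):

Code:
# 2006-byte payload + 28 bytes of headers = 2034 bytes: should succeed if the MTU is 2034
ping -D -s 2006 -c 3 192.0.2.1

# one byte larger: should fail (e.g. "message too long") if the MTU really caps at 2034
ping -D -s 2007 -c 3 192.0.2.1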
 