I was more thinking about the DP return to the system via the MPX slot. My understanding is that the MPX bay is DMI x16 to exchange data with the GPU, and that the second set of pins reroutes the second PCI slot in an MPX bay (Slots 2 & 4) when an MPX GPU is connected, to supply TB peripheral bandwidth and bring DP-Out back into the system via the switched A & B PCI pools.
Like I said, DisplayPort is completely separate from PCIe. MPX slots have separate lines for the DisplayPort signals that are sent to the Thunderbolt controllers for the top ports and the I/O card.
MPX doesn't use DMI. DMI is the connection from the CPU to the PCH. The PCH has the other devices that aren't connected to the PCIe switch (SATA, NVMe, Ethernet, Wi-Fi, USB, etc.).
Slot 8 can't be changed off Pool B, but switching the x16 Afterburner to Pool B sets that pool to 150%.
That's logical. If Pool B is 150% then Pool A is reduced to 0%?
I think what I'm unclear on is whether the PCIe switch actually works to manage a greater number of lanes than the processor supports, especially if things are idle. For example, I'm not using any displays plugged into the top or I/O cards (all my displays are via Type-C to DP connectors), so do the lanes that would be supplying Slot 2 (which AFAIK are assigned to the MPX DP-Return) get returned to the pool?
The CPU has 64 lanes: 16 for Slot 1, 16 for Slot 3, 16 for Pool A, and 16 for Pool B. This is on page 11 of the Mac Pro White Paper.
Pool A and Pool B are controlled by the 96-lane PCIe switch. Since 32 lanes are used for the upstream connections (16 for Pool A and 16 for Pool B), 64 lanes remain for the downstream slots and devices.
The Mac Pro therefore has 128 total usable lanes controlled by 64 lanes from the CPU.
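A quick tally of that accounting, just restating the white paper numbers summarized above (a rough sketch, not anything official):

```python
# Rough tally of the 2019 Mac Pro lane accounting described above.
cpu_lanes = {
    "Slot 1": 16,
    "Slot 3": 16,
    "Pool A uplink": 16,
    "Pool B uplink": 16,
}
print(sum(cpu_lanes.values()))  # 64 -> total PCIe lanes from the CPU

switch_total = 96
switch_upstream = cpu_lanes["Pool A uplink"] + cpu_lanes["Pool B uplink"]  # 32
switch_downstream = switch_total - switch_upstream
print(switch_downstream)  # 64 -> lanes left for the pooled slots and devices
```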
A display does not use PCIe lanes unless it is a Thunderbolt display with PCIe devices (USB controller, Ethernet controller, ...).
If things are idle then it doesn't matter whether a pool is at 100%, 200%, or 300%. Over-allocation becomes a problem only if you happen to be sending > 126 Gbps in one direction at any given moment.
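For reference, that ~126 Gbps figure is just the usable rate of the PCIe 3.0 x16 uplink each pool has (8 GT/s per lane with 128b/130b encoding); a quick sanity check:

```python
# Where the ~126 Gbps per-pool ceiling comes from: a PCIe 3.0 x16 upstream link.
lanes = 16
raw_gt_per_s = 8.0        # PCIe 3.0 transfer rate per lane
encoding = 128 / 130      # 128b/130b line encoding
usable_gbps = lanes * raw_gt_per_s * encoding
print(f"{usable_gbps:.1f} Gbps per direction")  # ~126.0
```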
An MPX module in slot 1 will reassign the PCIe lanes of slot 2 to the Thunderbolt controllers of the MPX module. I'm not sure if slot 3 is the same - slot 4 has 16 lanes, so does it get changed to x8? The White Paper says slot 4 gets disabled.
The Radeon Pro 580X MPX and Radeon Pro W5500X modules don't have any Thunderbolt controllers and are only double wide, so they shouldn't affect slot 2 or slot 4.
Only the quad wide MPX modules have Thunderbolt controllers.
As I understand it, for a TB-equipped MPX GPU, TB Bus 0 covers the HDMI, TB Port 1, the top ports, and the I/O card. Buses 1 & 2 are the 4 remaining TB ports on the card. So Bus 0 should account for x4 lanes.
I don't know what the bus numbers are. Each Thunderbolt controller is a separate Thunderbolt bus. There can be between 2 and 6 Thunderbolt buses. Each Thunderbolt bus has two Thunderbolt ports.
- I/O card
- Mac Pro top Thunderbolt ports
- 1st Thunderbolt controller of MPX module in slot 1
- 2nd Thunderbolt controller of MPX module in slot 1
- 1st Thunderbolt controller of MPX module in slot 3
- 2nd Thunderbolt controller of MPX module in slot 3
HDMI is separate from the Thunderbolt buses. What you're mixing up here are the DisplayPort outputs of the GPU. A GPU has up to 6 DisplayPort outputs. A Thunderbolt bus has 2 DisplayPort inputs.
The W5700X has a switch (mux) for one of the DisplayPort outputs of the GPU. The switch routes that DisplayPort output either to a DisplayPort-to-HDMI converter or to a DisplayPort input of one of the Thunderbolt controllers. In this case there are 7 display outputs to choose from, but only 6 are usable, because the GPU has only 6 DisplayPort outputs and one of them is switched.
So there are 32 DMI lanes for MPX GPUs, leaving 32 CPU lanes distributed across a potential 60 lanes on the non-DMI slots, all managed through a 96-lane switch.
Not DMI. They are all PCIe lanes. 64 lanes from the CPU: 32 to the PCIe switch, 32 to slot 1 and slot 3. There are 64 downstream lanes from the PCIe switch, but the white paper only shows 56 of them.
Currently I have 16 for the Afterburner (100% of Pool A) and 4 for the I/O card (50% of Pool B) - theoretically that should leave 12 CPU lanes remaining...
I don't know if the I/O card is using 50% or 25%. Disconnect it to find out. Remember there are also PCIe lanes going to the Thunderbolt controller for the Mac Pro's top Thunderbolt ports.
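If the percentage shown in Expansion Slot Utility is simply the allocated downstream lanes divided by the pool's 16 upstream lanes (that's an assumption on my part, not something the white paper spells out), the arithmetic would be:

```python
# Assumption: pool % in Expansion Slot Utility = allocated lanes / 16 upstream lanes.
def pool_percent(allocated_lanes, upstream_lanes=16):
    return 100 * allocated_lanes / upstream_lanes

print(pool_percent(16))     # Afterburner at x16 on Pool A -> 100%
print(pool_percent(4))      # I/O card alone at x4 -> 25%
print(pool_percent(4 + 4))  # I/O card plus the top Thunderbolt ports, x4 each -> 50%
```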
Yeah, I mean it's sitting there using 16 lanes, and is completely idle... it really does irk me that Apple didn't make it dynamically reconfigurable. "It's a software-reconfigurable FPGA, for which we'll never offer a reconfiguration."
If it's idle then it's not using bandwidth and doesn't affect anything. All the bandwidth can be used by something else while it's idle.
I can't recall if it's been discussed here previously, but I'm still unclear on how the 96-lane PCIe switch functions to manage bandwidth over 32 physical lanes. As in, did they use a 96-lane switch because there wasn't a 32-lane one of sufficient performance, or are you supposed to be able to over-subscribe those lanes 3:1?
We'll see what Apple engineering say. If I only have 4 lanes remaining, I might just put a single SSD on a card, and put my photo library on a 4TB SATA SSD, rather than an M.2 🤷♂️
The PCIe bus is like a network. You can connect 100 devices to a single PCIe lane using PCIe switches, like a network switch.
Similar to USB. You can connect many devices to a single USB port. A USB hub handles moving traffic to the proper USB device.
Don't worry about over-allocation. Think about which devices you are going to be using at the same time. Can they together send 126 Gbps? Or receive 126 Gbps? If so, then shuffle them around if possible.
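A simple way to eyeball that is to add up rough peak rates for the devices sharing a pool and compare against the ~126 Gbps uplink. The device list and figures below are placeholders; substitute your own cards:

```python
# Placeholder per-device peak rates (Gbps) for devices sharing one pool.
POOL_LIMIT_GBPS = 126  # usable one-direction rate of a pool's PCIe 3.0 x16 uplink

devices_on_pool = {
    "NVMe SSD card (x4)": 31.5,        # ~PCIe 3.0 x4 peak
    "10GbE card": 10.0,
    "Thunderbolt I/O card (x4)": 31.5,
}

concurrent_peak = sum(devices_on_pool.values())
print(f"{concurrent_peak:.1f} Gbps worst case vs {POOL_LIMIT_GBPS} Gbps uplink")
# Only a problem if these actually hit their peaks at the same instant,
# in the same direction.
```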