
alanmadzar

macrumors newbie
Original poster
Hi there,

Could anyone provide some insight on whether the new Mac Pro 2019 is going to be compatible with third-party PCIe expansion cards? I was looking to buy the low-end model, which only has a 256GB SSD, and expand my storage performance via PCIe 3.0 x16 RAID controller cards... is this a viable option?

I would appreciate any insight.

I want to increase the performance of my future Mac Pro 2019 without having to spend a bunch more cash.

Thanks!
 
Check back after it's shipping - anything before then is most likely guessing.
 
You mean there's no way to find out how I can use PCIe expansion cards until the Mac Pro 2019 starts shipping?
Yes, there's no way to know until it's shipping (or until early preview units are sent to Tom, Anand, and Bare - and they try the expansion cards that you want). It's also possible that Apple will partner with certain third parties, and they will pre-announce support for specific cards (for example, the Promise storage cards).

Physically, it would seem that most PCIe cards will plug into the MP7,1.

Will there be drivers that Apple allows to load? Will there be 32-bit utilities that Catalina will kill?

Really, today nobody can do anything but guess about support.
 
Sorry... could you explain what MP7,1 stands for? I'm a noob here.
 
I see no reason why the SSD7101A will not work (as they work in the MP5,1 and via PCIe Thunderbolt adapters), unless there's an OS restriction or driver-related issue with macOS Catalina or beyond. This is not really "expansion", however. It's just a PCIe card providing storage.

True (basic) PCIe expansion will also absolutely be possible. It's a "function" based on the standard. Whether it is limited in functionality or capability is what we need to wait and see. There is likely a limit on lanes available for expansion - for example, if you have nearly every slot filled in the MP7,1 and leave one for PCIe expansion with plans to add 6-8 more PCIe devices, the math may not work out properly, or the expansion may be limited to x1 or x2 devices only. It's highly unlikely you can load up the tower with x16 GPUs and then plan to expand with eight more x16 GPUs - the math simply does not work out.
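To make that lane math concrete, here's a rough back-of-the-envelope sketch (Python, since a few lines of arithmetic read easier as code). The 64-lane figure is the CPU limit discussed later in this thread; the card list and widths are purely hypothetical examples, not Apple's actual slot wiring.

```python
# Rough PCIe lane budgeting for a hypothetical MP7,1 build.
# 64 lanes is the CPU budget discussed in this thread; the card list
# and widths below are made-up examples, not Apple's slot wiring.

CPU_LANES = 64

cards = {
    "MPX GPU module #1": 16,
    "MPX GPU module #2": 16,
    "Afterburner": 16,
    "x16 RAID card #1": 16,
    "x16 RAID card #2": 16,
}

requested = sum(cards.values())
print(f"Requested lanes: {requested} of {CPU_LANES}")
if requested > CPU_LANES:
    print("Oversubscribed - needs PCIe switches or narrower links.")
```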
 
You mean there's no way to find out how I can use PCIe expansion cards until the Mac Pro 2019 starts shipping?

At this point in time, probably the more pertinent question to be asking the PCIe card vendors is whether they support macOS 10.15 (Catalina) or not. If it is not listed on the current card's specs, ask whether they are working on something for 10.15. 10.15 is in beta, so it probably won't be part of most cards' standard marketing webpage or documentation. But if they aren't tracking 10.15, then they're highly likely not serious about the new Mac Pro.

Pragmatically, that is a proxy for whether they will support the Mac Pro 2019 (7,1) or not.
 
What do you all think about GPU expansion with the Mac Pro 2019? I see some conflicting dynamics with expansion due to the 64 PCIe lane limitation of the CPU. There are 8 PCIe slots on the new Mac Pro, but with the MPX modules installed, how do they expect any more room for expansion of third-party PCIe cards? Can someone give me some insight here?
 
I’d expect PCIe switches (these are the logical equivalent of an Ethernet switch) and also some sort of dynamic lane allocation.

A few mouse clicks will very likely allow you to remove lanes from slots and attach them to other slots. A fair part of it could easily be done automatically when a card is detected in a slot.

I also expect one slot to be connected to the PCH (chipset). Some people will disagree with me on this.

Not many PCIe cards use the full x16, and those that do will generally run only a tiny bit slower at x8.

It will be interesting to see how dynamic the lane allocation is and how it’s implemented, but I don’t think it’s a terribly great engineering challenge for Apple to make it work well.
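Purely as a thought experiment (none of this reflects Apple's actual firmware), a dynamic lane allocator could behave roughly like the toy model below: when a card is detected, it gets as many lanes as are free, negotiating down to a narrower link when the pool runs low. All slot names and widths here are made up for illustration.

```python
# Toy model of dynamic lane allocation - illustrative only, not how
# Apple actually wires or manages the MP7,1 slots.

class LanePool:
    def __init__(self, total_lanes=64):
        self.free = total_lanes
        self.assignments = {}

    def attach(self, slot, requested):
        # PCIe links negotiate down in power-of-two steps (x16 -> x8 -> x4 ...).
        width = requested
        while width > 1 and width > self.free:
            width //= 2
        if width > self.free:
            raise RuntimeError(f"No lanes left for {slot}")
        self.free -= width
        self.assignments[slot] = width
        return width

pool = LanePool(64)
for slot, want in [("slot1", 16), ("slot3", 16), ("slot5", 16),
                   ("slot6", 8), ("slot7", 16)]:
    got = pool.attach(slot, want)
    print(f"{slot}: requested x{want}, granted x{got}")
```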
 
I was looking at this card - with the increased GPU power from the MPX modules that they plan to implement, there don't seem to be a lot of PCIe lanes left for a card such as the HighPoint model I linked. Installing an MPX module takes away port availability as well. With increased GPU power, I would expect a need for more SSD storage space. I'm trying to think of a good configuration of increased GPU and third-party PCIe expansion...
 

Running that HighPoint card (assuming it works in macOS) at x8 would incur a 50% theoretical penalty. Realistically, it wouldn’t matter for 99% of use cases.

Two MPX GPU modules configured to use x16 each also block 4 slot spaces. I expect the top slot that’s occupied by the I/O card to use lanes from the PCH, not the CPU. That leaves 3 slots for the remaining 32 CPU lanes. All this is assuming there are no PCIe switches.

If there is an x16-to-x32 switch, then the possibilities increase dramatically. With two x16-to-x32 switches, it becomes almost unimaginable that you’d have a shortage of lanes.
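For a rough sense of how much of that 50% theoretical penalty actually shows up, here's a quick sanity check. It assumes about 0.985 GB/s of usable bandwidth per PCIe 3.0 lane and four NVMe sticks at roughly 3 GB/s each; the drive figure is an illustrative assumption, not a HighPoint spec.

```python
# Quick check on the x16 -> x8 penalty for a 4-drive NVMe RAID card.
# PCIe 3.0 gives ~0.985 GB/s usable per lane per direction.
# The per-drive speed is an illustrative assumption, not a HighPoint spec.

LANE_GBPS = 0.985
drives = 4
per_drive_gbps = 3.0                      # assumed sequential speed per NVMe stick

raid_potential = drives * per_drive_gbps  # what the drives could deliver together
for width in (16, 8):
    link = width * LANE_GBPS
    effective = min(raid_potential, link)
    print(f"x{width}: link {link:.1f} GB/s -> effective {effective:.1f} GB/s")
```

So while the link itself halves, the realized hit on a card like this is closer to a third, and smaller still for lighter workloads, which is why it wouldn't matter for most use cases.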
 
Invert your thinking. Graphics cards get 92% of their bandwidth requirements from PCIe 3.0 x4.

When you "think different", you quickly realise that you really want the 64 available lanes (minus 4 lanes for a GPU) allocated to storage cards. I've got a couple of Highpoint PCIe 2.0 (x16) cards that give me a decent number of SATA III interface points (32 in total with concurrent transfer speeds at 400MB/s per lane). Even without updated Highpoint RAID controller drivers for Catalina - I can still use SoftRAID to create a volume that transfers at about 12500 MB/s. With a couple of SSD7101-A (or equivalent) in the other slots with NVME typical transfers at 2500 MB/s you can create a second SoftRAID volume (8x m2 sticks in RAID0) to transfer at 18000 - 20000MB/s.
At this point - you should feel overwhelmingly like the "king of the lanes". If not then just transfer your 300GB movie library/folder between the 2 volumes 582 times a day and witness the hardware taking full advantage of all those lanes.. Lanes that would otherwise gather dust if you stuffed your new 7,1 full of expensive new graphics cards with their anaemic appetite for all that precious lane bandwidth. Remember the old proverb - happiness can only be found within..... PCIe lane bandwidth saturation.

METATAG: 'lanes'
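For anyone who wants to sanity-check those figures, the rough arithmetic (taking the quoted per-device speeds at face value; real RAID overhead will shave some off) goes like this:

```python
# Back-of-the-envelope check on the RAID figures quoted above.

sata_ports, sata_mb_s = 32, 400          # per-port SATA speed quoted above
nvme_sticks, nvme_mb_s = 8, 2500         # per-stick NVMe speed quoted above

sata_volume = sata_ports * sata_mb_s     # ~12,800 MB/s (quoted ~12,500)
nvme_volume = nvme_sticks * nvme_mb_s    # ~20,000 MB/s (quoted 18,000-20,000)

# Copying a 300 GB folder between the two volumes is bounded by the slower one.
folder_gb = 300
seconds_per_pass = folder_gb * 1000 / min(sata_volume, nvme_volume)
print(f"SATA volume ~{sata_volume} MB/s, NVMe volume ~{nvme_volume} MB/s")
print(f"One 300 GB pass takes roughly {seconds_per_pass:.0f} seconds")
```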
 
Two MPX GPU modules configured to use x16 each also block 4 slot spaces. I expect the top slot that’s occupied by the I/O card to use lanes from the PCH, not the CPU. That leaves 3 slots for the remaining 32 CPU lanes. All this is assuming there are no PCIe switches.

Slot 5 (where Apple canonically wants to place the Afterburner card) probably has access to the full x16 PCIe bandwidth (when some of the other slots, 6-8, aren't using it).

If someone wanted to use slot 5 for an external PCIe enclosure connector, then they could add more high-power-draw cards to the overall system that way (just not internally).

The I/O card is probably not using the PCH. One, the PCH has to have the T2 connected (it has both SSD and Power Management IC (PMIC) duties); that is where Apple has put the T2 in all the other T2 Mac systems. So there isn't much spare DMI bandwidth there at all.

Furthermore, that still hasn't accounted for the two 10GbE connections. Those combined would be another x4 PCIe v3 worth of bandwidth. If you throw yet another x4 PCIe v3 at the PCH, you are basically at 3x oversubscription on the DMI link. I don't think Apple did that. (Wi-Fi is also on the PCH. Not huge, but yet another piling-on of the oversubscription, and it likely isn't completely idle in many contexts.) And that also may not have accounted for the "top" Thunderbolt controller either.
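To put rough numbers on that worry, here's a small sketch. It assumes the DMI 3.0 link between the CPU and PCH is roughly equivalent to PCIe 3.0 x4 (about 3.9 GB/s); the attach list is this thread's guess, not Apple's documented wiring.

```python
# Rough DMI oversubscription estimate. The attach list is this thread's
# guess for illustration, not Apple's documented PCH wiring.

LANE_GBPS = 0.985
DMI_GBPS = 4 * LANE_GBPS                       # DMI 3.0 ~ PCIe 3.0 x4

pch_clients_gbps = {
    "T2 (SSD + PMIC duties)": 4 * LANE_GBPS,   # T2 behind an x4-class link
    "2x 10GbE": 2 * 10 / 8,                    # 20 Gb/s -> 2.5 GB/s
    "hypothetical extra x4 device": 4 * LANE_GBPS,
}

demand = sum(pch_clients_gbps.values())
print(f"Peak demand ~{demand:.1f} GB/s vs DMI ~{DMI_GBPS:.1f} GB/s "
      f"({demand / DMI_GBPS:.1f}x oversubscribed)")
```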



Apple probably put slot 5's x16 on a switch with a combination of some of the other x8s, and some of the x8s on the same switch as the x4 slot 8. In the latter case, that would be an x16 that all of the Thunderbolt controllers go through (the two x8's worth on the two MPX connectors and the x4, along with the "alternative" x8 slots in the MPX bays that are covered up by MPX modules and not used at all).


If there is an x16-to-x32 switch, then the possibilities increase dramatically. With two x16-to-x32 switches, it becomes almost unimaginable that you’d have a shortage of lanes.

Apple has far more lanes assigned to the slots than the CPU actually has. It is a matter of how many switches they have, not whether they have one in this system.

However, folks looking to stuff in "full size" (or large) x16 cards that need lots of power will have to go outside, because power will far more likely be the limiting factor than lanes once you have two MPX modules in.

Double-wide (in space/volume) x16 RAID controller cards would also be an issue after the two MPX bays are filled. Something "thin" like the Afterburner card would fit, though.
 
Two MPX GPU modules configured to use x16 each also block 4 slot spaces. I expect the top slot that’s occupied by the I/O card to use lanes from the PCH, not the CPU. ...

P.S. I also forgot about this graphic from the revived slot configuration tool:

[Image: macOS-Catalina-Expansion.jpg]

https://9to5mac.com/2019/07/01/expansion-slot-utility-mac-pro/

https://www.macrumors.com/2019/07/01/macos-catalina-expansion-slot-utility-app/


The fact that you can checkbox the A/B buttons to blend bandwidth between slots 5, 6, and 8 strongly suggests that all three of those slots are on a single switch (off one of the x16s). There is about zero probability that Apple has hooked slot 5 to the PCH; Afterburner wouldn't work in that context. So since 8 is hooked to 5, it is probably hanging off the CPU.


I would suspect that 2, 4, 7, and the pair of x4s on each of the MPX connectors are hooked to the other x16. Also that 1 and 3 are each independently on a straight-through x16 link. That rounds out the four x16s.

The MPX links will exclude 2 and 4. Slot 7 is probably often a "last to be filled" or "first to be filled", depending upon how the MPX bays make use of their Thunderbolt ports. (If they are primarily used as DisplayPort, then 7 has very good bandwidth. If they are heavily used, then 7 isn't, and it's more useful for x4 (or smaller) cards.)

The top TB ports are more likely loaded onto the PCH rather than onto the huge crowd on the 2-4-7-MPX one. (Or some of the MPX x4s are loaded onto the PCH to bleed bandwidth demands off of the two switches.)
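Just to make the hypothesis in the last few posts easier to follow, here is the guessed slot-to-switch mapping written out as data. Every entry is speculation pieced together from this thread and the Expansion Slot Utility screenshots, not a confirmed MP7,1 block diagram.

```python
# Speculative MP7,1 slot/switch map distilled from the posts above.
# Nothing here is confirmed by Apple; it's just the hypothesis as data.

hypothesized_topology = {
    "CPU x16 link #1": ["slot 1 (x16)"],
    "CPU x16 link #2": ["slot 3 (x16)"],
    "switch A (x16 uplink)": ["slot 5 (x16)", "slot 6 (x8)",
                              "slot 8 (x4, Apple I/O card)"],
    "switch B (x16 uplink)": ["slot 2 (x8)", "slot 4 (x8)", "slot 7 (x8)",
                              "MPX bay x4 links (Thunderbolt)"],
    "PCH (behind DMI)": ["T2 SSD", "top-of-case Thunderbolt controller?",
                         "Wi-Fi/Bluetooth"],
}

for uplink, devices in hypothesized_topology.items():
    print(f"{uplink}: {', '.join(devices)}")
```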
 
What do you all think are going to be the most common applications for the new Mac Pro 2019 in terms of businesses buying them as workstations?

What kind of GPU/PCIe card expansion would go along with these types of applications?
 
Aiden....
Do you disagree?

How many trash cans do you think were bought for non-technical pointy-haired managers who wanted something 'sexy'? I know of at least ten in my organization. (And no engineers asked for the MP6,1 - they wanted systems that would run CUDA, or lab servers with quad Quadro GPUs.)

Why would you think that it will be any different with the MP7,1? (Maybe that could help explain the horrible specs of the entry MP7,1 - it's the 'manager's edition'. ;) )
 
Invert your thinking. Graphics cards get 92% of their bandwidth requirements from PCIe 3.0 x4.

When you "think different", you quickly realise that you really want the 64 available lanes (minus 4 lanes for a GPU) allocated to storage cards. ...
I had no clue that the GPU only utilizes a small amount of bandwidth... so you're saying storage is where the lanes should be allocated, and that those MPX modules won't consume x16 bandwidth? Or is that for non-MPX-module GPUs?
Do you disagree?

How many trash cans do you think were bought for non-technical pointy-haired managers who wanted something 'sexy'? ...
Haha, that's the response I wanted. I don't disagree, just curious.
 
I had no clue that the GPU only utilizes a small amount of bandwidth... so you're saying storage is where the lanes should be allocated, and that those MPX modules won't consume x16 bandwidth? Or is that for non-MPX-module GPUs?
What I'd say is that the system should have a small number of very wide PCIe switches, and let THE SYSTEM DYNAMICALLY ALLOCATE BANDWIDTH WHERE IT IS NEEDED.
 
What I'd say is that the system should have a small number of very wide PCIe switches, and let THE SYSTEM DYNAMICALLY ALLOCATE BANDWIDTH WHERE IT IS NEEDED.
Assuming that you have the Mac Pro fully configured with the MPX modules and Afterburner, that leaves you with the top 3 slots, which are x4, x8, and x8. Do you think there would be a market for cards that can cross-sync the two x8 slots to create an x16-capable storage card? The program Metal allocates GPU capability to the MPX and Afterburner modules. That leaves a lot of room for the CPU to function on those two x8 slots that are left on top. I was thinking this could be used for extra storage cards.

Do you think there'd be any use case for someone who needs that x16 capability on their system for extra storage?
 
Maybe someone on here can provide some insight...

 
Assuming that you have the Mac Pro fully configured with the MPX modules and Afterburner, that leaves you with the top 3 slots, which are x4, x8, and x8. Do you think there would be a market for cards that can cross-sync the two x8 slots to create an x16-capable storage card?

Cross-sync isn't going to work if the slots are on two different PCIe bus bundle pairs. The two slots will be at two different addresses. There may be some Rube Goldberg gyrations someone could go through, but it won't buy anything significant enough to merit a market for cards.

1. As pointed out in post #16 above, the Apple configurator suggests that those last slots are not all on the same switch as slot 5. For the ones on the same switch as slot 5, there is no "cross sync" coming, since they're pulling from the same source. For the one apparently on a different switch (slot #7)...

2. If you wanted to take two x8 storage controllers and make them look like approximately x16 worth of bandwidth, then something like SoftRAID (or some other software RAID) would work far more easily and be far more flexible with whatever other chokepoints the two MPX modules put on the system's overall bandwidth.

The program Metal allocates GPU capability to the MPX and Afterburner modules.

Metal is more a library (a set of calls that programs use) than a program itself. Technically it isn't really Metal that is doing the Afterburner work; it is another library further up the stack. That library may make some use of Metal calls to transfer data if there are direct memory access transfers of data between cards.

Metal also isn't completely free of CPU usage either.
That leaves a lot of room for the CPU to function on those two x8 slots that are left on top. I was thinking this could be used for extra storage cards.

With Afterburner and the GPGPU picking up lots of the computational grunt work, then yes, there should be enough CPU cycles around in most workloads for a software RAID not to pose much of an impact.

That probably is a normal context for Afterburner, as it would be tough to keep that card fed with 3x 8K RAW data streams from the x4 PCIe v3 bandwidth of the T2 internal drive, or from the two SATA connectors. You need something incrementally better than x8 PCIe v3 to get that much RAW data streaming in parallel (8K, 10-bit color, HDR, 24 fps).
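How tight it gets depends heavily on what "RAW" means for the camera in question. As a hedged illustration, here are two bracketing assumptions (12-bit Bayer RAW vs. 10-bit full RGB) for three parallel 8K streams; the frame-size math is an approximation, not a codec spec.

```python
# Rough feed-rate estimate for three parallel 8K streams at 24 fps.
# Frame-size assumptions are illustrative; real RAW codecs differ widely.

W, H, FPS, STREAMS = 8192, 4320, 24, 3
LANE_GBPS = 0.985                      # usable PCIe 3.0 bandwidth per lane

def stream_gbps(bits_per_pixel):
    return W * H * bits_per_pixel / 8 * FPS / 1e9

for label, bpp in [("12-bit Bayer RAW", 12), ("10-bit full RGB", 30)]:
    total = STREAMS * stream_gbps(bpp)
    print(f"{label}: 3 streams ~ {total:.1f} GB/s "
          f"(x4 ~ {4 * LANE_GBPS:.1f} GB/s, x8 ~ {8 * LANE_GBPS:.1f} GB/s)")
```

Depending on the format, three streams land anywhere from nearly saturating an x4 link to well beyond x8, which is the regime being described above.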


Do you think there'd be any use case for someone who needs that x16 capability on their system for extra storage?

You just need the aggregate bandwidth. Not everything has to come from the same physical drive.
 