Sure, Apple probably wants a graphics stack where all the assumptions about uniform, unified memory and the specific features of Apple GPUs are deeply baked in (and optimized for) and assumed by almost all apps in the ecosystem. But there is a whole other segment of big compute that isn't solely about producing video output.
And yes, all of those cards nominally have Linux drivers (and many of the options in this new product area are OAM 'cards', not legacy PCI-e cards). But why would any compute accelerator vendor even try to show up if Apple is barking "get off my lawn" at all of them?
For the Mac Pro to ignore that whole segment is slightly 'nutty' if you're really looking at the very long-term viability of the product.
I agree with your premise, but disagree with the last statement. The thing is, Apple is very serious about GPU compute, on their own hardware. They don't care about users running compute solutions from other vendors, because they want them to use their own Metal.
The way I see it, suggesting that Apple support a third-party compute stack is like suggesting that Nvidia support AMD or Intel GPUs in CUDA. They could do it, but it would be detrimental to their business. CUDA offers a streamlined programming model with guaranteed feature support and consistent behaviour across GPU models; that's not something you can support across multiple vendors, not at this level of abstraction. It's similar with Apple: they support a particular streamlined programming model that offers strong guarantees (zero memory copies, large amounts of GPU-addressable fast RAM, fast bulk tile transfers, SIMD shuffle and fill operations, etc.). They want the software to be developed and optimised for their hardware, not somebody else's.
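To make the "zero memory copy" point concrete, here is a minimal sketch using Metal's Swift API on Apple silicon: a buffer created with the shared storage mode lives in the same unified memory the GPU reads, so there is no staging upload step. The buffer size and contents below are just illustrative.

```swift
import Metal

// Minimal sketch of the "zero memory copy" guarantee on Apple silicon:
// a .storageModeShared buffer is backed by the same unified memory the
// GPU accesses, so no explicit upload/blit is required.
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                     options: .storageModeShared) else {
    fatalError("Metal device unavailable")
}

// CPU writes land directly in GPU-visible memory.
let values = buffer.contents().bindMemory(to: Float.self, capacity: 1024)
for i in 0..<1024 { values[i] = Float(i) }

// From here a compute pipeline could bind `buffer` directly with
// setBuffer(_:offset:index:) and read the same bytes, with no staging copy.
```

On a discrete PCI-e GPU the equivalent workflow would involve an explicit copy into device-local memory, which is exactly the kind of difference Apple's programming model papers over.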
There are two common counterarguments to this strategy, and I think both can be refuted.
1. PCs support multiple GPU vendors, so Apple should too. I think this argument falls flat from the beginning, since Apple is not in the PC business at all. They sell opinionated systems, not flexibly configurable boxes. There is no doubt that Windows or Linux on a generic PC offers much more flexibility and choice to the user, but that is simply not a concern for Apple. This was the case even when they were using third-party components, and it is even more the case now that they have moved to their own hardware.
2. Apple will lose pro users if they don't give them access to fast Nvidia GPUs. Yes, they will, and yes, they already have. I don't think this is a problem for them, because they are looking at a long-term business vision, not a short-term one. Again, we have a good precedent in the industry: Nvidia. They spent years slowly building up their hardware and software (CUDA) stack, "seducing" users with advanced functionality. There is no reason why Apple can't do the same. We know they have the technology, the talent, and of course the money. The latter is a big factor: they can afford to spend much more on R&D and manufacturing without it eating into their profit margins.
To sum it up, I believe the reason Apple doesn't allow third-party GPUs is similar to the reason they don't support Vulkan. While this strategy might be detrimental in the short term, the potential payoff in the long run is much higher. If they can offer compelling hardware, good tools, and a consistent, comfortable programming model, they will end up with a rich software ecosystem in the future. It's a calculated risk.
At a minimum, Apple should put in some hypervisor/virtualization stack work to pass a card through to a guest operating system that is willing to do the driver work.
I agree, PCI-e passthrough could be an interesting feature for a Mac Pro. Put an Nvidia GPU in, fire up a Linux VM, use CUDA. Of course, the GPU would be unusable under macOS. At the same time, there are also potential challenges with this, e.g. will Nvidia drivers even run on ARM Linux?
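For context on how far Apple's current virtualization stack goes, here is a minimal sketch of booting a Linux guest with the Virtualization framework. Nothing in today's API exposes PCI-e device passthrough, which is exactly the piece Apple would have to add; the kernel, initrd, and disk paths below are placeholders.

```swift
import Virtualization

// Minimal Linux guest configuration with Apple's Virtualization framework.
// Requires the com.apple.security.virtualization entitlement; paths are placeholders.
let config = VZVirtualMachineConfiguration()
config.cpuCount = 4
config.memorySize = 4 * 1024 * 1024 * 1024  // 4 GiB

let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
bootLoader.commandLine = "console=hvc0 root=/dev/vda"
config.bootLoader = bootLoader

let diskAttachment = try! VZDiskImageStorageDeviceAttachment(
    url: URL(fileURLWithPath: "/path/to/disk.img"),
    readOnly: false)
config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: diskAttachment)]

// Note: the framework only offers paravirtualized (virtio) devices; there is
// no configuration type for handing a physical PCI-e GPU to the guest, so
// passthrough would require new hypervisor work from Apple.
try! config.validate()

// In a real app you'd keep the VM alive and drive it from the main queue.
let vm = VZVirtualMachine(configuration: config)
vm.start { result in
    print("VM start result: \(result)")
}
```

Even with that work done on Apple's side, the guest would still depend on the GPU vendor shipping ARM64 Linux drivers, which loops back to the question above.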