With a Mac Pro, I think Apple is going to have to have something undeniably competitive. M1 Max, and likely M1 Ultra, have too many gaps and places where the performance doesn't hold up.
Kind of why I think, or hope, Apple does something discrete or MPX-y. By my math, Jade4c wouldn't cut it against the AMD 7000 or Nvidia 4000 series; they'd need Jade8c for something that is generally competitive. Because much like M1 Max, I'm guessing the M1 Ultra vs 3090 benchmarking only holds up under certain workflows and not generally.
Most of the AMD 7000 and Nvidia 4000 series would get covered. Generally, both series are moving what was formerly at the top end down to the upper-middle range. So something akin to 3090 --> 4070-4060, 6900 --> 7700, and the lowest-end stuff in the range mostly just falls off a cliff (or gets tagged as mobile-only dGPUs). [ The iGPUs on the AMD and Intel processors for laptops are moving up in performance coverage. And an APU/big-iGPU works for low-end desktops for non-gamers also. ]
So being in the 3090-3080 range would also allow Apple to compete with most of the next-gen lineup. And in terms of actual deployed units, that extremely likely covers well over half of the units actually in users' hands. The number of 3090/6900-class cards that Nvidia/AMD sell is way different from the lower half of their lineup; much smaller.
Q1 '21: $12.5B aggregate revenue and 11.8M units average out to ~$1,059 per GPU. ( However, if you look at workstation GPU pricing, the average is not going to be near the median. A large chunk of revenue comes from highly marked-up workstation cards. We would need the distribution to get a better sense of what the median is, but it is probably lower, as low-to-mid-range workstation cards grossly over-contribute to the revenue. Just look at the price of the W5700X and W6600 modules: $1,000 and $700. )
"Average GPU Pricing Skyrockets as Market Quadruples to $12.5 Billion, Report Says" (GPU makers enjoy robust demand and absurdly high pricing) — www.tomshardware.com
But even with the noise, the average comes out between mid-range and high-end cards. Exactly the stuff the next gen will iterate down to.
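The average-selling-price arithmetic above is easy to sanity-check. A minimal sketch, using only the figures quoted from the report (the skew caveat still applies: the median is likely below this average):

```python
# Rough check of the quoted Q1 '21 figures: revenue / units = average selling price.
# Numbers are the ones cited above; workstation-card markups skew the mean upward,
# so the median card price is likely lower than this.
revenue_usd = 12.5e9   # aggregate GPU revenue, Q1 '21
units = 11.8e6         # GPU units shipped, Q1 '21

asp = revenue_usd / units
print(round(asp))      # ~1059, i.e. squarely between mid-range and high-end pricing
```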
Apple would only need a 7800 or 4080 killer if it wanted bragging rights, not high unit sales. Similar to how Intel's initial stab with their Alchemist cards is stopping around the 3070/5700 range: get the drivers on a solid track and win application 'buy-in', then take another notch with the next iteration, and another notch after that. (Ignoring the 'cost is no object' Xe-HPC Ponte Vecchio stuff.)
With M3, Apple will get to iterate on the GPU cores with TSMC N3 and perhaps LPDDR5 improvements, while Nvidia and AMD are on N5. They don't have to go discrete to increment. They don't have to crank the P core count up to 32 either if they want a GPU 'killer' for the mid-upper range (dial back on the P/E cores in an optional die combo).
Apple wants to be king of the iGPU. They are probably never going to top the A100/MI200/Xe-HPC (computational-class) mega-aggregate-die-area cards with real HBM (or bleeding-edge GDDRX).
Seems like there would be way less R&D for Apple if they just let the 7900 and mid-range Intel GPUs in the door with signed drivers, at least as GPGPU-only modules. They would get the GPU compute grunt for zero silicon expenditure. If Apple gave the vendors framework API hooks so they could add OpenCL/CUDA/OneAPI compute frameworks back in, 100% on their own dime, that would probably be useful too for compute modules. [ Again though, WWDC 2022 will tell. IMHO, Apple building even more low-volume silicon to chase the lower-volume silicon of AMD/Nvidia/Intel seems like a waste of time and effort. Low-to-upper-mid range is strategic. The rest of that is 'nice to have if cheap enough' icing on the cake. Even less so if there are less expensive options. ]
But if they did some sort of discrete module that's just a ton of GPU cores shoved onto a card with a giant heat sink, like an MPX module, that seems attainable.
Technically doable, but what are you really buying? It is not volume. Apple is highly likely to slap a $2K+ price on it.
It also might make sense because the CPU performance that Apple's delivering would be quite good for a Mac Pro. It's just the GPU that's a problem.
If the on-die inter-function-unit bus, the memory bus, and/or the UltraFusion connector are bottlenecks, then TSMC N3, a memory upgrade, and an incrementally bigger die (but still smaller than a 3090 die) would help uncork those just by following the technology improvement curve.
What AMD and Nvidia are doing at the top end of the 7000 and 4000 series is just throwing gobs of die area and power (and much higher prices) at the problem. I doubt Apple can actually compete toe-to-toe there with their "perf/watt" primary directive. A non-uniform, non-homogeneous-memory GPU from Apple is not going to synergize well with the rest of the lineup.
A bragging-rights dGPU card versus VR/AR goggles light and low-power enough to need no huge battery or bulky over-the-head strap? I'd suspect Apple is far more interested in the latter than the former.