Guys, do you honestly not see that it is a problem that being on PC allows you 5-20x better performance in real-world use cases? [...] A next-gen M2 Ultra might, according to the most positive extrapolation, double these values, and a 4x version would approach a single 4090. I hope they reach these levels, since that would make the other benefits of a Mac Pro, and the platform as a whole, at least viable.
These are good and practical points. But please do consider that Apple is a newcomer to the desktop GPU market, while Nvidia has been doing it since forever. Nvidia's lead in the key areas is undisputed, but much of that lead is because they offer features that Apple still lacks (like hardware RT), have iterated the hell out of their SIMT architecture since Tesla, and overall have a much more mature implementation. And of course, the fact that they can throw more die area and power at the problem doesn't hurt either. I mean, the 4090 has 128 SMs (with an SM being more or less equivalent to an Apple GPU core) and is clocked much more aggressively to boot.
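Just to put the die-area and clock point in rough numbers, here is a quick back-of-the-envelope sketch. The ALU counts and clocks are approximate public figures, and peak FP32 is assumed to be lanes x 2 FLOPs per FMA x clock, which says nothing about real-world utilisation:

```python
# Rough peak FP32 throughput from approximate public specs (not measurements).
def peak_fp32_tflops(units, fp32_lanes_per_unit, clock_ghz):
    # Each lane retires one FMA (2 FLOPs) per cycle at peak.
    return units * fp32_lanes_per_unit * 2 * clock_ghz / 1000

rtx_4090 = peak_fp32_tflops(128, 128, 2.52)  # ~82.6 TFLOPS on paper
m2_max = peak_fp32_tflops(38, 128, 1.4)      # ~13.6 TFLOPS on paper

print(f"RTX 4090 ~{rtx_4090:.1f} TFLOPS, M2 Max ~{m2_max:.1f} TFLOPS, "
      f"roughly {rtx_4090 / m2_max:.0f}x on paper")
```

That gap is mostly unit count times clock, which is exactly the die-area-and-power lever.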
So while Nvidia's lead is, as said, undisputed at the moment, we should consider how things will develop for Apple going forward. M1 was still pretty much an iPhone GPU, just scaled up. Its register file was relatively small and there were some obvious problems scaling GPU clusters beyond a certain size. Already the next iteration was a massive step forward. M2 Max is actually faster in Blender 3.5.0 than M1 Ultra, and that's after one hardware iteration and roughly a year of software optimisation. If you look at Blender CUDA scores for Nvidia hardware (pure compute, without hardware ray tracing), Apple is catching up very quickly; in fact, M2 Max performs similarly to 15-16 TFLOPS Nvidia GPUs while still being much more power efficient. If M2 Ultra solves the scaling issues, this would mean that Apple is only one generation behind in general-purpose compute.
Nvidia's trump cards, of course, are still hardware ray tracing (which benefits Blender specifically) and larger high-end GPUs. But these advantages won't last forever. If Apple's next-gen GPUs come with competent hardware ray tracing (and there are good reasons to assume they might), Apple laptops at least could start posing some serious competition to Nvidia. And Apple still has ample opportunity to make their GPU cores larger or add more of them. My point is that while Nvidia will try to innovate, it might be more difficult for them because their architecture is already more optimised and refined. You can see this with Ada: impressive performance improvements, but almost all of them came from exploiting the process (more SMs, higher clocks) and overclocking the memory. So unless Nvidia comes up with something radically new and innovative that allows them to improve performance without increasing power consumption, they might start stagnating fairly soon, while Apple will almost certainly keep delivering consistent GPU performance improvements over the next few generations.
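To put the Ada point in rough numbers (approximate public SM counts and boost clocks for the 3090 Ti and 4090, nothing measured, and per-SM changes ignored):

```python
# Quick sanity check: how much of Ada's compute uplift is just more SMs + clocks?
ampere_sms, ampere_clock_ghz = 84, 1.86   # RTX 3090 Ti (approx.)
ada_sms, ada_clock_ghz = 128, 2.52        # RTX 4090 (approx.)

scaling = (ada_sms / ampere_sms) * (ada_clock_ghz / ampere_clock_ghz)
print(f"More SMs x higher clocks alone: ~{scaling:.2f}x")  # ~2.1x
```

That ~2x from unit count and frequency alone is already in the ballpark of the 4090's raw compute uplift, which is why I'd call Ada scaling rather than architectural innovation.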
... and AMD? Well, AMD actually seems to be in a bit of a dead end GPU-wise. They have their nice multi-chip tech that allows them to lower manufacturing costs, but they seem to struggle when it comes to core feature innovation. In fact, they had to bolt on some quick and dirty patches to fake feature parity with Nvidia, like their very lazy RT implementation (while they struggle to come up with a proper solution), or the limited VLIW-style dual issue of RDNA3, which mostly exists so they can claim a 2x improvement in TFLOPS that has limited practical impact for real code. I hope they have a new architecture that addresses these issues.
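To illustrate why that doubled TFLOPS figure rarely shows up in real code: the second FP32 ALU only helps when the compiler can pair two independent FP32 ops in the same issue slot. A tiny sketch (the 30 TFLOPS baseline and the pairing rates are purely illustrative, not RDNA3 measurements):

```python
# Effective throughput under dual-issue: the marketing peak assumes every
# issue slot co-issues a second independent FP32 op, real code pairs far less.
def effective_tflops(single_issue_tflops, pairing_rate):
    # pairing_rate: fraction of issue slots where a second, independent
    # FP32 op can actually be co-issued
    return single_issue_tflops * (1 + pairing_rate)

base = 30.0  # hypothetical single-issue figure, for illustration only
for rate in (0.0, 0.25, 0.5, 1.0):
    print(f"pairing rate {rate:.0%}: ~{effective_tflops(base, rate):.1f} TFLOPS")
```

Only the 100% row matches the headline number; anything dependency-heavy sits much closer to the single-issue baseline.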