All of which brings me back to my point-- an estimate is only realistic if it can be supported by realistic assumptions, and there's certainly nothing in this supposed "leak" that should change our expectations in any way.
I am 100% with you on this. And you are right that there are too many unknowns, which makes precise speculation difficult. What we can do is try to guess what kind of technological advancements a new manufacturing process would enable (increasing design complexity at the same power consumption) and what business moves Apple might make.
The entire topic is too large and one can lose hours talking about details, so I'll just comment on a few things. And these are indeed comments rather than arguments, because I don't find myself fundamentally disagreeing with the things you say.
- Having built it, Apple is hobbling its performance to dribble it out over 3 generations
Just a quick comment on this. This is standard business practice and something all companies do. Otherwise they would ship a very strong product and generate massive initial demand (likely without the means to satisfy this demand) and then suffer lacklustre sales for a while. That's not good business.
Three generations of 5nm only gained about 45% in single-core performance over 7nm (GB6). Could 3nm give us a 60% improvement? I suppose, but how? I don't think you can look back in time and cherry-pick a rate of improvement. Everything was significantly less mature back then, so you'd expect a higher rate of improvement. Apple can certainly pull another rabbit out of the hat and bend the curve again, but there's no reason to think the earlier rate was more indicative of what we should see next than the current rate is.
I think this is a more complex question. Of course, it's not plausible to assume that such improvements become possible merely by moving to 3nm. But 3nm combined with a new design (wider and/or capable of higher frequencies) could do the trick. The thing is, if Apple wants to compete in the high-end desktop/workstation segment, they need hardware capable of substantially better performance peaks than today. So yeah, I wouldn't be surprised if their upcoming designs end up 50-60% faster on some products, for those reasons.
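Just to make the percentages concrete, here's a quick back-of-envelope sketch (my own illustrative arithmetic, nothing more) of what those cumulative figures would imply per generation:

```swift
// Back-of-envelope compounding of per-generation gains (illustrative numbers only).
// A ~45% cumulative gain over three generations works out to roughly 13% per
// generation; a ~60% cumulative gain would need roughly 17% per generation.
import Foundation

/// Per-generation rate implied by a cumulative gain spread over `generations` steps.
func perGenerationRate(cumulativeGain: Double, generations: Double) -> Double {
    pow(1.0 + cumulativeGain, 1.0 / generations) - 1.0
}

let fiveNmEra = perGenerationRate(cumulativeGain: 0.45, generations: 3)   // ~0.132
let sixtyTarget = perGenerationRate(cumulativeGain: 0.60, generations: 3) // ~0.170

print(String(format: "45%% over 3 generations: ~%.1f%% per generation", fiveNmEra * 100))
print(String(format: "60%% over 3 generations: ~%.1f%% per generation", sixtyTarget * 100))
```

So the gap between the 5nm-era trajectory and a 60% cumulative jump is only a few percentage points per generation, which is the kind of bend in the curve a wider design and/or higher clocks would have to deliver.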
Regarding the lower improvement rates with 5nm... if I remember the discussions correctly, 5nm was less of an improvement over the previous node than some other transitions, which limits what can be done somewhat. Also, Apple might well be experiencing diminishing returns with their designs — they have been steadily increasing the cache sizes and out-of-order execution capabilities of their processors, and they might have reached a peak. It's also interesting to note that the latest generation of x86 takes some hints from Apple (e.g. by implementing large caches). In other words, Apple might be running out of their usual tricks while others are quickly picking them up.
But I think there is hope — from what I understand, 3nm will come with some innovations (like flexible combinations of performance-oriented and energy-efficient cells) that enable more design flexibility. And I am sure that Apple has more tricks up their sleeve (the obvious one being the pursuit of higher frequencies).
BTW, this is the main issue with this kind of speculation. We are dealing with way too many variables and factors.
So what a wise engineering team does is pick a subset of potential areas for improvement and concentrate their attention on gains in a few of them at a time.
What Apple has been doing so far is interleaving improvements in different areas. Previously they would ship a new cache system every two years and CPU backend improvements in the alternate years. Even post-A14, with slower performance gains, they did tremendous work on the E-cores, the GPU, and most likely the internal fabric (I suspect M2 Pro/Max is shipping with a new on-chip network, based on patents and GPU scaling). So yeah, you are absolutely right.
And there's no reason I see to set 20% as an upper bound. I suspect this fancy VR headset is going to want a ton of performance at low power-- a lot of it will be in the GPU cores, but Apple may find that to be a reason to boost overall system performance this generation to support the headset. I'm sure they see their AS line as a competitive advantage in this space.
A quick comment on this — I think there is every reason to believe that the VR headset will simply be based on the M2. VR is all about energy- (and bandwidth-) efficient high-resolution rendering, and Apple has been slowly building up this kind of technology. The A15 brings on-the-fly compression of render targets, and Apple has had variable rate rasterization since the A13. With these, they can probably achieve 6K-like perceived quality while using a 2K-or-lower actual render target.
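To put a rough (and entirely hypothetical) number on that last claim: with variable rate rasterization you shade a small foveal region at full density and the periphery at a reduced rate, so the shaded-pixel count of a nominally very high-resolution eye buffer can fall toward that of a much smaller one. The fractions below are made-up illustrative parameters, not anything Apple has published:

```swift
// Rough foveated-rendering arithmetic (illustrative assumptions, not Apple's numbers).
// Idea: shade a small foveal region at full rate and the periphery at a reduced rate,
// so the number of pixels actually shaded is far below the nominal target size.
import Foundation

struct RenderTarget {
    let width: Int
    let height: Int
    var pixels: Int { width * height }
}

// Hypothetical per-eye target and foveation parameters.
let nominal = RenderTarget(width: 3_000, height: 3_000)  // "6K-class" nominal eye buffer
let fovealFraction = 0.1   // 10% of the area shaded at full rate
let peripheryRate  = 0.25  // the rest shaded at 1/4 the sample density

let shadedPixels = Double(nominal.pixels) * (fovealFraction + (1.0 - fovealFraction) * peripheryRate)
let equivalentSide = Int(shadedPixels.squareRoot())

print("Nominal pixels per eye: \(nominal.pixels)")
print("Shaded pixels per eye:  \(Int(shadedPixels)) (~\(equivalentSide) x \(equivalentSide) equivalent)")
```

With those made-up parameters, a "6K-class" eye buffer ends up shading roughly as many pixels as a ~1700x1700 (2K-class) target, which is the kind of saving I mean; render-target compression then further cuts the bandwidth of writing even those pixels.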