Process is only part of the equation; architecture is the other. We will need to wait for real-world benchmarks of Tiger Lake, but so far it doesn't seem to be a major leap over its Skylake-based predecessors.
Besides, "10nm" and "7nm" are just marketing labels. Intel's 10nm is roughly comparable to TSMC's 7nm.
I'm not sure what you are trying to say by calling it "just marketing". 10nm and 7nm are hard physical measurements. You can't fudge your numbers to say 7 is equal to 10. That's not how it works.
I think that linear scaling is a good enough assumption as long as one doesn't take it too far. My reasoning is as follows: we are talking about relatively "low" levels of performance here (compared to large desktop flagships), and we can assume that both graphics and compute workloads are embarrassingly parallel and essentially "limitless" relative to the GPU's capability, so scaling up the processing units will linearly reduce the processing time. This is further supported by benchmarks (look around, you will find that the A12Z GPU is pretty much exactly twice as fast as the A12, etc.).
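To make the idealized arithmetic concrete, here is a minimal sketch of that model; the unit counts and per-unit throughput are placeholder numbers for illustration, not measured A12/A12Z figures:

```python
# Idealized model: for an embarrassingly parallel workload, execution time
# is total work divided by the aggregate throughput of the processing units.
# Unit counts and throughput below are illustrative placeholders.

def render_time(total_work: float, num_units: int, throughput_per_unit: float) -> float:
    """Time to finish a perfectly divisible workload on N identical units."""
    return total_work / (num_units * throughput_per_unit)

work = 1_000_000.0  # arbitrary units of shader work
t_small = render_time(work, num_units=4, throughput_per_unit=1.0)  # A12-like: 4 GPU cores
t_large = render_time(work, num_units=8, throughput_per_unit=1.0)  # A12Z-like: 8 GPU cores

print(t_small / t_large)  # 2.0 -- in the ideal case, doubling the units doubles the speed
```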
And reality shows that linear scaling is not how it works. Scaling is not something you can just assume. Twice the core count simply doesn't equate to twice the performance. That only happens in an ideal model, and the real world is not ideal.
The problem is not parallelism, but that there is no way to estimate the "weight" of a workload ahead of time, so a naive linear distribution just doesn't work. For instance, on a GPU you can definitely segment a rendering workload into sections of a frame and execute them all in parallel, but some sections will finish much faster than others (say, one section that only renders background versus another that has to render models + shaders + background). So performance depends more on how efficient your scheduler is than on how many processing units you have. And you can't make a scheduler parallel. There's a reason why the RTX 3080, with twice the core count, achieves at most an 80% performance boost over the 2080.
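Here's a toy model of that imbalance, assuming a static split of a frame into tiles with uneven costs; the tile weights are invented purely for illustration:

```python
# Toy model: a frame is split into tiles of uneven cost, assigned statically
# round-robin across cores. The frame is only done when the most-loaded core
# finishes, so speedup trails core count. Tile costs are invented.

def frame_time(tile_costs: list[float], num_cores: int) -> float:
    loads = [0.0] * num_cores
    for i, cost in enumerate(tile_costs):
        loads[i % num_cores] += cost  # naive static assignment, no rebalancing
    return max(loads)  # frame completes when the slowest core finishes

# Mix of cheap background tiles and expensive model+shader tiles.
tiles = [1.0] * 12 + [8.0] * 4

print(frame_time(tiles, 4) / frame_time(tiles, 8))  # ~1.22x, not 2x, from doubling cores
```

A smarter (but inherently serial) scheduler narrows that gap, which is exactly the point: the scheduler, not the core count, sets the ceiling.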
Its performance is very close to its contemporary, the Nvidia GTX 1050 (Pascal), according to gaming benchmarks. Let's not forget that the A12 is a two-year-old design by now. If you want to compare it to modern architectures, it's probably a bit slower than the GTX 1650 Max-Q (35W Turing). And it's faster than AMD integrated graphics, although Intel's Tiger Lake will likely change that equation now.
I am simply extrapolating from existing benchmarks. I'd say that the data is encouraging. For example, if we look at compute benchmarks, the A12Z is somewhere around 3 times slower than the Navi 5500M (a 1536-ALU part) at around 1/5 the power consumption. And that's compute, where it's just shader cores vs shader cores. In graphics the A12Z has a major advantage, since its rendering approach is inherently more efficient.
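Spelling out the performance-per-watt arithmetic implied there (taking the rough "3 times slower at 1/5 the power" figures at face value; these are ballpark numbers, not precise measurements):

```python
# Back-of-the-envelope perf/W from the rough figures above.
navi_perf, navi_power = 1.0, 1.0      # Navi 5500M as the normalized baseline
a12z_perf, a12z_power = 1 / 3, 1 / 5  # A12Z relative to the 5500M

ratio = (a12z_perf / a12z_power) / (navi_perf / navi_power)
print(ratio)  # ~1.67 -- on these numbers the A12Z does ~5/3 the work per watt
```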
We have had a fair amount of discussion on why benchmarks across architectures and platforms are not reliable, so I think that part is basically up to interpretation. But let's just say the A12Z is not as fast as the Navi 5500M, and the 5500M is going to be superseded by Navi 2 for sure.
One argument you might bring up is that Apple GPUs have so far been designed to operate at the lower end of the power consumption spectrum, and therefore their ability to scale is not proven. This is absolutely correct. That is why I am very curious to see the first Apple Silicon Macs.
Well, again, scaling is not truly linear. The reality is that Apple may end up with the same power consumption and thermal profile as AMD's last-generation chips if they want to reach the same level of performance. And even that would be an achievement in and of itself if they can reach it. In reality, I'd say both AMD and Nvidia have been at this game far longer than Apple, and it shows even on the software side.
Apple's Metal API is nowhere near as mature as Vulkan or OpenGL when it comes to efficiency and performance. The most one can say about Metal is... well, it gives Apple more control and it requires less effort from developers, but that's about it.
Let's say I'm more skeptical that Apple can just do it.
If Apple were not confident about the performance, they wouldn't announce any transition whatsoever. I think we are in perfect agreement that Apple does not have chips ready to compete with higher-end mobile, and their desktop strategy is completely unknown at this point. The message they are sending is clear however: they are confident that they are going to beat any alternatives by the end of a two year period.
Well, Apple is already showing signs of intentionally throttling the performance of Intel-based MacBooks (there's a very long 16" MacBook thread here on that, and another thread on the 2020 Air's heatsink). So in the end, they can at least say that their Apple Silicon matches the performance of their Intel counterparts (under heavily thermal-throttled scenarios) while giving much better battery life. That tactic is very obvious now to those of us who have had the chance to "sample" the 16" MacBook and the Air.
And I'd think even if they weren't confident in the performance, they'd do it anyway for these reasons:
1. It gives them complete control over the hardware platform
2. More profit margins
3. They can bake in planned obsolescence and nobody can do anything about it
4. Mac OS will only work on Apple's hardware (no more Hackintosh)
There are only upsides for Apple. The downsides (performance, less control, less ownership, less software, etc.) are all on the customers' side. There is literally zero reason why Apple should not announce the transition. Even if performance is not up to par, it's not like it's the first time Apple has introduced a MacBook that doesn't perform well (read: the 12" MacBook from 2015, the 15" models from 2016 to 2019).
As a software developer, I find Apple's approach increasingly discouraging me from considering their platform as a main workhorse. I used to love Macs for their efficiency, reliability, and cross-platform compatibility (I could write my software on a Mac and deploy it everywhere else). Cross-platform compatibility is now gone, and the other two are question marks: efficiency is only a promise so far, and reliability has been very questionable ever since Catalina.