Better in what respect?
I mean, why does Alder Lake need 2x the power to beat AMD? Why is its 11th gen so bad compared with Zen 3?
While it's hard to know for sure without the same designs being taped out on different fabrication nodes, third-party analyses estimate that Intel's 10nm is indeed roughly equivalent to TSMC 7nm in power per area and density (maybe better in some respects, a little worse in others depending on the design), and Intel 7 (10nm ESF, or 10nm++ in the old naming) should therefore be better. This is what I've been referring to as the good news/bad news for Intel: their fabrication nodes aren't as far behind as people commonly think, but their microarchitecture (uarch) is - especially the P-core uarch.

The reason the Intel E-cores exist on Alder Lake silicon is that the P-cores take up so much die area and run so hot that Intel has trouble matching AMD in core count and MT (multithreaded) performance. The E-cores thus function as another layer of almost-SMT/HT (hyper-threading) rather than what we see in the ARM space, where the E-cores are true efficiency cores, meant to run at genuinely low power and take care of background tasks. As I've written before, if we were to borrow the terminology from ARM, this is huge.Medium rather than big.LITTLE (and while ARM themselves have transitioned to effectively big.Medium.Little, Arm's "big" core is often probably as small as Intel's "medium", if not smaller).
Zen 3 was a remarkable piece of engineering. AMD didn't even change fabrication nodes and still came up with a new design that kept the best parts of Zen 2 and improved everything else. Supposedly it even took AMD by surprise when they first realized just how big an improvement it really was - on their internal roadmaps they hadn't expected quite this level of IPC (Instructions Per Clock) increase going from Zen 2 to Zen 3. Suffice it to say, there's a reason Zen 3 won the Anandtech Gold award.
It should be noted that the 2x power consumption figure is a bit of an exaggeration. As Andrei from Anandtech wrote on Twitter, ADL only actually breaches the 200W mark in AVX2 (vector) workloads, where it also manages to extend its performance lead over AMD. Otherwise it stays in the same wattage regime as AMD and may even beat it in perf/W in many "everyday" MT workloads (and even in some AVX2 loads).
You can also downclock the ADL i9 so that it uses only about 150W total while giving up relatively little performance (around 15% or so) - past that point, performance drops off roughly linearly with power. This is the "elbow" on the power curve. Intel likes to push its cores well past the elbow by default to claim performance wins, but it really does burn a lot of extra energy for every extra bit of clock speed (AMD does this too, just less aggressively).
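To make that "elbow" concrete, here's a toy model. The standard approximation is that dynamic power scales as P ≈ C·V²·f, and past some frequency the voltage required rises roughly with clock speed, so power grows much faster than the (roughly linear) performance gain. Every constant below is invented for illustration - these are not measured Alder Lake numbers.

```python
# Toy sketch of the power-curve "elbow". Constants are made up for
# illustration, not measured from any real chip.

def power_watts(freq_ghz, c=10.0, v_base=0.9, f_base=3.5, k=0.12):
    """Crude dynamic-power model: P = C * V(f)^2 * f.

    Below f_base the core runs at its minimum stable voltage; above it,
    the voltage needed climbs with frequency - the source of the elbow.
    """
    v = v_base + k * max(0.0, freq_ghz - f_base)
    return c * v * v * freq_ghz

# Performance is roughly linear in clock speed, so watch perf/W fall off
# as frequency is pushed past the elbow:
for f in (3.5, 4.0, 4.5, 5.0):
    p = power_watts(f)
    print(f"{f:.1f} GHz: {p:6.1f} W, perf/W = {f / p:.4f}")
```

In this sketch, going from 3.5 to 5.0 GHz is a ~43% clock (and thus rough performance) increase but more than doubles the power draw, which is qualitatively the trade-off Intel makes at stock settings.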
Finally, part of Andrei's tweets above alludes to one of the few problems with the Zen 3 chips: while the main parts of the chip like the cores themselves are manufactured on TSMC 7nm, for any non-monolithic die the IO die is not (it handles things like the fabric for core-to-core and core-to-memory communication, plus PCIe). It's manufactured on GloFo's 12nm or 14nm process, I think. Area-wise it probably doesn't make much difference, since you can only shrink the stuff in an IO die so much, but power-wise? It's blamed for why the AMD chips aren't even more power efficient than they already are. Why did AMD do this? Probably because it was cheaper, aaaannnnd because they're contractually obligated to buy a certain number of wafers from GloFo since the split.
TLDR conclusion: Intel enjoys a huge fabrication advantage in the IO die and probably a smaller one for the cores themselves, but AMD is still competitive because their core and chiplet designs are that much better. If Intel and AMD both keep to their roadmaps next year, they'll both be on new nodes with new designs (and there are rumors that the Zen 4 IO die will be on TSMC).
======
As for why 11th gen Intel was so bad - well, Tiger Lake mobile actually wasn't that bad, but Rocket Lake desktop ... yikes. This comes down to Intel's manufacturing woes. Intel tied a lot of its core designs to specific fabrication processes, believing it would always keep pushing fabrication forward. When forward movement on those processes stalled, the entire strategy fell apart. Tiger Lake was what Intel was supposed to put out years ago. Even worse, it's believed that the initial Intel 10nm nodes weren't suited to the high power needed for desktop chips, so Intel couldn't pump out desktop processors on them and had to rely on 14nm with some ungodly number of pluses.

Rocket Lake was the last of those chips and was particularly bad because it was a backport of the 10nm Ice Lake design to 14nm. The backport was such a disaster that frankly I'm shocked they actually launched it rather than simply cutting prices on Comet Lake and waiting 6 months for Alder Lake. The only explanation I can think of is that they felt they needed something out there: Alder Lake was a huge risk given its design, so they wanted something in the wild in case ADL fell flat on its face. Some tech journalists have argued that this experience was good for Intel - if they ever need to backport a design again, they'll have learned from it, and future chip designs will be more fabrication-node-agnostic to make it easier. Maybe that's true, but yeah, Rocket Lake was bad. Tiger Lake was just late rather than intrinsically bad, but even so, TGL still highlights how inefficient Intel's uarch designs really are.