This isn't really fair, since there are only two SoCs with X1 cores in existence, both use a significantly cut-down implementation with a single X1 core per SoC, and they appear in only a couple of smartphones. Even so, they got within 10-15% of A13 single-core performance, which was state of the art at the time of the X1 announcement. The Firestorm cores in the A14, six months newer, did deliver a pretty amazing single-core uplift, but I wouldn't be surprised if Apple pushed extra hard because they knew they would be using the same cores in M1. On the other hand, Qualcomm couldn't care less (probably because there's simply no market); just look at their pathetic showing with the "highest-performance" (lol) 8cx, which they're now "revising" for the third year in a row by slightly increasing the frequency of the four-year-old cores.
Oh, I am not trying to dismiss the achievements of the ARM team. I think the X1 is a great core, probably very close to Tiger Lake and Zen 2/3 depending on how high a clock it can sustain. It's just that looking at the single-core benchmarks, I don't have the confidence that X1 can rival A14/M1. It seems roughly comparable to the A12, which means there's still a way to go before the ARM team reaches the state of the art.
To be honest, I had hoped that X1 would be better; that could give ARM the push it needs to challenge x86. Currently, ARM CPUs are still perceived as low-end parts that excel at power consumption but lack performance. If ARM manages to change that perception, more customers will be interested in ARM-based computers. But X1 is still not enough to challenge the latest x86 CPUs...
If memory serves me right, the motivation behind the X1 cores wasn't even really smartphones or other consumer devices but capturing the emerging ARM server market.
Wasn't it supposed to target the laptop market? I was under the impression that Neoverse was the server stuff.
Similarly, I wonder how much of the incredible efficiency of M1 compared to the latest x86 CPUs comes down to different overall goals of the architecture. Raising the frequency costs disproportionately more power (roughly cubic once voltage has to rise with the clock), and I'm really curious to see what a higher-clocked version of M1 would look like.
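To make that concrete, here's the usual back-of-the-envelope model (a sketch, not a measurement): dynamic power is roughly P = C * V^2 * f, and since voltage typically has to rise along with frequency, power ends up scaling close to f^3 in the upper part of the curve.

```python
# Sketch of the standard dynamic-power model, not a measurement.
# Dynamic power is roughly P = C * V^2 * f, and since voltage usually
# has to rise together with frequency, power scales close to f^3 in
# the region where you're pushing the clock.

def relative_power(f_new, f_base, exponent=3.0):
    """Estimated power multiplier when scaling the clock from f_base to f_new."""
    return (f_new / f_base) ** exponent

print(relative_power(1.2, 1.0))  # a 20% clock bump -> ~1.73x the power
```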
Yeah, it's very obvious that Apple deliberately trades the ability to hit high clocks for the ability to execute many instructions simultaneously. This gives them very high baseline performance and (probably more importantly!) predictable power usage, but not much breathing room above. I do wonder whether Firestorm in M1 is clocked conservatively or whether they could potentially push it higher. Anandtech did report that earlier cores (the A12, I believe) showed a huge spike in power consumption at the end of their frequency curve. Still, if Apple is holding back (and I would expect anything from those sneaky folks) and they could ship a stable Firestorm at 3.5-3.7 GHz while keeping power consumption at 10-15 W per core, it would look bad for Intel and AMD.
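For fun, applying the cubic rule from above to some assumed inputs (roughly 3.2 GHz and ~5 W per core for M1 Firestorm, figures in the ballpark of public reviews) suggests the headroom exists on paper, though the A12-style spike at the end of the curve could easily invalidate the model:

```python
# Napkin math, all inputs assumed: ~3.2 GHz and ~5 W per core for M1
# Firestorm are rough figures from public reviews, and the cubic rule
# only holds while voltage scales smoothly -- an A12-style spike at
# the end of the curve would break it.

BASE_FREQ_GHZ = 3.2   # assumed M1 Firestorm clock
BASE_POWER_W = 5.0    # assumed per-core power at that clock

for target_ghz in (3.5, 3.7):
    est_power = BASE_POWER_W * (target_ghz / BASE_FREQ_GHZ) ** 3
    print(f"{target_ghz} GHz -> ~{est_power:.1f} W per core")
# 3.5 GHz -> ~6.5 W, 3.7 GHz -> ~7.7 W: comfortably inside a 10-15 W
# budget *if* the naive model held at those clocks.
```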
I've recently read about a few experiments where people artificially limited the power draw of their last-gen Ryzen CPUs to levels comparable to the M1, and single-core performance differed by only 20-30%, in a scenario AMD has surely not optimized for.
Making an uneducated guess, I'd say that Zen3@5W would run at around 3.5-4.0 GHz. That would indeed make it only 20-30% slower than Firestorm@5W. But those 20% are a huge obstacle to overcome in practical terms: Zen3 needs more than 3x the power to close that gap. To put it differently, just because they are only 20% slower at low power usage does not mean they will be able to close the gap any time soon.
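A quick sanity check on that claim, again assuming performance scales linearly with clock (constant IPC) and power roughly with f^3; this is an optimistic, purely illustrative model:

```python
# Rough cost of closing a performance gap through frequency alone,
# assuming constant IPC (performance ~ clock) and power ~ f^3.
# Purely illustrative; real curves get worse than cubic at the top,
# which is why the observed figure is "more than 3x" rather than ~2x.

for gap in (0.20, 0.30):
    power_mult = (1 + gap) ** 3
    print(f"closing a {gap:.0%} gap costs ~{power_mult:.1f}x power, best case")
# 20% -> ~1.7x, 30% -> ~2.2x, before voltage stops scaling gracefully.
```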
It does raise an interesting point, though. Modern x86 CPUs can be quite efficient, but the quest for ever-increasing performance, and the fact that their main enthusiast customer was a desktop user, steered their evolution towards devices that could be put in overdrive, squeezing out all available performance regardless of the power cost. Turbo boost and a huge frequency/power range became the important buzzwords. Apple, on the other hand, was always limited by thermals; you only get so much cooling (and battery!) in a phone. So quite naturally, their chips evolved towards being as fast and smart as possible at the absolute lowest power consumption. And since they are stubborn, sneaky people with a superiority complex, they really went out of their way to make something that's really fast. ARM, by contrast, serves customers: they design what is required, more or less. Now that they are confident in their abilities and see trouble in x86 land, they are smelling blood and starting to challenge Intel where it hurts the most: the server market.