There are tons of threads in this subforum discussing exactly this...
To put it briefly: Apple is not switching to ARM. They are switching to in-house-designed chips, since they have better tech than Intel.
I find these sorts of semantic debates funny. It's like saying AMD isn't x86 (Intel's ISA), or Intel isn't x64 (AMD's ISA). ARM is the architecture; Apple Silicon is an implementation of that architecture. Apple doesn't want to call it ARM precisely because they've been adding custom ASICs and compute units to their SoCs, stuff that competitors aren't investing in. The emphasis Apple wants is on what they bring to the table as a whole, where ARM is just one piece of the puzzle.
It's still an ARM SoC though.
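You can even ask the kernel directly: `uname` reports the machine type as `arm64` on Apple Silicon, whatever the branding says. A quick Swift sketch:

```swift
import Darwin

// Ask the kernel which machine architecture it's running on.
var info = utsname()
uname(&info)

// utsname.machine is a fixed-size CChar tuple; rebind it to read it as a C string.
let machine = withUnsafePointer(to: &info.machine) {
    $0.withMemoryRebound(to: CChar.self, capacity: 256) {
        String(cString: $0)
    }
}

print(machine)  // "arm64" on Apple Silicon, "x86_64" on Intel Macs
```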
The x86_64 chipmakers are planning performance and efficiency cores in the future too.
And I wonder how far out that is. Why wouldn't Apple jump now if they can be years ahead of AMD or Intel in this space? They've already been delivering this sort of power scaling, with ultralight-laptop-class performance, for a couple of years.
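For what it's worth, Apple's platforms expose that performance/efficiency split directly through sysctl. A quick sketch, assuming hardware that has the `hw.perflevel*` keys (x86 Macs won't):

```swift
import Darwin

// Read an integer sysctl by name; returns nil if the key doesn't exist on this machine.
func sysctlInt(_ name: String) -> Int32? {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
    return value
}

// perflevel0 = performance cores, perflevel1 = efficiency cores on Apple's chips.
let pCores = sysctlInt("hw.perflevel0.physicalcpu")
let eCores = sysctlInt("hw.perflevel1.physicalcpu")
print("P-cores: \(pCores ?? 0), E-cores: \(eCores ?? 0)")
```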
Unfortunately, Googling for news on this front mostly brings up ARM developments, not x86 ones.
Part of this, however, is a misunderstanding of what TDP means. Intel defines TDP as the power drawn under sustained load with all cores running at base frequency. To put it differently: they guarantee that if you load up all the cores with work and cap power dissipation at the TDP, the CPU will run at least at base frequency. This is why TDP is more of a marketing term than a technical one. Turbo boost is completely circumstantial.
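If you want to see how actual draw compares to the sticker TDP, Intel's RAPL energy counters are the usual way to check. A rough sketch in Swift, assuming a Linux box that exposes the intel-rapl powercap interface (the path below is the typical one; counter wraparound is ignored for brevity):

```swift
import Foundation

// Cumulative package energy counter, in microjoules, exposed by the RAPL powercap driver.
let counterPath = "/sys/class/powercap/intel-rapl:0/energy_uj"

func readMicrojoules() -> Double? {
    guard let text = try? String(contentsOfFile: counterPath, encoding: .utf8) else { return nil }
    return Double(text.trimmingCharacters(in: .whitespacesAndNewlines))
}

// Sample twice, one second apart: the delta in µJ over 1 s is average package watts.
if let start = readMicrojoules() {
    Thread.sleep(forTimeInterval: 1.0)
    if let end = readMicrojoules() {
        print(String(format: "Average package power: %.1f W", (end - start) / 1_000_000))
    }
}
```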
But what TDP means is irrelevant to the point being made. Part of Intel's problem at the moment is that they're effectively pushing boost clocks higher to remain competitive, so if you want the extra performance, you have to accept skyrocketing power consumption to get it. Nonsense like that is partly why the 16" MBP has a battery at the limit allowed for air travel, paired with a power supply just below the 100W ceiling that USB-C PD supports. Even then, the CPU alone can spike into the 90-100W range under some loads, essentially demanding the power brick's entire output for itself. Yeesh.
AMD has started releasing some incredibly competitive chips as of late. Some are just plain beastly, such as Threadripper and the SoC for the Xbox Series X.
The problem AMD hasn't really addressed yet, though, is machine learning. Sure, you can just throw that on the GPU, but there are better ways to do it.
Yeah, on-die ASICs are definitely a strength of Apple's SoCs at the moment.
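Core ML is the hook for that on Apple's side: you set the compute units and the framework decides what can run on the Neural Engine. A minimal sketch; the model path here is a placeholder, not a real model:

```swift
import CoreML
import Foundation

// Let Core ML schedule work across CPU, GPU, and the Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .all

// Placeholder path; point this at a real compiled .mlmodelc bundle.
let url = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc")
do {
    let model = try MLModel(contentsOf: url, configuration: config)
    print("Loaded model:", model.modelDescription)
} catch {
    print("Could not load model:", error)
}
```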
But I'd argue that while AMD pulls down less power at the high end, the 3000-series isn't doing as well as Intel at low loads and idle. My 3600 pulls about 8W at idle, while the i5 in the Mac mini pulls a little under half that. I'll admit that's comparing Windows with nothing running vs macOS. Still, that's not exactly a great place to be if you're trying to court Apple. Nor is shipping chips that required a microcode update to stop boosting to full clock speed and drawing a lot of power every time Windows sneezed. That was a fun couple of months...
The thing is, AMD's chips are very competitive. But I'm not entirely sold on the reliability of AMD's design validation process right now.