This actually happened with the Pentium Pro in 1995; the technique carried forward into the Core microarchitecture, and AMD uses it as well. This means both Intel and AMD CPUs are internally RISC-like. Meanwhile, the denser CISC instructions reduce bus bandwidth and make more efficient use of the instruction and data caches.
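As a rough illustration of the decoding technique (a toy sketch only — the instruction and micro-op mnemonics here are invented, and real x86 decoders are vastly more complex), a decoder of this kind splits a dense memory-operand CISC instruction into simple load/compute/store micro-ops:

```python
# Toy sketch of CISC-to-micro-op decoding, loosely in the spirit of the
# Pentium Pro. Mnemonics are invented for illustration; this is not
# Intel's or AMD's actual microcode.

def decode(insn: str) -> list[str]:
    """Split one CISC-style instruction into RISC-like micro-ops."""
    op, _, operands = insn.partition(" ")
    dst, src = [s.strip() for s in operands.split(",")]
    if op == "add" and dst.startswith("["):
        # A read-modify-write on memory becomes load + add + store.
        addr = dst.strip("[]")
        return [
            f"load  tmp0, [{addr}]",   # micro-op 1: fetch the memory operand
            f"add   tmp0, {src}",      # micro-op 2: ALU operation on registers
            f"store [{addr}], tmp0",   # micro-op 3: write the result back
        ]
    # Simple register-register instructions map to a single micro-op.
    return [insn]

# One dense CISC instruction expands to three simple micro-ops:
print(decode("add [rbx], rax"))
# A register-register add stays a single micro-op:
print(decode("add rcx, rdx"))
```

The point of the sketch: the programmer-visible instruction stream stays compact (one instruction touching memory), while the execution core only ever sees simple, fixed-function micro-ops it can schedule like a RISC machine would.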
It may be for these reasons that AMD's server/workstation EPYC CPUs tend to be faster than IBM's RISC POWER9:
https://www.phoronix.com/scan.php?page=article&item=rome-power9-arm&num=1
Since both Intel and AMD CPUs are really covert RISC machines, and even purer RISC machines like POWER9 are highly complex CPUs using out-of-order, superscalar, speculative execution, the original RISC advantages might seem to no longer apply. Contemporary RISC and CISC CPUs are both incredibly complex, and there is nothing "reduced" about either of them.
We must also keep in mind that as ARM-instruction-set CPUs scale upward, the comparison is not those vs Intel but those vs x86 in general — which includes AMD, who solely developed the x64 instruction set that Intel licenses from them. IOW, if there is an ARM or Apple Silicon advantage, it will be manifest against x86 in general, AMD included.
Ever since the Pentium Pro and succeeding x86 CPUs seemed to nullify the RISC performance advantage, the traditional view for decades was that process and fabrication technology, not the instruction set, was the differentiator. The corollary was that if ARM could ever scale up to Xeon or EPYC performance levels, it would burn just as much power as x86.
However, in recent years Apple's Ax CPU development has been on an improved power/performance curve which currently seems to be holding. Already the A12Z CPU in an iPad Pro is roughly as fast as the 4.2GHz quad-core Intel Kaby Lake CPU in a 2017 iMac 27. Nobody has yet authoritatively explained how this was achieved when all previous RISC designs failed to compete. It is unlikely to be simply due to the ARM instruction set itself or traditional RISC "advantages".