I don’t think so, though, because Apple Silicon doesn’t have a power-hungry decoder taking up an inordinate amount of real estate on the chip. Get rid of that and Intel could be closer to Apple’s numbers, but get rid of that and you end up with a chip no one will buy.
We're looking at a few independent contributors. TSMC's process, I believe, is still better than Intel's; that process advantage lets Apple trade performance for power, and it seems Apple has been leaning more towards power efficiency than all-out performance. On top of that, the x86 architecture carries an enormous amount of technical debt that makes everything Intel does less efficient.
The technical debt is real. Even if their process were competitive, they'd still be behind. I don't know how anyone at Intel could read that x86S document and not be screaming into their pillow about how they've somehow convinced themselves that what they've been doing is OK. As you say, the legacy stuff makes everything they do more complicated to design and then more complicated to execute. It means more logic switching (burning power), more real estate (leaking power), and longer paths that take more time, which means running faster to do the same work, burning yet more power.
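For one concrete flavor of that decode-side debt: x86 instructions are variable length (1 to 15 bytes), so the front end can't even know where the next instruction starts without partially decoding the bytes before it, while a fixed-width ISA gets every boundary for free. A toy sketch of the difference; the byte encoding here is made up and far simpler than real x86:

```python
# Toy illustration (not a real decoder). With a fixed-width encoding,
# every instruction boundary in a fetch window is a simple stride, so
# decode lanes can all start in parallel. With a variable-length
# encoding, each boundary is only known after the previous instruction's
# length has been determined, a serial chain that real x86 front ends
# spend extra logic (and power) trying to parallelize speculatively.

def fixed_width_boundaries(window_len, width=4):
    """Arm-style: boundaries are a stride, computable up front."""
    return list(range(0, window_len, width))

def variable_length_boundaries(window, length_of):
    """x86-style: must decode each instruction to find the next one."""
    boundaries, pos = [], 0
    while pos < len(window):
        boundaries.append(pos)
        pos += length_of(window[pos])  # decode this byte to learn the length
    return boundaries

# Hypothetical 16-byte fetch window where the first byte of each
# instruction encodes its own length (real x86 is far messier).
window = [2, 0, 3, 0, 0, 1, 4, 0, 0, 0, 5, 0, 0, 0, 0, 1]
print(fixed_width_boundaries(16))                        # [0, 4, 8, 12]
print(variable_length_boundaries(window, lambda b: b))   # [0, 2, 5, 6, 10, 15]
```

The fixed-width list needs no knowledge of the bytes at all; the variable-length list can't produce entry N without having walked entries 0 through N-1.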
There are people on the forum who can probably describe exactly what those inefficiencies translate to: where the extraneous logic sits, how it limits prediction and cache efficiency, and what other bottlenecks are introduced. I'm not familiar at that level of detail, but I'm familiar enough to develop a gut feel, and my gut says architecture can't explain it all.
Even if Intel cleared their technical debt, I think they'd still be at a disadvantage because of their process. In other words, if they opened as a foundry to Apple, I don't think Apple would choose to produce Apple Silicon parts on Intel processes. The inefficiency of the architecture is a problem, but I can't convince myself it fully explains how much better the M-series appears on a performance-per-watt basis.
As far as whether anyone would buy a simplified x86, I honestly can't see why they wouldn't. Intel makes a pretty compelling case themselves:
"Since its introduction over 20 years ago, the Intel® 64 architecture became the dominant operating mode. As an example of this evolution, Microsoft stopped shipping the 32-bit version of their Windows 11 operating system. Intel firmware no longer supports non UEFI64 operating systems natively. 64-bit operating systems are the de facto standard today. They retain the ability to run 32-bit applications but have stopped supporting 16-bit applications natively. "
I think they've fallen victim to their own marketing message that if you don't have "Intel Inside", you can't be sure it's going to be "compatible". They've become so fundamentalist about it that they've convinced themselves that being truly compatible means being able to trace back all the way to the dawn of the microprocessor.
We don't need that. We live in a 64-bit world with very capable translators, emulators, and virtualizers. Itanium taught the lesson that it was better to translate x86 in software than to implement it in hardware. Apple's Rosetta runs x86 code at close to native speeds on Arm. Windows NT provided x86 compatibility on PowerPC, MIPS, and Alpha, and, while trying to fact-check myself on Alpha earlier, I saw that the Alpha systems of the day were among the fastest ways to run x86 code because the processor was fast enough to hide the translation inefficiencies.
If Intel can make a faster, more efficient processor, then I think they can stay relevant even if it means moving away from gate-level compatibility with x86 instructions. It means giving up the ruse that only Intel can be compatible, though. The competition came up quickly, but Apple has a culture of keeping alternative ideas alive in the lab so they can be quick to pivot; maybe Intel's been doing the same...