Or, Apple doesn’t use SMT because it has a negative impact on power efficiency (see Intel dropping SMT from Atom for this reason) and it requires doubling certain hardware resources, which is less feasible given Apple's already humongous register file and reorder buffers.
Sigh....
It doesn't require doubling all of the resources. You primarily need more rename registers and some extra logic in the dispatch and retirement stages. The number of primary functional units (adders, math ops, load units, etc.) stays exactly the same.
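To make that split concrete, here's a toy sketch (not a real simulator, and the resource counts are made up): each hardware thread carries its own duplicated per-thread state such as a rename table, while both threads compete for the same fixed pool of functional units.

```python
# Toy model of SMT resource sharing: per-thread state (rename tables) is
# duplicated, while execution resources (a shared ALU pool) are not.
from dataclasses import dataclass, field

ALU_POOL = 4  # shared functional units -- same count with or without SMT


@dataclass
class HardwareThread:
    # Duplicated per hardware thread; this is the "more rename registers" cost.
    rename_table: dict = field(default_factory=dict)


def issue_cycle(ready_ops):
    """Greedily hand the shared ALUs to ready ops from all threads.

    ready_ops maps thread id -> list of ops ready to issue this cycle.
    Returns the (thread, op) pairs that actually got an ALU.
    """
    issued = []
    slots = ALU_POOL
    for tid, ops in ready_ops.items():
        for op in ops:
            if slots == 0:
                return issued  # out of shared ALUs: the rest wait a cycle
            issued.append((tid, op))
            slots -= 1
    return issued


threads = {0: HardwareThread(), 1: HardwareThread()}  # duplicated state
# Six ops ready across both threads, but only four shared ALUs this cycle.
print(issue_cycle({0: ["add", "mul", "add"], 1: ["add", "sub", "or"]}))
```

The point of the sketch: adding a second thread only added a second `HardwareThread` record; `ALU_POOL` never changed.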
Apple's humongous reorder buffer puts very similar pressure on a bulked-up rename register file. Power-gating hardware that isn't in use, along with a narrower set of target workloads, is a bigger part of why Apple's system 'works'.
Intel didn't drop SMT from Atom so much as it stopped trying to recycle a mainstream design under the Atom label. Atom didn't start off with SMT; it got it more because Intel was being cheap than because Intel was optimizing a design for a specific Atom market. The current ones don't have it because Intel went back to building the Atom architecture as a funded, distinct design.
Nobody in the core enterprise product groups at Oracle, IBM's DB2 group, or SQL Server is losing sleep or having "fear of missing out" drama over not having a port to M1. Apple not having SMT is a design choice with ramifications. If their bread-and-butter systems are phones that run on relatively small batteries with relatively small (sub-1TB) data sets, that's a reasonable trade-off to make. It's not going to help run better over random-access, PB-sized data sets, though.
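Why random access over huge data sets is where SMT earns its keep: each lookup is a chain of dependent loads, so a single thread spends most of its time stalled on memory, and a second hardware thread can fill those stall cycles using the same unduplicated load units. A minimal pointer-chasing sketch of that access pattern (the chain-building scheme and sizes here are illustrative, not from the post):

```python
# Sketch of a latency-bound workload: every step is a load whose address
# depends on the previous load, so there's no instruction-level parallelism
# for one thread to exploit -- the classic case where SMT hides stalls.
import random


def make_chain(n, seed=0):
    """Build a single random cycle: idx[i] gives the node after i."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    idx = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        idx[a] = b
    return idx


def chase(idx, start=0):
    """Follow the chain once around; each iteration is a dependent 'load'."""
    steps, i = 0, start
    while True:
        i = idx[i]
        steps += 1
        if i == start:
            return steps  # back at the start after visiting every node


chain = make_chain(1_000)
print(chase(chain))  # one full trip around the cycle
```

On real hardware, two such chases on sibling SMT threads tend to overlap their cache misses, which is exactly the throughput win that a phone-sized, sub-1TB working set rarely needs.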
P.S. There was also some overlap between Atom development and Xeon D, where Intel had server-like duties but wanted a "cheaper" core (to both customers and itself). When coupled to higher-end NAS/SAN/network-I/O data-driven devices, SMT does make more sense.