Nah, man, that's the whole point of why Moore's Law DIED. Single chips got shrunk as small as physically possible (and then a little bit more), and then clock speed, temperature, and power draw all hit hard physical limits. Then you have RISC vs CISC, where a single RISC core loses out a little on raw performance, but is now physically smaller than an Intel CISC core.
The true power of Apple's whole future with the ARM line of SoCs is that they are developing SERIOUS concurrency APIs/frameworks for the CPU, the GPU, AND the ML units (think Grand Central Dispatch, Swift Concurrency, Metal, and Core ML).
So they are gonna keep REALLY scaling up CPU core counts and making sure people multi-thread their software as much as possible.
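To make that concrete, here's a minimal sketch of fanning render work out across all the cores with Swift Concurrency's TaskGroup. The renderTile function is a hypothetical stand-in, not code from my actual app:

```swift
import Foundation

// Hypothetical stand-in for one expensive chunk of the render.
func renderTile(_ index: Int) -> [UInt8] {
    // ... heavy per-tile computation here ...
    return []
}

// Fan the tiles out as child tasks; the scheduler spreads them
// across the available cores instead of pinning one core at 100%.
func renderAllTiles(count: Int) async -> [[UInt8]] {
    await withTaskGroup(of: (Int, [UInt8]).self) { group in
        for i in 0..<count {
            group.addTask { (i, renderTile(i)) }
        }
        var tiles = [[UInt8]](repeating: [], count: count)
        for await (index, tile) in group {
            tiles[index] = tile
        }
        return tiles
    }
}
```

The point is that work structured like this scales with core count for free, which is exactly what these chips reward.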
I have an app I'm working on: on my older Intel MacBook Pro (15", 2019) with 8 cores (16 virtual), the render takes 5 minutes; on the M1 Pro MacBook Pro with 8 cores, the same render takes 50 seconds. A 6x IMPROVEMENT!!! And on the Intel the fans are all blaring and memory pressure goes nuts, while the M1 Pro stays whisper quiet.
If I lower the settings, a render that took 60 seconds now takes 10. This is REALLY gonna make my damn DAY!!
Best of luck, dudes!
(NB: unless something really demanding kicks in and requests multiple cores at high priority, I'm seeing the efficiency cores doing most of the work.)
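That routing is largely driven by quality-of-service: on Apple Silicon, low-QoS work tends to land on the efficiency cores, while high-QoS work is eligible for the performance cores. A tiny sketch, with the closure bodies as hypothetical examples:

```swift
import Foundation

// Low-QoS work: on Apple Silicon this typically runs on the efficiency cores.
DispatchQueue.global(qos: .background).async {
    // housekeeping, indexing, thumbnail generation, etc.
}

// High-QoS work: the scheduler will pull in the performance cores for this.
DispatchQueue.global(qos: .userInitiated).async {
    // the render the user is actively waiting on
}
```

So if you mostly see the efficiency cores lit up, it's usually because nothing on the system is asking for high-QoS, multi-core work at that moment.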