Could you explain it a bit more? Intel and AMD design their CPUs to have a very high clock speed, so it's not a problem.
Intel TRIED the "crank the frequency to the limit" strategy with P4. It was a failure.
IBM tried it with POWER6. It was a failure.
Just think what back-to-back issue of dependent instructions MEANS.
I'm not saying Apple couldn't go to a higher frequency if they wanted.
I am saying that
(a) you can't keep increasing frequency indefinitely by adding more pipeline stages
(b) the maximum frequency you can reach is not the frequency at which performance is maximized (because each stage no longer has time to engage in OoO smarts)
(c) there are certain operations in the pipeline which, if you split them over more than one pipeline stage, you might as well run at half the frequency, because you have destroyed the performance of back-to-back execution of dependent instructions. The most important of these is scheduling, but another example: what value would there be if it took two cycles to execute the simplest single-cycle instructions (eg ADD)? You can boast that your CPU runs at 10GHz, but for practically every purpose it might as well run at 5GHz and take one cycle, not two, for ADD, AND, NOT, etc.
(d) so even if you want to maximize performance, you don't get there by raw maximization of GHz.
(e) and you probably don't want to maximize performance, you want to maximize some combination of performance and energy...
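To make point (c) concrete, here's a toy model (my own illustration, not anyone's real numbers): a chain of N dependent ALU ops, where splitting ADD over two pipeline stages doubles the clock but also doubles ADD latency. For the dependent chain, the "faster" design gains nothing.

```python
# Toy model: back-to-back dependent ops, each must wait for the previous result.
# chain time = n_ops * cycles_per_op / frequency

def chain_time_ns(n_ops: int, freq_ghz: float, cycles_per_op: int) -> float:
    """Time (ns) to execute n_ops dependent ops back to back."""
    return n_ops * cycles_per_op / freq_ghz

base = chain_time_ns(1_000, 5.0, 1)   # 5 GHz, 1-cycle ADD
deep = chain_time_ns(1_000, 10.0, 2)  # 10 GHz, but ADD now takes 2 cycles

print(base, deep)  # 200.0 200.0 -- identical: the extra GHz bought nothing
```

The doubled frequency only helps on independent work that can fill both stages; on a dependency chain, which is exactly the back-to-back case, the two designs are equivalent, and the deeper pipeline pays extra for branch mispredicts and power on top.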
Right now the highest single-core GB6 results (that aren't obvious nonsense) are around 3300 for the Intel i9-13900KS. M2 Pro/Max are at around 2800. If we assume a basic 20% improvement (10% from A16, 10% from A17) that gives us M3 at 3360. So it's hardly clear that Apple are following the wrong path...
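Spelling out that projection (the scores are the rough figures from above; the 20% is an assumption, not a measurement):

```python
# Rough M3 projection from the post's own numbers -- illustrative only.
m2_score = 2800            # approx. M2 Pro/Max single-core GB6
intel_score = 3300         # approx. i9-13900KS single-core GB6
m3_estimate = m2_score * 1.2  # assumed ~20% gen-on-gen uplift (10% + 10%, taken flat)

print(round(m3_estimate))  # 3360 -- slightly ahead of the 3300 Intel figure
```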