Don't necessarily disagree with your assessment here. But when people ask why the M1 isn't keeping up framerate-wise with a PS4 or an Xbox One despite having a more powerful GPU, it's worth pointing out the tricks consoles use to lighten the load on the GPU, versus PC/Mac where, as you point out, those tricks are rare.
And if your goal is to target a consistent framerate rather than a specific graphical quality (i.e. you need the quick responses more than the fidelity in a particular game), then I'd argue the trade-offs are sometimes worth making. That goes double on the Mac, where running heavy games at native resolution on a HiDPI display is a fool's errand most of the time, and there are real benefits to providing a mechanism that picks a render resolution closer to the optimum for hitting the framerate target, without the player having to do that tweaking themselves (see the sketch below). That said, I think Leman pointed out some other possibilities for Apple GPUs here that would probably be better than the more brute-force approach consoles have used this last decade. My main point was more that Macs with high-resolution screens could benefit from this sort of trick to lighten the load and make it easier to sustain 60fps.
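For concreteness, on the Mac that mechanism can be as simple as rendering into a smaller drawable and letting the compositor scale it up to the backing size. A minimal sketch, assuming a CAMetalLayer-backed NSView; `applyRenderScale` and `renderScale` are illustrative names I made up, not an Apple API:

```swift
import AppKit
import QuartzCore

// Render below native resolution on a HiDPI display by shrinking the
// layer's drawableSize; the layer scales the result back up on composite.
// renderScale = 1.0 is native, 0.5 is quarter the pixel count, etc.
func applyRenderScale(to layer: CAMetalLayer, in view: NSView, renderScale: CGFloat) {
    // Native backing size in pixels (e.g. 2x the point size on Retina).
    let backingSize = view.convertToBacking(view.bounds.size)
    layer.drawableSize = CGSize(width: backingSize.width * renderScale,
                                height: backingSize.height * renderScale)
}
```

The upscale you get for free here is just the layer's filtering; a proper temporal upscaler would look better, but the load-shedding principle is the same.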
As for getting CPU-bound: I'd assume that if you're targeting a framerate, you'd mostly only become CPU-bound above that target (on the target hardware, yadda, yadda, yadda). With Microsoft's dynamic resolution tech at least, they were trying to keep things in balance to stay near the target framerate: spend roughly the same GPU time per frame, cranking fidelity up on easier frames and down on harder ones (roughly like the controller sketched below). So I'm not entirely sure what scenarios you're thinking of when you bring this point up.
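Something like this toy controller is what I have in mind, assuming the engine can read back last frame's GPU time (e.g. from MTLCommandBuffer's gpuStartTime/gpuEndTime) and pick a render scale each frame. The names and constants are illustrative, not Microsoft's actual implementation:

```swift
import Foundation

// Toy dynamic-resolution controller: hold GPU time per frame roughly
// constant by trading render resolution against scene cost.
struct DynamicResolutionController {
    let targetFrameTime: Double = 1.0 / 60.0   // aim for 60fps
    let minScale: Double = 0.5                 // never drop below half res
    let maxScale: Double = 1.0                 // never exceed native res
    private(set) var scale: Double = 1.0

    // GPU cost is roughly proportional to pixel count (scale^2), so move
    // the scale by the square root of the time ratio, damped so the
    // resolution doesn't oscillate frame to frame.
    mutating func update(lastGPUTime: Double) {
        guard lastGPUTime > 0 else { return }
        let ideal = scale * (targetFrameTime / lastGPUTime).squareRoot()
        let damping = 0.25                     // move 25% of the way per frame
        scale += (ideal - scale) * damping
        scale = min(max(scale, minScale), maxScale)
    }

    // Render resolution for this frame, given the native drawable size.
    func renderSize(nativeWidth: Int, nativeHeight: Int) -> (Int, Int) {
        (Int(Double(nativeWidth) * scale), Int(Double(nativeHeight) * scale))
    }
}
```

Note the controller only ever reacts to GPU time; if the CPU is the bottleneck, resolution scaling does nothing for you, which is why I'd expect it to matter mainly when you're GPU-bound below the target.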