This means memory references can be updated more quickly. Beyond the raw speed boost, faster retain/release of objects can have an impact on RAM too: if RAM is cleared out sooner, that space is freed up for other programs sooner. So it's a faster turnover of RAM usage.
I see what you're saying (I think); I hadn't thought of it that way. I still think the amount of RAM 'used' or needed isn't changing - there's no 'magic less RAM usage' - i.e. it's mostly a speed effect.
But perhaps I'm thinking too much in a 'serial' world. With multiple cores/parallel processes, there would be more short-term spikes in usage that could cause RAM-driven system slowdowns. (I.e. process one is unloading/clearing out chunks of RAM while process two is trying to load data into RAM - a kind of collision that has the same effect as 'needing less RAM': if not an outright shortage, at least a major bottleneck solved.)
I suppose that's somewhere in between (in a sense) - it's not some magical process that means the M1 is using/needing less total memory, but the much faster operations result in fewer transient spikes (or collisions) that drive the memory system to e.g. swap out or compress or whatever.
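To make the 'collision' idea concrete, here's a toy simulation - all numbers and the linear free/alloc model are invented purely for illustration, nothing here is measured M1 behavior. It just shows how the peak combined footprint of two processes depends on how fast one of them releases memory:

```python
# Hypothetical scenario: process A frees a 4 GB buffer while
# process B allocates one at the same time. If freeing is slow,
# A's and B's footprints overlap and the combined peak spikes
# well above what either process needs on its own.

def peak_usage(free_duration, alloc_duration, buffer_gb=4, step=0.01):
    """Peak combined footprint (GB) when A's free overlaps B's alloc.

    Both start at t=0: A holds buffer_gb and releases it linearly
    over free_duration; B acquires buffer_gb linearly over
    alloc_duration. Returns the max combined usage over time.
    """
    horizon = max(free_duration, alloc_duration)
    peak = 0.0
    t = 0.0
    while t <= horizon:
        a = buffer_gb * max(0.0, 1 - t / free_duration)   # A releasing
        b = buffer_gb * min(1.0, t / alloc_duration)      # B allocating
        peak = max(peak, a + b)
        t += step
    return peak

# Slow release: A still holds most of its buffer while B fills up,
# so the transient peak is much higher than either buffer alone.
slow = peak_usage(free_duration=1.0, alloc_duration=0.2)

# Fast release: A's memory is gone almost before B needs it,
# so the peak never exceeds one buffer's worth.
fast = peak_usage(free_duration=0.05, alloc_duration=0.2)

print(slow > fast)  # the slow-release peak is strictly higher
```

The total memory each process needs is identical in both runs; only the transient peak changes - which is exactly the 'same effect as needing less RAM' point, since it's those peaks that push the system into swapping or compressing.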
End result is the same, really: the much faster M1 RAM/chip/SSD compensates for having less 'extra' or buffer RAM, so for the user there's less need to add RAM for the same usage profile.
In a slightly different sense, it (possibly) shows that removing bottlenecks makes a big difference in perceived speed and system resource usage - related to speed, but not driven by raw speed alone.
Is that a fair restatement of what you were saying? (Restatement in my words for my comprehension purposes...)