Enjoying all the conjecture but thought I would add my experience to date after 2 weeks of full-time use.
I am running an M1 Mac mini 8GB / 256GB as an always-on headless home server, hub, UniFi Controller, Plex Server, Apple Content Cache, dedicated transcoder, music server, 10 GbE node... and the list goes on.
Pretty much everything I have learned about x64 memory management can be left at the door when moving to an M1 system. Nothing reads directly across from Intel, and with a wave of the hand you can pretty much announce that, like for like, the M1 sips raw memory capacity whilst enjoying a massive amount of memory bandwidth. Remarkable, even. If I had to grab a figure I'd almost say 8GB here feels like 32GB on my iMac Pro.
Is there a catch? Well yes, or possibly yes, and we all love a good ethereal 'it depends' moment to ruin the mood.
Having thrown everything at the base M1 mini I find myself needing more RAM for just one of my applications.
For me the Achilles' heel is running a UniFi Controller through Rosetta 2. UniFi leans heavily on Java 8 and an old MongoDB, and taken together these grab and hold onto quite a bit of memory. For whatever reason this combination of dependencies seems to sit uneasily within the Rosetta 2 framework and M1 memory management. It all works OK, but only by brute-force memory mismanagement, leaving less available for all the efficient apps and the base OS.
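Nothing UniFi-specific here, but if you want to see for yourself what's holding memory on a headless box, a one-liner like this does the job (standard ps/sort flags, nothing M1-specific; the process names at the top will obviously differ per machine — on mine it was the translated java and mongod processes):

```shell
# List resident set size (RSS, in KB) and command name for every
# process, biggest memory consumers first, top 10 only.
ps ax -o rss=,comm= | sort -rn | head -n 10
```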
So I've ordered a 16GB model, which apparently won't arrive for a few weeks. I suspect that a newer Java, a more optimised build of UniFi, or putting MongoDB out of its misery would restore sanity, but for now I have a specific and demonstrable need for 16GB.
So close, so very very close...