The use of UMA also probably freed up system memory that Intel-based Macs with iGPUs have to keep reserved. Having said that, if a workload requires more RAM, it'll need more RAM. No two ways about it. It's encouraging, though, that many actual users report the 8GB base model M1 Macs being sufficient for their needs.
It does free up some memory, but you won’t get it all back switching to UMA from shared graphics memory. First, you still need to use some of that RAM for texture and frame buffer memory, like you did before. Second, recent Intel iGPUs appear to use a dynamic allocation for GPU memory, adjusting to need but maxing out around 1.65GB. In most cases it’s using a lot less when just moving around macOS.
So yes, there are definitely savings, but I suspect they’re not quite as big as people expect. Those on 8GB systems are reaping the biggest benefit from the change, with 16GB systems seeing a smaller difference.
The problem with that approach is that iOS is only so frugal with RAM because it suspends apps in the background after a few seconds and releases their memory if necessary. That wouldn’t be feasible or acceptable on macOS.
Correct. iOS tends to outright purge, and doesn’t use a swap file, last I checked. With iOS, developers have to save state explicitly and then restore that state on app launch. With macOS, swap is used as a way to keep apps launched and ready. On one hand, iOS writes less out to disk, so less I/O is potentially required to restore state. On the other, macOS doesn’t require applications to restore their state manually.
M1 uses the macOS approach.
what bit is inaccurate?
Programs can run from storage (virtual memory is an example) programs and data is loaded into RAM so the CPU has faster access to it and not wasting cycles waiting for data from slow storage.
This bit is inaccurate for a start, since you misunderstand what virtual memory is, and the purpose of RAM. Yes, RAM is faster than storage, but the CPU can only directly address RAM. RAM is also the “working storage” for all processes. Any data I’m working on has to be in RAM for the CPU to be able to do anything with it. If I have application state, that has to exist in RAM as well.
Virtual memory is a neat trick to let the kernel step in and help manage RAM pages, enabling the swap file to exist and letting the system use more memory than physically exists as RAM. Yes, there are also other neat little features like memory mapped files, but keep in mind that when you access part of a memory mapped file that’s not already in RAM, the kernel has to step in, read the data from disk into RAM, and update the CPU’s memory map so it knows what changed.
That said, one of the nice things I like about memory mapped application binaries is that you don’t need the entire application code in RAM at the same time. For very large binaries like projects I’ve worked on, this means you can have something like 50MB in your TEXT segment (the bit that holds the compiled machine code), but then only have, say, 16MB of it in RAM because the user is only using a fraction of what the application can do. I’ve worked on projects where we optimized our builds to take advantage of this fact back when stuff like the iPad 2 was still a common device, and having only 512MB of RAM with a 50MB binary was a big deal.
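A minimal sketch of the lazy paging described above (my own illustration, not from the thread), using Python’s `mmap`. Mapping a file doesn’t read it all into RAM; only the page(s) you actually touch get faulted in by the kernel.

```python
# Sketch: memory-mapping a file so the kernel pages data in lazily.
# Only the pages we actually touch are read from disk into RAM; the
# rest of the file stays on storage until accessed.
import mmap
import os
import tempfile

page = mmap.PAGESIZE
path = os.path.join(tempfile.mkdtemp(), "blob.bin")

# Create a file several pages long to stand in for a large binary,
# with a small marker at the very end.
with open(path, "wb") as f:
    f.write(b"A" * page * 4)   # 4 pages of filler
    f.write(b"needle")         # marker in the final page

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # Touching this slice triggers a page fault: the kernel reads
        # just the page containing it into RAM and updates the page
        # tables — the first four pages were never needed.
        tail = mm[page * 4 : page * 4 + 6]

print(tail)  # b'needle'
```

The same mechanism is what lets a 50MB TEXT segment sit mostly on disk while only the hot code paths occupy RAM.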
Just witness how much more usable older machines are when the HDD is swapped for a speedy SSD: programs load into RAM faster and launch quicker.
True. A lot of this has to do with the much better latency of SSDs when reading/writing RAM pages. Every time the kernel takes a page fault to move pages into memory, the faulting process is frozen until the data arrives. So good latency is a must.
have a read of RobbieTT's post
his 8GB M1 reduced its RAM pressure after 2 weeks of operation
The M1s appear to be operating differently from what conventional wisdom says they should.
Anecdotes are not data. Especially when talking about a single data point with an unreproducible workload spread out over a week. There would need to be hundreds to thousands of runs of this sort of test to deal with the statistical noise.
I’ve observed similar behavior on Intel machines prior to Big Sur, as Apple has updated the macOS memory manager to be more aggressive about pushing things to swap when they’re not needed. A lot of that is driven by easier access to NVMe SSDs, which provide the very good latencies that enable this sort of behavior.
The M1 has no new trick up its sleeve to magically reduce memory pressure here. It’s using RAM, swap and memory mapped files the same way as under Intel. Honestly, probably the biggest change to M1 memory management is that it now uses 16KiB memory pages instead of the 4KiB that has been standard for ages. That makes operations where pages need to be read from, or written to, disk more efficient, and it cuts down on the number of page faults, improving performance that way.
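As a quick illustration (my own sketch, not from the thread), you can ask the OS what page size the kernel uses: on an Apple silicon Mac this should report 16384 bytes, while typical x86-64 systems, including Intel Macs, report 4096. Larger pages directly mean fewer faults for the same amount of data.

```python
# Sketch: querying the kernel's VM page size. Apple silicon Macs use
# 16 KiB pages; most x86-64 systems use 4 KiB.
import resource

page_size = resource.getpagesize()
print(f"Page size: {page_size} bytes ({page_size // 1024} KiB)")

# Fewer, larger pages means fewer page faults to bring in the same data:
data_size = 1 << 20  # 1 MiB
print(f"Page faults to fault in 1 MiB (worst case): {data_size // page_size}")
```

On a 4KiB-page system that 1 MiB costs up to 256 faults; with 16KiB pages it’s 64.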