
t0pher

macrumors regular
Original poster
Sep 6, 2008
134
228
UK
It’s interesting reading the discussions about the M1, but I think people are misunderstanding something, largely because it’s been done this way for decades.

Normally, and for decades, programs and data are copied from storage into RAM, and CPUs process those programs and data from RAM, writing to disk when the data needs to be saved. The whole point of copying from storage to RAM is to reduce the time the CPU has to wait to read the data it needs to perform its work. Traditionally, reading from RAM was thousands of times quicker than reading from storage.

RAM is effectively a faster cache for the storage.
RAM is also a cache for the much faster caches on the CPU itself.

Programs contain lots of data, but most of that data isn’t used most of the time, consuming RAM that could be used by something else.

The storage on the M1 is much faster than traditional spinning disks or even most aftermarket SSDs, and the M1 has unified memory. Remembering that RAM is really where the CPU caches data from storage so it can access it and process instructions quicker, the M1 is able to swap out, or simply read from where it already is in storage, bits of programs that would normally be read into RAM.
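Reading program data "from where it already is in storage" is essentially memory mapping, which every modern OS (including macOS) supports. A minimal Python sketch of the idea; the file, its size, and its contents are invented for the demo:

```python
import mmap
import tempfile

# Stand-in for a program's on-disk data: 1 MiB of zero bytes.
# (File name and size are arbitrary for this demo.)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * (1024 * 1024))
    path = f.name

# Map the file into virtual memory. Nothing is bulk-copied into RAM here;
# the OS pages data in on demand, one page at a time, only when a byte in
# that page is actually touched.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    first = mm[0]   # faults in only the first page
    last = mm[-1]   # faults in only the last page

print(first, last)  # the untouched pages in the middle never entered RAM
```

Only the pages actually touched are faulted into RAM; the rest of the file stays on disk until needed.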


All this means that the M1 doesn’t need as much RAM as older systems like we’ve seen from Intel, AIM, etc.

So long as there is a sensible amount of RAM (8 GB appears to be plenty, but 16 stops pundits complaining), storage should really now be seen as second-tier RAM.

Imagine buying a basic PC for £700 that effectively has 264 GB of RAM (8 GB of actual RAM plus 256 GB of fast storage), with a CPU equivalent to an Intel i9.

Some numbers for the M1:
Storage reads/writes: ~2.6 GB/s
RAM reads/writes: ~34 GB/s (4266 MT/s LPDDR4X speed, per Wikipedia)

I can’t find a GT/s rating for the M1.

So long as Apple silicon and macOS are smart enough to only use RAM for what is actually in use, then less RAM is needed to quickly feed the CPU, and the disk is typically fast enough to swap into RAM any bits that are needed, to the extent that the user typically won’t notice.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
It’s interesting reading the discussions about the M1, but I think people are misunderstanding something, largely because it’s been done this way for decades.

Normally, and for decades, programs and data are copied from storage into RAM, and CPUs process those programs and data from RAM, writing to disk when the data needs to be saved. The whole point of copying from storage to RAM is to reduce the time the CPU has to wait to read the data it needs to perform its work. Traditionally, reading from RAM was thousands of times quicker than reading from storage.

RAM is effectively a faster cache for the storage.
RAM is also a cache for the much faster caches on the CPU itself.

Programs contain lots of data, but most of that data isn’t used most of the time, consuming RAM that could be used by something else.

The storage on the M1 is much faster than traditional spinning disks or even most aftermarket SSDs, and the M1 has unified memory. Remembering that RAM is really where the CPU caches data from storage so it can access it and process instructions quicker, the M1 is able to swap out, or simply read from where it already is in storage, bits of programs that would normally be read into RAM.

All this means that the M1 doesn’t need as much RAM as older systems like we’ve seen from Intel, AIM, etc.

So long as there is a sensible amount of RAM (8 GB appears to be plenty, but 16 stops pundits complaining), storage should really now be seen as second-tier RAM.

Imagine buying a basic PC for £700 that effectively has 264 GB of RAM (8 GB of actual RAM plus 256 GB of fast storage), with a CPU equivalent to an Intel i9.

Some numbers for the M1:
Storage reads/writes: ~2.6 GB/s
RAM reads/writes: ~34 GB/s (4266 MT/s LPDDR4X speed, per Wikipedia)

I can’t find a GT/s rating for the M1.

So long as Apple silicon and macOS are smart enough to only use RAM for what is actually in use, then less RAM is needed to quickly feed the CPU, and the disk is typically fast enough to swap into RAM any bits that are needed, to the extent that the user typically won’t notice.

The other part of this discussion (and a key factor that I feel many people have completely overlooked) is that most of the memory management capabilities of the M1 have been developed for years on both iOS and iPad OS. The iPhone 12 series runs smooth as silk, even though they have 1/3 the RAM of some "flagship" Android phones, such as the S20 Ultra. This is because Apple has refined the notion of memory management, while Android (and even Chrome OS) still rely on the more traditional model for RAM management that the x86 platform and Windows use. Since iOS/iPad OS are built upon the same Darwin kernel as Mac OS, it would not be terribly complicated to bring iOS features such as memory management to Mac OS, especially now that Apple is using a common ISA across all three product lines.
 

t0pher

macrumors regular
Original poster
Sep 6, 2008
134
228
UK
The other part of this discussion (and a key factor that I feel many people have completely overlooked) is that most of the memory management capabilities of the M1 have been developed for years on both iOS and iPad OS. The iPhone 12 series runs smooth as silk, even though they have 1/3 the RAM of some "flagship" Android phones, such as the S20 Ultra. This is because Apple has refined the notion of memory management, while Android (and even Chrome OS) still rely on the more traditional model for RAM management that the x86 platform and Windows use. Since iOS/iPad OS are built upon the same Darwin kernel as Mac OS, it would not be terribly complicated to bring iOS features such as memory management to Mac OS, especially now that Apple is using a common ISA across all three product lines.
I’m wondering if they can or will do something similar for x86.
Unencrypted read/write speeds on my 2016 15” MBP are similar to the M1’s, but I suspect the Intel CPU and associated components are designed/optimised to work the way things have for decades, requiring the program to be loaded into RAM to be executed rather than using a blend of RAM and fast storage. Apple’s implementation of unified memory and fast storage in an SoC has truly turned things on their head and is a game changer.
 
  • Like
Reactions: m-a and cool11

Phil A.

Moderator emeritus
Apr 2, 2006
5,800
3,100
Shropshire, UK
The other part of this discussion (and a key factor that I feel many people have completely overlooked) is that most of the memory management capabilities of the M1 have been developed for years on both iOS and iPad OS. The iPhone 12 series runs smooth as silk, even though they have 1/3 the RAM of some "flagship" Android phones, such as the S20 Ultra. This is because Apple has refined the notion of memory management, while Android (and even Chrome OS) still rely on the more traditional model for RAM management that the x86 platform and Windows use. Since iOS/iPad OS are built upon the same Darwin kernel as Mac OS, it would not be terribly complicated to bring iOS features such as memory management to Mac OS, especially now that Apple is using a common ISA across all three product lines.
The problem with that approach is that the reason iOS is so frugal with RAM is that it suspends apps in the background after a few seconds and releases the memory if necessary. That wouldn’t be feasible or acceptable on MacOS
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
M1 Macs use the same cheap consumer-grade SSDs as everyone else. You can buy 2 TB for $250 and get the same speeds as on M1 Macs if the M.2 slot on your motherboard supports PCIe 3.0.

Modern SSDs are 10x to 20x slower than RAM by transfer rate but at least 1000x slower by latency. If you access the data in predictable patterns (as in video editing), you can get a lot done with limited memory. On the other hand, if the memory access patterns are unpredictable, using the data directly from SSD feels like a return to the 80s or 90s.
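JouniS’s latency point can be put in rough numbers. A sketch with illustrative figures; the 100 µs latency and 4 KiB block size are assumptions for a typical NVMe SSD, not M1 measurements:

```python
# Back-of-envelope numbers for latency-bound vs bandwidth-bound I/O.
# All figures are illustrative assumptions, not measured M1 values.
seq_bandwidth = 2.6e9   # bytes/s: sequential SSD throughput
latency = 100e-6        # seconds: one random read on a typical NVMe SSD
block = 4096            # bytes returned per random read

# If every 4 KiB read pays the full latency, effective throughput collapses:
random_throughput = block / latency
print(f"random 4 KiB reads: {random_throughput / 1e6:.0f} MB/s")
print(f"vs sequential: {seq_bandwidth / random_throughput:.0f}x slower")
```

Even a drive with multi-GB/s sequential throughput delivers only tens of MB/s when each small read waits out the full latency, which is why RAM still matters for unpredictable access patterns.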
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
M1 Macs use the same cheap consumer-grade SSDs as everyone else. You can buy 2 TB for $250 and get the same speeds as on M1 Macs if the M.2 slot on your motherboard supports PCIe 3.0.

Modern SSDs are 10x to 20x slower than RAM by transfer rate but at least 1000x slower by latency. If you access the data in predictable patterns (as in video editing), you can get a lot done with limited memory. On the other hand, if the memory access patterns are unpredictable, using the data directly from SSD feels like a return to the 80s or 90s.

This
 
  • Like
Reactions: pshufd

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
I’m wondering if they can or will do something similar for x86.
Unencrypted read/write speeds on my 2016 15” MBP are similar to the M1’s, but I suspect the Intel CPU and associated components are designed/optimised to work the way things have for decades, requiring the program to be loaded into RAM to be executed rather than using a blend of RAM and fast storage. Apple’s implementation of unified memory and fast storage in an SoC has truly turned things on their head and is a game changer.

This is fundamentally an old concept, typically used in dealing with large files. Instead of loading something to memory, then immediately writing to swap, the file can be mapped into virtual memory directly. That's typically where it makes sense. That isn't to say x86 represents the way you would approach something today if starting from scratch (obvious).

Otherwise, if something is the result of an operation, which does not read from disk, it's unlikely you would want to write directly to disk. For information that resides on disk, it's more frequently read than written, so the previous approach applies again and can be done without explicit processor support.

As someone else mentioned while I was writing this, you're often bound by latency. It doesn't matter how many GB/s you can shovel through if the workload is composed of small intermittent reads.


In the future, RAM and SSDs will probably merge together, becoming single ultra-fast non-volatile memory.

I don't think we're that close to such an event. It seems like it would require really cheap high density memory cells, due to the over-provisioning required for such a thing.
 

t0pher

macrumors regular
Original poster
Sep 6, 2008
134
228
UK
This is fundamentally an old concept, typically used in dealing with large files. Instead of loading something to memory, then immediately writing to swap, the file can be mapped into virtual memory directly. That's typically where it makes sense. That isn't to say x86 represents the way you would approach something today if starting from scratch (obvious).

Otherwise, if something is the result of an operation, which does not read from disk, it's unlikely you would want to write directly to disk. For information that resides on disk, it's more frequently read than written, so the previous approach applies again and can be done without explicit processor support.

As someone else mentioned while I was writing this, you're often bound by latency. It doesn't matter how many GB/s you can shovel through if the workload is composed of small intermittent reads.




I don't think we're that close to such an event. It seems like it would require really cheap high density memory cells, due to the over-provisioning required for such a thing.
DDR3-800 has a peak speed of 6.4 GB/s, less than 3 times the M1’s peak storage speed.

Ddr3 speeds

Today’s storage isn’t far off yesterday’s RAM speeds.
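The DDR3-800 figure follows from the standard peak-bandwidth formula, assuming the usual 64-bit (8-byte) DIMM bus:

```python
# Peak DDR bandwidth = transfer rate x bus width.
# A standard DIMM bus is 64 bits = 8 bytes wide (assumed here).
ddr3_800_peak = 800e6 * 8   # 800 MT/s x 8 bytes = 6.4e9 bytes/s
m1_ssd_peak = 2.6e9         # the ~2.6 GB/s figure quoted earlier in the thread

print(f"DDR3-800 peak: {ddr3_800_peak / 1e9:.1f} GB/s")
print(f"ratio vs M1 SSD: {ddr3_800_peak / m1_ssd_peak:.1f}x")
```

So the ratio works out to roughly 2.5x, consistent with "less than 3 times" above.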
 

t0pher

macrumors regular
Original poster
Sep 6, 2008
134
228
UK
M1 Macs use the same cheap consumer-grade SSDs as everyone else. You can buy 2 TB for $250 and get the same speeds as on M1 Macs if the M.2 slot on your motherboard supports PCIe 3.0.

Modern SSDs are 10x to 20x slower than RAM by transfer rate but at least 1000x slower by latency. If you access the data in predictable patterns (as in video editing), you can get a lot done with limited memory. On the other hand, if the memory access patterns are unpredictable, using the data directly from SSD feels like a return to the 80s or 90s.
Yes, latency is a killer, but the whole thing is latency mitigation.

Programs and data are stored on disk, which is slowest.
That is read into RAM (cached from storage), which is quicker.
That is then read into the CPU caches, which are fastest.
The CPU processes data from its caches, or streams it from the slower, lower-tier off-chip caches.

If there are gigs of stuff in RAM that aren’t read frequently, they don’t need to be in RAM; they can live in a lower tier, like fast storage.
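The tiering described above is roughly what an LRU (least-recently-used) eviction policy gives you. A toy Python model; the tier names and sizes are invented for illustration:

```python
from collections import OrderedDict

# Toy model of the tiering above: a small fast tier ("RAM") backed by a
# big slow tier ("storage"). Rarely-touched items get evicted to the slow
# tier; hot items stay fast. Sizes are arbitrary.
class TwoTier:
    def __init__(self, fast_slots):
        self.fast = OrderedDict()   # LRU order: oldest entry first
        self.slow = {}              # everything also lives here
        self.fast_slots = fast_slots

    def put(self, key, value):
        self.slow[key] = value
        self._promote(key, value)

    def get(self, key):
        if key in self.fast:            # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key], "fast"
        value = self.slow[key]          # slow-tier hit ("swap in")
        self._promote(key, value)
        return value, "slow"

    def _promote(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        while len(self.fast) > self.fast_slots:
            self.fast.popitem(last=False)   # evict least recently used

cache = TwoTier(fast_slots=2)
for k in "abc":
    cache.put(k, k.upper())
print(cache.get("c"))   # ('C', 'fast') - recently used, still in the fast tier
print(cache.get("a"))   # ('A', 'slow') - was evicted, pulled back from storage
```

Hot items keep hitting the small fast tier, while rarely-read items fall back to the slow tier, which is the behaviour the post argues fast storage makes acceptable.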
 
  • Like
Reactions: Fefe82

TrueBlou

macrumors 601
Sep 16, 2014
4,531
3,619
Scotland
The problem with that approach is that the reason iOS is so frugal with RAM is that it suspends apps in the background after a few seconds and releases the memory if necessary. That wouldn’t be feasible or acceptable on MacOS

I was just going to say the same thing. iOS is exceptionally efficient because of the way it handles memory. But that same approach cannot be completely transferred over to a desktop system, where many apps running in the background is far from unusual.

Don’t get me wrong, from what I’ve seen, they’ve clearly managed to implement certain memory management aspects from iOS into the new system. It appears to be, just like iOS, extremely efficient. But there will have to have been a degree of modification to it, to work under a desktop operating system.

I’m not by any means trying to dispel the efficacy of the M1, far from it. Indeed, where I would normally want at least 16 GB of RAM, I’m so impressed by the M1 and its efficient handling of, well, everything, that I’m going for 8 GB in my Air.
 
  • Like
Reactions: t0pher and Phil A.

4sallypat

macrumors 601
Sep 16, 2016
4,034
3,782
So Calif
....
Modern SSDs are 10x to 20x slower than RAM by transfer rate but at least 1000x slower by latency. If you access the data in predictable patterns (as in video editing), you can get a lot done with limited memory. On the other hand, if the memory access patterns are unpredictable, using the data directly from SSD feels like a return to the 80s or 90s.
Glad Apple is finally THINKING DIFFERENT (again) - have you seen that traditional RAM (DDR3/4) has CAS latencies as high as 9, 11, 13?

OP is correct, since Apple unified all the "modules" onto a single chip reminding me of the old RISC PPC days.

The need for more and more RAM that we became accustomed to in the Intel days has everyone thinking the same must be true of the M1. However, it’s a fallacy to compare the M1 to Intel.

Once you realize the M1 is so much more efficient (fewer clock cycles per instruction), faster at RAM access with lower latencies, and has storage unified with quicker, smarter refresh/read/write behaviour, you can’t compare Apples (Mac) to Oranges (Intel).

Combine the more efficient HW to the iOS/MacOS and you have a computing system that is leaps and bounds different than Intel based systems.

I knew there were a lot of tradeoffs when Apple changed from their own processors (PPC) to Intel back in the mid-2000s: the heat generated by the CPU, thermal slowdowns to protect itself, and low battery life.

Glad to see Apple changed back to the "old days" and reclaimed their fame...
 
  • Like
Reactions: cool11

Toutou

macrumors 65816
Jan 6, 2015
1,082
1,575
Prague, Czech Republic
Normally and for decades, programs and data are copied from storage into RAM and CPU’S process those programs and data from RAM writing to disk when the data needs to saved. The whole point of copying from storage to ram is to reduce the time the cpu has to wait to read the data it needs to perform its work. Traditionally reading from RAM was thousands of times quicker than reading from storage.

RAM is effectively a faster cache for the storage
RAM is also cache for the much faster caches on the cpu itself.

programs contain lots of data but most of that data isn’t used most of time, consuming RAM that could be used by something else.

As the storage on M1 is so much faster than traditional spinning disks or even most aftermarket ssd‘s and the M1 has the notion of unified memory, and also remember RAM is really where the CPU caches data from storage so it can access it quicker and process the instructions quicker, M1 is able to put into swap or just read from where it already is in storage bits of programs that would normally be read into RAM.
Sorry but all this is so very inaccurate that you can't really draw any conclusions from that.
 
  • Like
Reactions: widEyed

satcomer

Suspended
Feb 19, 2008
9,115
1,977
The Finger Lakes Region
I’m starting to get the impression that the RAM situation on the M1 Macs is: unless you’re doing heavy audio/video work all day, you don’t need the higher-memory option. Regular home users can get by with the 8 GB!
 

theluggage

macrumors G3
Jul 29, 2011
8,014
8,446
programs contain lots of data but most of that data isn’t used most of time, consuming RAM that could be used by something else.
...and since 1962 operating systems have addressed (hah!) that by using virtual memory systems that can shunt data in RAM out to disc when it isn't being actively used.

Sorry, folks - Apple Silicon may be wonderful but it is still "just" a classical computer architecture with some special-purpose bits like neural engines and GPUs bolted on. The "unified memory" advantage just means that the GPU, neural engine, disc controller etc. can access system memory directly so the CPU doesn't waste time copying data over PCIe "into" video RAM and other special pools of RAM outside of system memory. It's certainly part of the reason for the speed, but in terms of memory capacity then, if anything, it is going to consume more RAM for framebuffers, texture storage etc. (...but then all the current machines are replacing iGPU systems that took VRAM out of main memory anyway)

1981 called and wants its technology back:


(..ok, bit of a stretch - but also kinda significant in that these are the folk that originally designed the ARM processor..)

Seriously, though - I've yet to see any evidence that supports the idea of this "secret sauce" that somehow makes RAM go further on the M1 in any fundamental way.

The M1 is all-round faster and more efficient for a whole host of reasons - which may well negate any small speed-up you were seeing from having extra RAM for file caching on an Intel system on particular jobs - and when an M1 does run short of RAM and starts swapping, the faster SSD and more efficient SSD controller help reduce the impact. However, what we're mostly seeing is tasks that aren't RAM-limited on Intel systems running as fast/faster on M1 because of the better/more efficient CPU, GPU and SSD speed - and where reviewers have found tasks that actually stress the RAM, the 16GB M1 still beats the 8GB.

That's not saying that an 8GB M1 machine can't compete with/beat your 16/32GB Intel machine... but that most likely means that the Intel system didn't actually need so much RAM or, if it did, that your M1 is still being slowed down by lack of RAM... and is going to get sand kicked in its face when 32 GB+ Apple Silicon systems are available.
 

Earl Urley

macrumors 6502a
Nov 10, 2014
793
438
One of the better things is that 1.5 GB of RAM no longer has to be locked out for exclusive use by the GPU. On an 8 GB machine, that used to mean you were automatically limited to 6.5 GB for apps.
 

curmudgeonette

macrumors 6502a
Jan 28, 2016
586
496
California
So long as Apple silicon and macOS are smart enough to only use RAM for what is actually in use, then less RAM is needed to quickly feed the CPU, and the disk is typically fast enough to swap into RAM any bits that are needed, to the extent that the user typically won’t notice.

Just wait until Apple introduces an Apple Silicon based MBP16. With many more cores, it will be much faster at compute than the current M1 machines. However, the SSD likely won't be any faster. Benchmarks will suggest getting as much RAM as you can use. In other words, many buyers will be clamoring for the 32 GB version - while complaining that Apple isn't releasing a 64 GB machine.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Yes, latency is a killer, but the whole thing is latency mitigation.

Programs and data are stored on disk, which is slowest.
That is read into RAM (cached from storage), which is quicker.
That is then read into the CPU caches, which are fastest.
The CPU processes data from its caches, or streams it from the slower, lower-tier off-chip caches.

If there are gigs of stuff in RAM that aren’t read frequently, they don’t need to be in RAM; they can live in a lower tier, like fast storage.
Unpredictable access patterns are the point. If the OS can't predict what data you are going to need next, it's always caching the wrong data, except by accident. Then you have to buy enough memory, or you are going to have a very slow computer.
 

t0pher

macrumors regular
Original poster
Sep 6, 2008
134
228
UK
Sorry but all this is so very inaccurate that you can't really draw any conclusions from that.
What bit is inaccurate?

Programs can run from storage (virtual memory is an example), but programs and data are loaded into RAM so the CPU has faster access to them and isn’t wasting cycles waiting for data from slow storage. The CPU caches frequent instructions in its onboard caches, which are typically measured in MB or KB, not GB. CPUs wait far less for their onboard caches than for RAM.

Just witness how much more usable older machines are when the HDD is swapped for a speedy SSD: programs load into RAM faster and launch quicker.

It’s all about getting data into the CPU as quickly as possible. Traditional thinking was always storage -> RAM -> CPU. It’s still that, but what is loaded into RAM and what stays on storage isn’t exactly the same now.

Have a read of RobbieTT’s post:


His 8 GB M1 reduced its RAM pressure after 2 weeks of operation:

memory pressure has reduced and free RAM has increased to 4.2 GB

The M1s appear to be operating differently to how conventional wisdom says they should.
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
What bit is inaccurate?

Programs can run from storage (virtual memory is an example), but programs and data are loaded into RAM so the CPU has faster access to them and isn’t wasting cycles waiting for data from slow storage. The CPU caches frequent instructions in its onboard caches, which are typically measured in MB or KB, not GB. CPUs wait far less for their onboard caches than for RAM.

Just witness how much more usable older machines are when the HDD is swapped for a speedy SSD: programs load into RAM faster and launch quicker.

It’s all about getting data into the CPU as quickly as possible. Traditional thinking was always storage -> RAM -> CPU. It’s still that, but what is loaded into RAM and what stays on storage isn’t exactly the same now.

Have a read of RobbieTT’s post:


His 8 GB M1 reduced its RAM pressure after 2 weeks of operation:


The M1s appear to be operating differently to how conventional wisdom says they should.
What is inaccurate is your suggestion "that the M1 doesn’t need as much RAM as older systems like we’ve seen from Intel, AIM etc" and that the M1 unified memory architecture includes the SSD.

The Unified Memory architecture refers to the CPU, GPU and other co-processors on the SOC. iOS memory management does not work the same as MacOS memory management (iOS does not swap data from memory to storage) but that doesn't mean that MacOS on ARM manages memory like iOS. It works the same way as MacOS on Intel because it's the same operating system.

The 8GB M1 Macs have less total memory than the video memory on some of the high end Intel Macs. Therefore, tasks requiring a lot of GPU memory will run faster on those Macs.
 
  • Like
Reactions: crevalic

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
The switch to M1 and subsequent Mx SoCs presents Apple with a clean slate for the macOS architecture. We are probably witnessing the first of many changes coming with Big Sur.

I believe M1-based Big Sur’s treatment of swap vs memory is likely different from Intel-based Big Sur’s. One big difference is that on the M1 the SSD is always fully encrypted; on the Intel side, because only a portion of the supported Macs have the T2 chip, Big Sur could not fully encrypt the SSD without unacceptable latency (and probably throughput), so swap is treated differently. Apple has this thing with security nowadays.

With the M1, Apple has a known baseline for latency and throughput. Apple could treat the M1-based Big Sur kernel differently with respect to memory management (among other OS tasks): for example, graphics textures could be swapped out to disk and read directly by the CPU/GPU if they are marked as read-only memory. Doing this with Intel-based Big Sur would likely have resulted in unacceptable performance.

The use of UMA also probably freed up the system memory that had to be reserved on Intel-based Macs using iGPUs. Having said that, if a workload requires more RAM, it’ll need more RAM; no two ways about it. It’s encouraging, though, that many actual users report the 8 GB base-model M1 Macs being sufficient for their needs.

All the above are conjectures on my part. Only the OS team at Apple has a true picture of how Big Sur manages memory on Mx-based Macs.

As for me, I’m planning to get an M1 Mini with 16 GB of memory to future-proof it, as I use my computers for a long time. I’m still using a mid-2010 27” iMac, so it’ll be a massive upgrade for me.
 
  • Like
Reactions: t0pher and rezwits

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
I believe M1-based Big Sur’s treatment of swap vs memory is likely different from Intel-based Big Sur’s. One big difference is that on the M1 the SSD is always fully encrypted; on the Intel side, because only a portion of the supported Macs have the T2 chip, Big Sur could not fully encrypt the SSD without unacceptable latency (and probably throughput), so swap is treated differently. Apple has this thing with security nowadays.

With the M1, Apple has a known baseline for latency and throughput. Apple could treat the M1-based Big Sur kernel differently with respect to memory management (among other OS tasks): for example, graphics textures could be swapped out to disk and read directly by the CPU/GPU if they are marked as read-only memory. Doing this with Intel-based Big Sur would likely have resulted in unacceptable performance.

I see no reason why Big Sur would treat swap and memory any differently between Intel and Apple Silicon. That would be a significant and unnecessary change to the OS kernel and Apple had plenty of other things to work on.

BTW, almost all current Intel Macs have a T2 chip (I think the 21” iMac is the only exception), and the SSD on a T2 Mac is always encrypted (the T2 chip is a modified A10 running something called bridgeOS). For older Intel Macs without the T2 chip, encryption is an optional feature. One significant difference between T2 and other Intel Macs is that T2 Macs are SSD-only; many of the non-T2 Macs shipped with hard drives.
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
DDR3-800 has a peak speed of 6.4 GB/s, less than 3 times the M1’s peak storage speed.

DDR3 speeds

Today’s storage isn’t far off yesterday’s RAM speeds.

Latency could still get you. The number you're referring to here represents throughput. It's how much you can push through that pipeline each second. It's independent of the minimum cost of an initial trip, starting from the time it's issued.
 