
mi7chy

macrumors G4
Original poster
Oct 24, 2014
10,625
11,296
This thread serves a purpose: next time someone posts the scammy misinformation that 8GB on M1 is equal to 16GB on other architectures, it's easy just to point them here. And people on the sidelines need to base their purchasing decisions on facts. There is more misinformation we're going to address.
 

robco74

macrumors 6502a
Nov 22, 2020
509
944
I'm honestly not sure why you feel the need to go on this crusade. The M1 Macs are strong performers, and people seem to be very happy with them. It's similar to the comparison with Android phones, where the iPhone, despite having considerably less RAM, still manages to perform well.

There is nothing wrong with Intel or AMD chips. There is nothing wrong with Windows or Linux. Use what works best for you, and allow others to do the same. Someone else's happiness doesn't diminish yours; it's not a zero-sum game.
 

satcomer

Suspended
Feb 19, 2008
9,115
1,977
The Finger Lakes Region
Why would we want to do that? It's a long dead distro built for a different architecture.

You can virtualize modern ARM Linux distros just fine (e.g. Parallels will automatically download and set up Debian, Fedora, Ubuntu or Kali for you, and they run like a dream).



This thread started not making sense with the first post. OP is the one all over the place with their opinions and (mis)understanding of how things work. (And I mean it, just look at their other posts all over the forum.)
I was making a joke! Why are you so defensive? Yellow Dog Linux was a very old Linux from the PPC days and that's the joke: it could be like that again with the M1!
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
macOS frameworks are terrible memory hogs no matter how you want to characterize it. RAM usage stats mean _a_lot_ to those who can mentally compare process sizes with running Tiger in 256MB. That's the price of progress: bloat.

Except you are not running Tiger and this is not 2005. Twenty years ago we didn't have HiDPI content, complex CSS layouts, reactive websites with virtual DOM and other fun things. I mean, my code editor is running a neural network classifier in the background as I type to predict my keystrokes. And of course, macOS itself didn't run security checks, cloud synchronization, automated backups or many other things it does today.

As new features become commonplace, resource usage of everyday applications increases. This phenomenon of resource inflation in computing is a known, real thing. That's why we get more powerful hardware to deal with the demands of the software.

The great thing about memory leaks in macOS is that the unused pages get compressed before being flushed to disk by the swapper. It's a win for a leaky OS for sure. But that doesn't hide the fact that those issues exist and have existed for several generations since El Capitan.

Tell me this kernel gobbling 5.26GB of real memory is just macOS doing the right thing, which is what you're singing here. BTW, purgeable memory is not the savior you think it is. A few MB here and there don't help whatsoever, as most users know macOS will slow down after a week of use and need a reboot.

Speaking of which. I need to reboot so BRB.

Can't say I share your experience. The last time I had to routinely reboot a Mac to restore its performance was around Snow Leopard/Lion times (which for me were the worst, buggiest releases ever). My workhorse 16" works perfectly with Big Sur, as do my wife's 15" and my M1 Pro. Maybe you have some third-party kext with a memory leak, or there is something about your environment that triggers a macOS bug?

Regarding purgeable memory: it currently accounts for around 25% of the allocated active RAM on my machine; half of the WindowServer usage and around 20% of the Safari usage are purgeable caches. kernel_task sits at 200MB of RAM.
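
To make the purgeable point a bit more concrete, here's a rough sketch (Swift, purely illustrative) of one way an app ends up with purgeable memory; NSPurgeableData is just one mechanism, but it shows why the kernel can simply drop those pages under pressure instead of compressing or swapping them:

```swift
import Foundation

// Purgeable memory sketch: NSPurgeableData pages are marked discardable,
// so under memory pressure the kernel may throw them away outright rather
// than compressing them or writing them to swap.
let cache = NSPurgeableData(length: 8 * 1024 * 1024)! // 8 MB of recomputable cache
// NSPurgeableData is created with content access already begun; fill it here...
cache.endContentAccess() // from this point the pages count as "purgeable"

// Later, before touching the data again:
if cache.beginContentAccess() {
    // The pages survived; use the cached data, then release access again.
    cache.endContentAccess()
} else {
    // The system purged the pages to satisfy other allocations; recompute them.
}
```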
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
This thread serves a purpose: next time someone posts the scammy misinformation that 8GB on M1 is equal to 16GB on other architectures, it's easy just to point them here. And people on the sidelines need to base their purchasing decisions on facts. There is more misinformation we're going to address.

You mean point them here so that they can read some more scammy misinformation from a person who does not understand how memory management works? Marvelous.
 

Toutou

macrumors 65816
Jan 6, 2015
1,082
1,575
Prague, Czech Republic
most users know macOS will slow down after a week of use and need a reboot

What?

My machine goes to sleep every night, wakes up in the morning, works for 10 hours (servers, databases, VMs, IDEs, terminals left and right, full face masks, hacking gloves, green letters on black background), goes to sleep. Every day, every week, every month. I don't shut it down and I only reboot for updates.
My GF doesn't reboot (the 4GB RAM 2013 machine I mentioned earlier); her uptime is currently 59 days. Back when I was using that machine and it was on Mojave and I didn't want to update, it hit 100+ days of uptime MULTIPLE TIMES.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
There is nothing wrong with Intel or AMD chips.
Plenty of things wrong with x86 chips from Intel and AMD. Dragging along 40-year-old garbage for backward compatibility. How many people these days do you see using acoustic couplers? How many analog modems? Pretty much no one, because technology moves on and things that were once great are outdated and need to be replaced.

It's time to let go of the old and outdated technology and move forward to the next big thing. Not going to happen overnight, but Apple's seen it and they're working on it. Nvidia too by announcing Grace. Sure, they're far behind Apple, but they're serving a different market as well (with a much larger impact). They have identified x86 as a major bottleneck in the HPC market and once they produce and ship large enough quantities, x86 will be a distant memory in the HPC datacenter market. Not today, not tomorrow, but let's wait and see how things look in 5 years.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Nvidia too by announcing Grace. Sure, they're far behind Apple, but they're serving a different market as well (with a much larger impact). They have identified x86 as a major bottleneck in the HPC market and once they produce and ship large enough quantities, x86 will be a distant memory in the HPC datacenter market. Not today, not tomorrow, but let's wait and see how things look in 5 years.

You mean, they have identified x86 as a major bottleneck in their ability to make more money ;)

But jokes aside for the moment, current ARM server offerings do not outclass the x86 state of the art. When you look at something like Ampere Altra, an 80-core ARM Neoverse N1 is more or less equivalent (slightly slower in many workloads) to AMD's 64-core EPYC with roughly the same power consumption. I'd say this is still not good enough. Let's hope that future Neoverse products will bring the efficiency up a notch.
 

robco74

macrumors 6502a
Nov 22, 2020
509
944
Plenty of things wrong with x86 chips from Intel and AMD. Dragging along 40-year-old garbage for backward compatibility. How many people these days do you see using acoustic couplers? How many analog modems? Pretty much no one, because technology moves on and things that were once great are outdated and need to be replaced.

It's time to let go of the old and outdated technology and move forward to the next big thing. Not going to happen overnight, but Apple's seen it and they're working on it. Nvidia too by announcing Grace. Sure, they're far behind Apple, but they're serving a different market as well (with a much larger impact). They have identified x86 as a major bottleneck in the HPC market and once they produce and ship large enough quantities, x86 will be a distant memory in the HPC datacenter market. Not today, not tomorrow, but let's wait and see how things look in 5 years.
Right, we're in the transition phase. So far, in the consumer space, we have 8CX and M1 actually shipping. If I don't want a Mac or a Surface Pro X, that means Intel or AMD. Both of those will still be solid purchases that will last for the next few years. The future may be ARM, but not today.

As a Mac user, my current laptop will likely be my last x64 machine. But for Windows and Linux users, the option simply isn't there yet. Honestly, given the inertia in the Windows realm, I wonder if the transition ever will take place.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
You mean, they have identified x86 as a major bottleneck in their ability to make more money ;)
Well, everyone is in it for the money; they're not doing it for free.
But jokes aside for the moment, current ARM server offerings do not outclass the x86 state of the art. When you look at something like Ampere Altra, an 80-core ARM Neoverse N1 is more or less equivalent (slightly slower in many workloads) to AMD's 64-core EPYC with roughly the same power consumption. I'd say this is still not good enough. Let's hope that future Neoverse products will bring the efficiency up a notch.
That depends on the use case. There are use cases where ARM is the better choice over x86 and vice versa. Nvidia cares little about the CPU, as long as they can effectively get their data on and off the GPU, where the magic happens. HPC on CPUs is in decline; more and more work is done on GPUs.

On projects that I have insight into to varying degrees, be it NASA/ESA/DLR, CERN/GSI for particle physics, KUKA/ABB in robotics, Volvo/BMW/Audi in autonomous driving or just general "number crunching", people care little about CPUs these days. The CPU is necessary, but a necessary evil. The holy grail would be a large SoC with the biggest GPU possible, ready to plug into the network. The target market is HPC after all, and if you look at the speaker list at GTC, Nvidia has that market covered. There's no alternative: not AMD, not Intel, not general ARM or Apple.

However, CPUs still have their place for running large numbers of smaller VMs, such as websites, Overleaf, blogs, etc., where 1-2 cores with 4-8GB of RAM is the standard these days.
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Well, everyone is in it for the money; they're not doing it for free.

That depends on the use case. There are use cases where ARM is the better choice over x86 and vice versa. Nvidia cares little about the CPU, as long as they can effectively get their data on and off the GPU, where the magic happens. HPC on CPUs is in decline; more and more work is done on GPUs.

On projects that I have insight into to varying degrees, be it NASA/ESA/DLR, CERN/GSI for particle physics, KUKA/ABB in robotics, Volvo/BMW/Audi in autonomous driving or just general "number crunching", people care little about CPUs these days. The CPU is necessary, but a necessary evil. The holy grail would be a large SoC with the biggest GPU possible, ready to plug into the network. The target market is HPC after all, and if you look at the speaker list at GTC, Nvidia has that market covered. There's no alternative: not AMD, not Intel, not general ARM or Apple.

However, CPUs still have their place for running large numbers of smaller VMs, such as websites, Overleaf, blogs, etc., where 1-2 cores with 4-8GB of RAM is the standard these days.
Nvidia's Jetson boards demonstrate this on a small scale. IBM's Summit and Sierra ❤️ demonstrate it on the very high end.
 

dogslobber

macrumors 601
Oct 19, 2014
4,670
7,809
Apple Campus, Cupertino CA
What?

My machine goes to sleep every night, wakes up in the morning, works for 10 hours (servers, databases, VMs, IDEs, terminals left and right, full face masks, hacking gloves, green letters on black background), goes to sleep. Every day, every week, every month. I don't shut it down and I only reboot for updates.
My GF doesn't reboot (the 4GB RAM 2013 machine I mentioned earlier); her uptime is currently 59 days. Back when I was using that machine and it was on Mojave and I didn't want to update, it hit 100+ days of uptime MULTIPLE TIMES.
Reboot more often and you'll realize how much perkier your computer becomes. Pages swapped out and compressed memory leaks all impact your computer's ability to perform optimally for you.
 

mi7chy

macrumors G4
Original poster
Oct 24, 2014
10,625
11,296
Reboot more often and you'll realize how much perkier your computer becomes. Pages swapped out and compressed memory leaks all impact your computer's ability to perform optimally for you.

I used to close the lid to put it to sleep, but now I shut it down after every use to avoid kernel panics and growing memory and swap file usage during sleep.
 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
This thread serves a purpose: next time someone posts the scammy misinformation that 8GB on M1 is equal to 16GB on other architectures, it's easy just to point them here. And people on the sidelines need to base their purchasing decisions on facts. There is more misinformation we're going to address.
I agree 8GB and 16GB are not the same, and I will continue to explain to people why this is the case even on M1. I work professionally as an audio engineer and I know firsthand why 16GB is still 16GB on M1. No argument there.

However, this thread is absolutely not the one I would point to when explaining this.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Reboot more often and you'll realize how much perkier your computer becomes. Pages swapped out and compressed memory leaks all impact your computer's ability to perform optimally for you.

Again, not my experience. I don't see any increased RAM or swap use after weeks and weeks of operation on any of our Macs.
 
  • Like
Reactions: Fawkesguyy

Rock Star

macrumors newbie
May 5, 2021
25
15
I might be wrong on this, but your MBA M1 16GB/256GB probably also includes graphics memory in its totals. On M1, both the CPU and GPU have access to the exact same pool of memory, reserved from the M1's 16GB of RAM. For a fairer comparison, maybe you should also add the used memory on your discrete graphics card and/or the reserved iGPU memory to the Windows and Linux totals.
If your Windows box has an iGPU, you can open Resource Monitor >> Memory tab. Under "Hardware Reserved" you should see the amount of memory reserved for the iGPU. In your Windows 10 screenshot I can see 643MB hardware reserved.

EDIT:
Also.
I see that you include "Cached Files" in the M1's total of memory used.
But "Cached Files" (in Activity Monitor) displays the total size of previously loaded files that are still kept in RAM to speed up reloading of applications and documents. It's inactive memory, ready for the taking. You won't run out of usable RAM because of the high total displayed next to "Cached Files"; that RAM will be relinquished to apps on demand. "Cached Files" is sometimes also called "Inactive Memory", as opposed to "Active Memory" and "Free Memory".
macOS has been using RAM to cache files and apps ever since HFS+ (and SSD drives since APFS, macOS 10.13).

The Memory Pressure graph lets you know if your computer is using memory efficiently.
  • Green memory pressure: Your computer is using all of its RAM efficiently.
  • Yellow memory pressure: Your computer might eventually need more RAM.
  • Red memory pressure: Your computer needs more RAM.
— Source: Check if your Mac needs more RAM in Activity Monitor
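
As a side note (my own sketch, Swift, not from the Apple article above): apps can watch roughly the same memory-pressure signal programmatically through a GCD source, which is what the yellow/red levels boil down to from a developer's point of view:

```swift
import Dispatch

// Observe system memory-pressure transitions from a command-line tool.
let source = DispatchSource.makeMemoryPressureSource(eventMask: [.warning, .critical],
                                                     queue: .main)
source.setEventHandler {
    let event = source.data
    if event.contains(.critical) {
        print("Red: the system urgently needs memory back")       // drop every cache you can
    } else if event.contains(.warning) {
        print("Yellow: memory is getting tight, trim caches now") // defer non-essential work
    }
}
source.resume()
dispatchMain() // keep the process alive so pressure events can be delivered
```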
 
Last edited:
  • Like
Reactions: MacCheetah3

theoak2

macrumors member
Nov 29, 2017
39
25
Is "high" RAM usage bad? RAM is considerably faster than SSD, and outrageously faster than a spinning hard drive. That is why computers are designed to copy stuff to RAM, and use it from there. When you "save" a document, what is in RAM is written back to hard drive (or SSD) to have a persistent copy. Otherwise you lose that document (or drawing, video edit, etc) you spent hours creating if you turn your computer off (without saving), because RAM, although fast, is volatile. When powered off, it doesn't remember anything! That is how computers work.

Also remember that RAM does not have the write limitations that SSD NAND cells have. SSD cells can be written to a limited number of times. Since SSD lifespans are rated at hundreds of TBW (terabytes written), some people who use their computers less than others may never need to worry about wearing out their SSD. But for other people, wearing out their SSD is very possible. This is where "high" RAM usage is desirable, to spare that SSD. That is why Linux users with an SSD and a large amount of RAM try to limit their swap file (disk space used as RAM, when RAM gets low), and Windows users will decrease the size of their paging file and make it not resizable by the OS. High RAM usage is actually faster than disk usage, and saves writes to those SSDs.

If you have an SSD (especially a low-capacity SSD) soldered to the motherboard (read: non-replaceable), and you fall into the category of people who use their computers hard enough to reach the TBW limits on their SSD, high RAM usage is very good.

Although I don't see how they could do it, see the "Samsung Example" of an office worker writing 40 GB per day on a low-capacity SSD on this web page:
https://www.ontrack.com/en-gb/blog/how-long-do-ssds-really-last

I guess someone doing high resolution CAD work, video editing (or similar) all day at work could wear out their SSD in 5 years?
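
If you want to put rough numbers on that question, the arithmetic is simple; the TBW rating and daily write volumes below are illustrative, not taken from the linked article:

```swift
// Back-of-the-envelope: years until a drive rated for `ratedTBW` terabytes
// written is worn out at a steady daily write volume (illustrative numbers).
func yearsUntilWornOut(ratedTBW: Double, gbWrittenPerDay: Double) -> Double {
    (ratedTBW * 1000.0) / gbWrittenPerDay / 365.0
}

print(yearsUntilWornOut(ratedTBW: 150, gbWrittenPerDay: 40))  // ≈ 10.3 years (office use)
print(yearsUntilWornOut(ratedTBW: 150, gbWrittenPerDay: 300)) // ≈ 1.4 years (heavy media work)
```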
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Is "high" RAM usage bad?

Not per se. The problem is that the RAM usage data is interpreted by people who have zero understanding of memory management.

Also remember that RAM does not have the write limitations that SSD NAND cells have. SSD cells can be written to a limited number of times. Since SSD lifespans are rated at hundreds of TBW (terabytes written), some people who use their computers less than others may never need to worry about wearing out their SSD. But for other people, wearing out their SSD is very possible. This is where "high" RAM usage is desirable, to spare that SSD. That is why Linux users with an SSD and a large amount of RAM try to limit their swap file (disk space used as RAM, when RAM gets low), and Windows users will decrease the size of their paging file and make it not resizable by the OS. High RAM usage is actually faster than disk usage, and saves writes to those SSDs.

It's a different discussion, however. System RAM utilization and SSD swap use are not necessarily correlated. In fact, if a system is bad at managing memory or requires an excessive amount of RAM to function (the "bloat" referred to by some), it will usually lead to increased swapping, which might reduce your SSD's lifespan. If I understand correctly, this is where the OP is ultimately going with this thread, but he does it in a really clumsy way.

The background for this entire discussion is the fact that some users report abnormally high SSD writes on the new M1 machines, sometimes in excess of hundreds of TB written since December. This will of course kill your SSD within a couple of years, and it's definitely not normal. Since it only affects a small group of users (my M1 is perfectly fine, for example), common sense dictates that this is some sort of software/kernel bug that can strike in certain edge cases, but as usual it has led to a now often quoted myth that macOS memory management is somehow fundamentally flawed.
 

Gnattu

macrumors 65816
Sep 18, 2020
1,107
1,671
Not per se. The problem is that the RAM usage data is interpreted by people who have zero understanding of memory management.



It's a different discussion, however. System RAM utilization and SSD swap use are not necessarily correlated. In fact, if a system is bad at managing memory or requires an excessive amount of RAM to function (the "bloat" referred to by some), it will usually lead to increased swapping, which might reduce your SSD's lifespan. If I understand correctly, this is where the OP is ultimately going with this thread, but he does it in a really clumsy way.

The background for this entire discussion is the fact that some users report abnormally high SSD writes on the new M1 machines, sometimes in excess of hundreds of TB written since December. This will of course kill your SSD within a couple of years, and it's definitely not normal. Since it only affects a small group of users (my M1 is perfectly fine, for example), common sense dictates that this is some sort of software/kernel bug that can strike in certain edge cases, but as usual it has led to a now often quoted myth that macOS memory management is somehow fundamentally flawed.
I've calculated the write rate from one of the abnormal reports, and it is about 800MB written to the SSD per second whenever the SSD is online, which makes me believe this is not a normal case but a bug.
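
For anyone who wants to reproduce that kind of estimate, it's just total bytes written divided by the time the drive was actually busy; the inputs below are placeholders, not the figures from the report above:

```swift
// Turn a SMART-style "total data written" figure into an average write rate.
// Placeholder numbers for illustration only.
func averageWriteRateMBps(terabytesWritten: Double, hoursBusy: Double) -> Double {
    (terabytesWritten * 1_000_000.0) / (hoursBusy * 3600.0)
}

// e.g. a drive reporting 100 TB written across roughly 500 hours of activity:
print(averageWriteRateMBps(terabytesWritten: 100, hoursBusy: 500)) // ≈ 55.6 MB/s
```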
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I've calculated the write rate from one of the abnormal reports, and it is about 800MB written to the SSD per second whenever the SSD is online, which makes me believe this is not a normal case but a bug.

What? By now you should know better - it's a feature Apple cunningly snuck into the M1 chip to make the SSDs fail faster so that people pay their ridiculous repair prices!
 

TheSynchronizer

macrumors 6502
Dec 2, 2014
443
729
This thread serves a purpose: next time someone posts the scammy misinformation that 8GB on M1 is equal to 16GB on other architectures, it's easy just to point them here. And people on the sidelines need to base their purchasing decisions on facts. There is more misinformation we're going to address.
I don‘t think I’ve ever seen anyone say they’re equal. Saying that is completely false.

However, they are indeed very different, and 8GB of RAM on an M1 system is not the same as 8GB on an x86 system. It's far more efficient because the memory is mounted on the M1 package itself and used as unified memory, so both the GPU and the CPU have zero-copy direct access to all the memory they could ever want, which is a lot more efficient than the way x86 systems do it. All parts of the M1 SoC can access any data they need in memory at the exact same address. The overhead of the CPU needing to copy data to and from the GPU's memory, and vice versa, has been completely eliminated with the M1. This, coupled with the much faster SSD controller and the faster access to the memory itself due to it being on the SoC package, means that an 8GB M1 system performs a lot better than an 8GB x86 system in terms of memory management, efficiency, and the total amount of memory required.
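
To illustrate the zero-copy point with a rough sketch (Metal, nothing specific to any particular app): a buffer created in shared storage mode is a single allocation that both the CPU and the GPU address directly:

```swift
import Metal

// On Apple silicon, .storageModeShared buffers live in the one unified memory
// pool, so CPU writes are visible to the GPU without any staging copy.
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 4096 * MemoryLayout<Float>.stride,
                                     options: .storageModeShared) else {
    fatalError("Metal is not available on this machine")
}
print(device.hasUnifiedMemory) // true on M1

// The CPU writes straight into the allocation the GPU will later read;
// a compute encoder can bind `buffer` directly, with no blit or copy pass.
let floats = buffer.contents().bindMemory(to: Float.self, capacity: 4096)
for i in 0..<4096 { floats[i] = Float(i) }
```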

No amount of memory on an M1 system is equal to any amount of memory on an x86 system as they simply function completely differently, so there is no logical way to call them equal.

However, as a practical matter: an 8GB M1 system can handle more memory-intensive workloads than an 8GB x86 system, and many workloads for which you would need more than 8GB (i.e. 16GB) of RAM on an x86 system run just fine on the M1. That's why the 8GB M1 is plenty for a lot of people.
 