
How much RAM is enough?

  • 8GB ASi does me fine: 19 votes (13.3%)
  • 16GB is a cute start: 37 votes (25.9%)
  • 24 or 32GB is a sweet spot: 49 votes (34.3%)
  • AS MUCH AS I CAN GET: 54 votes (37.8%)
  • Total voters: 143
Once you are running, it will begin caching things in RAM that it anticipates you may need based on your normal usage.
I highly doubt that, do you have a source for this? Normally Unix and Unix-like operating systems will cache pages from the filesystem in RAM after they have been needed by something.
 
My questions/concerns:
  1. I’m not running out of space, but, obviously, if things slow down and apps crash or buffer, I’m exceeding my RAM!
  2. I feel like I can’t install much more than I have and I’ve hardly installed ANYTHING! Hopefully, I won’t need much more…
  3. What apps can I shut down? I know how to Quit an app, but it comes right back! How can I permanently shut down an app I don’t need?
  4. How do I know which ones I don’t need??
  5. What else can I do to minimize RAM usage?
  6. Ultimately, I think, I will have to trade in for a 16GB machine if it doesn’t work out, but, of course, I don’t want to do that. I’m really disappointed that I just didn’t get 16GB of RAM! Everything I read online said M2 is soooo much more efficient. But I feel like I may have made a mistake : (
  7. Thank you!! Donna
1. As others have said, macOS and Unix/Linuxes (for that matter, even Windows) do various amounts of caching. Anything from libraries used by one or more applications, to the filesystem itself, and they also do effectively 'lazy unloading,' meaning they don't immediately flush the cache once something is no longer in use...and even then they will keep caching files, etc. For example, on my 64GB RAM MBP, it's currently caching 21GB(!!) of files.

Anything 'cached but not actively being used' will be forced out of cache when an application or system service opens or requests additional memory for something. The simplest way to see what 'reality' looks like is the 'memory pressure' graph in Activity Monitor - green or yellow is fine. Occasional spikes into red are generally OK, but you may want to consider the next step up in memory on your next system. Constantly being in red, or nearly so, also isn't the complete end of the world, as you will start 'swapping,' which is basically using the SSD as memory - this mattered far more before SSDs; there are still differences between SSD and RAM bandwidth and latency (how fast you can read and write), but nowhere near as bad as with the spinning platter drives in laptops a decade ago. (See the first sketch after this list for one way to read the underlying counters directly.)

2. Installing apps has nothing to do with RAM. It has to do with free space on your SSD. You can open up a Finder window, then go to View and 'Show Status Bar,' and assuming you only have the single SSD, it will show used and free space; or you can go to the Apple Menu / About This Mac / More Info, then scroll down to Storage and it will show the same. Generally I like to stick at 75% use or lower if possible. If I'm over that, it's either time to review what I can offload or delete, or to move to the next storage size up next time around. (The second sketch after this list shows a programmatic way to check the same thing.)

3. Just quit apps via the menu or CMD-Q. You're probably freaking out at seeing some memory usage message somewhere in the system without understanding it. Refer to #1 above as well as others' responses on how macOS handles memory. Uninstalling - for most apps, just open a Finder window, go to Applications, then delete the apps you don't want, followed by emptying the trash.

4. I wouldn't advise deleting system applications. They aren't hurting anything unless your SSD storage space is approaching 100%. If it's actually memory/RAM that's still a concern after seeing the various responses, you can go through System Settings and look for optional things you might be able to disable, for example Settings / General / Sharing - you can probably make sure all of those options are disabled/not green if any are currently enabled. Each of these runs one or more 'services' in the background, so if they aren't running, you'll regain some amount of usable memory, and unless you have a specific reason, you don't generally want these turned on.

5. Stop worrying about it unless your memory pressure is consistently in the red. You can open Activity Monitor and post a screenshot with the window made full height, Memory tab, sorted by memory usage with the largest number up top.
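For anyone who wants to see the raw numbers behind Activity Monitor's Memory tab, here is a minimal sketch (my own illustration, nothing official) that reads the kernel's page counters through the Mach host_statistics64() call. The labels are my reading of the fields in <mach/vm_statistics.h>; the 'memory pressure' graph itself is a derived metric, so this only shows its inputs.

```c
/* vmstats.c - rough look at the counters Activity Monitor summarizes.
 * Build on macOS with: clang vmstats.c -o vmstats
 */
#include <stdio.h>
#include <mach/mach.h>

int main(void) {
    vm_size_t page_size = 0;
    vm_statistics64_data_t vm;
    mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
    mach_port_t host = mach_host_self();

    host_page_size(host, &page_size);
    if (host_statistics64(host, HOST_VM_INFO64,
                          (host_info64_t)&vm, &count) != KERN_SUCCESS) {
        fprintf(stderr, "host_statistics64 failed\n");
        return 1;
    }

    double gib = (double)page_size / (1024.0 * 1024.0 * 1024.0);
    printf("free:            %6.2f GiB\n", vm.free_count * gib);
    printf("active:          %6.2f GiB\n", vm.active_count * gib);
    printf("inactive:        %6.2f GiB\n", vm.inactive_count * gib);  /* largely reclaimable cache */
    printf("wired:           %6.2f GiB\n", vm.wire_count * gib);
    printf("compressor pool: %6.2f GiB\n", vm.compressor_page_count * gib);
    printf("swapouts so far: %llu pages\n", (unsigned long long)vm.swapouts);
    return 0;
}
```

And for point 2, a similarly hedged sketch of checking used/free disk space from code rather than the Finder status bar, using the POSIX statvfs() call. The 75% line just mirrors the rule of thumb above, not any OS limit, and APFS space sharing can make these numbers differ slightly from what About This Mac reports.

```c
/* diskfree.c - report used/free space for the volume holding "/" */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    struct statvfs s;
    if (statvfs("/", &s) != 0) { perror("statvfs"); return 1; }

    double frsize   = (double)s.f_frsize;   /* fundamental block size */
    double total_gb = s.f_blocks * frsize / 1e9;
    double avail_gb = s.f_bavail * frsize / 1e9;
    double used_pct = 100.0 * (1.0 - (double)s.f_bavail / (double)s.f_blocks);

    printf("total: %.1f GB, available: %.1f GB, used: %.0f%%\n",
           total_gb, avail_gb, used_pct);
    if (used_pct > 75.0)
        printf("over the ~75%% rule of thumb: time to offload or size up next purchase\n");
    return 0;
}
```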
 
Well said. Traditional memory management and legacy Windows thinking are the root of this misunderstanding. Performance can actually be worse under heavy load if zero swap is allocated. Swap space being allocated doesn’t mean it is actively in use; macOS proactively unloads some inactive pages to swap. That avoids having to evict existing pages from RAM to the SSD under load while allocating memory in real time at the same moment.
 
I’m sorry, but the justification that “it’s just cache” isn’t sufficient. The cache space might not be required, but it’s being used for a reason. Limiting or reducing the cache usually won’t impact functionality, but it will (or could, depending on how you work) affect performance. Sloppy coding that doesn’t release cache properly or promptly is something we are forced to live with (unless you are doing your own coding). And sloppy coding often runs more efficiently if there is spare RAM available. Extra RAM helps forgive a variety of [programming] sins. Having done my share of sloppy programming, I have seen this in action. And just because it’s big-name commercial software doesn’t mean it isn’t sloppy (stability and sizing are more important than efficiency).

So I’m in the camp that more RAM is always better. If it is a trade between a few more CPU/GPU cores, a slightly faster clock speed, or more RAM, then more RAM is the preferred choice, in my opinion.
 
..Sloppy coding that doesn’t release cache properly or promptly is something we are forced to live with (unless you are doing your own coding). And sloppy coding often runs more efficiently if there is spare RAM available.
This kind of cache is not under the control of the programmer. It is managed by the kernel (macOS).

Swap made more of a difference back when spinning hard drives were used; today the speed of swap on an SSD is much closer to the speed of RAM.

Yes, more RAM is always better, but if you are on a fixed budget then getting more RAM means getting less of something else. The way to decide is this: if you are only going to use the apps that come with macOS, then you can likely use the base RAM size. But if you are planning to buy and use any other software, you likely could use double the base size. Some special cases, like running virtual machines, some scientific computing, or editing video every day, need maybe 4x the base size.
 
And even if swap is okay, why would you want it?
It allows you to run things your machine wouldn't be able to run without swap. I agree you shouldn't plan to use swap all the time, that's inefficient, but for rare or one-off jobs, I'll downright use it and like it.

You probably don't remember how bad it was on OSes that didn't have virtual memory.
 
I’m sorry, but the justification that “it’s just cache” isn’t sufficient. The cache space might not be required, but it’s being used for a reason. Limiting or reducing the cache usually won’t impact functionality, but it will (or could, depending on how you work) affect performance. Sloppy coding that doesn’t release cache properly or promptly is something we are forced to live with (unless you are doing your own coding). And sloppy coding often runs more efficiently if there is spare RAM available. Extra RAM helps forgive a variety of [programming] sins. Having done my share of sloppy programming, I have seen this in action. And just because it’s big-name commercial software doesn’t mean it isn’t sloppy (stability and sizing are more important than efficiency).

So I’m in the camp that more RAM is always better. If it is a trade between a few more CPU/GPU cores, a slightly faster clock speed, or more RAM, then more RAM is the preferred choice, in my opinion.
The kind of cache that you're referring to is something that macOS does very aggressively. Some of it is labeled as cache, but not all of it is actually labeled under this category. macOS does a lot of neat things with this.

macOS' kernel is a typical "demand paging" kernel that doesn't load entire binaries or entire files into memory simply because they have been opened, but rather generally loads pages in as they are accessed/needed. Most modern kernels are demand-paging kernels, as it would take exorbitant amounts of RAM to load entire binaries into RAM simply because they have been opened. However, the latency of accessing a new page from disk is very high. Having to do this over and over again when opening a new file or a binary would slow application launch times, file reading times, and other such things, so modern kernels utilize speculative loading and readahead (for example, also loading the next 8 pages when a page is grabbed from disk). macOS does this fairly aggressively, and this is one of the primary reasons that systems will "use more RAM" when more RAM is present. macOS will keep more of these pages around when it is able to do so, and much of this is actually categorized as "in use" memory.

One of the reasons macOS does this is because, as it turns out, it's actually much faster to simply compress pages when memory starts to run a little tighter rather than always purging them and potentially having to grab them from disk again in the future. macOS will often do this, and will frequently try to compress file-mapped pages that are technically purgeable if there is spare headroom to do so. These are pages that would simply be purged if macOS didn't do this, but it doesn't necessarily always make sense to purge them if there is still headroom to keep them in memory.

Activity Monitor sometimes makes it seem as though the system is running tighter on memory than it actually is, but the technical underpinnings of all of this are quite a bit more interesting.
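To make the demand-paging part concrete, here is a small hedged sketch (mine, not the poster's): mapping a file with mmap() only reserves address space, pages are brought in from disk or the unified buffer cache as they are first touched, and an madvise() hint tells the kernel it may read ahead. The file name is hypothetical and the hint is purely advisory.

```c
/* demand_paging.c - touch a file-backed mapping one page at a time */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "large_input.bin";            /* hypothetical file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

    /* Mapping reserves address space; no file data is read yet. */
    unsigned char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Advisory: we'll read sequentially, so the kernel may read ahead. */
    madvise(p, (size_t)st.st_size, MADV_SEQUENTIAL);

    /* Each first touch of a page triggers a page-in (or a cache hit). */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)     /* 4 KiB stride for simplicity */
        sum += p[i];

    printf("touched %lld bytes, checksum %lu\n", (long long)st.st_size, sum);
    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}
```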
 
It allows you to run things your machine wouldn't be able to run without swap. I agree you shouldn't plan to use swap all the time, that's inefficient, but for rare or one-off jobs, I'll downright use it and like it.

You probably don't remember how bad it was on OSes that didn't have virtual memory.
In some tasks, it's better for software to crash because it can't allocate enough memory than continue running 10x or 100x slower than it should be. Servers often limit swap to a negligible amount, because immediate and obvious failures are often better than a silent degradation of performance.
 
In some tasks, it's better for software to crash because it can't allocate enough memory than continue running 10x or 100x slower than it should be. Servers often limit swap to a negligible amount, because immediate and obvious failures are often better than a silent degradation of performance.
Servers and desktops are entirely different breeds. And no, I don’t want any software to crash. Performance and stability are not mutually exclusive.
 
Servers and desktops are entirely different breeds. And no, I don’t want any software to crash. Performance and stability are not mutually exclusive.
That's a consumer perspective. Or an end-user perspective. If you are a developer or a power user, computers are just computers. In particular, higher-end laptops and desktops are often used for similar tasks as servers.

And it's not a matter of performance vs. stability. When software encounters an unrecoverable error, and it's not reasonable to expect that the situation can be resolved by human intervention, crashing quickly and in a controlled way is the preferred outcome. Not crashing would almost always be a bug.
 
In some tasks, it's better for software to crash because it can't allocate enough memory than continue running 10x or 100x slower than it should be. Servers often limit swap to a negligible amount, because immediate and obvious failures are often better than a silent degradation of performance.
I couldn't disagree more. I know of nobody that runs their shop that way. You can track memory usage without crashing a server (and all its users).
 
My 8GB Mini has happily done everything I've asked of it so far, including some 1080p video editing in iMovie. That said, I hesitate to do much more work with it as:
  • It's forever stuck with 8GB of RAM
  • If I go deeper into the Mac camp, I'll be forced to shell out an unholy amount of money for a new machine with 16 or 32GB.
The machine on my desk at work is a Dell with 16GB of RAM. I hit that limit on a somewhat regular basis, but that's due to me working in a VM with 10GB assigned to it (which isn't necessary, I just prefer that workflow). Were it up to me, that computer would have 24GB.

Ignoring future, as-yet-unknown requirements, 16GB would suit my personal computers fine.
 
I couldn't disagree more. I know of nobody that runs their shop that way. You can track memory usage without crashing a server (and all its users).
Crashing due to lack of memory is a user-friendly feature that helps you to run the server properly. It's generally better to let the server crash before the users arrive than to have poor performance when the users are there.

Memory usage is largely a function of parameter values. The more memory you use, the higher the performance, as long as you actually have that memory. But you don't always know the proper parameter values and their exact impact on memory usage without experimenting with them. Then it helps that you get instant and obvious feedback when you try to use too much memory.
 
Crashing due to lack of memory is a user-friendly feature that helps you to run the server properly. It's generally better to let the server crash before the users arrive than to have poor performance when the users are there.

Memory usage is largely a function of parameter values. The more memory you use, the higher the performance, as long as you actually have that memory. But you don't always know the proper parameter values and their exact impact on memory usage without experimenting with them. Then it helps that you get instant and obvious feedback when you try to use too much memory.
I'm speaking as an IT Manager, and there's never a time when my servers don't have any users. You're talking about a testing situation; I'm talking about production most of the time, and letting a production server crash, especially because it's not configured to use virtual memory - well, I'd expect to be fired for something like that. But I wouldn't even do that in testing.
 
Crashing due to lack of memory is a user-friendly feature that helps you to run the server properly. It's generally better to let the server crash before the users arrive than to have poor performance when the users are there.

Memory usage is largely a function of parameter values. The more memory you use, the higher the performance, as long as you actually have that memory. But you don't always know the proper parameter values and their exact impact on memory usage without experimenting with them. Then it helps that you get instant and obvious feedback when you try to use too much memory.
As someone who is a sysadmin, I'm not sure I would ever intentionally run a production server this way. We try to avoid swap usage when possible (for obvious reasons). We have lots of monitoring in place to detect if there is an issue (much of it is very sophisticated in nature, if there is an issue we will be notified very obnoxiously). But I'm not sure I'm ever going to intentionally disable swap to cause a system to fail on purpose before it otherwise would, even if it would notify me of an issue.

We would know anyway if there was some sort of issue because we have dedicated systems in place to let us know. We have monitors for memory issues, among many others (and we also have full stack tests that will send a plethora of real user requests periodically and will notify us immediately in the event of any issue, so we have very extensive systems in place). If swap is the difference between a customer getting a slow response to the server and not getting a response at all, we will take swap over downtime any day of the week.

I'm not necessarily disagreeing that you have a valid point. We don't want to rely on it (if I see that a server is using swap, I'm going to upgrade it immediately, regardless of cost), but if we're talking about failsafes, going down on a production system is a bad, bad thing. If swap means that a system might have a chance of staying up, I'd rather have the monitors catch it so that I can have the potential to avoid downtime for the user while I fix the issue. And I have seen that exact scenario before.

Other people might approach this differently, and they might have perfectly valid reasons for doing so (not every use case is the same, there are plenty of cases I can think of where it would absolutely make perfect sense to leave swap disabled). But I'm of the opinion that (in general) if you're relying on preventable OOM situations to notify you of a problem, you should probably have much better monitoring in place to begin with.
 
As someone who is a sysadmin, I'm not sure I would ever intentionally run a production server this way. We try to avoid swap usage when possible (for obvious reasons). We have lots of monitoring in place to detect if there is an issue (much of it is very sophisticated in nature, if there is an issue we will be notified very obnoxiously). But I'm not sure I'm ever going to intentionally disable swap to cause a system to fail on purpose before it otherwise would, even if it would notify me of an issue.
I'm a scientist and a developer myself, so my perspective is a bit different. The production servers I have access to only have a nominal amount of swap space, for example 32 GB in a system with 2 TB RAM. If you would run out of memory without swap, you would almost certainly run out of it with swap as well. As far as I understand, the swap is there only to allow the system to fail gracefully by killing the offending process.

When I'm developing and testing software and trying to find appropriate parameters, crashing due to lack of memory is something that may happen several times an hour when I'm doing it on Linux. macOS, on the other hand, will gladly allow individual processes to use 2x more memory than there is on the system. That makes wrong choices harder to see.
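For what it's worth, one way to reproduce that fast-failure behaviour artificially (my own sketch, not something from the post) is to cap the process's address space with setrlimit() before allocating. On Linux the cap is enforced and malloc() starts returning NULL almost immediately; macOS is generally reported not to enforce RLIMIT_AS, which lines up with the overcommit behaviour described above. The 1 GiB cap, 64 MiB chunk size, and 4 GiB safety stop are arbitrary demo values.

```c
/* fail_fast.c - allocate until the address-space cap (if enforced) bites */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit cap = { .rlim_cur = 1UL << 30, .rlim_max = 1UL << 30 };  /* 1 GiB */
    if (setrlimit(RLIMIT_AS, &cap) != 0)
        perror("setrlimit");          /* carry on; the loop still illustrates the point */

    const size_t chunk = 64UL << 20;  /* 64 MiB per allocation */
    size_t total = 0;
    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {              /* instant, obvious feedback under the cap */
            printf("malloc failed after ~%zu MiB\n", total >> 20);
            break;
        }
        memset(p, 1, chunk);          /* touch the pages so they really count */
        total += chunk;
        if (total >= (4UL << 30)) {   /* safety stop where the cap isn't enforced */
            printf("reached 4 GiB with no failure (overcommit, or cap ignored)\n");
            break;
        }
    }
    return 0;
}
```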
 
That's a consumer perspective. Or an end-user perspective. If you are a developer or a power user, computers are just computers. In particular, higher-end laptops and desktops are often used for similar tasks as servers.

And it's not a matter of performance vs. stability. When software encounters an unrecoverable error, and it's not reasonable to expect that the situation can be resolved by human intervention, crashing quickly and in a controlled way is the preferred outcome. Not crashing would almost always be a bug.
No. At an enterprise level, if the production servers crash and the app isn’t available, people lose jobs more often than not, especially when you tell someone you would rather have downtime than manage peak loads with more latency. Desktops and laptops may have more tolerance, but for me personally there has to be a legitimate reason other than likes and dislikes about swap. Swap usage on servers, and to a certain extent on a consumer device, is a symptom of something else, not the cause.
 
I'm speaking as an IT Manager, and there's never a time when my servers don't have any users. You're talking about a testing situation; I'm talking about production most of the time, and letting a production server crash, especially because it's not configured to use virtual memory - well, I'd expect to be fired for something like that. But I wouldn't even do that in testing.
Not to mention there are more than enough monitoring and alerting strategies and tools available to inform and allow mitigation before a problem is reached. Sure, in some specific cases of non-memory-related issues, if something crashes you want as much useful info as possible, including logs, stack trace… but often enough for the RAM case there’s a one-off runaway job or an overly aggressive / improperly sized VM/pod/container that can be corrected.

With the aforementioned proper monitoring and a bit of load/scale testing BEFORE deploying a new system/server/service(s), you can also be reasonably sure resources are proper for the system, or can be scaled up when needed. Yeah, there will always be some edge/surprise cases because software, but at worst I’d expect a single job/container/service to get bounced and ideally re-attempted (e.g. if it’s a user-initiated job/task or of some kind that can be re-attempted), while alerting and logging throughout.
 
Not to mention there are more than enough monitoring and alerting strategies and tools available to inform and allow mitigation before a problem is reached. Sure, in some specific cases of non-memory-related issues, if something crashes you want as much useful info as possible, including logs, stack trace… but often enough for the RAM case there’s a one-off runaway job or an overly aggressive / improperly sized VM/pod/container that can be corrected.

With the aforementioned proper monitoring and a bit of load/scale testing BEFORE deploying a new system/server/service(s), you can also be reasonably sure resources are proper for the system, or can be scaled up when needed. Yeah, there will always be some edge/surprise cases because software, but at worst I’d expect a single job/container/service to get bounced and ideally re-attempted (e.g. if it’s a user-initiated job/task or of some kind that can be re-attempted), while alerting and logging throughout.
Not to mention, most of the IT infrastructure/cloud is virtualized or containerized today. Virtual memory literally uses a combination of physical memory and disk. Swap is mostly the OS offloading inactive pages from physical memory to disk. Paging activity is more important than swap, and often a better indication of possible memory pressure or issues.
Unless it’s an edge case of folks paying too much attention to a YouTuber who has no idea about disk speeds, RAM, or swap, and who makes a binary call on swap or on using disk alongside physical memory.
 
Not to mention there are more than enough monitoring and alerting strategies and tools available to inform and allow mitigation before a problem is reached. Sure, in some specific cases of non-memory-related issues, if something crashes you want as much useful info as possible, including logs, stack trace… but often enough for the RAM case there’s a one-off runaway job or an overly aggressive / improperly sized VM/pod/container that can be corrected.
Yeah, monitoring has gotten quite good over the years!

I do understand his point about performance though, but always in the jobs I've worked, availability was the most important factor. I can always throw more hardware at it later.

I can explain slow to users, but "I can't do my job" is a lot harder!
 
Yeah, monitoring has gotten quite good over the years!

I do understand his point about performance though, but always in the jobs I've worked, availability was the most important factor. I can always throw more hardware at it later.

I can explain slow to users, but "I can't do my job" is a lot harder!
You can literally auto-scale if paging, memory, CPU, or any application-level metrics become a bottleneck. Disabling swap won’t help much; monitoring and mitigation are better than tanking the app.
 
I'm a scientist and a developer myself, so my perspective is a bit different. The production servers I have access to only have a nominal amount of swap space, for example 32 GB in a system with 2 TB RAM. If you would run out of memory without swap, you would almost certainly run out of it with swap as well. As far as I understand, the swap is there only to allow the system to fail gracefully by killing the offending process.

When I'm developing and testing software and trying to find appropriate parameters, crashing due to lack of memory is something that may happen several times an hour when I'm doing it on Linux. macOS, on the other hand, will gladly allow individual processes to use 2x more memory than there is on the system. That makes wrong choices harder to see.
Yeah, this sort of workload is one where this would make much more sense.
 
Crashing due to lack of memory is a user-friendly feature that helps you to run the server properly. It's generally better to let the server crash before the users arrive than to have poor performance when the users are there.

Memory usage is largely a function of parameter values. The more memory you use, the higher the performance, as long as you actually have that memory. But you don't always know the proper parameter values and their exact impact on memory usage without experimenting with them. Then it helps that you get instant and obvious feedback when you try to use too much memory.
What you are describing is virtual memory paging, not swap. Virtual memory paging and swap are different.
 