
How much RAM is enough?

  • 8GB ASi does me fine: 19 votes (13.3%)
  • 16GB is a cute start: 37 votes (25.9%)
  • 24 or 32GB is a sweet spot: 49 votes (34.3%)
  • AS MUCH AS I CAN GET: 54 votes (37.8%)

  Total voters: 143 (multiple selections allowed, so percentages sum to more than 100%)

leifp · macrumors 6502a · Original poster · Joined Feb 8, 2008 · Canada
So, once more in the hopes of removing my own ignorance rather than telling you why my way is the only way, I’m fascinated by RAM use.

ALL of my sub units (I’m responsible for tech choices and updates for nine people +/-) do fine with 8GB of RAM (specifically on Apple Silicon; definitely not the case on Intel Macs). A few of them opt for 16GB and I never attempt to dissuade them (they have the money and the peace of mind offered by adding 8GB of RAM is worth it). I myself have 32GB of RAM because I know that 16GB is insufficient (I measure and pay attention) and that 32GB is actually more than I presently need (zero swaps in one and a half years of daily use).
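
(For anyone who wants to "measure and pay attention" the same way: a minimal sketch, assuming macOS and a Wolfram Language session to match the code elsewhere in this thread; the sysctl one-liner works just as well straight from Terminal.)

(* Query macOS swap statistics via the built-in sysctl utility. *)
(* Typical output: "vm.swapusage: total = 2048.00M used = 0.00M free = 2048.00M" *)
swapStats = RunProcess[{"sysctl", "vm.swapusage"}, "StandardOutput"];
Print[swapStats]
(* "used = 0.00M" over months of daily work suggests your RAM is sufficient. *)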

So, on that note: do you benefit more from RAM than from other elements of the system? E.g. if you could get an M2 setup with 128GB of RAM, would it make sense? What on earth do you do for that to be the case? (This includes substantially less absurd setups. My own is just slightly above my needs: an M1 Max with 32GB RAM has left me hanging solely when ripping discs [turns out Handbrake uses 100% of the CPU cores available], and the only area where I would consider an upgrade is the GPU; but the next step up doubles my already-too-many CPU cores and the underused ML cores, and doubles RAM I already don't take advantage of… I'm not so wealthy that I can swallow such a price increase for a few more GPU cores.)

Looking forward to learning something!

I’m going to allow multiple selections because the important part will be in the responses posted below… and some folk have multiple systems with multiple goals/setups
 
  • Like
Reactions: Gudi
I have three AS machines: an M2 Air with 8GB, an M1 Pro MBP with 16GB, and an M1 Max Studio with 32GB.

The M2 Air is for very light use and has run perfectly. The MBP occasionally gets into swap, and I don't like that. The Studio has run flawlessly and has never swapped, with numerous applications open all the time. I wouldn't get less than 32GB for my primary workhorse machine. But for the Air (which is effectively an iPad replacement), 8GB is fine and made the machine dirt cheap in a Best Buy sale.
 
I use my Mac for scientific computing, and sometimes do calculations that require very large amounts of RAM, occasionally exceeding the 128 GB on my machine (forcing it to swap). I use an Intel iMac, but the same would apply if I were instead using Apple Silicon, since the amount of RAM needed is dictated by the calculation itself, and would thus be independent of the OS.

One of the current downsides of AS is that, in order to get that much RAM, you need to spring for a workstation-class CPU (a 24-core M2 Ultra), regardless of whether you need that many cores (most of my calculations are single-threaded, so I don't).

Hopefully that will change when Apple switches to LPDDR5x RAM, which offers higher-capacity chips than LPDDR5. It's just a matter of whether Apple takes advantage of this—it's possible they will limit the RAM available with the Max and Pro chips for marketing rather than technological reasons, the same way they limit the current base M2 chip to a total of two displays.
 
I've been rocking 4GB on a 2012 MacBook Air for 10 years now, and when I get a new M2 MacBook Air, I plan on getting 8GB. I'm somewhere between a power user and a casual user: I keep a bunch of apps constantly open, but I don't do crazy math computations or anything like that. I have barely ever noticed the RAM cap on this machine, and with the benefits of Apple Silicon, combined with generally faster RAM and SSDs, I surely don't need 16GB.
 
Personally, I get along fine with 8GB on my personal M1 Mini and 16GB on my work M2 MBP. My work apps are mostly Figma for UX stuff, PSD / AI for when I need to edit graphics, MS Office / Teams, and the rest is all web-based.

That said, as I dabble in music recording: I have heard reports that users of very large sample-based plugins (e.g. the Vienna Symphonic Ensemble orchestral stuff) can't get them to load properly; they really don't work in a swap-based setup, as they're intended to be loaded entirely into RAM.

Prior to systems like the 2019 Mac Pro, such users often hosted those instruments on separate computers connected by Ethernet.

That said, other users have done multitrack DAW tests on the 8GB Mini and loaded something like 900+ tracks, each with its own Space Designer convolution reverb, so YMMV.
 
  • Like
Reactions: leifp
This is much as I anticipated it would be: different strokes for different folks. But squeaky wheels and all that. I’m constantly frustrated that people aren’t willing to pay for their level of computing needs (read: wants) and insist that their desired configuration should be the base configuration available. Of course, that means I fall into the same trap: opening my trap to let the flies in/out…
 
My father, who owned an Apple VAR until he passed in 2013, said: "Always, always, always, when buying a computer, pick the processor you want first, then get the most RAM you can afford, and then consider storage." He typically ran all his machines maxed out on RAM, with the base HD/SSD plus a portable external drive. His reason was simple: it's better to have too much RAM than not enough.
 
I use my Mac for scientific computing, and sometimes do calculations that require very large amounts of RAM, occasionally exceeding the 128 GB on my machine (forcing it to swap). ...
What exactly are you doing? (Algorithm and "big picture" use case.) Go ahead and use technical terms; assume you are talking to a computer scientist with post-graduate education...

I'm actually curious about what could be both memory-bound and not decomposable into parallel tasks. My first guess is that you are solving linear optimization problems with high dimensionality. But why?

As an example, I'm interested in motion planning for a high-DOF robot with perhaps 28 joints. There are MANY constraints, and the joint angles need to be updated in real time. If we treated this as an LP problem, the machine would never move. Instead we use a time-limited search, much like a chess-playing program: it finds the best solution to a cost function before the time limit expires. So adding cores improves the solutions.
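
A minimal sketch of that time-limited idea in Wolfram Language (the cost function and perturbation scheme here are invented stand-ins, not our actual planner):

(* Anytime search: keep improving a candidate until a hard deadline expires. *)
cost[x_] := Total[(x - 0.5)^2];  (* hypothetical stand-in objective *)
best = RandomReal[1, 28];        (* initial guess: 28 joint angles *)
TimeConstrained[
  While[True,
   candidate = best + RandomReal[{-0.05, 0.05}, 28]; (* local perturbation *)
   If[cost[candidate] < cost[best], best = candidate]],
  0.01];                         (* hard 10 ms budget, real-time style *)
best                             (* best solution found before the deadline *)

The point is that answer quality degrades gracefully: the deadline always yields some valid solution, and more cores (or more time) just means a better one.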

Actually, I think it is an open question whether inherently non-parallelizable problems exist.

Back to the chips and RAM. The heart of the chip is the big N x M switch. I don't think Apple is using a bus to connect the CPU, neural processor, crypto processors, and so on. The M1/M2 is fast because this "fabric" switch is very fast. Adding more addressing bits to the switch makes it bigger and slower. I think Apple has a balancing act here. With a bus, the area of the bus is the number of pins times the length, so area scales linearly with the width of the address bus.

But I bet a fabric switch scales with something like the square of the number of address bits. I'm not 100% sure, but it can't be linear. There is a "law" of computers that says a smaller memory can always be made faster. So "balance" is important.

Designing the chip in a way that allows a terabyte of RAM might mean that users who only need 8GB pay a speed penalty. Apple could not accept that.

I am still curious about your memory-bounded, non-decomposable problem.
 
  • Like
Reactions: harold.ji and leifp
What exactly are you doing? (Algorithm and "big picture" use case.) ... I am still curious about your memory-bounded, non-decomposable problem.
To protect my anonymity, I'm reluctant to show you exactly what I'm doing. But I'm happy to show you an example from the Mathematica Stack Exchange (https://mathematica.stackexchange.com/questions/230282/3d-fem-vector-potential). See the solution posted by xzczd, where they note:
[Screenshot of xzczd's note on the solution's large memory requirements]


Now you might want to argue that one could, with sufficient work, reconfigure the code to reduce the memory usage, and you might be right. But for most natural scientists, that also misses the point: We're not computer scientists. Our end goal is not to write the most efficient possible code. Rather, it's to write the code that enables us to get answers to our scientific problems as quickly and efficiently as possible. And if having, say, 128 GB RAM means we don't have to spend hours rewriting code to fit into a smaller amount of RAM, that's worth it.

Of course, if we're developing a production tool that we'll be distributing to the scientific community at large, then the game changes. Then it's worth focusing on efficiency. But, most typically, we're just developing code for our own use. [And yes, the game also changes if our problem is so computationally demanding that, even with generous computing resources, it takes too much time or RAM; in that instance, we'd also need to improve our code's efficiency. A good example of that would be meteorologists working on weather prediction.]
 
... I'm happy to show you an example from the Mathematica Stack Exchange (https://mathematica.stackexchange.com/questions/230282/3d-fem-vector-potential). See the solution posted by xzczd, where they note:
...

It looks like practical issues get in the way of theory. Clearly, FE can be done with parallel code, but you can't rewrite all those Mathematica functions.

About all you can do is reduce the resolution from 1024 cubed to (say) 500 cubed, or whatever fits; dropping from 1024^3 to 500^3 alone cuts the memory needed by a factor of roughly (1024/500)^3 ≈ 8.6.
 
It looks like practical issues get in the way of theory. Clearly, FE can be done with parallel code, but you can't rewrite all those Mathematica functions.

About all you can do is reduce the resolution from 1024 cubed to (say) 500 cubed, or whatever fits; dropping from 1024^3 to 500^3 alone cuts the memory needed by a factor of roughly (1024/500)^3 ≈ 8.6.
I was keeping up until here… nice to see more competent users than myself on this site! :p
 
It looks like practical issues get in the way of theory. Clearly, FE can be done with parallel code, but you can't rewrite all those Mathematica functions.
Pretty much.

As another illustration of high memory use with Mathematica (MMA), here's a toy example using Reduce, which is MMA's generalized equation solver: Suppose you want all sets of four positive integers whose sum is 128,705, and whose product is 46,671,065 x 10^8. You'd solve this in Reduce using:

Reduce[a + b + c + d == 128705 && a*b*c*d == 46671065*10^8, {a, b, c, d}, PositiveIntegers]

On my 2019 i9 iMac with 128 GB RAM, this calculation takes 9.0 hours, and MMA reports MaxMemoryUsed[] = 210.45 GB.
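
(For reference, figures like those come from calls along these lines; a sketch of the instrumentation, not my exact session:)

(* Time the computation and record the kernel's peak memory use. *)
{seconds, solutions} = AbsoluteTiming[
   Reduce[a + b + c + d == 128705 && a*b*c*d == 46671065*10^8,
    {a, b, c, d}, PositiveIntegers]];
seconds/3600.    (* wall-clock hours *)
MaxMemoryUsed[]  (* peak bytes used by this kernel session *)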

Here is a corresponding snapshot from Activity Monitor taken during the calculation.

[Screenshot: Activity Monitor during the calculation]


But moving away from memory and turning to parallelization:
Actually, I think it is an open question if non-parallelizable problems exist....I'm actually curious about what could be both memory bound and not decomposable to parallel tasks.
Indeed, to the extent most MMA users wish the program were written to make better use of computer resources, it wouldn't be for lower memory usage. Instead, they'd want more operations to be multi-core. Currently, most of its functions are single-threaded, and many operations have a noticeable wait time for completion (not typically 9 hours, but MMA is most commonly used interactively, so even several seconds' wait becomes significant). And of course that leaves machines with multiple cores sitting mostly idle.

Wolfram automatically parallelizes, or allows you to parallelize, all the obvious candidates for parallelization. Here's an example of the former (screenshot taken from MMA's documentation). It returns the 1st, 2nd, 3rd, etc. prime numbers. As each of these can be calculated independently, this is an embarrassingly parallel problem:

[Screenshot from MMA's documentation: an automatically parallelized computation of the 1st, 2nd, 3rd, … prime numbers]


[Yes, the documentation here is a bit confusing, since automatic parallelization should mean you don't need to explicitly call the Parallelize function; and indeed, you don't need to do so to get it to use all the cores (up to the max your license allows, which in my case is eight).]
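
As a rough sketch of the sort of comparison you can run yourself (assuming the docs' Prime example; timings vary by machine and kernel count):

(* Computing the nth primes is embarrassingly parallel: each Prime[n] *)
(* is independent of all the others. *)
LaunchKernels[];  (* start worker kernels, up to your license's max *)
serial = AbsoluteTiming[Table[Prime[n], {n, 10^6}];][[1]]
parallel = AbsoluteTiming[ParallelTable[Prime[n], {n, 10^6}];][[1]]
(* On a multi-core machine, 'parallel' should come in well under 'serial'. *)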

Here's a more general list (also from MMA's documentation):

[Screenshot from MMA's documentation: the list of automatically parallelized functions]


However, this is only a tiny fraction of the ~5,000 functions offered by MMA, and it notably leaves out most of the "bread-and-butter" functions, such as those for integration, differentiation, differential equations, equation solving (e.g., Reduce*), and expression simplification. It would be great if all of these could be parallelized. Yet in spite of the obvious benefit of parallelizing these functions, and their interest in doing so, Wolfram has not. That leads me to conclude these functions are likely extremely difficult to parallelize.

*Occasionally one can see CPU use with Reduce exceeding 100%, indicating Mathematica does sometimes parallelize pieces of Reduce; but when it does so is pretty much a black box, and it's not mentioned in the documentation.
 
Wow. I suck at math, like, really bad, but I have incredible spatial abilities and skills, like guessing distances or seeing if things are level. For example, I can look at a board and tell you if it has a dip, to almost 1/16-inch accuracy. So your math is way beyond my understanding of its use. Any advice on how to understand it?
 
  • Like
Reactions: harshar
Work has issued new (2022) 13" MBPs with the M2 chip. Got mine Friday and have been moving over data, plus apps that work did not include. The Mac came with 24GB RAM and a 500GB SSD, and it's running Ventura. The previous work Mac is a 2015 MBP with a 256GB SSD and 16GB RAM. Neither was enough.

I'm a graphic designer, so not only do I have multiple apps open, I have multiple files open in Photoshop and QuarkXPress at any given time. The additional 8GB of RAM will be appreciated, as will the increased space on the SSD. I've been using an external 1TB drive for things.

That said, my personal desktop Mac (a 2009 Mac Pro) has 32GB RAM, so work still hasn't matched my personal Macs except with faster CPUs.

I always try to max out RAM on any given Mac/PC, but in the case of work that's not my call; I get what they give me. Still, it's nice to be using a 'current' model. I haven't had that happen since the G4s were new in the very early 00s.
 
Wow. I suck at math, like, really bad, but I have incredible spatial abilities and skills, like guessing distances or seeing if things are level. For example, I can look at a board and tell you if it has a dip, to almost 1/16-inch accuracy. So your math is way beyond my understanding of its use. Any advice on how to understand it?
I didn't want to get too OT for this thread (I'm already pushing it). So if you'd like to post your question in one of the general forums and post the link to your thread here, I'll try to respond. Also, math is a big field, so you should give a specific example of the type of problem you're struggling with and want to be able to understand more intuitively. I'll look at it and see if I can find a YouTube video (some are very good) that might help with it.
 
There is no one answer. For most people who browse the internet and do email: 8GB. Add a decent amount of gaming: 16GB. Virtual machines, video editing, or software dev: at least 32GB. 8K video editing: 64GB.
 
My father, who owned an Apple VAR until he passed in 2013, said: "Always, always, always, when buying a computer, pick the processor you want first, then get the most RAM you can afford, and then consider storage." He typically ran all his machines maxed out on RAM, with the base HD/SSD plus a portable external drive. His reason was simple: it's better to have too much RAM than not enough.
He got it all backwards. First you pick an OS (Mac or Windows), then you pick a form factor (desktop, laptop, tablet, brain implant), then you pick a display technology (Retina, ProMotion, miniLED, OLED). Next you need to find a color that fits into your setup and match it to your furniture, standing desk, gaming chair, etc. As long as the processor is a modern ARM-based SoC with unified memory, the performance will be sufficient. Even people who buy a MacBook Pro because they need a Pro/Max chip mostly benefit only from the better display and better speakers. You don't want a pale display, a heavy form factor, or a color that doesn't match the drapes. Nowadays computers last a long time, so the soft convenience factors are most important. New processors are released every year, but if you still love the size and weight of your computer and it can run the latest OS, you won't even notice that faster machines with more memory exist. RAM anxiety is a thing of the past.
 
Personally, I have noticed over multiple purchases that my RAM requirements go up after 2-3 years. I bought a 64 GB M1 Max; it replaced a 32 GB MBP, which was barely adequate. I am glad I went 64 GB, coz there is no way the 32 GB MBP would still have been serviceable; I would be in the market looking for a new MBP. I keep my laptops 5-8 years, and by year 4 the extra performance and memory mean I'm not forced to upgrade.
If you upgrade every 2-3 years, it doesn't really matter.
My family uses an M2 8 GB base model; they love it and it does everything for their general computing needs.
 
  • Like
Reactions: harshar and leifp
I keep my laptops 5-8 years, and by year 4 the extra performance and memory mean I'm not forced to upgrade. If you upgrade every 2-3 years, it doesn't really matter. My family uses an M2 8 GB base model; they love it and it does everything for their general computing needs.
You mistakenly believe 8 years is a long time to keep a laptop. People who don't even know how much RAM their entry-level MacBook Air has tend to keep it 10-12 years. So the urge for more memory effectively reduces the lifetime of a computer. You prolong its lifetime by not even caring about performance.
 
  • Angry
Reactions: AdamNC
You mistakenly believe 8 years is a long time to keep a laptop. People who don't even know how much RAM their entry-level MacBook Air has tend to keep it 10-12 years. So the urge for more memory effectively reduces the lifetime of a computer. You prolong its lifetime by not even caring about performance.
Running out of memory makes my laptop useless after a few years. For my family the base model is good enough. I don't usually keep one more than 8 years because I don't want to run obsolete laptops on my network. People can run whatever they want, but not me.
 
Back to the chips and RAM. The heart of the chip is the big N x M switch. I don't think Apple is using a bus to connect the CPU, neural processor, crypto processors, and so on. The M1/M2 is fast because this "fabric" switch is very fast. Adding more addressing bits to the switch makes it bigger and slower. I think Apple has a balancing act here. With a bus, the area of the bus is the number of pins times the length, so area scales linearly with the width of the address bus.

But I bet a fabric switch scales with something like the square of the number of address bits. I'm not 100% sure, but it can't be linear. There is a "law" of computers that says a smaller memory can always be made faster. So "balance" is important.
You've got some weird ideas about switch fabric area scaling!

The number of address bits matters a little, but only because adding more bits adds linearly to the number of wires that must be routed from source to destination. The parameters that actually determine how far above linear the switch grows are what you've identified as N and M: the number of 'requester' and 'completer' ports (to use PCIe terminology).

The way that SoC architects manage switch area explosion is by making the SoC interconnect topology more complicated than one giant switch. A GPU may need its own port on the big, high performance switch, but something tiny like an I2C or SPI controller doesn't. (I2C and SPI are low-speed serial IO bus standards used to talk to off-chip devices, e.g. many laptop Macs use a SPI connection to the keyboard.) These low speed things can share a single port on the central switch, with one or even a hierarchy of 'expansion' switches behind that port.

Another important way these low performance segments of the SoC reduce switch cost is by enabling a different data path width for the low performance parts of the chip. High performance stuff might need very wide busses like 512 or 1024 bits, low performance peripherals can make do with as little as 32. By inserting a data width converter between the big high performance switch and one of these low performance satellite switches, you can save a lot of area.
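
A back-of-envelope toy model of that trade-off (all numbers invented purely for illustration, and "area proportional to ports x ports x width" is a deliberate oversimplification):

(* Toy crossbar area model: requester ports x completer ports x data width. *)
crossbarArea[nReq_, nComp_, width_] := nReq*nComp*width;

(* Flat design: 16 high-speed + 8 low-speed clients, all on one 512-bit switch. *)
flat = crossbarArea[24, 24, 512]

(* Hierarchical: the 8 low-speed clients share one port on the big switch, *)
(* sitting behind a 32-bit satellite switch with a width converter. *)
hier = crossbarArea[17, 17, 512] + crossbarArea[8, 8, 32]

flat/hier // N  (* roughly 2x area saved in this made-up example *)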
 
I have a base M1 Max Studio, and I run production and a Windows virtual machine; it all fits in 32 GB of RAM. Next to it I have a 2015 iMac 27 with 32 GB of RAM, and it runs my office stuff. There's typically 10 GB of RAM free on the iMac, so I'm using about 22 GB for macOS, programs, and cache. I bought the iMac for $200 six weeks ago, mainly for the display, but having all of that RAM is nice. It can hold 64 GB, but 16 GB DIMMs for that particular system are very expensive. If I wanted more RAM, I'd look at the 2017, 2019, and 2020 iMacs.

I also have an M1 mini that was replaced by the iMac. I'm not sure what to do with it.

My 2021 MacBook Pro 16 has 32 GB of RAM. I can run production + office or Windows + office on it, but not all three at the same time without swapping. I'd say that 48 GB of RAM would be perfect for me on a laptop; 64 works nicely on the desktop on multiple Macs.

So if you need more RAM for programs that aren't CPU-intensive, a used iMac may be far more cost-efficient than adding RAM to an Apple Silicon Mac.
 
  • Like
Reactions: theorist9
If I wanted more RAM, I'd look at 2017, 2019 and 2020 iMacs...a used iMac may be far more cost-efficient than adding RAM to an Apple Silicon Mac.
I'll second that. 128 GB RAM for the 2019 and 2020 iMacs is currently only 2 x $105 = $210 on Amazon (it wasn't so long ago that a 128 GB SSD cost more than that!):

[Screenshot: Amazon listing for the 128 GB kit]


If you can get by with 64 GB, then a 2017 iMac will work, and the RAM will run you ~$135:

[Screenshot: Amazon listing for the 64 GB kit]
 
I've been rocking 4GB on a 2012 MacBook Air for 10 years now, and when I get a new M2 MacBook Air, I plan on getting 8GB. I'm somewhere between a power user and a casual user: I keep a bunch of apps constantly open, but I don't do crazy math computations or anything like that. I have barely ever noticed the RAM cap on this machine, and with the benefits of Apple Silicon, combined with generally faster RAM and SSDs, I surely don't need 16GB.
Hi! I am a lifelong Windows user, ever since I started working at IBM (eons ago, in 1984!). I grew up with the technology: mag card typewriter -> 8” floppy -> 5” floppy -> “Tiger” (orange) terminal split in 4 -> big-head monitor green screen (one color!!!) -> 256gb laptop that took FOREVER to boot up. And then it feels like the technology just took OFF from there; it’s really amazing, and it was always Windows, for me.

I am a longtime iPhone, iPad, Apple Watch, and AirPods user; see a trend here?? I decided it’s time to pull the trigger and switch, so I bought a 15” MacBook Air. I am, like you, not doing any crazy computations. Usage determines how much storage/RAM one really needs; still, I found it hard to determine prior to purchase. And I did a lot of research! A friend of mine who is a lifelong Apple user assured me 8gb of RAM would be enough, and I agreed.

So I was alarmed to see almost 6.5gb of my 8gb of RAM chewed up when I checked Activity Monitor!!! Beyond the apps I recognize, there’s so much stuff in there I don’t know! I rebooted and it went down a bit, then jumped back up, so it seems I run around 6-6.5gb usage. My questions/concerns:
  1. I’m not running out of space, but, obviously, if things slow down and apps crash or buffer, I’m exceeding my RAM!
  2. I feel like I can’t install much more than I have and I’ve hardly installed ANYTHING! Hopefully, I won’t need much more……..
  3. What apps can I shut down? I know how to Quit an app, but it comes right back! How can I permanently shut down an app I don’t need?
  4. How do I know which ones I don’t need??
  5. What else can I do to minimize RAM usage?
  6. Ultimately, I think I will have to trade it in for a 16GB machine if it doesn’t work out, but, of course, I don’t want to do that. I’m really disappointed that I just didn’t get 16gb of RAM! Everything I read online said the M2 is soooo much more efficient. But I feel like I may have made a mistake : (
  7. Thank you!! Donna
 