
Premal212 - macrumors regular - Original poster - Jan 26, 2017 - London, UK
Running a 2019 15-inch 32GB MBP, and my memory pressure occasionally goes into orange when I'm going heavy in After Effects.

Looking at buying a new machine. I don't want to pick up 64GB unnecessarily, as I would need to go for the Max chips. But my concern is that the RAM is shared between the GPU and CPU.

With my current machine I have a dGPU with 4GB of VRAM. If I buy a new machine, and we assume 4GB goes to the GPU, am I technically left with 28GB of RAM, or am I thinking about this like a baboon?

My machines last 4-5 years, so I don't mind splurging a little, but only if it's absolutely warranted. If it were just the RAM I had to upgrade, that would be fine, but I'd also need to step up from the Pro to the Max - a lot of extra cash.
 
No, it's shared - so unless you actually commit 4GB or more on the GPU side, you have the full 32GB available overall; there is no hard allocation for either the GPU or the CPU.
 
If your memory pressure is already hitting amber, that's a sign to upgrade - even more so for future-proofing. Adobe software is notoriously RAM-heavy.

Another thing to consider is whether you'll be driving hi-res external displays.

I would spring for 64GB. It's better to have too much RAM than not enough, as you don't want to be swapping to flash storage, which will increase your read/write usage unnecessarily.
 
If your memory pressure is already hitting amber, that's a sign to upgrade - even more so for future-proofing. Adobe software is notoriously RAM-heavy.

Another thing to consider is whether you'll be driving hi-res external displays.

I would spring for 64GB. It's better to have too much RAM than not enough, as you don't want to be swapping to flash storage, which will increase your read/write usage unnecessarily.

Probably the answer that I needed to hear - rather this than have a machine that I'm constantly pissed off with. I'm normally in a scaled UI, so that's just adding to the usage.

Thanks :)
 
It's not shared, it's unified. Shared was how it was done with the old Intel integrated graphics, where the GPU took memory away. Unified means the GPU can work directly with data that is already in memory, so no separate VRAM is needed in the first place.

Thus the memory usage will be similar, or a bit higher, depending on the application and workload. For many workloads the data is processed in RAM first anyway and then sent to VRAM, so with unified memory that transfer can be skipped entirely. That said, if you are already going into yellow pressure, you are either just about to run out of memory or already experiencing slowdowns.
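As a loose illustration of that skipped transfer, here is a toy model with a hypothetical working-set size (`FRAME_BYTES` and both function names are made up for this sketch; real drivers are far more complex):

```python
# Toy model: how much data has to be resident to get one frame's
# worth of assets to the GPU. Hypothetical sizes, purely illustrative.

FRAME_BYTES = 100 * 1024 * 1024  # a made-up 100 MB working set

def discrete_gpu_bytes(frame_bytes: int) -> int:
    """Discrete GPU: the data is staged in system RAM, then copied
    across the bus into VRAM, so two copies exist."""
    ram_copy = frame_bytes
    vram_copy = frame_bytes
    return ram_copy + vram_copy

def unified_memory_bytes(frame_bytes: int) -> int:
    """Unified memory: the GPU works on the data already in RAM,
    so the upload copy is skipped and only one copy exists."""
    return frame_bytes

print(discrete_gpu_bytes(FRAME_BYTES))    # 209715200
print(unified_memory_bytes(FRAME_BYTES))  # 104857600
```

The point is only the shape of the accounting: unified memory holds one copy where a discrete-GPU pipeline transiently holds two.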

I'd take more memory on a brand-new machine. It's not like the apps get any less demanding over the years, I like to get a solid 3-4 years of use, and you can't upgrade the RAM afterwards. I went straight to 64GiB and plan to use it for around 5 years - 6 if it's still fast enough then. The price of the upgrade isn't worth agonizing over, since the hardware and its speed will dictate how quickly I can get things done during my work days for the next half decade.

I bought a 16GiB M1 Mac when they first came out and 16GiB was all they had - I sold it within 12 months. It was perfect otherwise, but I had already downgraded from 32GiB on the previous Intel Mac, and the memory was not nearly enough. I had to accept quite a loss on that sale, but there was nothing to be done. I'm not making that mistake ever again. Thankfully the current MacBooks have a lot of workstation-typical upgrade options and leave little to be desired, as long as you're willing to pay the Apple tax on those expensive upgrades.
 
Important to note that M1 and Intel do not seem to have the same RAM requirements per application.
Can you load a stressful scene onto a USB drive and test it on an M1 machine somewhere?

They've demoed the plain M1 doing all kinds of video, and those maxed out at 16GB. Of course, you can always add more layers in After Effects.
 
Something to consider: the Max chips have better encoders and decoders than the standard or Pro chips. If you are running AE, it is probably worthwhile to spend the extra money to get the M2 Max chip, as it will speed up your workflow and make you more efficient.
 
It's a little more complex than "either-or." You're thinking that the data "in" the GPU is not available to the CPU, and vice versa. This is true of most non-Apple systems. And it is true that if you display something with an Apple GPU, that memory has to be held to keep the display current.

However, what you're missing is that most systems had to have that display data in main memory AND in the GPU, at least much of the time. Since Apple's memory is unified, there's no second copy, so the net usage of "main" memory is not that different between the two models, even though the Apple machine physically has less of it. I've glossed over several details, some of which might be important in specific situations, but in general the Apple unified memory model is more efficient, because it doesn't require as many bus transfers to get something displayed and doesn't require nearly as many second copies of data.
 
So does that mean I could be left with 20GB of RAM if the GPU is demanding 12GB?

No. The way RAM works under Apple Silicon is that both the CPU and GPU can access the entire memory pool simultaneously; RAM is not partitioned off between CPU and GPU like you see with x86 integrated GPUs. This also means that when processing something that requires both CPU and GPU work, the system copies the data into RAM once, and then both sides can work on it simultaneously.

With traditional systems (where integrated graphics RAM is partitioned off from the rest of the system RAM), the system has to copy the data twice (once into each partition), then reconcile the two versions after processing. This introduces a noticeable performance hit, because it's essentially a write-twice, read-twice, reconcile-once, execute model, while Apple Silicon is a write-once, read-once, execute model.
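A toy sketch of the two models described above, just counting writes (`bytearray` stands in for a memory region; the function names are invented for this illustration):

```python
# Toy model of partitioned vs. unified memory, counting data writes.
# Purely illustrative; real memory managers are far more complex.

def partitioned_model(data: bytes):
    """x86 integrated graphics: RAM is split into partitions, so the
    same data has to be written into the CPU and GPU partitions."""
    cpu_partition = bytearray(data)  # write 1
    gpu_partition = bytearray(data)  # write 2
    writes = 2
    same_buffer = cpu_partition is gpu_partition  # False: two copies
    return writes, same_buffer

def unified_model(data: bytes):
    """Apple Silicon: one pool; CPU-side and GPU-side code reference
    the very same buffer, so the data is written only once."""
    pool = bytearray(data)           # write 1 (the only one)
    cpu_view, gpu_view = pool, pool
    writes = 1
    same_buffer = cpu_view is gpu_view            # True: one copy
    return writes, same_buffer

print(partitioned_model(b"frame"))  # (2, False)
print(unified_model(b"frame"))      # (1, True)
```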
 
So again, copy an intense but relatively realistic use case to a drive and test it on a working system.

I would offer to help, but I don't use Adobe. There is probably someone on here who could, though. It doesn't have to be proprietary - run off a minute or so of 4K footage and build a project that looks like your workflow.

Then ask a few people to load it on M1/M2 machines and see how it works.
 
Running a 2019 15-inch 32GB MBP, and my memory pressure occasionally goes into orange when I'm going heavy in After Effects.

Looking at buying a new machine. I don't want to pick up 64GB unnecessarily, as I would need to go for the Max chips. But my concern is that the RAM is shared between the GPU and CPU.

With my current machine I have a dGPU with 4GB of VRAM. If I buy a new machine, and we assume 4GB goes to the GPU, am I technically left with 28GB of RAM, or am I thinking about this like a baboon?

My machines last 4-5 years, so I don't mind splurging a little, but only if it's absolutely warranted. If it were just the RAM I had to upgrade, that would be fine, but I'd also need to step up from the Pro to the Max - a lot of extra cash.
Historically, more RAM is demanded by the OS and apps as time goes on - always. IMO (just guessing), Apple's superb unified memory architecture will make that trend even stronger as devs learn what UMA provides. You are pushing at 32GB in 2023, so any new box should have more than 32GB.

The only question is how much more, because one plans for the life cycle of the box, not just for 2023. I was where you are (at ~32GB) and put 96GB in the new M2 MBP. Anything above 64GB is wasted for me in 2023, but experience says RAM demands will increase. IMO, limiting a multi-thousand-dollar computer's 2025 capability by failing to spend a few hundred dollars more on the Max chip and sufficient RAM makes no sense.

Certainly, if the additional $400 had been cost-prohibitive, I would have made do with 64GB. But with funds available, IMO the 96GB was appropriate for the ~6-year life cycles I tend toward. Having in the past spent $400 for 2MB of third-party RAM, $400 for 32GB seems like a bargain. YMMV...
 
OP, I had a 32GB 2019 MBP which I replaced with a 64GB M1 Max - it's an amazing laptop. Get as much RAM as you can if you are in the orange on 32GB.
 
It's not shared, it's unified. Shared was how it was done with the old Intel integrated graphics, where the GPU took memory away. Unified means the GPU can work directly with data that is already in memory, so no separate VRAM is needed in the first place.
So we've come full circle back to the Apple II? There, the video circuitry used the RAM on the off-cycle, when the CPU was not on the memory bus: the CPU used the RAM when the Phase 1 clock was high, and video used it when the Phase 1 clock was low.

Then dual-ported VRAM was a thing for a while, where the video circuits could read the VRAM at the same time the CPU was writing to it.
 
No. The way RAM works under Apple Silicon is that both the CPU and GPU can access the entire memory pool simultaneously; RAM is not partitioned off between CPU and GPU like you see with x86 integrated GPUs. This also means that when processing something that requires both CPU and GPU work, the system copies the data into RAM once, and then both sides can work on it simultaneously.

With traditional systems (where integrated graphics RAM is partitioned off from the rest of the system RAM), the system has to copy the data twice (once into each partition), then reconcile the two versions after processing. This introduces a noticeable performance hit, because it's essentially a write-twice, read-twice, reconcile-once, execute model, while Apple Silicon is a write-once, read-once, execute model.
Not to be contentious, but this answer kind of side-steps the crux of the question, albeit with a mostly correct statement...

Yes, in the simplest of cases, if there is a video RAM demand of 12 GB, and you're starting with 32 GB, and the memory manager can allocate 12 GB of RAM for video textures, etc. then there is effectively 20 GB left over for other, non-video related workloads.
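Spelled out with the hypothetical numbers from that example (a 32 GB machine whose GPU workload claims 12 GB of the pool):

```python
# Hypothetical sizes only, matching the example in the post above.
total_gb = 32          # unified memory in the machine
gpu_working_set_gb = 12  # GPU demand: textures, frame buffers, etc.

left_for_everything_else_gb = total_gb - gpu_working_set_gb
print(left_for_everything_else_gb)  # 20
```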

This in no way invalidates the fact that memory is unified on Apple Silicon, which provides several advantages - particularly that graphics resources are not held in "system memory" and then copied to "graphics memory," as can be the case on Windows with system RAM and discrete video RAM. Unless you're talking about shared video memory with zero-copy capabilities, which has existed for some time:

"Zero copy: Refers to the concept of using the same copy of memory between the host, in this case the CPU, and the device, in this case the integrated GPU, with the goal of increasing performance and reducing the overall memory footprint of the application by reducing the number of copies of data."


There is a big difference, though, between the above and Apple unified memory. With Apple unified memory, it's a fundamental architectural benefit. Zero copy, by contrast, relies on a shared memory configuration on historically poor-performing integrated GPUs, and developers have to code to a specific API with specific flags.
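As a loose Python analogy for the zero-copy idea (not real OpenCL/Metal code - here a `memoryview` stands in for the shared host/device mapping, and the buffer contents are invented):

```python
# Loose analogy only: a shared view vs. an explicit second copy.
# Real zero-copy GPU APIs work at the driver level, but the
# memory-footprint argument is the same.

host_buffer = bytearray(b"pixel data" * 4)

# "Zero copy": the device-side code gets a view of the SAME memory.
device_view = memoryview(host_buffer)
device_view[0:5] = b"PIXEL"          # change is visible to the host
print(bytes(host_buffer[:10]))       # b'PIXEL data'

# Traditional path: an explicit second copy that must be reconciled.
device_copy = bytearray(host_buffer)
device_copy[0:5] = b"pixel"          # host copy is NOT updated
print(bytes(host_buffer[:5]), bytes(device_copy[:5]))  # b'PIXEL' b'pixel'
```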
 
The posts I've quoted below state my feelings better than I could. tl;dr: get 64GB of RAM (yes, the RAM works differently, but RAM is still RAM, and amber memory pressure isn't going to be much better on Apple Silicon with 32GB than it is on Intel with 32GB).



If your memory pressure is already hitting amber, that's a sign to upgrade - even more so for future-proofing. Adobe software is notoriously RAM-heavy.

Another thing to consider is whether you'll be driving hi-res external displays.

I would spring for 64GB. It's better to have too much RAM than not enough, as you don't want to be swapping to flash storage, which will increase your read/write usage unnecessarily.

It's not shared, it's unified. Shared was how it was done with the old Intel integrated graphics, where the GPU took memory away. Unified means the GPU can work directly with data that is already in memory, so no separate VRAM is needed in the first place.

Thus the memory usage will be similar, or a bit higher, depending on the application and workload. For many workloads the data is processed in RAM first anyway and then sent to VRAM, so with unified memory that transfer can be skipped entirely. That said, if you are already going into yellow pressure, you are either just about to run out of memory or already experiencing slowdowns.

I'd take more memory on a brand-new machine. It's not like the apps get any less demanding over the years, I like to get a solid 3-4 years of use, and you can't upgrade the RAM afterwards. I went straight to 64GiB and plan to use it for around 5 years - 6 if it's still fast enough then. The price of the upgrade isn't worth agonizing over, since the hardware and its speed will dictate how quickly I can get things done during my work days for the next half decade.
 
So we've come full circle back to the Apple II? There, the video circuitry used the RAM on the off-cycle, when the CPU was not on the memory bus: the CPU used the RAM when the Phase 1 clock was high, and video used it when the Phase 1 clock was low.

Then dual-ported VRAM was a thing for a while, where the video circuits could read the VRAM at the same time the CPU was writing to it.

No. The CPU and GPU can access the RAM pool simultaneously - not alternating between on-cycle and off-cycle. This is also how the A-series SoCs used in the iPhone and iPad access RAM, as do most smartphone SoCs.
 
Not to be contentious, but this answer kind of side-steps the crux of the question, albeit with a mostly correct statement...

Yes, in the simplest of cases, if there is a video RAM demand of 12 GB, and you're starting with 32 GB, and the memory manager can allocate 12 GB of RAM for video textures, etc. then there is effectively 20 GB left over for other, non-video related workloads.

This in no way invalidates the fact that memory is unified on Apple Silicon, which provides several advantages - particularly that graphics resources are not held in "system memory" and then copied to "graphics memory," as can be the case on Windows with system RAM and discrete video RAM. Unless you're talking about shared video memory with zero-copy capabilities, which has existed for some time:

"Zero copy: Refers to the concept of using the same copy of memory between the host, in this case the CPU, and the device, in this case the integrated GPU, with the goal of increasing performance and reducing the overall memory footprint of the application by reducing the number of copies of data."


There is a big difference, though, between the above and Apple unified memory. With Apple unified memory, it's a fundamental architectural benefit. Zero copy, by contrast, relies on a shared memory configuration on historically poor-performing integrated GPUs, and developers have to code to a specific API with specific flags.

The problem with this analysis is that it still presumes multiple copies of the data are being loaded into RAM. With Apple's unified memory approach, both the CPU and GPU can work on the SAME data simultaneously - something Apple engineers went over at WWDC 2020 when the M1 was first announced. The other consideration, which invalidates the comparisons to zero copy, is that the unified memory sits in the SoC package itself, meaning the CPU and GPU cores communicate with the RAM on-package rather than through a separate memory controller and a system bus on the logic board. The result is that RAM is accessed without the latencies you would see in x86-based systems.
 
If your memory pressure is already hitting amber, that's a sign to upgrade - even more so for future-proofing. Adobe software is notoriously RAM-heavy.

Another thing to consider is whether you'll be driving hi-res external displays.

I would spring for 64GB. It's better to have too much RAM than not enough, as you don't want to be swapping to flash storage, which will increase your read/write usage unnecessarily.
One thing to note here, as you kind of mention it: Adobe will use as much RAM as you give it. I ran into orange memory pressure on a 720p project in AE with 64GB of RAM. And yes, it uses almost all of the 128GB of RAM on my other systems too - even on Windows.
 
The problem with this analysis is that it still presumes multiple copies of the data are being loaded into RAM. With Apple's unified memory approach, both the CPU and GPU can work on the SAME data simultaneously - something Apple engineers went over at WWDC 2020 when the M1 was first announced. The other consideration, which invalidates the comparisons to zero copy, is that the unified memory sits in the SoC package itself, meaning the CPU and GPU cores communicate with the RAM on-package rather than through a separate memory controller and a system bus on the logic board. The result is that RAM is accessed without the latencies you would see in x86-based systems.
No. It’s absolutely not presuming that multiple copies of the data are being loaded into RAM. Where are you getting that idea from?

My example is perfectly clear: if the app, game, or whatever process you are running requires 12GB of RAM for GPU data, it requires that much RAM, period. Once that is allocated, you have that much less RAM for other processes. It's simple arithmetic.

I was also clear that unified memory has other advantages over zero copy; I'm not arguing that it is equivalent or as performant. I never did that.
 
No. It’s absolutely not presuming that multiple copies of the data are being loaded into RAM. Where are you getting that idea from?

My example is perfectly clear: if the app, game, or whatever process you are running requires 12GB of RAM for GPU data, it requires that much RAM, period. Once that is allocated, you have that much less RAM for other processes. It's simple arithmetic.

I was also clear that unified memory has other advantages over zero copy; I'm not arguing that it is equivalent or as performant. I never did that.
Same as on Intel: it loads the app into RAM, and additionally into GPU VRAM. Simple math - in either case, less RAM is available for other stuff. On Apple Silicon it's unified, so it has the other benefits highlighted above.
 