
majormike

macrumors regular
Original poster
May 15, 2012
113
42
Hi,

How much RAM does the M1 take up from the memory pool to run the GPU?

I want to get it for music production and I'd be happy with 16 gigs, but I'm unsure as to how much memory the GPU uses, especially with an external display.
 

warp9

macrumors 6502
Jun 8, 2017
450
641
It only requires a few megabytes to run a display, but GPUs are also used for machine learning, pixel arrays, and specialized calculations. It really depends on the software you are using.

Look around in the prefs and see if you can spot GPU settings. For example, Photoshop can make heavy use of the GPU, but that use can also be disabled.
 
  • Like
Reactions: armoured

Wizec

macrumors 6502a
Jun 30, 2019
680
778
“For gaming at 1080p at high to very high graphics settings with AA turned on, you will need 4GB to 6GB video memory”


If you’re not gaming, then likely only a few hundred MB, depending on your monitor’s resolution and color depth. Once you get to 4K and millions of colors it can get much higher though, even for tasks like web browsing:

“The only potential issue we found was during the 4k web browsing tests where our logging showed the video cards using as much as 1,260MB of video memory. To put this into perspective, this is roughly 3 times as much video memory that would have been used if we performed the same tasks on a 1080p monitor.”
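(For scale: a single 4K framebuffer is 3840 × 2160 pixels × 4 bytes ≈ 33MB, and the system typically keeps more than one per display - plus larger intermediate buffers in scaled modes - so a few hundred MB at 4K is entirely plausible.)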

 
Last edited:
  • Like
Reactions: Tagbert

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Hi,

How much RAM does the M1 take up from the memory pool to run the GPU?

I want to get it for music production and I'd be happy with 16 gigs, but I'm unsure as to how much memory the GPU uses, especially with an external display.

I wouldn’t worry about this. All systems reserve some memory to deal with basic functions; the M1 is not too different from Intel machines in this regard. The GPU will not cannibalize your RAM. Not to mention that your music production software likely uses the GPU to do processing.
 

Modernape

macrumors regular
Jun 21, 2010
232
42
The M1 GPU does not have allocated RAM like the Intel machines, where the GPU would have perhaps 1.5GB set aside that the CPU couldn't therefore use. If you're not using the GPU that hard, then you'll have most of the RAM available for the CPU.
 
  • Like
Reactions: mds1256 and LeeW

leman

macrumors Core
Oct 14, 2008
19,521
19,678
The M1 GPU does not have allocated RAM like the Intel machines, where the GPU would have perhaps 1.5GB set aside that the CPU couldn't therefore use. If you're not using the GPU that hard, then you'll have most of the RAM available for the CPU.

How do you know this?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678

How is that supposed to support the claim you are making? What I am asking is - do you have any factual evidence, or a reference to a source presenting such factual evidence that GPU memory allocation works differently on Intel and Apple GPUs? Both use unified memory architecture with last level cache shared between CPU and GPU. M1 definitely reserves some memory for GPU use, although I am unsure how much.
 

Modernape

macrumors regular
Jun 21, 2010
232
42
How is that supposed to support the claim you are making? What I am asking is - do you have any factual evidence, or a reference to a source presenting such factual evidence that GPU memory allocation works differently on Intel and Apple GPUs? Both use unified memory architecture with last level cache shared between CPU and GPU. M1 definitely reserves some memory for GPU use, although I am unsure how much.
Wow, you must have read through those articles quickly. Oh, wait. You didn't read them, did you?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Wow, you must have read through those articles quickly. Oh, wait. You didn't read them, did you?

I am quite sure I read the most relevant of them. I’ve also been doing technical analysis and low-level benchmarking of the M1 GPU since I got my unit in December. That’s also the reason I am asking; maybe there is some new technical information out there that I am not aware of yet.
 

chabig

macrumors G4
Sep 6, 2002
11,450
9,321
Your question is impossible to answer. It's like asking how much memory TextEdit uses. It all depends on what you're doing. The CPU and GPU share the RAM according to their needs, which changes from moment to moment.

Here is an Apple reference:


Quote:
Building everything into one chip gives the system a unified memory architecture.

This means that the GPU and CPU are working over the same memory. Graphics resources, such as textures, images and geometry data, can be shared between the CPU and GPU efficiently, with no overhead, as there's no need to copy data across a PCIe bus.
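To make that concrete, here is a minimal Metal sketch (Swift, assuming an Apple Silicon Mac; the kernel is compiled from a source string just to keep it self-contained). The CPU fills a shared buffer, the GPU transforms it in place, and the CPU reads the result back through the very same pointer, with no copy anywhere:

Code:
import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Tiny compute kernel, compiled at runtime so the sketch is self-contained.
let library = try! device.makeLibrary(source: """
    kernel void doubler(device float *data [[buffer(0)]],
                        uint id [[thread_position_in_grid]]) {
        data[id] *= 2.0f;
    }
    """, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "doubler")!)

// One allocation, visible to CPU and GPU alike -- no separate VRAM partition.
let count = 4
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }   // CPU writes...

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(buffer, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: count, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// ...and reads the GPU's results through the same pointer: [0.0, 2.0, 4.0, 6.0]
print((0..<count).map { values[$0] })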
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
I am quite sure I read the most relevant of them. I’ve also been doing technical analysis and low-level benchmarking of the M1 GPU since I got my unit in December. That’s also the reason I am asking; maybe there is some new technical information out there that I am not aware of yet.

Intel desktop and laptop chips do not use UMA; they use shared memory. On Intel, the GPU has to be allocated a partitioned chunk of RAM to be used. The CPU cannot access the partition allocated to the GPU and vice versa. So you still need to copy data between the partitions.

The whole difference between unified memory and shared memory is that lack of partitioning. The GPU and CPU can access the same block of memory. Compared to Intel it provides two specific benefits:

1. Without having to pre-allocate RAM to be used as video memory, you don’t have to deal with specific limits to video memory (the 1.5GB mentioned earlier), and can more easily balance between CPU and GPU demand.
2. Not needing to copy buffers means some measurable RAM savings, less pressure on memory bandwidth, and a bit of a latency boost.

These benefits are a reason why game consoles use UMA. The memory and latency savings help keep costs down on the GDDR they need.
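Point 1 is visible directly in Metal's API. A hedged sketch (Swift, assuming a unified-memory Mac): the GPU can even adopt pages the CPU has already allocated, rather than requiring a carve-out of dedicated video memory:

Code:
import Metal
import Darwin

let device = MTLCreateSystemDefaultDevice()!

// An ordinary, page-aligned CPU allocation (bytesNoCopy requires page alignment).
let pageSize = Int(getpagesize())
var raw: UnsafeMutableRawPointer?
posix_memalign(&raw, pageSize, pageSize)
let floats = raw!.bindMemory(to: Float.self, capacity: pageSize / MemoryLayout<Float>.stride)
floats[0] = 42.0   // plain CPU-side write

// Metal adopts those same pages as a GPU-visible buffer: no new allocation,
// no memcpy, no fixed partition of "video memory".
let buffer = device.makeBuffer(bytesNoCopy: raw!,
                               length: pageSize,
                               options: .storageModeShared,
                               deallocator: { ptr, _ in free(ptr) })!
// Any kernel bound to `buffer` now works directly on the CPU's pages (point 2).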
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Intel desktop and laptop chips do not use UMA; they use shared memory. On Intel, the GPU has to be allocated a partitioned chunk of RAM to be used. The CPU cannot access the partition allocated to the GPU and vice versa. So you still need to copy data between the partitions.

Intel documentation disagrees with you.


As far as I recall, Intel has been using unified memory since at least Sandy Bridge, maybe earlier. There might be some restrictions, not 100% sure.
 
Last edited:
  • Like
Reactions: AAPLGeek

theluggage

macrumors G3
Jul 29, 2011
8,015
8,449
I want to get it for music production and I'd be happy with 16 gigs, but I'm unsure as to how much memory the GPU uses, especially with an external display.
Unless anybody reports back with memory pressure readings from a 16GB M1 machine running something close to your intended workload - anybody's guess. "Music production" is a piece of string to start with (RAM usage depends entirely on what sort of virtual instruments and plug-ins you're using). Some Music apps have quite elaborate UIs, and running a couple of high-res displays will need more RAM allocated to video - especially if you're using scaled modes where everything is rendered to an internal buffer and then downsampled.

Odds are, an M1 will not only do your job, but do it faster because it is all-round more efficient and what you lose on the roundabouts, you gain on the swings - and there have been plenty of YouTube demos showing it running a shedload of Logic Pro tracks & instruments.

However, the safe assumption is that if your workflow actually needed more than 16GB on Intel then it will at least benefit from more than 16GB on Apple Silicon, and it would be best to wait for the higher-end Apple Silicon systems to come out. Even if an M1 can currently outperform a high-end Intel iMac or 16" MBP, in six months' time it's going to be getting sand kicked in its face - this strange hiatus where the entry-level Macs apparently out-perform the more expensive ones won't last for long - Apple can't afford for it to go on or it's going to hit higher-end Mac sales.

That said, you need to be sure that you really do need all the RAM on your Intel Mac in the first place (look at Memory Pressure).
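(If you want to watch that signal programmatically rather than eyeballing Activity Monitor, here's a sketch using the system's memory-pressure dispatch source:)

Code:
import Dispatch

// Subscribe to the same memory-pressure events Activity Monitor reflects,
// rather than guessing from "memory used".
let source = DispatchSource.makeMemoryPressureSource(eventMask: [.warning, .critical],
                                                     queue: .main)
source.setEventHandler {
    print("Memory pressure event:", source.data)   // .warning or .critical
}
source.resume()
dispatchMain()   // keep the process alive to observe events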

Video RAM-wise, the M1 is almost certain to be better than a MacBook or Mini with Intel integrated graphics. Vs. an iMac with a discrete 8GB+ GPU, it is harder to tell.

(Also, you need to carefully check whether all the plug-ins, drivers etc, you need are compatible with Big Sur yet, let alone the M1...)

Wow, you must have read through those articles quickly. Oh, wait. You didn't read them did you.

Telling someone to effectively "go Google" isn't particularly helpful when the Internet is swimming with bogus information and unfounded speculation. Everything I've seen from Apple has been extremely vague, more marketing than technical info, and boiled down to "Unified Memory is faster because data doesn't have to be copied between devices" which says nothing about how RAM is allocated. All you get with a Google search is lots of tech sites speculating on the same limited Apple data. The possibility that the equivalent VRAM would be allocated "on demand" is a very plausible speculation - but unless someone can point to the Apple document that details that, it is speculation.

Reality seems to be that Unified Memory is more efficient - but how more efficient is hard to test, and hard to isolate from the other performance gains of the M1 (...which might look much less impressive when higher-end Apple Silicon Macs appear).

Lots of the YouTube stuff seems to come from people who don't understand the difference between "Memory Used" and "Memory Pressure" or "Swap used" and swap rate - or are looking for a RAM-related speedup on workflows that don't strain the RAM on an Intel system...
 

mi7chy

macrumors G4
Oct 24, 2014
10,623
11,296
The whole difference between unified memory and shared memory is that lack of partitioning. The GPU and CPU can access the same block of memory. Compared to Intel it provides two specific benefits:

1. Without having to pre-allocate RAM to be used as video memory, you don’t have to deal with specific limits to video memory (the 1.5GB mentioned earlier), and can more easily balance between CPU and GPU demand.
2. Not needing to copy buffers means some measurable RAM savings, less pressure on memory bandwidth, and a bit of a latency boost.

Intel says otherwise, with dynamic allocation and zero-copy buffers. My BS ignore list is getting bigger.

https://www.intel.com/content/www/us/en/support/articles/000020962/graphics.html

https://software.intel.com/content/...uffer-copies-on-intel-processor-graphics.html
 
Last edited:

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Think this could help: https://developer.apple.com/videos/play/wwdc2020/10632/

This explains the method of rendering common to mobile, consoles, and low-power GPUs, and why you need less memory than dGPUs do. Tile-based rendering has been a way to handle 3D in smaller tiles and scan out changes, etc. It's been done for years on consoles as well as newer tablets (iPad).

Given that, it isn't cut and dried how much memory you use for textures and rendering. It's different from how dGPUs work with framebuffers.

 

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
How is that supposed to support the claim you are making? What I am asking is - do you have any factual evidence, or a reference to a source presenting such factual evidence that GPU memory allocation works differently on Intel and Apple GPUs? Both use unified memory architecture with last level cache shared between CPU and GPU. M1 definitely reserves some memory for GPU use, although I am unsure how much.

x86 does NOT use unified memory for system RAM. UMA refers to the RAM setup in the system, not CPU cache. With the x86 platform, the system partitions the RAM into a CPU and iGPU section. For the iGPU, the system usually allocates around 2GB for GPU operations, meaning that on an 8GB system only 6GB are available for the CPU. For operations where the data has to be manipulated by both the CPU and iGPU, it is copied twice across the system bus into RAM (once per partition), then the system has to reconcile the two sets of data once passed back from RAM, which adds additional processing time. With the UMA setup the M1 uses, both the CPU and iGPU can access the full system RAM simultaneously (i.e., there is no partitioning of the RAM between CPU and GPU.) This means that data is only copied to RAM once, and since all operations can happen simultaneously, there is no overhead associated with reconciling two versions of the same data once passed back to the CPU from RAM.
 
  • Like
Reactions: armoured and chabig

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
Intel documentation disagrees with you.


As far as I recall, Intel has been using unified memory since at least Sandy Bridge, maybe earlier. There might be some restrictions, not 100% sure.

One thing you have to realize is that Intel's use of the term "UMA" is misleading. For Intel's purposes, they just renamed Intel HD to UMA, but made no changes to the underlying architecture. On the other hand, Apple's approach is essentially what AMD has been trying to do for years with the development of their Infinity Fabric technology for Ryzen-series CPUs. Here's a relatively simplified explanation of why Apple's approach is not the same as Intel's:

The M1 processor’s memory is a single pool that’s accessible by any portion of the processor. If the system needs more memory for graphics, it can allocate that. If it needs more memory for the Neural Engine, likewise. Even better, because all the aspects of the processor can access all of the system memory, there’s no performance hit when the graphics cores need to access something that was previously being accessed by a processor core. On other systems, the data has to be copied from one portion of memory to another—but on the M1, it’s just instantly accessible.

Even the mention of "allocation" above is misleading and an artifact of the x86 platform. Since Apple's UMA does not partition RAM between GPU and CPU, there is no actual allocation of RAM between the CPU and GPU.

 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
One thing you have to realize is that Intel's use of the term "UMA" is misleading. For Intel's purposes, they just renamed Intel HD to UMA, but made no changes to the underlying architecture.

I’ll be honest, leman and mi7chy (despite the brash attitude) do point to good sources that Intel does at least support zero copy and dynamic partition sizing.

Even if it’s not true UMA as you claim, it looks like from the docs I’ve read through so far, it’s at least able to dedicate memory pages to be used for zero-copy, which I expect does some interesting tricks to make the same RAM page available to both sides.

The question I have which I’m hoping the docs will answer once I get more time is how those pages are handled in more detail, and what sort of integration the OS has to do to take best advantage of this. But even with that answer, if the OS APIs have to signal to the GPU how to manage the pages, how much optimization has Apple done there?

Intel documentation disagrees with you.


As far as I recall, Intel has been using unified memory since at least Sandy Bridge, maybe earlier. There might be some restrictions, not 100% sure.

Welp, my understanding of Intel’s GPU architecture is proven to be out of date. Indeed, there shouldn’t be huge differences in that case.

My understanding was that there was still some fixed partitioning going on, but it looks like Google’s dredging up old articles on this, which led me down the wrong path.

Think this could help: https://developer.apple.com/videos/play/wwdc2020/10632/

This explains the method of rendering common to mobile, consoles, and low-power GPUs, and why you need less memory than dGPUs do. Tile-based rendering has been a way to handle 3D in smaller tiles and scan out changes, etc. It's been done for years on consoles as well as newer tablets (iPad).

Given that, it isn't cut and dried how much memory you use for textures and rendering. It's different from how dGPUs work with framebuffers.

My understanding after re-skimming the video (it’s been a few months since I last watched it) is that this doesn’t necessarily impact the amount of video memory needed all that much, but rather the pressure placed on memory bandwidth.

I still need X MB for a texture of a given size, and X MB for the frame buffer in either design. However, TBDR reduces how often you need to reach out to (V)RAM, especially in situations where you need to make multiple passes. It *might* reduce intermediate buffers a little, but that assumes intermediate buffers are a noticeable contribution compared to the other buffers in use. My understanding was that the back buffer itself was used as the intermediate buffer, so I am a bit skeptical that there are big gains to be had there. Draw-to-texture seems to be common these days, so there might be more than I expect, assuming these scenarios can all be done at the tile level, rather than on a texture.
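The one clear-cut saving I do know of is transient attachments: on Apple's TBDR GPUs, Metal lets you mark a render target as memoryless, so it lives purely in on-chip tile memory and never gets a system-RAM allocation at all. A sketch (Swift, Apple GPU assumed):

Code:
import Metal

let device = MTLCreateSystemDefaultDevice()!

// A depth buffer used only within a single render pass: on Apple GPUs it
// can be .memoryless, i.e. held in tile memory with no RAM backing at all.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                    width: 3840, height: 2160,
                                                    mipmapped: false)
desc.usage = .renderTarget
desc.storageMode = .memoryless   // ~33 MB saved vs. a regular allocation

let transientDepth = device.makeTexture(descriptor: desc)!

let pass = MTLRenderPassDescriptor()
pass.depthAttachment.texture = transientDepth
pass.depthAttachment.loadAction = .clear
pass.depthAttachment.storeAction = .dontCare   // tile-only: never flushed to RAM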
 
Last edited:

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
Intel still partitions the CPU and GPU RAM, so you have to copy data from the CPU side to the GPU side and vice versa (the first passage below describes Apple's approach; the second describes Intel's):

Let’s dig into the last point, the on-chip memory. With the M1, this is also part of the SoC. The memory in the M1 is what is described as a ‘unified memory architecture’ (UMA) that allows the CPU, GPU, and other cores to exchange information between one another, and with unified memory, the CPU and GPU can access memory simultaneously rather than copying data between one area and another. Erik continues…

“For a long time, budget computer systems have had the CPU and GPU integrated into the same chip (same silicon die). In the past saying ‘integrated graphics’ was essentially the same as saying ‘slow graphics’. These were slow for several reasons:
Separate areas of this memory got reserved for the CPU and GPU. If the CPU had a chunk of data it wanted the GPU to use, it couldn’t say “here have some of my memory.” No, the CPU had to explicitly copy the whole chunk of data over the memory area controlled by the GPU.”


The approach described for "budget computer systems" is what Intel uses to this day for integrated graphics. That has not changed regardless of what Intel calls their architecture.



Johnny Srouji addressed this directly during WWDC:

"access the same data without copying it between multiple pools of memory"

 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Welp, my understanding of Intel’s GPU architecture is proven to be out of date. Indeed, there shouldn’t be huge differences in that case.

For sure out of date. I'm even more horrified by the fact they will now sell (to OEMs) discrete versions of the Iris Xe. More and more spreading out of the architecture. Such a mess.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
For sure out of date. I'm even more horrified by the fact they will now sell (to OEMs) discrete versions of the Iris Xe. More and more spreading out of the architecture. Such a mess.


The Iris Xe is nothing more than a rebranded Intel UHD iGPU (which itself was rebranded from Intel HD). It's like they're trying to present a Chevelle as a brand new car just by repainting the exterior...
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
The Iris Xe is nothing more than a rebranded Intel UHD iGPU (which itself was rebranded from Intel HD). It's like they're trying to present a Chevelle as a brand new car just by repainting the exterior...
So true. I can't believe they are even making discrete cards of these for 11th Gen as well.
 

robco74

macrumors 6502a
Nov 22, 2020
509
944
If you do any sort of music creation, RAM should be lower on your list of concerns when considering the new M1 Macs. Big Sur made big driver changes. Check with all your equipment manufacturers first to make sure they support Big Sur, and Apple Silicon. Not all do yet.
 
  • Like
Reactions: duervo