
toke lahti (Original poster)
Anybody seen anything like this?

I should order my new mbp very soon now.
Delivery dates are slipping and I need to do some color grading with it in December.

There are literally tons of benchmarking videos on YouTube, but so far none that I have found compares how GPU performance is affected when there are TWO hi-res monitors attached (something like 4K to 6K).

I find this quite amazing. Maybe "Mac people" aren't so accustomed to testing GPUs?

The thing here is that, since the RAM upgrade is the most expensive part of an Apple laptop, I don't want to waste money if it's not necessary.

But since GPUs (normally) need more memory when they are pushing more pixels, the M-series chips might steal the vast majority of the RAM if they are pushing something like, let's say, 24M pixels.

Do we even know how much RAM the GPU part can use?
All of it?

How do the CPU & GPU fight over (prioritize) the RAM if it's all in use?
What is left for the apps?
 
I was testing my base 14" video output.

HDMI 2.0 to an LG CX OLED (a 2.1 port), doing a 120Hz test via Chrome at 1440p120 HDR.
iStat Menus reported 49.4% VRAM used at one point; the rest of the time it was around 35% or less.

Then I had an ASUS VG289Q also plugged directly into the MBP's HDMI port, pushing 4K60p HDR for an hour or two; it never exceeded 30%.

I'm unsure what this % is based on: is it the entire pool of physical memory on the machine (i.e. 16GB), or an arbitrarily allocated VRAM pool left after reserving some GBs for the system?
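What that percentage is measured against isn't documented here, but macOS itself reports a suggested GPU memory budget and the GPU's current allocation through Metal. A minimal Swift sketch (purely illustrative, not tied to how iStat Menus computes its numbers) that prints both on Apple silicon:

```swift
import Metal

// Minimal sketch: ask Metal what it considers the GPU memory budget on this machine.
// On Apple silicon this budget is carved out of the single unified RAM pool,
// not a separate VRAM chip.
if let device = MTLCreateSystemDefaultDevice() {
    let gib = 1024.0 * 1024.0 * 1024.0
    print("GPU:", device.name)
    print("Unified memory:", device.hasUnifiedMemory)
    // OS-suggested upper bound for GPU allocations (typically well below total RAM).
    print("Recommended max working set (GiB):",
          Double(device.recommendedMaxWorkingSetSize) / gib)
    // What the GPU actually has allocated right now (framebuffers, textures, etc.).
    print("Currently allocated (GiB):",
          Double(device.currentAllocatedSize) / gib)
}
```

On a 16GB machine the reported working-set limit is typically much closer to the "10GB of VRAM" figure mentioned later in this thread than to the full 16GB.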
 
I was testing my base 14" video output.

HDMI 2.0 to an LG CX OLED (a 2.1 port), doing a 120Hz test via Chrome at 1440p120 HDR.
iStat Menus reported 49.4% VRAM used at one point; the rest of the time it was around 35% or less.

Then I had an ASUS VG289Q also plugged directly into the MBP's HDMI port, pushing 4K60p HDR for an hour or two; it never exceeded 30%.

I'm unsure what this % is based on: is it the entire pool of physical memory on the machine (i.e. 16GB), or an arbitrarily allocated VRAM pool left after reserving some GBs for the system?
4K60 HDR = more than 8-bit color depth?
Going through HDMI means the color might be subsampled (4:2:2 or 4:2:0).
Does this mean the GPU needs even more RAM? First it needs to hold the 4:4:4 picture in RAM, then convert it and send the converted signal to the monitor?
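Whether that conversion costs extra memory depends on where it happens (the display hardware may do it during scanout), but even in the worst case an extra full-frame buffer at 4K is tens of megabytes, not gigabytes. A rough byte count in Swift, using the standard packing sizes for each format (illustrative arithmetic only, nothing measured on a Mac):

```swift
// Rough size of one 4K frame in different pixel formats (illustrative arithmetic only).
let pixels = 3_840 * 2_160                       // 8,294,400 pixels

let rgb10_444   = pixels * 4                     // 10-bit RGB 4:4:4 packed into 32 bits/px, ~33 MB
let rgba16_444  = pixels * 8                     // 16-bit-per-channel working buffer, ~66 MB
let ycbcr10_422 = pixels * 20 / 8                // 10-bit YCbCr 4:2:2 averages 20 bits/px, ~21 MB

for (label, bytes) in [("RGB10 4:4:4  ", rgb10_444),
                       ("RGBA16 4:4:4 ", rgba16_444),
                       ("YCbCr10 4:2:2", ycbcr10_422)] {
    print(label, "~", bytes / 1_000_000, "MB per frame")
}
```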

If you really want to test something, you should run some tests with the monitor.

E.g. run a standard Blender test (or any other) WITHOUT the external monitor AND then WITH the monitor connected.
It would also be very interesting to know whether there's a big difference between your LG & ASUS, and between TB/DP vs. HDMI.
 
4K60 HDR = more than 8-bit color depth?
Going through HDMI means the color might be subsampled (4:2:2 or 4:2:0).
Does this mean the GPU needs even more RAM? First it needs to hold the 4:4:4 picture in RAM, then convert it and send the converted signal to the monitor?

If you really want to test something, you should run some tests with the monitor.

E.g. run a standard Blender test (or any other) WITHOUT the external monitor AND then WITH the monitor connected.
It would also be very interesting to know whether there's a big difference between your LG & ASUS, and between TB/DP vs. HDMI.
It was 4:2:2, though Monterey's System Profiler doesn't list these details as thoroughly as previous macOS versions did; I wonder why.
I actually did my "test" before reading your post, and thought I'd chime in since iStat Menus keeps these stats for 30 days. I myself wasn't concerned about memory usage at all; I was just interested to see whether the 120Hz behaviour of the LG OLED and the MBP's internal screen is different.

I guess you may be able to find precise answers in 3D- or video-centric forums, since the general MR audience probably never comes close to running out of VRAM.
 
But since GPUs (normally) need more memory when they are pushing more pixels, the M-series chips might steal the vast majority of the RAM if they are pushing something like, let's say, 24M pixels.

Do we even know how much RAM the GPU part can use?
All of it?

How do the CPU & GPU fight over (prioritize) the RAM if it's all in use?
What is left for the apps?

A full 8k 32-bit framebuffer takes around 120MB - that’s just a tiny fraction of the RAM these machines have. Refreshing such a framebuffer at 120fps requires 14 GB/s of memory bandwidth, much less in practice thanks to framebuffer compression and other optimization techniques. A 4K display requires 1/4 of this. Why do you think driving multiple high-res displays will be any issue for these chips?

Regarding your last question: work-critical RAM (such as framebuffer memory) is usually pinned so that it cannot be displaced by the apps. The memory controller likely has QoS features to prioritize display engine traffic for smooth display output.
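For reference, here is the framebuffer arithmetic from the first paragraph spelled out in Swift; the in-post figures are rounded, and the exact values land a bit higher but in the same range:

```swift
// Framebuffer size and scanout bandwidth for a single display (illustrative arithmetic).
func framebuffer(width: Int, height: Int, bytesPerPixel: Int, fps: Int) -> (mb: Double, gbps: Double) {
    let bytes = Double(width * height * bytesPerPixel)
    return (bytes / 1_000_000, bytes * Double(fps) / 1_000_000_000)
}

let eightK = framebuffer(width: 7_680, height: 4_320, bytesPerPixel: 4, fps: 120)
let fourK  = framebuffer(width: 3_840, height: 2_160, bytesPerPixel: 4, fps: 120)

print("8K 32bpp:", eightK.mb, "MB,", eightK.gbps, "GB/s at 120 fps")  // ~133 MB, ~15.9 GB/s
print("4K 32bpp:", fourK.mb, "MB,", fourK.gbps, "GB/s at 120 fps")    // ~33 MB, ~4.0 GB/s
```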
 
A full 8k 32-bit framebuffer takes around 120MB - that’s just a tiny fraction of the RAM these machines have. Refreshing such a framebuffer at 120fps requires 14 GB/s of memory bandwidth, much less in practice thanks to framebuffer compression and other optimization techniques. A 4K display requires 1/4 of this. Why do you think driving multiple high-res displays will be any issue for these chips?

Regarding your last question: work-critical RAM (such as framebuffer memory) is usually pinned so that it cannot be displaced by the apps. The memory controller likely has QoS features to prioritize display engine traffic for smooth display output.
Hmm, let's assume 10-bit color depth with an alpha channel: 40 * 7680 * 4320 / 8 = 166 MBytes.

Why do I think this matters?
Well, basically because dGPUs tend to have gigabytes of RAM.
Even the workstation ones, which aren't designed to do any gaming.
Why do they need that RAM, but the M1 does not?
 
Hmm, let's assume 10-bit color depth with an alpha channel: 40 * 7680 * 4320 / 8 = 166 MBytes.

Then it's 64 bits per pixel, or roughly 240MB. 4x such monitors with triple buffering would need roughly 3GB of RAM and 112GB/s of RAM bandwidth for 120 fps. That's quite heavy, but still very doable with the M1 Max. Of course, the M1 Max only supports 3x 6K + 1x 4K displays, so the memory requirements are not nearly as steep as what we are discussing here.

My point is simply that these GPUs have more than enough resources to deal with multiple monitors. Of course, doing high-performance rendering on all monitors is something different.
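The same arithmetic for this heavier case, spelled out; the exact figures come out slightly above the rounded estimates (240MB, 3GB, 112GB/s), but the conclusion does not change:

```swift
// Four 8K displays, 64 bits (8 bytes) per pixel, triple-buffered, refreshed at 120 fps.
let px8K = 7_680 * 4_320                                        // 33,177,600 pixels
let frameBytes = Double(px8K * 8)                               // ~265 MB per frame at 64 bpp

let monitors = 4.0, buffers = 3.0, fps = 120.0
let framebufferRAM = frameBytes * monitors * buffers / 1e9      // ~3.2 GB of framebuffers
let scanoutBandwidth = frameBytes * monitors * fps / 1e9        // ~127 GB/s to refresh them

print("Framebuffer RAM:", framebufferRAM, "GB")
print("Scanout bandwidth:", scanoutBandwidth, "GB/s")
```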


Why do I think this matters?
Well, basically because dGPUs tend to have gigabytes of RAM.
Even the workstation ones, which aren't designed to do any gaming.
Why do they need that RAM, but the M1 does not?

M1 GPUs have much more RAM than most dGPUs... M1 Pro starts at 16GB and M1 Max starts at 32GB of GPU-available RAM. You'd need to pay an amount of money equivalent to the entire laptop just to get a dGPU with a comparable amount of RAM.
 
Why do they need that RAM, but the M1 does not?

Because M1 has unified memory. There is no need to make copies of the same textures between system RAM and VRAM anymore. System RAM is also VRAM.

I run Fusion 360 and the app sees my 16GB M1 Pro device as having 10GB of VRAM. That's plenty.
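As an illustration of what "system RAM is also VRAM" means in practice, here is a minimal Swift/Metal sketch (not related to Fusion 360): a buffer created with shared storage is a single allocation in the unified pool that the CPU fills directly and the GPU can then read, with no staging copy into a separate VRAM pool.

```swift
import Metal

// Illustrative sketch: with unified memory, a .storageModeShared buffer is a single
// allocation that the CPU writes and the GPU reads; there is no "upload to VRAM" copy
// like on a discrete GPU.
if let device = MTLCreateSystemDefaultDevice(), device.hasUnifiedMemory {
    let count = 1_000_000
    let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!
    let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
    for i in 0..<count { values[i] = Float(i) }   // CPU fills the buffer in place
    // A compute or render pass can bind `buffer` directly and read the same bytes.
    print("Shared buffer of \(buffer.length) bytes lives in unified RAM")
}
```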
 
Because M1 has unified memory. There is no need to make copies of the same textures between system RAM and VRAM anymore. System RAM is also VRAM.

I run Fusion 360 and the app sees my 16GB M1 Pro device as having 10GB of VRAM. That's plenty.
Does the app use more than 6GB of CPU RAM?
 
Bottom line here is:
If the GPU does not use a lot of unified RAM when doing complex and heavy tasks like 3D & rendering, etc., then you don't need 32GB or 64GB of RAM. Ever. Still, Apple sells those models to the 0.1% who need that much for running a dozen virtual machines or scientific calculations. And to the 10% who will never use it but will buy it "just in case".
 
There is a whole thread dedicated to RAM. I'm not going to rehash all the arguments. Most people don't need more than 16GB. Some people do. Those people already know who they are.

I'm running (1) 4k@60 LG UltraFine at scaled resolution connected to my M1 Pro. I have another one on the way, so I'll let you know what system performance looks like when both are running. While running one of them, GPU load average is like 15%. Memory usage by the GPU is like 8% (out of 32 GB). I really haven't looked to compare with the built-in display, or other configurations. Just looking at what it is right now.
 
Bottom line here is:
If the GPU does not use a lot of unified RAM when doing complex and heavy tasks like 3D & rendering, etc., then you don't need 32GB or 64GB of RAM. Ever. Still, Apple sells those models to the 0.1% who need that much for running a dozen virtual machines or scientific calculations. And to the 10% who will never use it but will buy it "just in case".

I don't think this is true. I think my usage is light enough (simple 3D modeling on a single 5K display) not to trigger extra memory use, but I have seen swap kick in when I do certain more intensive tasks, like... stitching 100 photos into a single 2-gigapixel panorama. It's not a lot of swap (3GB), but it's still exceeding 16GB.

It's easy to see that if I were to do this on a regular basis, I should just get 32GB for peace of mind. Also, if I'm leaving 10+ tabs open in Safari/Chrome all the time, then I'd want 32GB of RAM over 16GB as well.

The fact that the GPU doesn't need its own dedicated VRAM does not detract from the other fact that... apps are still resource-hungry on their own. Web browsers in particular are very heavy RAM hogs these days.
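If you want to confirm that a workload really is spilling into swap rather than guessing from sluggishness, macOS exposes the same counters that "sysctl vm.swapusage" prints on the command line. A small Swift sketch that reads them:

```swift
import Darwin

// Read macOS swap usage (the same numbers "sysctl vm.swapusage" prints).
// Handy for confirming a workload really did push past physical RAM.
var swap = xsw_usage()
var size = MemoryLayout<xsw_usage>.size
if sysctlbyname("vm.swapusage", &swap, &size, nil, 0) == 0 {
    let gib = 1024.0 * 1024.0 * 1024.0
    print("Swap total:", Double(swap.xsu_total) / gib, "GiB")
    print("Swap used: ", Double(swap.xsu_used) / gib, "GiB")
}
```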
 
I don't think this is true. I think my usage is light enough (simple 3D modeling on a single 5K display) not to trigger extra memory use, but I have seen swap kick in when I do certain more intensive tasks, like... stitching 100 photos into a single 2-gigapixel panorama. It's not a lot of swap (3GB), but it's still exceeding 16GB.

It's easy to see that if I were to do this on a regular basis, I should just get 32GB for peace of mind. Also, if I'm leaving 10+ tabs open in Safari/Chrome all the time, then I'd want 32GB of RAM over 16GB as well.

The fact that the GPU doesn't need its own dedicated VRAM does not detract from the other fact that... apps are still resource-hungry on their own. Web browsers in particular are very heavy RAM hogs these days.
Okay: 16GB of unified RAM, one 5K monitor attached, some stitching app, and the needed RAM was 19GB.
That tells us something, but it's not very precise.
If you ran the exact same test with no external monitor, we'd have a comparable result.

I do understand that the apps need most of the RAM. But there's really no public information on how much unified RAM the M-series GPU needs or uses for optimal or maximum productivity.

The Windows version of AutoCAD 2020 lists 1GB of GPU RAM as the minimum and 4GB as recommended.
For macOS there's no such spec.
 
There is a whole thread dedicated to RAM. I'm not going to rehash all the arguments. Most people don't need more than 16GB. Some people do. Those people already know who they are.

I'm running (1) 4k@60 LG UltraFine at scaled resolution connected to my M1 Pro. I have another one on the way, so I'll let you know what system performance looks like when both are running. While running one of them, GPU load average is like 15%. Memory usage by the GPU is like 8% (out of 32 GB). I really haven't looked to compare with the built-in display, or other configurations. Just looking at what it is right now.
8% of 32GB is a significant amount: 2.5GB. Two of those monitors probably wouldn't take 5GB, would they?
But if they did, you'd suddenly be in a situation where your CPU & OS have only 11GB of that 16GB left for the apps.
When will you get the second UltraFine?

This is a bit like it was with the cheapest iOS devices, where the OS itself eats a big portion of the storage and that isn't advertised in big letters.

So far there's very little reliable data, but it looks like I really have to pay for that +16GB of RAM.
 
Okay: 16GB of unified RAM, one 5K monitor attached, some stitching app, and the needed RAM was 19GB.
That tells us something, but it's not very precise.
If you ran the exact same test with no external monitor, we'd have a comparable result.

It's the same RAM usage when unplugged.

I.e.: the external monitor isn't using more RAM. That's what we have been trying to tell you. Apps' RAM usage will trump the monitor(s).

Let me make it a bit clearer: you won't need more RAM to run more external monitors if your apps don't require more RAM. But it's also not true that 32GB and 64GB are never necessary: there are apps that can exceed 16GB of RAM that are not virtual machines or scientific use cases.

I have seen people showing Photoshop taking up 8GB or more on their machines all by itself.

Edit: someone actually has hard numbers to show for my specific use case (panorama stitching):

As I suspected, this is one use case where more RAM actually is needed if I want to finish the job faster. Since I don't stitch photos all the time, I'm okay without the extra RAM. But this is definitely one use case where more RAM is needed.
 
8% of 32GB is a significant amount: 2.5GB. Two of those monitors probably wouldn't take 5GB, would they?
But if they did, you'd suddenly be in a situation where your CPU & OS have only 11GB of that 16GB left for the apps.
When will you get the second UltraFine?

This is a bit like it was with the cheapest iOS devices, where the OS itself eats a big portion of the storage and that isn't advertised in big letters.

So far there's very little reliable data, but it looks like I really have to pay for that +16GB of RAM.
That's not how RAM works at all. For someone who seems hard up for "data" and super critical of everything people have tried to show you, you don't seem to know the basics either.
 
That's not how RAM works at all. For someone who seems hard up for "data" and super critical of everything people have tried to show you, you don't seem to know the basics either.
Let's continue with the basics for a bit, then.

Why does a slightly more complicated app list a GPU with at least 1GB of VRAM in its hardware requirements?
How much VRAM does 3D animation software use if it's showing the work on two 4K monitors (+ maybe the internal display)?

I should test Cinema 4D with my new MBP.
Are you really sure that it uses the same amount of RAM for both the CPU & the GPU, regardless of what it displays?

Is Blender the same?
And AutoCAD?

If you know, why don't you tell us?

You said that in one of your use cases the GPU used 8% of your RAM. Did it use it or not?
 
It's the same RAM usage when unplugged.

I.e.: the external monitor isn't using more RAM. That's what we have been trying to tell you. Apps' RAM usage will trump the monitor(s).

Let me make it a bit clearer: you won't need more RAM to run more external monitors if your apps don't require more RAM.
Well, let's say you are designing a 3D object; you need to rotate the object while you are drawing it. AFAIK, modeling software uses 3D acceleration, same as games. After drawing the object, you want to animate it and add camera movement and lighting. All this time the display is showing what you are doing without needing any more RAM than what the framebuffer needs?
Metal does not use any caches and just computes everything without RAM?
 
: "There is no rule of thumb to estimate how much memory we need. We cannot draw a parallel between the project file size and the graphics card memory usage. Archicad stores the model information of the objects' parameters, and not by the coordinates of the polygons. Huge files that have a lot of parameters might occupy large amount of memory. And when the parameters of some geometry objects are set to large resolution or segmentation it adds an extra burden. Also, there might be a model that itself is only just a few kilobytes, but its parameters would use up a big portion from the graphic card memory in ultra high resolution with lots of large textures triggering slowness. We recommend 2GB of dedicated memory for small to average-sized projects, but if your work involves UHD resolution monitors with large complex models, your graphics card dedicated memory should be at least double."
 
Well, let's say you are designing a 3D object; you need to rotate the object while you are drawing it. AFAIK, modeling software uses 3D acceleration, same as games. After drawing the object, you want to animate it and add camera movement and lighting. All this time the display is showing what you are doing without needing any more RAM than what the framebuffer needs?
Metal does not use any caches and just computes everything without RAM?

Yeah, that is how it works. It's not like the software can "predict" one whole second ahead of what I am actually doing in the software. Each frame is rendered as soon as my action takes place.

Most of the RAM will be used to store texture data, not for displaying. Displaying doesn't take that much RAM. The PS3 only had 256MB of VRAM (not unified memory) and it could do 1080p just fine. 4K is simply 4x that, so at worst maybe you'll need 1GB of VRAM for 4K, but not more. Two of these displays may take up to 2GB, but that's not a lot compared to 16GB.
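To put a number on "most of the RAM is texture data": a single 4096x4096 RGBA8 texture with a full mip chain is roughly 85 MiB, so a scene with a few dozen such textures dwarfs the framebuffers no matter how many monitors are attached. Rough arithmetic in Swift (illustrative; real apps also use compressed texture formats, which shrink this considerably):

```swift
// Memory for one 4096x4096 RGBA8 texture, with and without a full mipmap chain.
let side = 4_096
let baseBytes = side * side * 4                      // 64 MiB for the base level
let withMips = Double(baseBytes) * 4.0 / 3.0         // a full mip chain adds ~1/3, ~85 MiB

print("Base level:", baseBytes / (1 << 20), "MiB")
print("With mips: ", Int(withMips) / (1 << 20), "MiB")
```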
 
Yeah, that is how it works. It's not like the software can "predict" one whole second ahead of what I am actually doing in the software. Each frame is rendered as soon as my action takes place.

Most of the RAM will be used to store texture data, not for displaying. Displaying doesn't take that much RAM. The PS3 only had 256MB of VRAM (not unified memory) and it could do 1080p just fine. 4K is simply 4x that, so at worst maybe you'll need 1GB of VRAM for 4K, but not more. Two of these displays may take up to 2GB, but that's not a lot compared to 16GB.
Surely the amount of VRAM used for storing textures depends on the size of the "viewing window"?
Texture data is stored for displaying; what other use can there be for it?

I really need to find a "how GPUs use memory" 101 article somewhere to understand this.
I never studied this subject deeply.
But in the '90s, when "accelerated" graphics cards were introduced, I remember that the "acceleration" was about what the GPU could do _in addition_ to just being a framebuffer. And there were features for both 2D and 3D acceleration.

I have always thought that "predicting" is one of the most important things in 3D graphics, i.e. the engine keeps the polygons that are most likely to be used in the near future in memory, so it doesn't have to compute them again and again.

What is the reason 2D/3D design apps need workstation-grade GPUs? Surely it's not just about framebuffering?
 