Surely the amount of VRAM used for storing textures depends on the size of the viewing window?
Texture data is stored for display; what other use could there be?

I really need to find a "How the GPU uses memory" 101 article somewhere to understand this.
I never studied this subject deeply.
But in the 90s, when "accelerated" graphics cards were introduced, I remember that the "acceleration" was about what the GPU could do _in addition to_ just being a frame buffer. And there were accelerated features for both 2D and 3D.

I have always thought that "predicting" is one of the most important things in 3D graphics, i.e. the engine keeps the polygons most likely to be needed in the near future in memory, so it doesn't have to compute them again and again.

What is the reason 2D/3D design apps need workstation-grade GPUs? Surely not just framebuffering?

Sure, textures take up space regardless, but on a traditional system without unified memory, the CPU has to read textures into RAM and then have the GPU copy them to VRAM. Textures end up taking space in both RAM and VRAM. This also applies to integrated graphics that don't treat system RAM as unified memory.

An accelerated graphics card simply speeds up calculations. Since 3D calculations were costly on the CPU, having a dedicated GPU to speed them up was all the rage back then. It doesn't have anything to do with VRAM.

As for why workstation-grade GPUs need so much VRAM, it's due to textures and materials. You can't guarantee textures will be optimized the way they are in games, so they inevitably take up a lot more space. It's also not as if an engineer or 3D artist will spend time optimizing the textures themselves, so it's best to throw as much VRAM into the system as possible; that way, even if someone uses a 16K texture in a 3D model, it won't overload VRAM.

Note again the limitation discussed above: without unified memory, you need to mirror textures between VRAM and system RAM, which wastes a lot of both. This is much less of a problem with unified memory. The new MacBooks can treat at least 2/3 of total RAM as VRAM, and there is no need to make copies of textures, so in workloads like these they don't need as much RAM overall.

Say your 3D scene has 10GB of textures. On a traditional system without unified memory, you need roughly 10GB of system RAM plus a bit more than 10GB of VRAM to handle that scene. On the new MacBooks, you just need 10GB of RAM for the same scene.
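As a rough sketch of that accounting (my own illustrative numbers, not measurements from any real system):

```python
# Rough sketch of the double-copy problem described above. On a
# discrete-GPU system, textures are staged in system RAM and then
# copied into VRAM, so the same data occupies memory twice. With
# unified memory, the CPU and GPU share a single copy.

def footprint_discrete_gb(texture_gb):
    """Total memory touched: staging copy in RAM + working copy in VRAM."""
    ram = texture_gb   # CPU reads textures into system RAM
    vram = texture_gb  # GPU keeps its own copy in VRAM
    return ram + vram

def footprint_unified_gb(texture_gb):
    """One shared copy in unified memory."""
    return texture_gb

scene = 10  # GB of textures, as in the example above
print(footprint_discrete_gb(scene))  # 20 (GB across RAM + VRAM combined)
print(footprint_unified_gb(scene))   # 10 (GB total)
```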

As for "predicting", I think you've got it wrong. It's foolish to try to "predict" the user's next move within a few milliseconds; you'd just be wasting power. There is no "prediction" here. The system simply uses a framebuffer to display the current screen, and that framebuffer only takes up as much memory as the screen resolution requires, i.e. not much RAM at all.
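To put a number on "not much": a framebuffer is just width × height × bytes per pixel, times however many buffers the compositor keeps. A sketch, assuming 8-bit RGBA and double buffering (real compositors may keep more buffers or use deeper formats):

```python
# Back-of-the-envelope framebuffer size. Assumes 4 bytes per pixel
# (8-bit RGBA) and double buffering; treat this as a lower-bound sketch.

def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=2):
    return width * height * bytes_per_pixel * buffers / 1024**2

print(round(framebuffer_mb(3840, 2160)))  # ~63 MB for one 4K display
print(round(framebuffer_mb(5120, 2160)))  # ~84 MB for a 5K2K ultrawide
```

Even three 4K displays need well under a gigabyte of framebuffer, which is why a 2GB card can drive them.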

I hope you realize there are GPUs on the market today with 2GB of VRAM that can drive up to three 4K60 monitors while still having enough VRAM left over for textures and other things.
 
Let's continue with the basics for a bit, then.

Why does a somewhat more complicated app list a GPU with at least 1GB of VRAM in its hardware requirements?
How much VRAM does 3D animation software use if it's showing the work on two 4K monitors (+ maybe the internal display)?

I should test Cinema 4D with my new MBP.
Are you really sure it uses the same amount of RAM for both CPU & GPU, regardless of what it displays?

Is Blender the same?
And AutoCAD?

If you know, why don't you tell?

You said that in one of your use cases the GPU used 8% of your RAM. Did it use it or not?
What on earth are you rambling about? No one else seems to know either.
 
Update: I tested with new displays, on an M1 Pro with 32 GB of RAM.

With two 4K@60 displays at scaled resolution:
GPU load is around 14%.
GPU memory use is around 25%.

With built-in display only:
GPU load is around 12%.
GPU memory use is around 14%.
 
I'll do one better: I just connected to both my LG CX and LG 34WK95U at the same time...

And I played StarCraft 2 while running Fusion 360 in the background.

Total memory usage? 10GB. Fan didn't even bother turning on and StarCraft 2 ran beautifully at Medium settings, 4K!
 
"There is no rule of thumb to estimate how much memory we need. We cannot draw a parallel between the project file size and the graphics card memory usage. Archicad stores the model information of the objects' parameters, and not by the coordinates of the polygons. Huge files that have a lot of parameters might occupy large amount of memory. And when the parameters of some geometry objects are set to large resolution or segmentation it adds an extra burden. Also, there might be a model that itself is only just a few kilobytes, but its parameters would use up a big portion from the graphic card memory in ultra high resolution with lots of large textures triggering slowness. We recommend 2GB of dedicated memory for small to average-sized projects, but if your work involves UHD resolution monitors with large complex models, your graphics card dedicated memory should be at least double."
Isn't this pretty much your answer? The amount of VRAM (or RAM, in the case of Apple Silicon) you need is 100% dependent on the complexity of your models. If you are working with higher resolution UHD monitors, that's because you're working with more complex models. Have you looked at how much RAM and VRAM is used in your current workflow?

Are you intending on colour grading on two monitors simultaneously, or are you looking to grade on one monitor, while comparing to images on a second?

A monitor plugged into a M1/Pro/Max mac is not going to have any meaningful impact on RAM usage on its own. As stated previously, a frame buffer is of negligible size. It will only affect RAM usage if you are opening and running more applications or processing more files than you otherwise would without the monitor.

And if you are looking to process multiple files on different monitors and it doesn't work too well once you've bought, I don't see that performing those tasks sequentially rather than in parallel would be much less efficient.
 
Update: I tested with new displays, on an M1 Pro with 32 GB of RAM.

With two 4K@60 displays at scaled resolution:
GPU load is around 14%.
GPU memory use is around 25%.

With built-in display only:
GPU load is around 12%.
GPU memory use is around 14%.
What apps were open? Did you do any power-hungry tasks?

25% here means 8GB?

That would mean 50% of RAM going to the GPU on a 16GB model.

Some of that might be stuff that would have sat in both RAM and VRAM on a traditional system.
Let's say half.
A 16GB M-series Mac would then have the same amount of RAM for the CPU (OS + apps) as 12GB in a traditional system.

Would be nice to know more.
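For what it's worth, that guesswork in code (the 25% figure comes from the test quoted above; the "half would be mirrored" split is a pure assumption of this post, not a measurement):

```python
# Hypothetical comparison: RAM left for the OS + apps on a 16GB
# unified-memory Mac vs. a 12GB traditional system with a discrete GPU.

gpu_gb = 0.25 * 32             # 8 GB used by the GPU in the test above
mirrored_gb = gpu_gb / 2       # assumption: half of that would also sit
                               # in VRAM on a traditional system

unified_cpu_gb = 16 - gpu_gb           # left for OS + apps on a 16GB M-series Mac
traditional_cpu_gb = 12 - mirrored_gb  # effective leftover on a 12GB system

print(unified_cpu_gb, traditional_cpu_gb)  # 8.0 8.0 -> roughly equivalent
```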
 
I'll do one better: I just connected to both my LG CX and LG 34WK95U at the same time...

And I played StarCraft 2 while running Fusion 360 in the background.

Total memory usage? 10GB. Fan didn't even bother turning on and StarCraft 2 ran beautifully at Medium settings, 4K!
Total memory usage of the GPU?
Or both GPU + CPU?

SC2 on one screen and F360 on the other?

Would you say you wasted your money on the 32GB model and would see no slowdown with the 16GB version?
 
Isn't this pretty much your answer? The amount of VRAM (or RAM, in the case of Apple Silicon) you need is 100% dependent on the complexity of your models. If you are working with higher resolution UHD monitors, that's because you're working with more complex models. Have you looked at how much RAM and VRAM is used in your current workflow?

Are you intending on colour grading on two monitors simultaneously, or are you looking to grade on one monitor, while comparing to images on a second?

A monitor plugged into a M1/Pro/Max mac is not going to have any meaningful impact on RAM usage on its own. As stated previously, a frame buffer is of negligible size. It will only affect RAM usage if you are opening and running more applications or processing more files than you otherwise would without the monitor.

And if you are looking to process multiple files on different monitors and it doesn't work too well once you've bought, I don't see that performing those tasks sequentially rather than in parallel would be much less efficient.
I haven't really used Resolve yet, so I'm not sure how I'm going to arrange the desktop.
I'm used to having one big "canvas" and maybe a few small windows of the footage.
Your idea of more, bigger windows for comparison might be nice.

Of course I'm not talking about "plug in the monitor and have a desktop on it".
I'm talking about situations where hi-res monitors are used by heavy-load apps.

We now have one anecdote here that plugging in two 4K monitors increased GPU RAM usage to 25% (of 32GB, i.e. 8GB).
There really does seem to be the effect I envisioned.

I currently have one 5k2k monitor and an old 2.5k Eizo, which will be replaced with something, maybe 5k.
 
Total memory usage of the GPU?
Or both GPU + CPU?

SC2 on one screen and F360 on the other?

Would you say you wasted your money on the 32GB model and would see no slowdown with the 16GB version?

For both GPU and CPU. With unified memory, memory usage is now shared between the CPU and GPU.

I had 32GB of RAM in the M1 Max 16" and now I've downgraded to a 16GB M1 Pro 14". I found the 16" too bulky/thick/heavy and I didn't really need the extra power.

No slowdown at all. Just as I thought, because all I needed was something about 2x faster than the M1.
 
What apps were open? Did you do any power-hungry tasks?

25% here means 8GB?

That would mean 50% of RAM going to the GPU on a 16GB model.

Some of that might be stuff that would have sat in both RAM and VRAM on a traditional system.
Let's say half.
A 16GB M-series Mac would then have the same amount of RAM for the CPU (OS + apps) as 12GB in a traditional system.

Would be nice to know more.
You're making way too many assumptions here. Computer performance does not scale in a linear fashion the way you seem to expect.

The takeaway from my post is that with a baseline comparison between the built-in display and TWO 4K displays @60Hz using scaled resolution (aka pushing it even harder than if they were native)...there is almost ZERO GPU load increase, and at most 4GB of additional RAM being consumed by the GPU.

I'll gladly trade 4 measly GB of RAM to run TWO 4K displays like this. I consider this overall to be a negligible impact on the GPU. Who wouldn't?

If your original concern was "should someone who plans to use multiple high-performance external displays buy the cheapest MacBook Pro", I don't think anyone in any walk of life with any computer knowledge would say YES to that, with or without performance tests.

From my tests? Sure I think you could. But you wouldn't, if you're the kind of user who actually runs multiple 4K displays.
 
Total memory usage of the GPU?
Or both GPU + CPU?

SC2 on one screen and F360 on the other?

Would you say you wasted your money on the 32GB model and would see no slowdown with the 16GB version?
I'm sorry but your posts are absolutely ridiculous. I didn't buy a 32 GB model because of displays.
 
If your original concern was "should someone who plans to use multiple high-performance external displays buy the cheapest MacBook Pro", I don't think anyone in any walk of life with any computer knowledge would say YES to that, with or without performance tests.

From my tests? Sure I think you could. But you wouldn't, if you're the kind of user who actually runs multiple 4K displays.
I'm not going to do anything super-heavy, I believe.
4K footage will be the absolute maximum.
I'm back in school for engineering in my old age, because of corona (it took all my work away).
I'll do a lot of CAD work and modeling, and hopefully still some moving pictures too.

Most YT videos are shouting that "there's no benefit to having more than 16GB of RAM unless you are doing the most demanding tasks on the planet".

I just wanted a somewhat more accurate line for when 32GB turns from an unneeded waste of money into a productivity benefit for content creation. Not just the theory that since the cheapest can't be the best, you know what to pick, and if you want to be sure, pick one spec higher.

Well, I'm not earning much now.
Maybe very few people want to see any actual data, and most prefer going with "gut feeling".

Thanks for your opinions.
 
This is pretty much what 99% of comparisons are: entry level vs. maxed out.

All I'm looking for is M1 Pro 16GB vs. 32GB.

Redshift rendering of course shows a lot of difference.

But I'm thinking about "live" rendering, like with that EEVEE Splash Fox scene.
And that's of course where display resolution steps in?
 
I'd still like to know the FPS difference with the EEVEE Splash Fox scene & a hi-res external monitor.
I already ordered the 32GB version, but it might take 2 months to arrive...
 