> Let's say M1 Max has 64GB unified memory. Theoretically, can M1 Max's GPU have up to 64GB VRAM?

It's not so simple, as RAM would still be needed by the system, but essentially yes.
> Let's say M1 Max has 64GB unified memory. Theoretically, can M1 Max's GPU have up to 64GB VRAM?

Yes.
> Does unified memory allow the GPU to have more VRAM?

Not quite. You can always make a GPU with tons of private VRAM not shared with the CPU.
> Let's say M1 Max has 64GB unified memory. Theoretically, can M1 Max's GPU have up to 64GB VRAM?

The whole idea of needing VRAM is that the data path to the GPU is slow, so you use VRAM to hold data on the GPU side of the bottleneck. Once the bottleneck is removed, the need for VRAM goes away. Yes, all of the RAM can be used for any purpose, and the data never needs to move across a PCIe bus.
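To make the "no copy across PCIe" point concrete, here is a minimal Swift sketch (assuming macOS on Apple silicon; the buffer size is arbitrary) of a shared-storage Metal buffer that the CPU and GPU address directly:

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// With unified memory, a .storageModeShared buffer lives in the same physical
// RAM for both CPU and GPU -- there is no staging copy over a PCIe bus.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into the memory the GPU will read.
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }
// A compute or render encoder can now consume `buffer` with no upload step.
```

On a discrete GPU the equivalent data would typically be blitted into a `.storageModePrivate` buffer first; on UMA that step simply disappears.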
> Let's say M1 Max has 64GB unified memory. Theoretically, can M1 Max's GPU have up to 64GB VRAM?

Theoretically, yes. As with all technical questions, the answer gets muddy.

> Theoretically, yes.

With this one I don't think so. The GPU has access to all 64GB regardless of what the CPU is doing. There is no RAM partitioning like on Intel chips. UMA allows 100% access to both GPU and CPU at the same time.
Not quite: it's true that there isn't a strict partition, but there does appear to be a limit on how much RAM the GPU can claim, in order to ensure that the CPU has some amount of space for things it might need. So there are no duplications, and the CPU and GPU can both see the entire RAM stack, but the GPU has a limit on how much it can use solely for itself. How this works under the hood is probably pretty interesting.
> A quick game test: Total War: Three Kingdoms sees the graphics card as having 43GB of VRAM...

If I'm not wrong, macOS controls how much RAM is allocated for each process's 'VRAM' allocation. While Total War: Three Kingdoms may see 43GB, macOS itself would also need some 'VRAM' for its GUI rasterisation.
> If I'm not wrong, macOS controls how much RAM is allocated for each process's 'VRAM' allocation.

Any apps that can accurately monitor VRAM usage? Just curious to see what happens when I enable the unlimited VMem option in TW: Three Kingdoms.
In any case, the concept of 'VRAM' is no longer relevant on UMA Macs running macOS. Processes can ask for as much as macOS allows.
Where have you seen this? Anything I can read/watch to learn more?
> Hmmm, nothing comprehensive unfortunately; it's just been noted that for applications that say how much GPU RAM they can make use of, it is always around 2/3 of the total. In contrast, no such limit seems to exist for assigning memory to be processed by the CPU. So in practice there is a limit to how much RAM a program can dedicate to the GPU. However, it's unclear exactly how this limit works or what happens if multiple programs are in use. I'll see if I can dig up something more concrete.

Most likely because CPU processes can make use of virtual memory, while GPU processes should not? I would think GPU allocations should only be wired to physical memory, while the CPU can make use of the MMU.
> Any apps that can accurately monitor VRAM usage?

Not that I know of, though. Not sure if macOS has APIs that allow querying other processes' memory usage.
> Hmmm, nothing comprehensive unfortunately; it's just been noted that applications that say how much GPU RAM they can make use of always report around 2/3 of the total... Edit: @Boil 's post above linked to a post from @singhs.apps that's pretty good, but I don't think anyone has done a full technical analysis - at least not that I've found.

AFAIK there is no direct way of querying how much memory a Metal device actually 'has', but the MTLDevice protocol does offer a value called recommendedMaxWorkingSetSize, which means 'An approximation of how much memory, in bytes, this device can use with good performance.' I doubt most programs use this to probe how much memory a Metal device can use, and it does not equal the total memory available to the system, but rather a practical upper limit that a Metal device can use without a performance penalty.
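As a sketch of how one might probe these values in Swift (assuming macOS with a Metal-capable GPU; the output varies by machine, and the ~2/3 ratio mentioned above is an observation, not a documented guarantee):

```swift
import Metal
import Foundation

// Compare the Metal device's recommended working-set limit with total
// physical RAM. On unified-memory Macs the limit is typically a large
// fraction of RAM rather than the full amount.
if let device = MTLCreateSystemDefaultDevice() {
    let limit = device.recommendedMaxWorkingSetSize     // bytes
    let total = ProcessInfo.processInfo.physicalMemory  // bytes
    print("recommendedMaxWorkingSetSize: \(limit / (1 << 30)) GB")
    print("physical memory:             \(total / (1 << 30)) GB")
    print("fraction: \(Double(limit) / Double(total))")
    // currentAllocatedSize reports how much this device has allocated so far.
    print("currentAllocatedSize: \(device.currentAllocatedSize) bytes")
}
```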
[Attachment 1901156]
Can someone try the same in macOS and fill up the RAM with "GPU memory"?
MTLDevice indeed reported that 24GB had been allocated. Maybe we could test again with MTLHeap and actually fill it with something non-zero to see if the result differs.
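A rough Swift sketch of that suggested test (the heap and chunk sizes are assumptions, not a definitive methodology): allocate a large shared MTLHeap, fill its buffers with non-zero bytes so the pages are actually committed rather than lazily zero-mapped, then check what the device reports.

```swift
import Metal
import Foundation

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Allocate a large heap in shared storage so the CPU can touch the pages.
let desc = MTLHeapDescriptor()
desc.size = 4 << 30              // 4 GB heap (assumed test size)
desc.storageMode = .shared
guard let heap = device.makeHeap(descriptor: desc) else {
    fatalError("Heap allocation failed")
}

// Carve the heap into buffers and fill each with non-zero data.
var buffers: [MTLBuffer] = []
let chunk = 256 << 20            // 256 MB per buffer (assumed)
while buffers.count < 16,
      let buf = heap.makeBuffer(length: chunk, options: .storageModeShared) {
    memset(buf.contents(), 0xAB, chunk)   // non-zero fill -> pages committed
    buffers.append(buf)
}

print("heap used: \(heap.usedSize) of \(heap.size) bytes")
print("device currentAllocatedSize: \(device.currentAllocatedSize) bytes")
```

Comparing `currentAllocatedSize` before and after the fill should show whether zero-backed allocations were being counted the same as touched pages.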