
PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City
You guys are overthinking it.

Apple silicon uses unified memory, so say you get a MacBook with 8GB of RAM: the system will dynamically balance that RAM between the CPU and the GPU according to the workload.

Obviously that’s not how Intel works. The GPU on an Intel machine will allocate 2GB of that 8GB you have even if you’re using the calculator app.

It ends up being so much more efficient. RISC code does use a little more RAM than CISC, but the difference here is the efficiency of the unified memory design.
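
For what it's worth, apps can actually see this distinction through Metal. A minimal sketch in Swift (assumes a Mac on macOS 10.15 or later, where these properties exist):

```swift
import Metal

// On Apple silicon the GPU reports a unified memory architecture and a
// recommended working-set size that scales with total system RAM, rather
// than a fixed "VRAM" carve-out.
if let device = MTLCreateSystemDefaultDevice() {
    print("Unified memory: \(device.hasUnifiedMemory)")
    print("Recommended GPU working set: \(device.recommendedMaxWorkingSetSize / 1_048_576) MiB")
}
```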
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
Obviously that’s not how Intel works. The GPU on an Intel machine will allocate 2GB of that 8GB you have even if you’re using the calculator app.
Are you sure this is true?

According to this, OS X has been dynamically allocating video memory as needed since Mavericks:

Not disputing, of course, that M1's approach may be superior, but it doesn't seem true that OS X on Intel will allocate that amount no matter what is being done.
 

PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City
Are you sure this is true?

According to this, OS X has been dynamically allocating video memory as needed since Mavericks:

Not disputing, of course, that M1's approach may be superior, but it doesn't seem true that OS X on Intel will allocate that amount no matter what is being done.
Yeah, I think on machines with less than 4 GB of RAM, macOS allocates 1.5 GB or less to the iGPU. Once you have 8 GB or more in the system, macOS caps it at 2 GB.

I'm not so sure it's dynamic. @leman can answer it better, I guess.
 

pshufd

macrumors G4
Oct 24, 2013
10,149
14,574
New Hampshire
Always better to have more RAM, as it increases the longevity of your system.

I'm using a Late 2009 iMac which I upgraded from 4 GB of RAM to 16 GB. Never have to touch the HDD. I'll take more RAM over more SSD any day of the week.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I'm not so sure it's dynamic. @leman can answer it better, I guess.

Don't think I can. This is also something I was always curious about, but I couldn't find any info. Based on Intel's marketing material, the basic approach seems to be very similar to what Apple is doing - shared last-level cache and shared memory controllers - but the details are very vague. It's unclear whether these modern Intel iGPUs can only access a certain subset of physical RAM, whether they are subject to the same TLB lookup procedure as the CPU cores, and also how all this stuff is interconnected. The "GPU Memory" could very well be an artificial limitation imposed by the OS/driver, or there could indeed be some technical limitation.

All we know for sure is that M1 has more cache than Intel chips, and probably also more effective GPU bandwidth.
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
Don't think I can. This is also something I was always curious about, but I couldn't find any info. Based on Intel's marketing material, the basic approach seems to be very similar to what Apple is doing - shared last-level cache and shared memory controllers - but the details are very vague. It's unclear whether these modern Intel iGPUs can only access a certain subset of physical RAM, whether they are subject to the same TLB lookup procedure as the CPU cores, and also how all this stuff is interconnected. The "GPU Memory" could very well be an artificial limitation imposed by the OS/driver, or there could indeed be some technical limitation.

This is only a small clue (although I do think from the documentation and other references that it does do dynamic video memory allocation, I'm not certain):

The clue I noticed here (and I'm interested in the issue of why Lightroom has some issues) is that Metal has three different resource storage modes: shared, private, and managed. Private is for stuff the CPU won't need access to, shared is for CPU/GPU access, and managed keeps a copy in each (to avoid shifting data between the different memory banks). Each has slightly different characteristics for speed and whatnot.
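
From the documentation, roughly what those three modes look like from the API side (a Swift sketch; I might be getting details wrong):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let length = 4 * 1024 * 1024  // 4 MiB, arbitrary example size

// Shared: a single allocation visible to both the CPU and the GPU.
let shared = device.makeBuffer(length: length, options: .storageModeShared)!

// Private: GPU-only; the CPU cannot read or write its contents directly.
let gpuOnly = device.makeBuffer(length: length, options: .storageModePrivate)!

// Managed (macOS only): a CPU copy and a GPU copy, kept in sync explicitly.
let managed = device.makeBuffer(length: length, options: .storageModeManaged)!
// After writing through managed.contents(), you must tell Metal what changed:
managed.didModifyRange(0..<1024)
```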

A bit above my knowledge level, but it would seem that managed wouldn't be needed, or would be redundant, in a unified memory architecture.

Now, one thing: I don't know if Lightroom is using Metal at all (yet), or using older APIs, and whether those storage modes have equivalents in OpenCL (or whatever).

Now my speculation: I could very much see that, e.g., Lightroom, if written aggressively (e.g. with full graphics acceleration), would be copying data between what it thinks of as different memory banks (CPU/GPU) and also keeping a copy in both (similar to managed mode), just blowing up the amount of memory in use when it's actually unnecessary - resulting in the bad swapping and some of the performance issues users are seeing.

To be clear, I'm not saying that this is because of Metal - it could be, e.g., that Lightroom has some internal tricks/routines to achieve acceleration similar to the above, and that that's what's causing the 'collisions' on M1 systems. (I say not because of Metal because I'd guess Apple's own drivers would know about this and just present the same memory as managed to programs that asked for it.)

[Obviously this could apply to any number of programs that did similar things; I just happen to be interested in Lightroom.]
 

rui no onna

Contributor
Oct 25, 2013
14,916
13,260
There are some reports of impressive SSD speeds from these M1 Macs, with speculation that the storage controller built into the M1 is perhaps responsible.


[Attached screenshot: Screen-Shot-2020-12-09-at-9-22-20-AM.png (disk speed test results)]


Looks like reads in chunks of 8 MiB through 64 MiB are seriously fast, and read speeds are consistent from 64 KiB through 64 MiB.

I look forward to further analysis.

Chances are there's RAM caching going on there.
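
If someone wants to rule that out, macOS lets a benchmark bypass the file cache per descriptor via the F_NOCACHE fcntl flag. A rough Swift sketch (the path is just a placeholder):

```swift
import Darwin

// Ask the OS not to cache this file's data, so reads hit the SSD itself.
let fd = open("/tmp/testfile", O_RDONLY)   // placeholder path
if fd >= 0 {
    fcntl(fd, F_NOCACHE, 1)                // bypass the unified buffer cache
    var chunk = [UInt8](repeating: 0, count: 8 * 1024 * 1024)  // 8 MiB read
    let bytesRead = read(fd, &chunk, chunk.count)
    print("read \(bytesRead) bytes uncached")
    close(fd)
}
```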
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
This is only a small clue (although I do think from the documentation and other references that it does do dynamic video memory allocation, I'm not certain):

The clue I noticed here (and I'm interested in the issue of why Lightroom has some issues) is that Metal has three different resource storage modes: shared, private, and managed. Private is for stuff the CPU won't need access to, shared is for CPU/GPU access, and managed keeps a copy in each (to avoid shifting data between the different memory banks). Each has slightly different characteristics for speed and whatnot.

A bit above my knowledge level, but it would seem that managed wouldn't be needed, or would be redundant, in a unified memory architecture.

You are correct, managed mode is not even exposed on iOS. It can be used on M1 machines, most likely for code compatibility with Intel Macs, but I suspect that managed mode there is just the same as shared and the relevant commands do nothing. It will probably be deprecated some time in the future.

On Apple Silicon, you can avoid the memory copy altogether by using special APIs that allow you to use an existing memory allocation as a Metal data buffer. It takes extra effort to do so and there are some requirements you have to fulfill. The "regular" API still involves copying data.
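
Roughly like this, for anyone curious (just a sketch; the page-alignment requirement is part of that "extra effort"):

```swift
import Metal
import Darwin

let device = MTLCreateSystemDefaultDevice()!
let pageSize = Int(getpagesize())
let length = 16 * pageSize  // must be a multiple of the page size

// The existing allocation has to be page-aligned for bytesNoCopy to work.
var raw: UnsafeMutableRawPointer?
posix_memalign(&raw, pageSize, length)

// Wrap the existing memory as a Metal buffer without copying it.
let buffer = device.makeBuffer(bytesNoCopy: raw!,
                               length: length,
                               options: .storageModeShared,
                               deallocator: { pointer, _ in free(pointer) })
```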

Now, one thing, I don't know if lightroom is using metal at all (yet), or using older models and whether those resource models work in OpenCL (or whatever).

Now my speculation: I could very much see that eg. Lightroom if written aggressively (e.g. in full gfx acceleration) would be copying between what it thinks of as different memory banks (core/gpu) and also saving them in both (similar to the managed) and just blow up the amount of memory in use, when it's actually unnecessary. Resulting in the bad swapping and some of the performance issues users are seeing.

To be clear, not saying that this is because of metal - it could be e.g. that Lightroom had some internal tricks/routines to achieve some acceleration similar to above - and that that's what's causing the 'collisions' on M1 systems. (I say not because of metal because I'd guess apple's own drivers would know about this and just present the same memory as managed to programs that called on it.)

[Obviously this could apply to any number of programs that did similar things, I just happen to be interested in lightroom.]

I don't think there is much merit in speculating without having access to Lightroom's code and being able to debug the application properly. It very well might be something as trivial as a buggy cache implementation that forces images to be stored over and over again.
 
  • Like
Reactions: armoured

armoured

macrumors regular
Feb 1, 2018
211
163
ether
I don't think there is much merit in speculating without having access to Lightroom's code and being able to debug the application properly. It very well might be something as trivial as a buggy cache implementation that forces images to be stored over and over again.
True enough, could be some other bug, although it's curious why this type of bug would show up on Apple silicon and not Intel. And if it is an issue specific to Apple silicon, an updated version should be a LOT better.
 
Last edited:

Jouls

macrumors member
Aug 8, 2020
89
57
Then there's the thing called "garbage collection" that, again, absolutely needs to have the whole heap in main memory (RAM) in order to run effectively.
Actually, macOS doesn't use garbage collection. It uses reference counting instead. And M1 Macs do this 4-5 times faster than Intel Macs, with implications for RAM usage. Read here.
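
A tiny Swift illustration of what reference counting means in practice: objects are freed the instant their count hits zero, with no heap-scanning collector involved (the retain/release operations themselves are what M1 reportedly executes much faster):

```swift
// Every new strong reference retains; every reference going away releases.
final class Image {
    let name: String
    init(name: String) { self.name = name }
    deinit { print("\(name) freed immediately, no GC pause") }
}

var a: Image? = Image(name: "photo")  // retain count = 1
var b = a                             // retain -> count = 2
a = nil                               // release -> count = 1
b = nil                               // release -> count = 0, deinit runs right here
```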
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
Actually, macOS doesn't use garbage collection. It uses reference counting instead. And M1 Macs do this 4-5 times faster than Intel Macs, with implications for RAM usage. Read here.

Thanks, I think he puts it all pretty well - it's not magic, it still 'uses' as much memory as before (compared to Intel Macs, not to Android), but because of very efficient optimisation, if you are somewhat short, 'faking it is a whole lot more fun.'

Or, alternatively: running somewhat short on memory is a lot less painful.

And I agree, in some use profiles it may not even be noticeable to have to use swap a bit, and that might make the difference for some between staying put and bumping up to the next memory tier.

This actually reminded me of when SSDs first became common - with swap on an HDD, running out of memory and having to swap was NOT a fun way to fake it. The first time you get an SSD, it's amazing - and with moderate swap usage, not bad at all.

I still hold that it's not magic, and it's still mostly using as much memory as before (leaving aside the unified memory architecture for now), but Apple's architecture work has clearly removed a lot of bottlenecks to make memory less of an issue (speed is compensating for size).
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
True enough, could be some other bug, although it's curious why this type of bug would show up on Apple silicon and not Intel. And if it is an issue specific to Apple silicon, an updated version should be a LOT better.
Given that most bugs are stupid and/or amusing, I would guess Lightroom simply claims all the GPU memory the OS is willing to give it. Then there is very little RAM left for everything else. As high-end GPUs often have 8-16 GB of memory, the amounts don't look particularly unusual, and Lightroom can probably take advantage of the memory. It has not been designed for environments where the CPU and GPU share the same memory and the amount of GPU memory is not strictly limited.
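
If that's the case, the offending logic could be as simple as this hypothetical cache-sizing snippet, which behaves sensibly against a discrete GPU but grabs most of system RAM on unified memory:

```swift
import Metal

// Hypothetical version of the suspected bug: size a GPU cache from whatever
// the device reports as usable. On a discrete GPU that's the VRAM pool; on
// unified memory it's a large fraction of total system RAM.
let device = MTLCreateSystemDefaultDevice()!
let budget = device.recommendedMaxWorkingSetSize
let cacheTarget = UInt64(Double(budget) * 0.9)   // "use almost all of it"
print("GPU cache target: \(cacheTarget / 1_048_576) MiB")
```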
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
Given that most bugs are stupid and/or amusing, I would guess Lightroom simply claims all the GPU memory the OS is willing to give it. Then there is very little RAM left for everything else. As high-end GPUs often have 8-16 GB of memory, the amounts don't look particularly unusual, and Lightroom can probably take advantage of the memory. It has not been designed for environments where the CPU and GPU share the same memory and the amount of GPU memory is not strictly limited.
Ha - possibly, of course. But hopefully, if that's the case, they could do a minor update (not even a recompile for Apple Silicon) to just add an environment check ({if Apple Silicon, don't do that!}), which would improve things a lot.

As I think I mentioned, Lightroom seems to set graphics acceleration to full on the M1 when that setting is left on auto; turning it off improves things a fair bit.

But it's Adobe, so who knows?
 

matrix07

macrumors G3
Jun 24, 2010
8,226
4,895
Today I finally did it! I crashed my MBA. It had been warning me since yesterday that my SSD space was low, but I continued using it. I thought of restarting but forgot. Today, after 7 hours of use, it crashed.
Luckily it restarted right up.

If you don't do anything special and are willing to part with $200, I'd say upping the SSD to 512 GB is a better decision than upping the RAM to 16 GB. Your machine will still be snappy, but with fewer crashes.
 
  • Like
Reactions: Jeff Kirvin

Toutou

macrumors 65816
Jan 6, 2015
1,082
1,575
Prague, Czech Republic
Actually, macOS doesn't use garbage collection. It uses reference counting instead. And M1 Macs do this 4-5 times faster than Intel Macs, with implications for RAM usage. Read here.
Well, not exactly.
It's not that macOS doesn't use garbage collection - it's that Objective-C and Swift (the main languages of the Apple world) don't use garbage collection.

Programs written in other languages, or using other runtimes (Java, Erlang, C#), will use their respective memory handling methods (garbage collection, reference counting, malloc + free ...).

But hey, you're pretty observant.
 