
Ethosik

Contributor
Oct 21, 2009
8,141
7,119
I really dislike telling people 'how' they should do anything.

But why do you have so many apps open?

I view opening, using, and closing apps efficiently as being good organisation/work efficiency.
Yeah, I really don't understand it myself. I don't know how people can have 50+ browser tabs open either. It makes me less efficient when the tabs get too small and I click the wrong one 5 times in a row.
 

alexjholland

macrumors 6502a
Yeah, I really don't understand it myself. I don't know how people can have 50+ browser tabs open either. It makes me less efficient when the tabs get too small and I click the wrong one 5 times in a row.
I use Workona for Chrome tab management.

Each client and project has its own group of tabs that I can load and close together.

Running my marketing agency without Workona would be impossible.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
I fully understand what you are saying.

But why is the RAM usage of native M1 apps lower than that of equivalent non-native apps (both run on an Intel Mac, or through Rosetta on an M1 Mac)?
The most obvious thing I noticed this behavior in is MS Office.

So, the problem with questions like this is that without specific examples, done in a reproducible way (so as to be inspected by others knowledgeable in the space), the best answer that can be provided is "it depends". All we can do is speculate without specifics.

That said, there is a podcast that interviewed Erik Schwiebert from Microsoft, who worked on the M1 port, and is probably one of the better ways to get some insight on engineering this sort of shift: http://podcast.macadmins.org/2020/12/21/episode-195-getting-ready-for-the-m1-with-erik-schwiebert/

But here's some other things to consider:
- Big complicated projects like Office lazy load libraries, manage memory internally to try to decide when memory can be freed, etc. Memory usage is thus specific to how the app is being used, even down to the features a specific document uses (like Office Art), and how long the app has been running (i.e. when did it last purge internal caches/etc).
- Office in particular is a heavy user of CoreAnimation/Metal, and leans on GPU memory quite a bit.
- There are differences in memory alignment between different architectures that lead to different memory usage, but those are more relevant to 32 vs 64-bit. ARM64 and x64 have similar memory alignment behaviors.
- We'd need to know a bit more about how Apple attributes memory usage to an app, so we can eliminate or control for known factors, but that requires either reverse engineering aspects of how Apple operates, or Apple telling us how memory is attributed to processes in this UMA world. One thing I do wonder is how does iGPU memory usage get reported on Intel macOS, if at all?
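The alignment point above is easy to see concretely. Here's a sketch using Python's `struct` module on a 64-bit platform, comparing a packed layout against the native, padded layout a compiler would use - purely illustrative, not a claim about how any specific app lays out its data:

```python
import struct

# A one-byte char followed by an 8-byte integer ("q" = long long).
# "=" means standard sizes with no padding: fields sit back to back.
packed = struct.calcsize("=cq")   # 1 + 8 = 9 bytes

# "@" means native alignment: the long long must start on an
# 8-byte boundary, so 7 padding bytes follow the char.
native = struct.calcsize("@cq")   # 1 + 7 (padding) + 8 = 16 bytes

print(packed, native)  # 9 16
```

The native result is the same on x86-64 and ARM64, which is exactly why alignment differences matter when comparing 32- vs 64-bit builds but not much for this comparison.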
 

casperes1996

macrumors 604
Jan 26, 2014
7,597
5,769
Horsens, Denmark
So, the problem with questions like this is that without specific examples, done in a reproducible way (so as to be inspected by others knowledgeable in the space), the best answer that can be provided is "it depends". All we can do is speculate without specifics.

That said, there is a podcast that interviewed Erik Schwiebert from Microsoft, who worked on the M1 port, and is probably one of the better ways to get some insight on engineering this sort of shift: http://podcast.macadmins.org/2020/12/21/episode-195-getting-ready-for-the-m1-with-erik-schwiebert/

But here's some other things to consider:
- Big complicated projects like Office lazy load libraries, manage memory internally to try to decide when memory can be freed, etc. Memory usage is thus specific to how the app is being used, even down to the features a specific document uses (like Office Art), and how long the app has been running (i.e. when did it last purge internal caches/etc).
- Office in particular is a heavy user of CoreAnimation/Metal, and leans on GPU memory quite a bit.
- There are differences in memory alignment between different architectures that lead to different memory usage, but those are more relevant to 32 vs 64-bit. ARM64 and x64 have similar memory alignment behaviors.
- We'd need to know a bit more about how Apple attributes memory usage to an app, so we can eliminate or control for known factors, but that requires either reverse engineering aspects of how Apple operates, or Apple telling us how memory is attributed to processes in this UMA world. One thing I do wonder is how does iGPU memory usage get reported on Intel macOS, if at all?

Thanks for adding this. I'll have to look at that podcast!
 

Fred Zed

macrumors 603
Aug 15, 2019
5,819
6,515
Upstate NY . Was FL.
ARM smartphones even ship with 16 GB of RAM these days. MacRumors folks are smoking some good stuff if they believe 8 GB of data loaded in RAM becomes 2 GB on ARM.

Also, it is kind of funny if you buy an ARM laptop that has half the memory of some ARM smartphones.

When I order a 16” M2X MBP later this year, it will definitely not be with 8 GB of RAM.
Let's not forget that with Android smartphones, the 16 GB of RAM is primarily for marketing purposes. The manufacturers know this and use it to drive sales.
 

tskwara

macrumors regular
May 6, 2010
104
91
As a user of Macs for pro iOS development for the past decade: get the 16GB if you can swing it. I can't say for sure that 8GB will get you by for your needs, but 16GB works quite well for what I would ordinarily expect from a machine with 32GB. Consider that I'm driving an Apple Pro Display XDR with various development and design apps running at once - a real pleasure. To be transparent, work requires that I use an issued 2019 i9 16" 32GB, which I don't prefer over the M1.
 

casperes1996

macrumors 604
Jan 26, 2014
7,597
5,769
Horsens, Denmark
Also again stop equating ARM and Apple Silicon - they are not the same thing.
That very heavily depends on the lens and context we look at things through. It's as much ARM as NetBurst and Zen 3 are both x86. That still means something, even though there's a massive difference between Penryn and Zen 3.
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Actually the lens does not matter here at all. Apple Silicon uses a completely different microarchitecture than other ARM designs. The only thing ARM about it is the ISA and even there Apple has customizations. As a result, Apple Silicon quite likely behaves differently than both x86 and other ARM in most respects.
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
So, the problem with questions like this is that without specific examples, done in a reproducible way (so as to be inspected by others knowledgeable in the space), the best answer that can be provided is "it depends". All we can do is speculate without specifics.

But here's some other things to consider:
- Big complicated projects like Office lazy load libraries, manage memory internally to try to decide when memory can be freed, etc. Memory usage is thus specific to how the app is being used, even down to the features a specific document uses (like Office Art), and how long the app has been running (i.e. when did it last purge internal caches/etc).
- Office in particular is a heavy user of CoreAnimation/Metal, and leans on GPU memory quite a bit.
... One thing I do wonder is how does iGPU memory usage get reported on Intel macOS, if at all?
Thanks, I'll listen in to that. Some brief thoughts:
- This reproducibility issue is really the problem in piecing things together for the big apps. I would expect even things like preferences and other features to affect which libraries are loaded, which might mean, for example, that you'd have to compare clean installs to really know; on top of that, it's possible MS made some tweaks in the background for M1 due to architecture differences.
- As far as I'm aware, it's really not easy to see how iGPU memory is being utilised.
- But the hint that they're using the CoreAnimation/Metal frameworks heavily may be an indication that this is an area where UMA makes a difference, depending on how heavily, and which memory-handling features, they were using. (A side hint here is that Office may be using features that are not at all efficient in resource terms to compensate for performance issues elsewhere.)

That said, it's not a huge difference in terms of memory used. Obviously every little bit helps for those that are on-the-edge.

As far as I can tell from the UMA information and the memory handling frameworks, there's ONE type of memory handling - where the program specifically and purposefully keeps 'live' copies of data in both main memory and the GPU - where UMA would potentially mean a substantial reduction in memory use: both sides now see and access the same block directly. (The other memory techniques move the data back and forth between main memory and GPU memory, so UMA is more efficient there too, but the reduction in total RAM used wouldn't be as large.) I assume this is what is referred to as reducing 'duplication.'

But then the question is: how many and which programs actually use this approach? I'm guessing it's not at all easy to determine what's under the hood without some pretty extreme testing. Which unfortunately means (most often) the end user just trying the programs they rely on to see whether RAM usage is significantly reduced.

(I'd have assumed this would apply mostly to programs that make heavy use of GPU/metal frameworks etc, and honestly I wouldn't have thought Office would be in that category - more the graphics-heavy programs like Photoshop and Lightroom and equivalents...)
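To make the 'live copies on both sides' idea concrete, here's a toy accounting sketch - the function and the buffer sizes are invented for illustration, not measured from any real app:

```python
def total_footprint(buffers_mb, unified):
    """Total memory used by buffers an app keeps visible to CPU and GPU.

    Discrete GPU: the app holds a CPU-side copy and the driver holds a
    GPU-side copy, so each buffer counts twice. Unified memory: CPU and
    GPU map the same allocation, so each buffer counts once.
    """
    copies = 1 if unified else 2
    return sum(buffers_mb) * copies

shared = [256, 128, 64]  # hypothetical CPU+GPU-visible buffers, in MB

print(total_footprint(shared, unified=False))  # 896: duplicated
print(total_footprint(shared, unified=True))   # 448: one mapping
```

Buffers that are only ever moved back and forth, rather than kept live on both sides, would land somewhere between those two numbers - the smaller saving described above.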
 

Toutou

macrumors 65816
Jan 6, 2015
1,082
1,575
Prague, Czech Republic

Both of these are misunderstanding the concept of "virtual memory" and misusing the term.

I appreciate your curiosity and I don't want to sound rude, but I majored in software engineering and am currently programming for a living.

Virtual memory is a concept of using an ISOLATED virtual address space for every process and using a piece of hardware called a MMU - memory management unit - to translate virtual addresses to the corresponding physical addresses.

All major OSes and hardware architectures use virtual memory with no way to opt out. Virtual memory is what makes paging to secondary storage (swapping) viable, but itself is an orthogonal concept.
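A minimal sketch of the translation step, with the page size and mappings invented for illustration (a real MMU does this in hardware, with multi-level tables and a TLB cache):

```python
PAGE_SIZE = 4096  # a common page size; Apple Silicon actually uses 16 KB

def translate(page_table, vaddr):
    """Map a virtual address to a physical one via a per-process table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        # The OS would now map the page, or swap it in from disk.
        raise MemoryError("page fault")
    return page_table[page] * PAGE_SIZE + offset

# Two processes can use the *same* virtual address for *different*
# physical frames -- that's the isolation described above.
proc_a = {0: 7}   # virtual page 0 -> physical frame 7
proc_b = {0: 42}  # same virtual page, different frame

print(translate(proc_a, 0x10))  # 28688  (7 * 4096 + 16)
print(translate(proc_b, 0x10))  # 172048 (42 * 4096 + 16)
```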
 

the8thark

macrumors 601
Apr 18, 2011
4,628
1,735
I don't want to sound rude
Best not to say this, as it just comes across as "I don't want to sound rude, but I will anyway". Just get to the point of your message and let us all decide its tone. Also, direct, to-the-point messages are not harsh; getting the facts is a good thing. Your message was not harsh, so there was no need to sugar-coat it as you did.

Secondly, you are 100% correct in your definition. Even I know this. However, even Apple used the term Virtual Memory back in the classic Mac OS days. I looked up a quick picture to show you.

[attached screenshot of the classic Mac OS virtual memory setting]


You could change how much of the HDD was used for virtual memory in one of the system panels. You could even set the amount of virtual memory to zero. As far as I remember, Apple added it in System 7, though I think third-party apps that did this existed earlier? Not so sure about that, though.

You saying I am wrong is also saying Apple was wrong. In your opinion, what should Apple have called this feature in the classic OS, if you still believe this use of the term virtual memory is wrong?
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
Secondly, you are 100% correct in your definition. Even I know this. However, even Apple used the term Virtual Memory back in the classic Mac OS days. I looked up a quick picture to show you.

You saying I am wrong is also saying Apple was wrong. In your opinion, what should Apple have called this feature in the classic OS, if you still believe this use of the term virtual memory is wrong?
Apple was wrong, too.

But seriously: of what possible benefit to anyone is this discussion now? The term as used by Apple way back when was wrong. It wasn't intentional on your part either - OK. But does arguing about the way Apple used the phrase in the past add any value?

The only reason why this is relevant now is that the "wrong" usage may be confusing to those reading this thread now. Virtual memory as reported in macos now is (basically) the correct definition, and completely and utterly irrelevant to the issue of 'how much physical main memory is enough*.'

macOS _right now_ reports virtual memory for a process that (basically) corresponds to the correct virtual memory definition used in computer science. For example, right now, Mail.app for me shows 'virtual memory' of ~5 GB (on an older laptop with only 4 GB of actual RAM).

Thinking of this in terms of Apple's previous (wrong) usage of virtual memory provides no useful information. Anyone looking this number up will just be confused. So let's stop using it.

For users looking at this and trying to get useful information: virtual memory as reported in macOS has nothing to do with physical RAM. They care about 'real memory' (as reported in macOS), physical memory, swap, and a number of other measures - and they should ignore 'virtual memory' except in very, very rare cases.

*And I put an asterisk by 'enough' above because it's quite subjective, and partly a function of how much real-world slowdown an individual user experiences in real world usage. Faster SSD means faster swap memory, which means fewer noticeable slowdowns, but it also comes at a monetary cost. If swap memory is slower than main memory, swapping _always_ means less-than-optimal performance. How much less-than-optimal is okay for a given dollar cost is a 'management' decision, whether that management is corporate or individual.
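The gap between 'virtual memory' and resident memory is easy to demonstrate: reserving address space costs almost nothing until pages are actually touched. A sketch using only the Python standard library (note that `ru_maxrss` is reported in kilobytes on Linux and bytes on macOS):

```python
import mmap
import resource

def peak_rss():
    # Peak resident set size of this process so far.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()

# Reserve 512 MB of *virtual* address space. No physical RAM is
# committed yet; the kernel hands out pages lazily, on first write.
region = mmap.mmap(-1, 512 * 1024 * 1024)
reserved = peak_rss()

region[0:4096] = b"\x00" * 4096  # touch a single page

# The process's virtual size jumped by half a gigabyte, but resident
# memory barely moved - which is why the 'virtual memory' figure says
# nothing about how much physical RAM you need.
print(reserved - before)
```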
 

Toutou

macrumors 65816
Jan 6, 2015
1,082
1,575
Prague, Czech Republic
Secondly, you are 100% correct in your definition. Even I know this. However, even Apple used the term Virtual Memory back in the classic Mac OS days. I looked up a quick picture to show you.
You're right, I googled that and well, they went with a catchy name in the classic Apple manner and it was probably a good name then. No one should be buying a computer without a bit of virtual, digital or at least cyber in it.

But memory management techniques have gotten much more complicated since then, and at this point I'd rather go for less confusion -- at least with a prosumer-level hardware debate going on.
And especially on MacRumors, where you have people who know what a translation lookaside buffer is alongside people who think their iPads run Linux.

So yeah, I'm the grumpy old dev in his mid-twenties yelling that you're wrong and Apple was wrong and it doesn't really matter that much.
 

Argon_

macrumors 6502
Nov 18, 2020
425
256
I really dislike telling people 'how' they should do anything.

But why do you have so many apps open?

I view opening, using, and closing apps efficiently as being good organisation/work efficiency.

I end up 'culling' my tabs on a regular basis, otherwise I tend to accumulate a lot of them.
 

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Actually the lens does not matter here at all. Apple Silicon uses a completely different microarchitecture than other ARM designs. The only thing ARM about it is the ISA and even there Apple has customizations. As a result, Apple Silicon quite likely behaves differently than both x86 and other ARM in most respects.
Microarchitecture determines how the CPU operates internally. It has nothing to do with RAM, which is external to the CPU. Being ARM means that it has the same instruction set as every other ARM CPU out there. Now explain to us how one can use the same instruction set and yet be better at using RAM because the silicon was designed by Apple? In theory there might be some far-fetched possibilities - like the CPU being orders of magnitude more performant, in which case the OS developers might consider compressing all data in RAM - but that's not the case here.
 

profcutter

macrumors 68000
Mar 28, 2019
1,550
1,296
Best not to say this, as it just comes across as "I don't want to sound rude, but I will anyway". Just get to the point of your message and let us all decide its tone. Also, direct, to-the-point messages are not harsh; getting the facts is a good thing. Your message was not harsh, so there was no need to sugar-coat it as you did.

Secondly, you are 100% correct in your definition. Even I know this. However, even Apple used the term Virtual Memory back in the classic Mac OS days. I looked up a quick picture to show you.

You could change how much of the HDD was used for virtual memory in one of the system panels. You could even set the amount of virtual memory to zero. As far as I remember, Apple added it in System 7, though I think third-party apps that did this existed earlier? Not so sure about that, though.

You saying I am wrong is also saying Apple was wrong. In your opinion, what should Apple have called this feature in the classic OS, if you still believe this use of the term virtual memory is wrong?
Ahh, yeah, I remember the days of Connectix RAM Doubler. Make your 16MB Mac fly like a 32 MB Mac. With almost no downsides...
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
Thanks, I'll listen in to that. Some brief thoughts:
- This reproducibility issue is really the problem in piecing things together for the big apps. I would expect even things like preferences and other features to affect which libraries are loaded, which might mean, for example, that you'd have to compare clean installs to really know; on top of that, it's possible MS made some tweaks in the background for M1 due to architecture differences.
- As far as I'm aware, it's really not easy to see how iGPU memory is being utilised.
- But the hint that they're using the CoreAnimation/Metal frameworks heavily may be an indication that this is an area where UMA makes a difference, depending on how heavily, and which memory-handling features, they were using. (A side hint here is that Office may be using features that are not at all efficient in resource terms to compensate for performance issues elsewhere.)

That said, it's not a huge difference in terms of memory used. Obviously every little bit helps for those that are on-the-edge.

As far as I can tell from the UMA information and the memory handling frameworks, there's ONE type of memory handling - where the program specifically and purposefully keeps 'live' copies of data in both main memory and the GPU - where UMA would potentially mean a substantial reduction in memory use: both sides now see and access the same block directly. (The other memory techniques move the data back and forth between main memory and GPU memory, so UMA is more efficient there too, but the reduction in total RAM used wouldn't be as large.) I assume this is what is referred to as reducing 'duplication.'

But then the question is: how many and which programs actually use this approach? I'm guessing it's not at all easy to determine what's under the hood without some pretty extreme testing. Which unfortunately means (most often) the end user just trying the programs they rely on to see whether RAM usage is significantly reduced.

(I'd have assumed this would apply mostly to programs that make heavy use of GPU/metal frameworks etc, and honestly I wouldn't have thought Office would be in that category - more the graphics-heavy programs like Photoshop and Lightroom and equivalents...)

Yes, what I was hinting at is that how much an app leans on Metal will affect RAM savings from UMA. For apps that use Metal directly, they control buffers to an extent, but a developer really just tells Metal when they wrote to a buffer, and then the OS manages any copying that needs to be done. At higher levels like CoreAnimation/UIKit/AppKit, the developer has less control, and the OS has further freedom on managing the underlying Metal buffers.

But every app uses the GPU to composite screen contents, so texture memory is required no matter what app I build. The difference comes down to how much texture memory an app winds up using. More complicated view hierarchies, or more rasterized data (images, stuff rasterized on the CPU) would mean more need for texture memory that needs to be accessible to the CPU and GPU. The main difference between apps like Word and Lightroom and their demand on the GPU is that Word only uses the GPU for display, meaning texture memory is one-way, and will tend to be static once allocated. Lightroom also uses the GPU for compute, which makes it two-way, and creates more "off screen" textures than Word would.

Think about it, though: Office is pretty graphics-heavy in a way. It has to composite potentially complex documents, and do it very quickly. It has to lay out those documents the same way on Mac, Windows, iOS and Android, so it's not going to use AppKit for layout. Office uses Direct2D on Windows for display, so using something similar on Mac makes sense.

Based on the interview with Erik, I don't think the team did really anything in the way of M1 optimizations. Especially in the face of having to port the VB compiler to ARM. As Erik mentions, the ARM transition is surprisingly smooth compared to the Intel transition (I've been in engineering long enough to have seen the difficulties in both myself). But it makes little sense to invest in M1-specific optimizations that your other platforms can't benefit from, from a business perspective. Better to go tackle things that help everyone when possible. That and a lot of the low-hanging fruit in an older project like Office is gone by now.
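The managed-buffer pattern described above - the app writes, reports the modified range, and the OS copies only when CPU and GPU memory are actually split - can be sketched like this. The class and method names are invented stand-ins for Metal's managed-storage behavior, not its real API:

```python
class ManagedBuffer:
    """Toy model of a CPU-visible buffer the GPU also reads."""

    def __init__(self, size, unified):
        self.cpu = bytearray(size)
        self.unified = unified
        # Unified memory: the GPU sees the very same allocation.
        # Discrete GPU: the driver keeps a separate GPU-side copy.
        self.gpu = self.cpu if unified else bytearray(size)
        self.copies = 0  # CPU -> GPU transfers performed

    def did_modify_range(self, start, end):
        # The app just reports what it wrote; the "driver" decides
        # whether any copying is needed.
        if not self.unified:
            self.gpu[start:end] = self.cpu[start:end]
            self.copies += 1

discrete = ManagedBuffer(1024, unified=False)
discrete.cpu[0:4] = b"abcd"
discrete.did_modify_range(0, 4)
print(discrete.copies)  # 1: the dirty range was uploaded over the bus

uma = ManagedBuffer(1024, unified=True)
uma.cpu[0:4] = b"abcd"
uma.did_modify_range(0, 4)
print(uma.copies)  # 0: the GPU already sees the write
```

This is the mechanical reason UMA saves both the copy bandwidth and, for buffers kept live on both sides, the duplicate allocation itself.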

You're right, I googled that and well, they went with a catchy name in the classic Apple manner and it was probably a good name then. No one should be buying a computer without a bit of virtual, digital or at least cyber in it.

But memory management techniques have gotten much more complicated since then, and at this point I'd rather go for less confusion -- at least with a prosumer-level hardware debate going on.
And especially on MacRumors, where you have people who know what a translation lookaside buffer is alongside people who think their iPads run Linux.

So yeah, I'm the grumpy old dev in his mid-twenties yelling that you're wrong and Apple was wrong and it doesn't really matter that much.
Get off my lawn, you young whippersnapper. /s :)

But yeah, it doesn't help that Virtual Memory was a term covering both swap and protected memory on the non-Mac side of things; it was the same CPU features that enabled both. Apple never got protected memory in Classic, but it did have swap, and called that swap Virtual Memory. Yay, overloaded terms.
 

cocoua

macrumors 65816
May 19, 2014
1,010
624
madrid, spain
RAM is RAM, and the M1 does not behave differently from Intel in how RAM is managed. However, since the M1 uses unified memory, it doesn't need to allocate 1.5 GB right off the top solely to the iGPU, and when it comes to swap on the storage drive, it's faster than the read/write of the flash memory in the models it replaced.
Does it mean that an Intel Mac with 16 GB RAM + an 8 GB GPU has more available RAM than an M1 Mac with "only" 16 GB RAM for both CPU and GPU?
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
Yes, as the Intel Mac with a dedicated GPU will not be using any of its system RAM for GPU purposes. If the Intel Mac uses the Intel iGPU then it also shares system RAM with the GPU
Except for data that the CPU must touch; that data will reside in both system RAM and GPU RAM, and changes will be sent back and forth through the bus to "sync" it.

EDIT:// Also, the rest of the system cannot access and use the GPU RAM; only the GPU can. So in the case of a dGPU (and a traditional iGPU), it's not that 1 + 1 = 2, but more along the lines of 1 + 1 = 1.12.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
Does it mean that an Intel Mac with 16 GB RAM + an 8 GB GPU has more available RAM than an M1 Mac with "only" 16 GB RAM for both CPU and GPU?

Not really. A lot of GPU data has to be duplicated in system RAM. You can save on things like frame buffer storage, but I wouldn't worry too much about it.

Yes, as the Intel Mac with a dedicated GPU will not be using any of its system RAM for GPU purposes.

This is not correct most of the time. The driver has to keep copies of the GPU data in system memory to ensure correct operation. Not for everything, but for most of it anyway.
 