
darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
I am sorry that this topic has frustrated an individual.

Actually, this hasn't been beaten to death; as this thread clearly shows, more and more people are posting exactly the kind of real-life experiences I was looking for about how memory is handled differently by the M1 machines and Big Sur.
Apple has a very generous, no-questions-asked return policy. Just return the new Mac if it struggles with what you want to do. But I suspect you will be pleasantly surprised by the performance you get from 16GB of RAM.
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
I KNEW I saw a screenshot somewhere of a wireless migration during the new OS setup. I would just probably use a USB-C cable between both the Mini and my laptop to transfer the data.
Let us know how it works. It should work, and fast, but USB-C is weird in that ... well, what protocol will it use? It should figure it out automatically, Thunderbolt maybe? Anyway, I'd be more comfortable with that than WiFi.
One reason I like the hard drive solution - particularly if getting rid of the old computer - is that I can leave it in a 'frozen' or archive state just in case.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
Let us know how it works. It should work, and fast, but USB-C is weird in that ... well, what protocol will it use? It should figure it out automatically, Thunderbolt maybe? Anyway, I'd be more comfortable with that than WiFi.
One reason I like the hard drive solution - particularly if getting rid of the old computer - is that I can leave it in a 'frozen' or archive state just in case.
The cable probably determines what performance you will get. If it is a Thunderbolt cable, you'll get top speed, 40 Gb/s. If it is a USB-C charging cable, you won't get more than 5 Gb/s. Average WiFi is probably maxed out at about 400 Mb/s for most 802.11ac systems.
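To put those numbers in perspective, here's a rough back-of-the-envelope comparison. The 500GB figure is just an example of my choosing, and these are theoretical link speeds; real-world throughput will be noticeably lower.
Code:
#include <stdio.h>

// Best-case transfer time for an example 500 GB migration over each link.
// Link speeds are theoretical maxima; real throughput will be lower.
int main(void) {
    const double data_gb = 500.0;          // example data size in gigabytes
    const double gbits   = data_gb * 8.0;  // convert to gigabits

    printf("Thunderbolt 3 (40 Gb/s):   ~%.0f s\n", gbits / 40.0);  // ~100 s
    printf("USB 3 over USB-C (5 Gb/s): ~%.0f s\n", gbits / 5.0);   // ~800 s
    printf("802.11ac WiFi (0.4 Gb/s):  ~%.0f s\n", gbits / 0.4);   // ~10,000 s, roughly 2.8 h
    return 0;
}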
 

Ethosik

Contributor
Oct 21, 2009
8,141
7,119
People keep saying "RAM is RAM", but there can certainly be better memory management - like compressing or swapping out programs that aren't used very often, and potentially other things.

Also, the speed/latency of RAM matters to the CPU and GPU as well. Is a 4GB stick of RAM from 2008 just as good as 4GB of today's much faster RAM?
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
People keep saying "RAM is RAM", but there can certainly be better memory management - like compressing or swapping out programs that aren't used very often, and potentially other things.

Yup, but Apple's been pushing on these things since before the M1, so it's not clear that this is an M1 vs Intel difference in terms of memory management. It looks more like Apple is simply getting more aggressive with it as it can rely more on SSD performance, and memory compression is a lot cheaper on modern processors than it used to be.

As others have mentioned, the seemingly smaller impact of swap could be explained by the better performance of moving pages between swap and RAM. The larger RAM pages of the M1, along with the faster 4KiB-16KiB read/write performance of the SSD controller, certainly can help make swap less painful, meaning that even if you are pushing an 8GB machine, the M1 might come out of it feeling much snappier than the Intel equivalent, despite similar RAM usage.
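If you're curious, you can check the page size yourself; this is just a minimal sketch, and the values are what I'd expect rather than anything measured above (a native Apple Silicon process should report 16384 bytes, an Intel Mac 4096).
Code:
#include <stdio.h>
#include <unistd.h>

// Print the VM page size the kernel reports to this process.
// Native on Apple Silicon this should be 16 KiB; on Intel Macs, 4 KiB.
int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    printf("Page size: %ld bytes\n", page);
    return 0;
}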

UMA does limit the amount of RAM you lose to the iGPU though, up to 1.5GB (max) as someone pointed out in the thread. That does make 8GB go further than it might otherwise.

Also, the speed/latency of RAM matters to the CPU and GPU as well. Is a 4GB stick of RAM from 2008 just as good as 4GB of today's much faster RAM?

In this discussion anyway, this isn't really what's being asked/discussed. That said, Apple is using off-the-shelf 4266 LPDDR4X chips from Hynix. A fair bit better than the 2133 LPDDR3 modules used in the 2019 13" MBP, or the 2666 (LP?)DDR4 chips used in the 16" MBP.

That speed helps when the CPU, GPU and other components are all trying to make requests at the same time, but it is less important when it comes to RAM management, except that you now pay a smaller penalty for doing that RAM compression or proactive swap (which should be done during periods of low load anyway).
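For a rough sense of what those transfer rates mean in bandwidth terms - assuming the usual 128-bit (16-byte) memory bus on all three machines, which is my assumption and not something stated above:
Code:
#include <stdio.h>

// Peak theoretical bandwidth = transfer rate (MT/s) x bus width (bytes).
// The 128-bit (16-byte) bus width is an assumption for all three machines.
int main(void) {
    const double bus_bytes = 16.0;  // 128-bit bus, assumed
    printf("LPDDR4X-4266 (M1):          ~%.1f GB/s\n", 4266.0 * bus_bytes / 1000.0);
    printf("LPDDR3-2133 (2019 13\" MBP): ~%.1f GB/s\n", 2133.0 * bus_bytes / 1000.0);
    printf("DDR4-2666 (16\" MBP):        ~%.1f GB/s\n", 2666.0 * bus_bytes / 1000.0);
    return 0;
}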
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
There are definitely some cases where the M1 is using less RAM. We have documented examples of the same program opening the same file on the same version of Big Sur, with the M1 instance using significantly less memory than the Intel one.

It is not always the case, though, so this may be a side effect of converting the software to run natively on Apple Silicon.
 

armoured

macrumors regular
Feb 1, 2018
211
163
ether
There are definitely some cases where the M1 is using less RAM. We have documented examples of the same program opening the same file on the same version of Big Sur, with the M1 instance using significantly less memory than the Intel one.

What are the cases/documented examples?
 

djlythium

macrumors 65816
Jun 11, 2014
1,170
1,619
I know I'm appealing to anecdote here, so take this for what it's worth: I personally know someone who was using 32+ GB of RAM on a 64GB Intel Mac Mini for website design, all sorts of apps, + hella windows in Chrome going, + multiple Spaces for all of their work stuff for clients and whatnot. I jumped on the M1 bandwagon first, and with all the reports, they decided to get a 16GB Mini, despite having the same concerns as yourself. Their workload has not suffered one bit: they're constantly using 9GB of the 16GB, no swap, memory pressure always fine. Most of their daily apps have been optimized for M1 now, but some are pushed through Rosetta first.

Anywho, they're incredibly impressed. No qualms. In fact, it was a net positive gain because they happened to get a Mini unit that does not have BT issues like the prior Intel machine! Like @Apple Knowledge Navigator said, it's how the memory is being used that is the game-changer here.
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
Bear with me as I try and illustrate something with code.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct Point {
    uint32_t x;
    uint32_t y;
} Point;

int main(int argc, char **argv) {
    // 100 Points of two 32-bit unsigned integers each: 800 bytes, on any architecture
    Point *p = malloc(sizeof(Point) * 100);
    printf("How much space is heap allocated? %zu bytes\n", sizeof(Point) * 100);
    free(p);
    return 0;
}

It does not matter if you compile and run that on x86, ARM or any other architecture; it will always allocate at least 800 bytes - two 32-bit unsigned integers per Point, times 100 Points. There is no way for the CPU architecture to change the laws of physics on that.

The only thing is that for a memory use case like you describe, where most of the memory is not in active use - the apps are sitting open and thus in memory, but you're not interacting with them, they may be idle, and the CPU thus never needs that memory - you'd likely be able to page it out to disk without really noticing. For one thing the SSD is super fast, and for another the M1 has a very wide out-of-order buffer, so memory and disk operations can be queued way ahead of time while it works on other instructions.
I fully understand what you are saying.

But why is the RAM usage of native M1 apps lower than that of equivalent non-native ones (whether run on an Intel Mac or through Rosetta on an M1 Mac)?
The most obvious place I noticed this behavior is MS Office.
 

Toutou

macrumors 65816
Jan 6, 2015
1,082
1,575
Prague, Czech Republic
accessing virtual memory (SSD space used as Memory)
virtual memory these days is much better due to it being on high quality SSD
Virtual memory will only get better in the future as the quality and speeds of SSDs improve over time
M1 dynamically adjusts how much RAM is allocated to each application
You got the concepts a little mixed up.

https://en.wikipedia.org/wiki/Virtual_memory if you're interested.
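Not M1-specific, but since the terms get mixed up a lot, here's a minimal sketch of the distinction between virtual and physical memory: malloc only reserves address space, and a page doesn't consume physical RAM until you actually touch it (and idle pages can later be compressed or swapped out to the SSD). The exact numbers Activity Monitor shows will vary.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t total = 1UL << 30;        // reserve 1 GiB of virtual address space
    char *buf = malloc(total);
    if (!buf) return 1;

    // Touch only the first 16 MiB; only those pages get backed by physical RAM.
    memset(buf, 0xAB, 16UL << 20);

    printf("Allocated %zu MiB of virtual memory, touched 16 MiB (PID %d).\n",
           total >> 20, (int)getpid());
    printf("This process's real (resident) memory stays far below 1 GiB.\n");
    pause();                         // keep the process alive for inspection
    free(buf);
    return 0;
}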
 

NJRonbo

macrumors 68040
Original poster
Jan 10, 2007
3,233
1,224
I know I'm appealing to anecdote here, so take this for what it's worth: I personally know someone who was using 32+ GB of RAM on a 64GB Intel Mac Mini for website design, all sorts of apps, + hella windows in Chrome going, + multiple Spaces for all of their work stuff for clients and whatnot. I jumped on the M1 bandwagon first, and with all the reports, they decided to get a 16GB Mini, despite having the same concerns as yourself. Their workload has not suffered one bit: they're constantly using 9GB of the 16GB, no swap, memory pressure always fine. Most of their daily apps have been optimized for M1 now, but some are pushed through Rosetta first.

Anywho, they're incredibly impressed. No qualms. In fact, it was a net positive gain because they happened to get a Mini unit that does not have BT issues like the prior Intel machine! Like @Apple Knowledge Navigator said, it's how the memory is being used that is the game-changer here.

THIS is the kind of success story that keeps me optimistic.

I am going to load all of the apps onto the new M1 that usually had me running a bit above 32GB and see how it does with 16GB.
 

casperes1996

macrumors 604
Jan 26, 2014
7,597
5,769
Horsens, Denmark
I fully understand what you are saying.

But why is the RAM usage of native M1 apps lower than that of equivalent non-native ones (whether run on an Intel Mac or through Rosetta on an M1 Mac)?
The most obvious place I noticed this behavior is MS Office.
Could be a plethora of reasons, but the short answer is that right now I don't know, and I don't have an M1 Mac to investigate further. I plan on getting the 16" M1X or whatever it will be when that comes around, and may be able to illuminate things further then. Assuming the differences are not major in size, one argument could be that the instruction encodings themselves are smaller, but I find this unlikely: aside from the longest of x86_64's variable-length instructions, you'd normally expect CISC instructions to consume less space, since a single instruction can do more. On the flip side though, if that single instruction takes 4 bytes to encode and the corresponding AArch64 takes just 2 bytes to encode, that will result in less instruction space.

What I've talked about prior to this post has been data in memory, but of course the instruction stream itself is also kept in memory, pointed to by the %RIP register on x86_64. The lines between data and instructions are of course blurry, but to put it briefly, in the grand scheme of things data is responsible for far more memory usage than instructions. Still, that difference could account for part, or all, of the difference you're seeing between M1-native and Intel binaries at this time.

Let's run an experiment with the cross-compiling capabilities we do have. To Godbolt's Compiler Explorer!

[Screenshot: objdump output from Compiler Explorer for x86_64 (left) and AArch64 (right)]
Do excuse me that this is so small; it was the only way I could make it all fit on screen. These are two objdumps from Compiler Explorer. The left is the binary output of x86_64 in a more human-readable form, the right is the same for AArch64 (ARM).

The C code this corresponds to is the rather simple
Code:
#include <stdlib.h>

typedef struct Point {
    int x;
    int y;
} Point;

// Type your code here, or load an example.
int xTimesY(Point *p) {
    return p->x * p->y;
}

int main() {
    Point *p = (Point *) malloc(sizeof(Point));
    p->x = 5;
    p->y = 2;
    return xTimesY(p);
}

As one can tell by applying relative offsets, the x86_64 output is actually 110 bytes shorter than the corresponding ARM. Of course, neither of these binaries is compiled with any level of optimisation, since the code would reduce to just return 10 with even just -O.

This is not a very comprehensive look into things, and more experimentation would have to be done with larger programs and using optimisation levels, but as an initial investigation, it leads me to believe that the memory footprint reduction has more to do with the linked libraries being streamlined for the M1 than with anything inherent to the instruction stream.
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
Could be a plethora of reasons, but the short answer is that right now I don't know, and I don't have an M1 Mac to investigate further. I plan on getting the 16" M1X or whatever it will be when that comes around, and may be able to illuminate things further then. Assuming the differences are not major in size, one argument could be that the instruction encodings themselves are smaller, but I find this unlikely: aside from the longest of x86_64's variable-length instructions, you'd normally expect CISC instructions to consume less space, since a single instruction can do more. On the flip side though, if that single instruction takes 4 bytes to encode and the corresponding AArch64 takes just 2 bytes to encode, that will result in less instruction space.

What I've talked about prior to this post has been data in memory, but of course the instruction stream itself is also kept in memory, pointed to by the %RIP register on x86_64. The lines between data and instructions are of course blurry, but to put it briefly, in the grand scheme of things data is responsible for far more memory usage than instructions. Still, that difference could account for part, or all, of the difference you're seeing between M1-native and Intel binaries at this time.

Let's run an experiment with the cross-compiling capabilities we do have. To Godbolt's Compiler Explorer!

[Screenshot: objdump output from Compiler Explorer for x86_64 (left) and AArch64 (right)] Do excuse me that this is so small; it was the only way I could make it all fit on screen. These are two objdumps from Compiler Explorer. The left is the binary output of x86_64 in a more human-readable form, the right is the same for AArch64 (ARM).

The C code this corresponds to is the rather simple
Code:
#include <stdlib.h>

typedef struct Point {
    int x;
    int y;
} Point;

// Type your code here, or load an example.
int xTimesY(Point *p) {
    return p->x * p->y;
}

int main() {
    Point *p = (Point *) malloc(sizeof(Point));
    p->x = 5;
    p->y = 2;
    return xTimesY(p);
}

As one can tell by applying relative offsets, the x86_64 output is actually 110 bytes shorter than the corresponding ARM. Of course, neither of these binaries is compiled with any level of optimisation, since the code would reduce to just return 10 with even just -O.

This is not a very comprehensive look into things, and more experimentation would have to be done with larger programs and using optimisation levels, but as an initial investigation, it leads me to believe that the memory footprint reduction has more to do with the linked libraries being streamlined for the M1 than with anything inherent to the instruction stream.
The difference for Excel is that the M1 version uses about 30-50% less memory (it went from around 500-600 MB with Excel running in the background with no open documents, to about 300-400 MB in the same scenario).

In most cases where I have noticed this, it is about the same difference, but of course it does differ between applications.

So it's not a minor difference, and I'm assuming that things like graphical assets should be unchanged since, as we said earlier, a 1 MB picture/sound is 1 MB.
 

djlythium

macrumors 65816
Jun 11, 2014
1,170
1,619
THIS is the kind of success story that keeps me optimistic.

I am going to load all of the apps onto the new M1 that usually had me running a bit above 32GB and see how it does with 16GB.
Nice! I think you’ll be pleasantly surprised. Report back on your findings!
 

darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
So it's not a minor difference, and I'm assuming that things like graphical assets should be unchanged since, as we said earlier, a 1 MB picture/sound is 1 MB.
I loaded the same 1.8GB PSD, which was a 26MP photo duplicated onto 16 layers. According to Activity Monitor, my M1 Air is using around 1GB more memory than my 2014 MBP.
 
  • Like
Reactions: acwo

darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
No. Both are using the same Intel version of PS. I'm not worried; I got the 16GB Air because I suspected things like this would happen with big assets. I'll test the Beta.
 

darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
So once I standardized everything (both are 16GB machines) in the Performance preference pane, the results are:

2014 MBP - 3.71GB
M1 Air - Intel PS - 4.4GB
M1 Air - ARM PS Beta - 3.97GB
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
What are the cases/documented examples?

Here are a couple.
 
  • Like
Reactions: armoured