
JouniS

macrumors 6502a
Nov 22, 2020
638
399
Just a personal opinion.

For me and my workflow I don't really want more cores, I want more single-threaded performance, because that is what I would notice the most. Doubling the cores would have very little performance gain in my case.
My situation is the opposite. I didn't see any significant performance difference when I replaced a 2010 iMac with a 2020 iMac, and I don't see it either between a 2017 MBP 15" and an M1 MBA. The difference is there if I deliberately pay attention to it or if I try to measure it, but it doesn't affect normal use. In most cases, network latency has a bigger effect than CPU speed on the performance of the software I use.

What I did see is that on the 10-core iMac, I can compile and run tests for macOS and Linux versions of software at the same time, and there are still enough free CPU cores for normal desktop use. It would be even better if I could do that without the fans making angry noises.
 

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
With regard to keeping the unified memory whilst increasing capacity: would it be feasible to continue adding dies around the SoC, much like a modern games console? Or would Apple consider some hybrid solution with the traditional RAM sticks?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
With regard to keeping the unified memory whilst increasing capacity: would it be feasible to continue adding dies around the SoC, much like a modern games console?

That's the most obvious approach. Apple designs their RAM systems like a GPU.

Or would Apple consider some hybrid solution with the traditional RAM sticks?

I don't see how they could do it whilst maintaining the high bandwidth they will need, especially on the high end products.
 

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
That's the most obvious approach. Apple designs their RAM systems like a GPU.
Thanks. I can understand this approach for the next iMac with a 128GB ceiling (4 × 4 8GB dies), but for a Mac Pro? Perhaps they will go with some bespoke solution, like a module.
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
With regard to keeping the unified memory whilst increasing capacity: would it be feasible to continue adding dies around the SoC, much like a modern games console? Or would Apple consider some hybrid solution with the traditional RAM sticks?
I could see it as an option for the Mac Pro, but it would probably be used more to house swap. Just another tier on top of the ones we have today:
Tier 1: CPU cache
Tier 2: SoC RAM (slower than tier 1 but cheaper for more capacity)
Tier 3: traditional SODIMM RAM, acting like swap on SSD/HDD but faster
Tier 4: SSD swap, when tier 3 is not enough
 

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
I could see it as an option for the Mac Pro, but it would probably be used more to house swap. Just another tier on top of the ones we have today:
Tier 1: CPU cache
Tier 2: SoC RAM (slower than tier 1 but cheaper for more capacity)
Tier 3: traditional SODIMM RAM, acting like swap on SSD/HDD but faster
Tier 4: SSD swap, when tier 3 is not enough
That’s exactly what I was thinking. The SoC unified memory will likely be a standard on all M-series chips, and the Mac Pro may be the only machine to offer SODIMM as an option.
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
That’s exactly what I was thinking. The SoC unified memory will likely be a standard on all M-series chips, and the Mac Pro may be the only machine to offer SODIMM as an option.
At this point we are all just speculating but I really could see this as an option. But the consumer machines (laptops, mini, iMac) will all continue to have just 3 tiers with only SSD swap if SoC RAM isn't enough (it's cheaper and "good enough" for 95% of the usage of consumer machines).
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Thanks. I can understand this approach for the next iMac with a 128GB ceiling (4 × 4 8GB dies), but for a Mac Pro? Perhaps they will go with some bespoke solution, like a module.

By late next year, when the new Mac Pro is expected, we will probably have high-density LPDDR5X/DDR5 chips which will make it less of an issue. I mean, Nvidia's new HPC product is supposed to leverage LPDDR5, so if Nvidia can use that stuff in datacenter supercomputers, Apple can use it for the Mac Pro.

I could see it as an option for the Mac Pro, but it would probably be used more to house swap. Just another tier on top of the ones we have today:
Tier 1: CPU cache
Tier 2: SoC RAM (slower than tier 1 but cheaper for more capacity)
Tier 3: traditional SODIMM RAM, acting like swap on SSD/HDD but faster
Tier 4: SSD swap, when tier 3 is not enough

Possible, but it also introduces certain complexity and performance issues... I think Apple will use a simpler approach (even if it ends up costing more — Mac Pro is already expensive enough)
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Just a personal opinion.

For me and my workflow I don't really want more cores, I want more single-threaded performance, because that is what I would notice the most. Doubling the cores would have very little performance gain in my case.

Easiest way to do this would be to increase frequency and power consumption. It will be interesting to see if Apple takes this route in the Mac Pro.
 

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
By late next year, when the new Mac Pro is expected, we will probably have high-density LPDDR5X/DDR5 chips which will make it less of an issue. I mean, Nvidia's new HPC product is supposed to leverage LPDDR5, so if Nvidia can use that stuff in datacenter supercomputers, Apple can use it for the Mac Pro.
To meet the current Mac Pro’s 1.5TB ceiling, though? Seems like an awfully expensive way just to keep the dies on the logic board.
 

darngooddesign

macrumors P6
Jul 4, 2007
18,366
10,128
Atlanta, GA
Not particularly. Consumers of the 27-inch iMac version may not be too happy about this new iMac. Perhaps it would be more accurate to say that it disrupted the 21.5-inch Intel iMac market. Which is no disruption at all...
Sure it did. Intel iMacs are going away, disrupted by AS iMacs.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Sure it did. Intel iMacs are going away, disrupted by AS iMacs.

In the future.

I have another browser tab open contemplating a 27 inch Intel Mac.

I have to pick up a repaired system this week and I'm going to try to take a look at a 27 inch iMac to see how warm it runs with Big Sur.
 

skaertus

macrumors 601
Feb 23, 2009
4,252
1,409
Brazil
Sure it did. Intel iMacs are going away, disrupted by AS iMacs.
Sorry, this is not a disruption. It is merely a replacement.

The original iPhone disrupted the market, banishing Blackberries and other similar devices. The original iPad was also disruptive, as netbooks simply vanished from stores. You can even say the original iMac was disruptive, as it changed the market. They were revolutionary and were followed by many copycats, thereby creating new market categories which did not exist before.

The new iMac simply fits in an existing market category, previously occupied by the 21.5-inch iMac. It is the next-generation iMac, much better than the previous one.

But the new iMac does not create a new market category. It does not revolutionize the whole market. It will not completely destroy the Mac Pro, the 27-inch iMac, and the custom desktop PCs. It will not replace these devices by offering a better, cheaper, no-brainer alternative. It will simply fit in the previous, existing category and price point which was previously occupied by the 21.5-inch iMac.

It is an evolution, not a revolution. No disruption at all.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
To meet the current Mac Pro’s 1.5TB ceiling, though? Seems like an awfully expensive way just to keep the dies on the logic board.

Yeah, I am also very curious as to how they will solve it. It is also very much possible that they will simply use regular DDR5, just with more memory channels, e.g. eight RAM slots with each slot connected to its own controller (512-bit bus in total). Samsung has already announced DDR5-7200 with capacities of 512GB per DIMM, which would result in a bandwidth of 460GB/s and a total RAM capacity of 4TB... pair it with 512MB of LLC and you've got a really powerful memory subsystem.

Of course, such a solution would also be extremely costly, and RAM upgrades won't be as straightforward as one might think — all RAM slots will need to be populated, and all RAM modules will need to be identical.

What confuses me, however, is that the recently announced Nvidia Grace datacenter architecture — which is very similar in design to Apple Silicon — relies on LPDDR5X instead of DDR5 DIMMs. There must be a good reason Nvidia went with mobile RAM instead...
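The figures in the post above can be sanity-checked with a bit of arithmetic (a sketch assuming a 512-bit bus made of eight 64-bit channels, one DIMM per channel):

```python
# DDR5-7200 performs 7200 million transfers per second;
# a 512-bit bus moves 512 / 8 = 64 bytes per transfer.
transfers_per_sec = 7200 * 10**6
bytes_per_transfer = 512 // 8

bandwidth_gb_s = transfers_per_sec * bytes_per_transfer / 10**9
print(f"{bandwidth_gb_s:.1f} GB/s")  # 460.8 GB/s

# Eight slots populated with 512 GB DIMMs give the total capacity.
capacity_tb = 8 * 512 / 1024
print(f"{capacity_tb} TB")  # 4.0 TB
```

The ~460GB/s and 4TB figures quoted in the post fall straight out of those two multiplications.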
 
Last edited:

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
Yeah, I am also very curious as to how they will solve it. It is also very much possible that they will simply use regular DDR5, just with more memory channels, e.g. eight RAM slots with each slot connected to its own controller (512-bit bus in total). Samsung has already announced DDR5-7200 with capacities of 512GB per DIMM, which would result in a bandwidth of 460GB/s and a total RAM capacity of 4TB... pair it with 512MB of LLC and you've got a really powerful memory subsystem.

Of course, such a solution would also be extremely costly, and RAM upgrades won't be as straightforward as one might think — all RAM slots will need to be populated, and all RAM modules will need to be identical.

What confuses me, however, is that the recently announced Nvidia Grace datacenter architecture — which is very similar in design to Apple Silicon — relies on LPDDR5X instead of DDR5 DIMMs. There must be a good reason Nvidia went with mobile RAM instead...
I'm starting to think that Apple may push the 'MPX module' concept further. Potentially they could use it not just for their own GPU cards, but for RAM as well.

For instance, the 'unified' RAM on the logic board could be anywhere between 32GB and 64GB (assuming they use 4 dies around the SoC) - enough for the OS and a good portion of the intended user base's apps.

To then get more RAM, Apple could produce a small number of MPX modules with very high capacities, since the surface area of the module could support many times more RAM dies. Think of the Afterburner card in scale.

This would at least relate to the idea that the next-gen Mac Pro would be around half the size of the current model, because all the expansion space would be in one area.
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
I'm starting to think that Apple may push the 'MPX module' concept further. Potentially they could use it not just for their own GPU cards, but for RAM as well.

For instance, the 'unified' RAM on the logic board could be anywhere between 32GB and 64GB (assuming they use 4 dies around the SoC) - enough for the OS and a good portion of the intended user base's apps.

To then get more RAM, Apple could produce a small number of MPX modules with very high capacities, since the surface area of the module could support many times more RAM dies. Think of the Afterburner card in scale.

This would at least relate to the idea that the next-gen Mac Pro would be around half the size of the current model, because all the expansion space would be in one area.
Isn't the problem with this approach that the PCIe interface is slower and has more latency than SODIMM slots? (Sure, it's faster than swapping to SSD, but not ideal if the goal is a system with as much performance as possible.)
 

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
Isn't the problem with this approach that the PCIe interface is slower and has more latency than SODIMM slots? (Sure, it's faster than swapping to SSD, but not ideal if the goal is a system with as much performance as possible.)
I'm honestly not sure. I'm sure they'll figure something out anyway!
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I'm starting to think that Apple may push the 'MPX module' concept further. Potentially they could use it not just for their own GPU cards, but for RAM as well.

For instance, the 'unified' RAM on the logic board could be anywhere between 32gb and 64gb (assuming they use 4 x dies around the SoC) - enough for the OS and a good portion of the intended user base's apps.

To then get more RAM, Apple could produce a small number of MPX modules with very high capacities, since the surface area of the module could support many times more RAM dies. Think of the Afterburner card in scale.

This would at least relate to the idea that the next-gen Mac Pro would be around half the size of the current model, because all the expansion space would be in one area.

This doesn't make too much sense to me, because now you are locking the RAM behind a slow MPX interconnect. The same consideration applies to GPUs, by the way — if you put a GPU in a separate slot, you lose all the advantages of the unified memory, which is one of the main selling points of Apple Silicon for professional workloads.

What I can see instead is that the entire system — CPU+GPU+NPU+RAM — is shipped as an MPX module. And a Mac Pro could host multiple such modules, allowing you to build a ridiculously powerful supercomputer-like NUMA system. I believe there are some hints that Apple could be planning something like this: the unified cooling system of the Mac Pro (which allows the MPX boards to be very powerful) and the Metal peer group API (for discovering grouped GPUs).

Or of course, a more traditional system as discussed previously (with CPU+GPU residing on the mainboard, possibly slotted RAM and MPX modules for Afterburner etc.).
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Of course, such solution would also be extremely costly and RAM upgrades won't be as straightforward as one would think — all RAM slots will need to be populated, and all RAM modules would need to be identical.
Typically, server-grade boards have a minimum number of DIMM slots that must be populated. If I'm not wrong, for Mac Pros the minimum is 4 DIMMs, with 6 DIMMs resulting in maximum bandwidth. 12 DIMMs are needed for the Mac Pro's 1.5 TB of memory, but only with the higher-core-count Xeons, presumably because they have more pins for more memory channels. I would think the AS Mac Pro will be similar; the question is just how wide Apple will go.

Also, with DDR5-7200, I think cooling for the RAM DIMMs will probably be more demanding than what's required for the SoC.
 

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,693
12,917
What I can see instead is that the entire system — CPU+GPU+NPU+RAM — is shipped as an MPX module. And a Mac Pro could host multiple such modules, allowing you to build a ridiculously powerful supercomputer-like NUMA system. I believe there are some hints that Apple could be planning something like this: the unified cooling system of the Mac Pro (which allows the MPX boards to be very powerful) and the Metal peer group API (for discovering grouped GPUs).
Now that makes a lot more sense. I think it would take a while for the user base to get their head around not being able to cherry-pick certain components, but in terms of pure efficiency it would be the most logical approach. After all, the chances are that a customer who needs one high-end component would likely also need powerful components in the other respective categories.

How - or if - modules of different specs could be combined remains to be seen, but it's exciting to say the least.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Typically, server-grade boards have a minimum number of DIMM slots that must be populated. If I'm not wrong, for Mac Pros the minimum is 4 DIMMs, with 6 DIMMs resulting in maximum bandwidth. 12 DIMMs are needed for the Mac Pro's 1.5 TB of memory, but only with the higher-core-count Xeons, presumably because they have more pins for more memory channels. I would think the AS Mac Pro will be similar; the question is just how wide Apple will go.

Also, with DDR5-7200, I think cooling for the RAM DIMMs will probably be more demanding than what's required for the SoC.

Just that with Xeons, if you leave some RAM slots empty, you probably won't even notice it. With Apple Silicon, you might have just lost a good chunk of your GPU performance...
 

Bodhitree

macrumors 68020
Original poster
Apr 5, 2021
2,085
2,217
Netherlands
It’s harder to make a product disruptive by design than by value. You could argue that the iPhone was disruptive by design, in that it unified a set of other functionalities into a package with few compromises. You could try to pursue that strategy on the desktop.

What could you try to unify in the home into a single device? Well, the electronics that most homes possess are the music system and the television and the games console and the computer. Perhaps Apple could be persuaded to create a single Home System which allowed high quality hifi music playback, watching television on a large screen, and doing gaming and some computing tasks from the couch.

They already have all the pieces needed for that — HomePod like music playback, the iMac and MacOS for the computing backbone, all it would need is a large enough screen and enough options for streaming video content. I can think of a few other features that would be compelling, like FaceTime on a large screen.

Still, it’s important to understand that what drives the pressure to unify devices on your person is your limited carrying capacity, namely the size of your pockets and the weight you can carry, which doesn’t apply in the home. You would need to find another incentive, and I think that has to be overall value. By unifying these devices you could prevent duplication: less hardware, higher quality, and better value.
 
Last edited:

cvtem

macrumors member
Jun 8, 2016
37
32
5) Memory management on the M1 Macs seems to be pretty good compared to Intel Macs. I did have some issues with Safari hogging memory, and some cloud storage services seem to be unusually memory-hungry (OneDrive, Google Backup & Sync), but overall it's not too bad. I'd agree that most Linux distros appear to consume far less memory than MacOS or Windows. But again, how much of the software you need runs on Linux? There are a lot of useful utility apps, but not a huge offering of mainstream packages. Davinci Resolve and PixInsight are a couple that I use on Linux.

I actually compared the three OSes on memory; Windows and MacOS are equally bad, and the M1 is not exceptional in most cases.
Linux was a standout as usual. Most surprising was how little memory Chrome on Linux uses compared to Mac and Windows, but that's digressing.
Due to 16GB being way less than I need, I have been spending a bit of time watching this.

Hint 1: disable GPU acceleration in Lightroom. I could get 13GB of RAM used within a minute with it on, and about 3GB with GPU acceleration off, with no noticeable difference in performance.
Lightroom is not alone with this anomaly, but it's the worst I found.
 