
quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Bandwidth and latency.
Well, DDR5 and LPDDR5 bandwidth are essentially identical. Yes, DIMMs have a latency drawback, since signals take longer to stabilise over longer traces. If I remember correctly, DDR5 has an advantage in that it uses higher voltages, so it can be clocked higher.

My view is that Mac Pros are expected to have tons of memory. I don't think Apple will build that many SKUs for the Mac Pro and risk being unable to move pre-built Mac Pros with memory capacities that nobody wants. Also, I don't think LPDDR5 modules support ECC, which is expected for Mac Pros, since they typically have a lot more memory and the chance of corrupted bits is correspondingly higher.

The M1 Pro and M1 Max show that Apple is not afraid to put out massive SoC dies, so further increasing the SLC size is entirely reasonable. This would mitigate the higher access latencies of DIMMed memory modules.
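
To put rough numbers on that (a back-of-envelope sketch in Python; the SLC and DRAM latencies and the hit rates below are illustrative assumptions, not measured figures):

# Average memory access time (AMAT) with a system-level cache in front of DRAM.
# All numbers are illustrative assumptions for the sake of the argument.
slc_hit_latency_ns = 30.0    # assumed latency of an SLC hit
dram_latency_ns = 110.0      # assumed latency of a miss that goes out to DIMMed DDR5
for slc_hit_rate in (0.80, 0.90, 0.95):
    amat = slc_hit_rate * slc_hit_latency_ns + (1 - slc_hit_rate) * dram_latency_ns
    print(f"hit rate {slc_hit_rate:.0%}: average access time ~ {amat:.0f} ns")
# A bigger SLC pushes the hit rate up, which pulls the average latency back down
# toward the on-package case even though each individual miss is slower.

The bigger the SLC, the smaller the share of accesses that ever see the DIMM latency.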
 
Last edited:

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Let's say Apple wants to hit over 1TB/s memory bandwidth on their Mac Pro system; you'd need 16 or more DDR5 sockets, all of which have to be filled in order to get the full bandwidth.
The 2019 Mac Pro already sports 12 DIMM slots, so I think it won't be a problem.

It'll be interesting to see how Apple decides to go with the AS Mac Pro.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
The 2019 Mac Pro already sports 12 DIMM slots, so I think it won't be a problem.

It'll be interesting to see how Apple decides to go with the AS Mac Pro.

12 slots, sure, but it's still 6-channel RAM. And you don't have to fill all of them. The socketed RAM approach simply does not scale. What if Apple wants 1.6TB/s RAM bandwidth (4x M1 Max)? That would be 32 sockets, all of which have to be filled... this will quickly get very silly :D
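
As a rough sketch of that arithmetic (assuming roughly 51.2 GB/s per DDR5-6400 DIMM with one DIMM per channel; the per-DIMM figure is an assumption, not anything Apple has stated):

# Back-of-envelope DIMM count for a target memory bandwidth.
# Assumes DDR5-6400 on a 64-bit channel, one DIMM per channel:
# 6400 MT/s * 8 bytes ~ 51.2 GB/s per DIMM.
import math
gbps_per_dimm = 51.2
for target_gbps in (400, 1000, 1600):   # M1 Max, "over 1TB/s", 4x M1 Max
    dimms = math.ceil(target_gbps / gbps_per_dimm)
    print(f"{target_gbps} GB/s target -> about {dimms} fully populated DIMM slots")

That works out to roughly 8, 20 and 32 slots respectively, all of which would have to be populated to reach the target bandwidth.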
 
  • Like
Reactions: throAU

throAU

macrumors G3
Feb 13, 2012
9,204
7,357
Perth, Western Australia
12 slots, sure, but it's still 6-channel RAM. And you don't have to fill all of them. The socketed RAM approach simply does not scale. What if Apple wants 1.6TB/s RAM bandwidth (4x M1 Max)? That would be 32 sockets, all of which have to be filled... this will quickly get very silly :D
Yup.

Far easier to treat the CPU/GPU/memory as a single socketable module and scale that way. Bandwidth between the individual sockets may be lower, but that can be made up for by the ability to scale and by caching anything the cores need locally in L1/L2/L3/local DRAM on each package.
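
A crude way to picture that trade-off (the local/remote bandwidth figures and the remote-traffic fractions below are made-up, purely illustrative numbers):

# Effective streaming bandwidth per package when some fraction of traffic has to
# cross the inter-package link. Purely illustrative numbers.
local_bw_gbps = 400.0    # assumed on-package memory bandwidth (M1 Max-like)
remote_bw_gbps = 100.0   # assumed bandwidth of the socket-to-socket link
for remote_fraction in (0.05, 0.20, 0.50):
    # time-per-byte weighted average of the two paths
    eff = 1.0 / ((1 - remote_fraction) / local_bw_gbps + remote_fraction / remote_bw_gbps)
    print(f"{remote_fraction:.0%} remote traffic -> ~{eff:.0f} GB/s effective per package")
# Keeping the working set cached locally keeps the remote fraction small, so the
# aggregate still scales close to N packages x local bandwidth.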


Will this be a compromise for those who need huge RAM but not much processing power, and are forced into buying CPUs and GPUs as part of the package just to get the RAM capacity (for example)? Sure. But that's such an extreme edge case that Apple isn't interested in it. Those users can use/buy PCs.

For the markets Apple is targeting, the compromises made with the on-package GPU/RAM are probably worth it for the overall throughput improvement gained.
 