Sure, there are desktops that offer similar throughput, but how many sockets are we talking about on those machines, and how much power do those sockets consume?
According to Samsung, LPDDR5 runs 50 mV lower than conventional DDR5: 1.05 V vs. 1.10 V, so roughly a 4.5% reduction in supply voltage. I would translate that into watts, but I can't seem to find a clear current figure either for the sockets themselves or for the DIMMs, even in Samsung's and Micron's own datasheets. I'm sure it's out there; I just didn't see it.
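A rough sketch of that arithmetic, with the caveat that the V² scaling is the usual CMOS rule of thumb rather than anything from a datasheet, and without a current figure it only bounds the comparison:

```python
# Back-of-the-envelope from Samsung's published rail voltages.
# Assumption: dynamic power scales roughly with V^2 (CMOS rule of thumb).
DDR5_V = 1.10    # conventional DDR5 supply voltage
LPDDR5_V = 1.05  # LPDDR5 supply voltage per Samsung

delta_mv = (DDR5_V - LPDDR5_V) * 1000                # 50 mV
pct_voltage = (DDR5_V - LPDDR5_V) / DDR5_V * 100     # ~4.5% lower voltage
pct_power = (1 - (LPDDR5_V / DDR5_V) ** 2) * 100     # ~8.9% if P ~ V^2

print(f"{delta_mv:.0f} mV lower: {pct_voltage:.1f}% voltage, ~{pct_power:.1f}% dynamic power")
```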
As for socket counts, Gigabyte's MP72-HB0 has 16 slots across two processors; with four channels per CPU, that reduces to a minimum of four DIMMs. IBM's Power Systems S1014 has eight OMI serial memory channels with one DDIMM per channel.
To match the 200 GB/s memory bandwidth of the M1 Pro, my 14" MBP would need a whopping 4 DIMMs of DDR5, which would be very tricky to fit into a relatively light and portable laptop.
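For what it's worth, the DIMM math behind that figure (and the M1 Max one quoted further down) does check out, assuming DDR5-6400 modules, the fastest standard SODIMM speed mentioned below:

```python
import math

# Sanity-check the quoted DIMM counts, assuming DDR5-6400 modules.
MT_PER_S = 6400            # DDR5-6400 transfer rate
BUS_BYTES = 8              # 64-bit data bus per DIMM
per_dimm = MT_PER_S * BUS_BYTES / 1000   # 51.2 GB/s per DIMM

for chip, bw in (("M1 Pro", 200), ("M1 Max", 400)):
    n = math.ceil(bw / per_dimm)
    print(f"{chip}: {bw} GB/s -> {n} DIMMs ({n * per_dimm:.1f} GB/s)")
```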
With $3T in market cap, I'm certain they can find a solution for that... mostly invented problem. One that doesn't have to be SODIMM, either; JEDEC is moving to CAMM beyond DDR5-6400 MT/s, for instance. It's not that hard to stick some LPDDR5 modules on a daughterboard.
Even with SODIMM, a single connector is ~6.1 mm thick. Doubling them up would still leave you under the 15.5 mm thickness of a 14" MBP (12.2 mm of connector stack, with ~3.3 mm to spare for the rest of the chassis), and you could even triple them up vertically if you didn't mind a ghastly... 21.6 mm thickness.
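Here's that stack math as a quick sketch; note the ~3.3 mm chassis allowance is my own assumption, chosen so the doubled stack lands exactly at the 15.5 mm body:

```python
# Stack-height arithmetic from the figures above.
CONNECTOR_MM = 6.1   # one stacked SODIMM connector
CHASSIS_MM = 3.3     # assumed non-memory thickness budget (15.5 - 2 * 6.1)
MBP14_MM = 15.5      # 14" MacBook Pro body thickness

for n in (1, 2, 3):
    total = n * CONNECTOR_MM + CHASSIS_MM
    verdict = "fits" if total <= MBP14_MM else "needs a thicker body"
    print(f"{n} connector(s): ~{total:.1f} mm ({verdict})")
```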
As for room on the board, I'm looking at a board right now and it's not exactly crammed if you get my gist.
Or hey, why not repurpose the ECC pins as data lanes? Thin out the pin pitch a bit to make room for more, hopefully enough to double the bus from 64-bit to 128? (On DDR5 the ECC pins only add 8 bits per 32-bit subchannel, so most of the extra width would have to come from the tighter pitch.)
The M1 Max would need a full 8!
Yes please. This also ignores that these aren't just laptop processors; desktops use them too, where size and power aren't as important. The Mac Pro especially would benefit from replaceable RAM. But that's off-topic for this conversation.
Yeah, until you make a laptop that is actually power efficient like this one, and then suddenly it does.
Personally, I would rather spend the power savings from the system architecture and CPU on things like socketed memory. But I did some more digging and found a few threads doing a frankly better job of explaining the power efficiency of soldered RAM (I do wonder whether race-to-idle and factory undervolting, e.g. DDR3L, would make up some of the difference, though), so at least now I get the rationale. Not that marcan doesn't bring up a decent point, especially with the multi-die chips. I think it's ultimately just different priorities.

And RAM is really not my main concern; it's the storage I really don't like being soldered. RAM doesn't die nearly as easily as flash storage, which has a finite lifespan and instantly dooms any board it's soldered to to become unrecyclable e-waste. If they so much as swapped the storage for an M.2 2230 slot, I'd be pretty much satisfied on the repairability front.