I also work on server-class stuff for a living. To be quite blunt, the difference isn't big enough to quantify. There have been numerous tests done with memory, and the difference is marginal. The biggest leap in technology that makes a real difference is a change in processor architecture. As an example, there are boards out there that accept either DDR3 or DDR4 and will run 7th-gen processors. The difference between DDR3 and DDR4 with the same processor was so minor it wasn't even worth testing further.

UDIMM vs RDIMM is basically just unregistered vs registered RAM. UDIMM is faster if you're talking a single channel; once you go to 2 or 3 channels it no longer is. The way RDIMM handles parity and correction is better too. It may not actually fix the erroneous write, but the memory knows the error occurred and reports it to the memory controller; UDIMM does not. You can also use x4 devices with RDIMM, while UDIMM is limited to x8. What's the difference? With x4 devices the ECC can correct all possible errors from a single DRAM device.
In short, x4 RDIMM is much better. The only scenario where UDIMM may shine is single-channel mode.
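To make the x4 vs x8 point concrete, here's a small Python sketch of the usual symbol-correction argument. The 72-bit bus width and the 4-bit single-symbol-correcting code are assumptions for illustration, not a description of any specific memory controller:

```python
# Back-of-the-envelope: why x4 DRAMs allow whole-device correction
# ("chipkill"-style) while x8 DRAMs generally don't.
# Assumptions for this sketch, NOT any specific controller:
#   - a 72-bit ECC data path (64 data bits + 8 check bits)
#   - a single-symbol-correcting code with 4-bit symbols

BUS_BITS = 72        # 64 data + 8 ECC bits per transfer
SYMBOL_BITS = 4      # symbol width of the assumed ECC code
CORRECTABLE = 1      # the assumed code fixes at most 1 bad symbol per word

for device_bits in (4, 8):                    # x4 vs x8 DRAM devices
    devices = BUS_BITS // device_bits         # chips feeding one word
    bad_symbols = device_bits // SYMBOL_BITS  # symbols a dead chip corrupts
    verdict = "correctable" if bad_symbols <= CORRECTABLE else "NOT correctable"
    print(f"x{device_bits}: {devices} devices per rank, "
          f"a dead chip corrupts {bad_symbols} symbol(s) -> {verdict}")
```

A dead x4 chip lands entirely inside one 4-bit symbol, so the code can repair it; a dead x8 chip spans two symbols, which is beyond what a single-symbol-correcting code can fix.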
Hi. Thanks for chiming in.
How much would you say "isn't big enough to quantify" amounts to? The performance issue I'm having isn't a human judging responsiveness (like a human interacting with a GUI or a virtualised machine on a remote server) but software failing / stopping when the hardware takes too long to respond.
My machine is struggling to keep up even though I've set my software to its most forgiving setting. I'd like to adjust that setting to make the software feel responsive again, and most importantly, I'd like the machine to keep up and not fail on me.
If I may run some numbers to keep things clear:
- Today's settings amount to the following numbers: 94 accesses per second to its dedicated hardware, including up to 2KB of IO per access, with a delay of <11 ms; otherwise it fails / stops.
- I need to get as close as possible to it handling 1500 accesses per second to its dedicated hardware, including up to 2KB of IO per access, with a delay of <0.067 ms* (see the quick calculation below the footnote).
... all while keeping up with the rest of macOS, drivers, and third-party software.
I'm just asking; it may well be that UDIMM vs RDIMM won't matter in my case. I know we're talking about differences of microseconds when dealing with memory speeds. But then again, I don't know how many of those microsecond delays add up within each millisecond.
* I may settle for 750 accesses per second, up to 1024KB of IO per access, at a maximum delay of ~0.33 ms if the cMP is too old to keep up with my highest wishes. Hoping to buy myself some time before I need to go for a 7,1. (You know how costly maxed-out modern Apple hardware is.)
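If I run those numbers quickly in Python (a minimal sketch; it assumes the tolerated delay is simply the reciprocal of the access rate, which matches my current 94/s vs <11 ms pairing, and it uses a ballpark ~100 ns for a single uncached DRAM access):

```python
# Quick numbers for the three scenarios above. One assumption baked in:
# the tolerated delay is roughly the reciprocal of the access rate
# (which matches the current 94/s <-> <11 ms pairing).

scenarios = [
    ("current",  94,   2 * 1024),     # 94/s, up to 2 KB per access
    ("target",   1500, 2 * 1024),     # 1500/s, up to 2 KB per access
    ("fallback", 750,  1024 * 1024),  # 750/s, up to 1024 KB per access
]

DRAM_ACCESS_NS = 100  # ballpark latency of one uncached DRAM access

for name, rate, bytes_per_access in scenarios:
    budget_ms = 1000.0 / rate                        # per-access time budget
    throughput_mb_s = rate * bytes_per_access / 1e6  # sustained IO rate
    dram_share = (DRAM_ACCESS_NS / 1e6) / budget_ms  # fraction of the budget
    print(f"{name:8s}: budget ~{budget_ms:.3f} ms/access, "
          f"~{throughput_mb_s:.2f} MB/s sustained, "
          f"one DRAM access is {dram_share:.4%} of the budget")
```

By this reading, the target and fallback budgets come out to ~0.67 ms and ~1.33 ms rather than the 0.067 / 0.33 ms I quoted above, so one of my figures may be off by a factor of ten. Either way, a single DRAM access is a tiny fraction of any of those budgets, which suggests RDIMM-vs-UDIMM latency differences alone are unlikely to make or break the deadline.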
Ah, well ... Because I want 48 or 96GB of RAM I need more than one stick per processor, so RDIMM might be the only way I can go. I may have to adjust my hopes / settings to whatever that leads to ...