Much has been made of the M1 Max offering unusually high memory bandwidth (400 GB/s) for a laptop. But what does this mean, i.e., what's the basis of comparison? I ask because the M1 has unified memory, so when it's compared to other comparably priced workstation-class laptops, which bandwidth is being used as the baseline: the CPU RAM bandwidth, the GPU RAM bandwidth, or the sum of the two? Without knowing that, I don't see how any of those three comparisons would be meaningful.
More specifically, I've read that the tradeoff between DDR and GDDR RAM is that the former provides low latency (needed for CPUs) while the latter provides high bandwidth (needed for GPUs). So it seems Apple decided they couldn't accept the high latency of GDDR RAM for unified memory, and instead used DDR RAM and increased its bandwidth to the point that it was comparable to what workstation-class GDDR offers.
For instance, a comparably-equipped (64 GB RAM, 4 TB SSD, 120 Hz screen) 17" Dell 7760 workstation laptop, with an 8-core Xeon and A4000, costs about the same (within 10%) as a 16" M1 Max ($4900 for the MBP and $5440 for the Dell, based on current pricing on Dell's website). And an A4000 mobile offers 384 GB/s GPU RAM bandwidth (see link below).
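As a sanity check on those two figures, peak theoretical bandwidth is just bus width times transfer rate. The interface specs below are my assumptions from commonly reported numbers, not from Apple's or Dell's marketing: a 512-bit LPDDR5 bus at 6400 MT/s for the M1 Max, and a 256-bit GDDR6 bus at 12 GT/s for the mobile A4000:

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfers_per_s: float) -> float:
    """Theoretical peak bandwidth in GB/s: bytes per transfer x transfer rate."""
    return (bus_width_bits / 8) * transfers_per_s / 1e9

# Assumed spec: M1 Max, 512-bit LPDDR5 interface at 6400 MT/s
print(peak_bandwidth_gbs(512, 6400e6))  # -> 409.6

# Assumed spec: mobile RTX A4000, 256-bit GDDR6 at 12 GT/s
print(peak_bandwidth_gbs(256, 12e9))    # -> 384.0
```

Under those assumptions the two machines land within about 7% of each other, which is consistent with the "comparable to workstation GDDR" framing above.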
So it sounds like the Max's 400 GB/s isn't unprecedented when it comes to GPU bandwidth (compared to other laptops in its class), but might instead be unprecedented compared to their CPU bandwidth (?).