After watching the latest keynote I looked up the memory bandwidth figures for x86 chips and they seem to be in the 50-90 GB/s range (the latter for Alder Lake with DDR5). GPUs hit 400 GB/s+. What makes the memory bandwidth so slow? The distance of the RAM slots from the CPU?
Lots of good replies already, but the short of it is that they don't have to. x86 CPUs are straight number crunchers, apart from the Xeon-SP platforms, which do have the same 512-bit buses. Why? Because they handle large datasets that need the memory bandwidth, just like GPUs do.
Plus, narrower buses reduce cost on both the CPU die and the memory, since the two have to be paired together, and the lost width can often be made up with raw speed, at the cost of higher power.
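For a rough sense of where those numbers come from, theoretical peak bandwidth is roughly channels x bus width x transfer rate. A quick sketch (the configurations below are illustrative assumptions, not tied to any specific SKU):

    # Peak bandwidth in GB/s = channels * (bus width in bytes) * transfers per second
    def peak_bandwidth_gbps(channels: int, bus_width_bits: int, mt_per_s: float) -> float:
        return channels * (bus_width_bits / 8) * mt_per_s * 1e6 / 1e9

    # Typical desktop: dual-channel DDR5-4800, 64 bits per channel
    print(peak_bandwidth_gbps(2, 64, 4800))    # ~76.8 GB/s

    # Server-class part with 8 channels of DDR4-3200
    print(peak_bandwidth_gbps(8, 64, 3200))    # ~204.8 GB/s

    # GPU-style: a single 256-bit GDDR6 interface at 14 GT/s effective
    print(peak_bandwidth_gbps(1, 256, 14000))  # ~448 GB/s

Same formula everywhere; the GPU number is big mostly because the bus is wide and the transfer rate is high, not because of where the chips sit.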
So, simply put, it's a design choice based on what Intel and AMD think their end users' use cases will be.
Could Intel/AMD make a Xeon-SP/EPYC laptop in the same power envelope as Apple's? Absolutely. They already do it in the server space now.
Would it cost an astronomical amount of money, similar to the M1 Max? Also yes. They just don't think there's a market for it, and I'm inclined to agree.
The M1 Max/Pro is the worst of both worlds: too large to scale clock frequencies, too expensive to have broad appeal, and impossible for the end user or their IT support to service.
The real answer would have been a knock-off of AMD's Infinity Fabric, which is truly revolutionary, hence AMD beating Intel with a cudgel for the last couple of years.
With a chiplet system they could have done nothing more than fuse 4x M1 chiplets and hit the same power targets, a similar GPU core count of 28, but quadruple the CPU cores at 32.
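The arithmetic behind that, assuming the base M1's 8 CPU cores and the 7-GPU-core binned variant:

    # Hypothetical 4x M1 chiplet package (assumed per-chiplet counts: 8 CPU cores, 7 GPU cores)
    chiplets = 4
    cpu_cores = chiplets * 8  # 32 CPU cores, quadruple a single M1
    gpu_cores = chiplets * 7  # 28 GPU cores, in the same ballpark as an M1 Max
    print(cpu_cores, gpu_cores)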