A user writes, and I quote:
"
My point is that the article is 99% wrong and the MacBooks actual GPU performance is nowhere near the consoles performance.
Maximum theoretical TFlops doesn’t mean you can hit those number if the GPU in the Mac is power and thermal starved.
For the consoles to hit their max 10.3TF performance they actually use massive heatsinks and consume 200W+. Let that sink in for moment, and now imagine Apple’s 60W 10.4TF false marketing…"
Ah yes, the classic "I don't understand what is happening, so they must be lying" reaction. Well, the M1 also hits 2.6 TFLOPS at only 10 watts (I verified it myself), so Apple's claims are reasonable and realistic.
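For reference, the peak FP32 figure that both the console vendors and Apple quote is simply ALU count × clock × 2 (one fused multiply-add per lane per cycle). Here is a minimal sketch of that arithmetic; the M1 configuration used (8 cores × 128 FP32 lanes at roughly 1.28 GHz) is the commonly reported one, not an official Apple spec:

```swift
import Foundation

// Peak FP32 throughput = FP32 lanes × clock × 2 (an FMA counts as 2 FLOPs per lane per cycle).
// The M1 figures below (8 cores × 128 lanes, ~1.28 GHz) are commonly reported, not official.
func peakTeraflops(cores: Int, lanesPerCore: Int, clockGHz: Double) -> Double {
    let lanes = Double(cores * lanesPerCore)
    return lanes * clockGHz * 2.0 / 1000.0   // GFLOPS -> TFLOPS
}

let m1 = peakTeraflops(cores: 8, lanesPerCore: 128, clockGHz: 1.278)
print(String(format: "M1 peak FP32: %.2f TFLOPS", m1))   // ~2.62 TFLOPS
```

The same formula applied to the consoles' shader counts and clocks gives their 10.3 TF number; it says nothing about how much power it takes to actually sustain that rate, which is where the architectures differ.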
To be honest, I understand why the commenter might be confused; after all, it does sound very unlikely that Apple has such a big advantage in perf/watt. But they actually do. There are a few key factors:
- Apple does not have to rely on power-hungry RAM to feed their GPU. They use high-tech multichannel low-power RAM that is much more expensive but draws somewhere around 10x less power than GDDR6 for comparable bandwidth (see the back-of-envelope sketch after this list).
- Apple GPUs are very streamlined devices. Their SIMD processors are likely simpler than what Nvidia or AMD use, and hence more energy-efficient. They lack many tricks that other GPUs have, such as dual-rate FP16; their handling of control flow divergence is simple and efficient; and their scheduling hardware is likely simpler too, since they don't have to invoke very small shading kernels, etc.
- Apple GPUs have their roots in mobile phones. They were developed to consume as little power as possible (while still being full-featured GPUs), so they probably use every trick in the book to lower power consumption. Nvidia and AMD have neither Apple's expertise here, nor are their architectures focused on the lowest possible power draw: their primary market is still desktop GPUs, and their design choices reflect that.
- Apple has a process advantage that gives them that extra bit of power efficiency (but this alone is nowhere near enough to explain their lead).
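To put the RAM point in perspective: DRAM interface power is roughly bandwidth × energy per bit, so at equal bandwidth the power ratio is simply the energy-per-bit ratio. A minimal sketch; the pJ/bit values below are illustrative placeholders chosen to show a ~10x gap, not measured figures:

```swift
import Foundation

// DRAM interface power ≈ bandwidth × energy per bit.
// At equal bandwidth, the power ratio is just the pJ/bit ratio, which is what
// the "~10x less power for comparable bandwidth" claim above boils down to.
func interfacePowerWatts(bandwidthGBps: Double, picojoulesPerBit: Double) -> Double {
    let bitsPerSecond = bandwidthGBps * 1e9 * 8.0
    return bitsPerSecond * picojoulesPerBit * 1e-12
}

let bandwidth = 400.0   // GB/s, roughly M1 Max class
let gddr6 = interfacePowerWatts(bandwidthGBps: bandwidth, picojoulesPerBit: 7.0)  // placeholder pJ/bit
let lpddr = interfacePowerWatts(bandwidthGBps: bandwidth, picojoulesPerBit: 0.7)  // placeholder pJ/bit
print(String(format: "GDDR6-style: %.1f W, LPDDR-style: %.1f W", gddr6, lpddr))
// With a 10x pJ/bit gap, the interface power gap is also 10x (~22 W vs ~2 W in this sketch).
```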
Basically, this is what I have been repeating for the last two years: Apple currently has the most power-efficient GPU IP in the industry, by a large margin. We have had their entry-level hardware for a while now, we have benchmarks, and it was already quite clear what G13 can do. I find it quite funny that some people are surprised. Maybe Nvidia can catch up in perf/watt in two years. Who knows.
P.S. You are free to point that user to this post.
P.P.S. Of course, the simple fact is that Apple can do all this because their tech is very, very expensive. Apple Silicon is basically power-efficient console tech for the desktop market. Or you can view it as a downsized custom supercomputer. Not even the RTX 3090 has a 512-bit RAM bus, because it is too expensive for them. And the M1 Max is a huge die, larger than Nvidia's GA104 or the largest Xeons. These large Mac chips are not really a threat to the regular PC market with its cheaper components; they are simply too expensive for a normal user.
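For a sense of what that 512-bit bus buys, peak DRAM bandwidth is just bus width times per-pin transfer rate. A rough sketch using the publicly quoted M1 Max configuration (512-bit LPDDR5 at 6400 MT/s) next to the RTX 3090's 384-bit GDDR6X at 19.5 Gbps:

```swift
import Foundation

// Peak DRAM bandwidth = (bus width in bits / 8) × per-pin data rate in GT/s.
func bandwidthGBps(busWidthBits: Int, gigatransfersPerSec: Double) -> Double {
    return Double(busWidthBits) / 8.0 * gigatransfersPerSec
}

let m1Max = bandwidthGBps(busWidthBits: 512, gigatransfersPerSec: 6.4)    // LPDDR5-6400
let rtx3090 = bandwidthGBps(busWidthBits: 384, gigatransfersPerSec: 19.5) // GDDR6X
print(String(format: "M1 Max: %.0f GB/s, RTX 3090: %.0f GB/s", m1Max, rtx3090))
// ~410 GB/s vs ~936 GB/s: Nvidia gets more raw bandwidth, but from a much
// hotter memory technology on a narrower, cheaper bus.
```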