A user is saying, and I quote:
"
My point is that the article is 99% wrong and the MacBooks actual GPU performance is nowhere near the consoles performance.
Maximum theoretical TFlops doesn’t mean you can hit those number if the GPU in the Mac is power and thermal starved.
For the consoles to hit their max 10.3TF performance they actually use massive heatsinks and consume 200W+. Let that sink in for moment, and now imagine Apple’s 60W 10.4TF false marketing…"
In your opinion, is this false or totally true?
At best he is imprecise.
TFLOPS is calculated using the following formula:
TFLOPS = cores × clock speed (Hz) × floating-point operations per clock cycle / 10^12
Of these factors, only the clock speed varies with power and thermal conditions.
You have to use the clock speed the GPU can actually sustain. You can't assume that, with unlimited power, the clock could be higher and then quote that theoretical number.
So if the M1's GPU could hit a clock speed of X with no power or thermal constraints but only reaches 0.7X in practice, you use 0.7X to calculate TFLOPS.
Thus his argument doesn't really make sense.
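As a rough illustration, here is that arithmetic as a minimal Python sketch. The figures are assumptions: 4096 ALUs (32 cores × 128 ALUs) and a sustained clock of roughly 1.27 GHz are commonly cited third-party estimates for the 32-core M1 Max GPU, not official Apple specifications.

```python
def peak_tflops(alus: int, clock_hz: float, flops_per_cycle: int = 2) -> float:
    """Peak TFLOPS = ALUs x clock (Hz) x FLOPs per cycle / 10^12.
    flops_per_cycle=2 assumes one fused multiply-add (FMA) per ALU per cycle."""
    return alus * clock_hz * flops_per_cycle / 1e12

# Assumed figures for the 32-core M1 Max GPU: 32 cores x 128 ALUs = 4096 ALUs,
# sustained clock around 1.27 GHz (third-party estimates, not Apple's spec sheet).
sustained = peak_tflops(alus=4096, clock_hz=1.27e9)
print(f"{sustained:.1f} TFLOPS at the sustained clock")  # ~10.4

# A hypothetical unconstrained clock ~40% higher would give a bigger number,
# but that is exactly the theoretical figure you are not allowed to quote.
hypothetical = peak_tflops(alus=4096, clock_hz=1.27e9 / 0.7)
print(f"{hypothetical:.1f} TFLOPS at a clock the chip never sustains")  # ~14.9
```

The point of the sketch is simply that the quoted TFLOPS number is already computed from the clock the GPU actually runs at, so power and thermal limits are baked into it.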