Yep, Apple's matrix units are quite good.
Oh, I don't. But some people might. There was, for example, a rather heated discussion on chess engines, some of which rely on custom neural network implementations. You want SIMD throughput for this.
SIMD is also increasingly used in general computing, for example for things like UTF-8 processing, JSON parsing, or hash table lookups (Google's Swiss Tables), but for these things latency still trumps throughput, and Apple's wide design works very well.
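To illustrate the hash table case: the core trick in Swiss Tables is probing a group of one-byte hash "tags" with a single SIMD comparison instead of sixteen scalar ones. Here's a minimal sketch of that idea using Swift's portable SIMD types (the real thing is C++ with SSE2/NEON intrinsics; the function name here is just illustrative):

```swift
// Compare 16 one-byte hash tags against a candidate tag in one shot,
// then read off which slots matched. Mirrors the Swiss Tables group
// probe in spirit only; this is not Google's actual implementation.
func matchingSlots(group: SIMD16<UInt8>, tag: UInt8) -> [Int] {
    let mask = group .== tag             // one vector compare, 16 lanes
    return (0..<16).filter { mask[$0] }  // candidate slot indices
}

let group = SIMD16<UInt8>(7, 3, 9, 7, 0, 1, 7, 4, 2, 6, 5, 8, 7, 3, 1, 0)
print(matchingSlots(group: group, tag: 7))  // [0, 3, 6, 12]
```

One compare answers "which of these 16 slots might hold my key?", which is why the latency of a handful of instructions, not sustained vector throughput, is what these workloads care about.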
Ah yes.... the Stockfish benchmark! Truthfully, I had never heard of it before the thread on MacRumors.
I remember that thread - ultimately I tapped out of it. I got the sense that those who wanted the best Stockfish results now should go buy an Intel xyzzy or AMD abc and fill their boots while we wait for somebody to optimize it for M1!
There didn't appear to be any acknowledgment of the difference between latent performance potential (whether utilized or not, in the case of the M1) and the role that optimization plays when an architecture relatively new to performance laptops is released!
How do you fake Geekbench scores?
This is going to be a sick laptop with 32GB of quad-channel DDR5 RAM too
I agree, I don't think that you can fake Geekbench scores.
That being said, that might be the wrong question... for me anyway.
The question I'd ask is: "Are these scores and the scoring differential indicative and representative of the performance that somebody would see in the real applications that Geekbench purports to show?"
For example, on the topic of Geekbench GPU results, we already know that Geekbench does not show accurate results for the M1 Pro/Max (i.e. not indicative of real GPU performance), yet I cannot count the number of threads where folks have quoted the initial AnandTech review of the M1 Pro/Max without also acknowledging Andrei's (the AnandTech author's) own observation that the Geekbench GPU results should be disregarded, because the Geekbench GPU compute tests run in bursts too short to allow the GPU to ramp up to its maximum frequencies!
On the topic of single-core and multi-core results, while we know that there is a native AArch64 build for Apple Silicon, I don't believe that Geekbench takes the approach of optimizing for one platform or another (e.g. utilizing Apple's native API stack over their own cross-platform one).
This approach gives us somewhat of an apples-to-apples view of CPU performance in isolation and has the advantage that we get a general overview of how each platform performs on Geekbench, but the approach is not necessarily realistic for how each platform was designed by its manufacturer to execute software. For example, Apple sells a vertically integrated product/service stack. They want you to build, design and sell within their ecosystem.
This will make things challenging to compare in the future as more silicon designers follow Apple's lead toward SoCs with dedicated accelerators and co-processors that are not engaged by benchmarking software in a 'keep all things equal' scenario. We can already see Qualcomm following suit with ARM on the desktop, and I wouldn't be surprised to see some performance laptops with ARM processors on the Windows side in the next year or two.
These accelerators and co-processors only get utilized under manufacturer-defined conditions - e.g. using the Accelerate framework to gain access to the AMX co-processor for matrix operations:
Developer Dougall Johnson has through reverse engineering uncovered a secret powerful coprocessor dubbed AMX: Apple Matrix coprocessor… (medium.com)
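To make that concrete, here's a minimal sketch of what "going through Accelerate" looks like: a plain single-precision matrix multiply via the framework's BLAS interface. There's no public API that targets AMX directly - the assumption here is just that routines like this are where Accelerate can route work to the co-processor on Apple Silicon:

```swift
import Accelerate

let n = 4
let a = [Float](repeating: 1.0, count: n * n)  // n x n matrix of ones
let b = [Float](repeating: 2.0, count: n * n)  // n x n matrix of twos
var c = [Float](repeating: 0.0, count: n * n)  // result of a * b

// C = 1.0 * A * B + 0.0 * C, row-major, no transposes.
// Whether this lands on AMX, NEON, or plain scalar code is
// Accelerate's decision, invisible to the caller.
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            Int32(n), Int32(n), Int32(n),
            1.0, a, Int32(n),
            b, Int32(n),
            0.0, &c, Int32(n))

print(c[0])  // 8.0: the sum of n = 4 products of 1.0 * 2.0
```

The benchmarking point: a cross-platform suite that calls its own portable math kernels never touches this path, while a native app leaning on Accelerate does - and the scores diverge accordingly.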
Native macOS apps written for Apple Silicon (which should be the primary driver for purchasing an M1 MacBook Pro, IMHO, anyway) will likely perform significantly better in real-world testing when optimized - and they do! Look at the export-time differential between the initial Adobe Premiere (cross-platform) and Apple's own Final Cut Pro....
Please do not read my response as being an apologist for the M1 on Geekbench or critical of Alder Lake. It's not intended that way.
The TL;DR of my post is really that all results need to be contextualized, without hyperbole on either side, against the backdrop of "Is this benchmark showing the latent potential of one architecture or another, or a worst-case scenario? Is this benchmark showing results indicative of the software that I personally want to use?"
Geekbench has its place, as do many other benchmarks (Stockfish, Blender, H.264 export times, JetStream browser bench, etc....), but we have to contextualize what we are benchmarking and what we are not, and its applicability to our own needs!
As a total aside, I took a look at the 12800H leaked results - that 45W figure is for the CPU only....
Questions that I personally want to see answered beyond the early GB leaks....
1. How does it perform under sustained load given the power constraints for desktop ADL parts?
2. Is that 45W number base or max TDP when under load?
3. Is the 12800H constrained by the same AVX512 issue, whereby the user needs to go into the BIOS to choose between efficiency cores on (AVX512 disabled) or efficiency cores off (AVX512 enabled for better performance on AVX512 multimedia workloads - may benefit some video exports - at the cost of battery life and general multithreaded performance)?
4. Does the Razer Blade 15 (2022) thermally throttle?
5. How does the ADL 12800H perform under load and on battery?