
crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Just wondering whether they are comparing like-for-like (i.e. CPU+GPU+RAM, etc.).

Looking at the numbers, I believe they are in that respect ... However, the "65W" 11980 can go as high as 80W in SPEC multithreaded workloads, so I don't know if the reported 12900HK wattage is actually the power drawn during the test, while Apple's 35W is roughly what the processor actually uses (30-40W).
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Looking at the numbers, I believe they are in that respect ... However, the "65W" 11980 can go as high as 80W in SPEC multithreaded workloads, so I don't know if the reported 12900HK wattage is actually the power drawn during the test, while Apple's 35W is roughly what the processor actually uses (30-40W).
Right, that's the second issue (how they measure the wattage). The third issue is what they choose to test - there's a big history of cherry-picking at Intel.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Right, that's the second issue (how they measure the wattage). The third issue is what they choose to test - there's a big history of cherry-picking at Intel.
They used Spec Int 2017. So not a terrible choice, though it does avoid showing off Apple's massive advantages in Spec Float 2017 ;). Those advantages, to be fair, are specific to memory-bound workloads and are so massive that they skew everything else (though Apple does perform very well in the other float subtests too, those memory-bound ones are almost literally off the charts). I was slightly wrong earlier: Apple does seem to use 35-45 W in SPEC, so it may be fair to compare paper wattage.
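(To put a number on the "skew", using purely hypothetical subtest ratios: a chip that leads by 1.2x in nine subtests and 3x in a single memory-bound subtest ends up with a SPEC-style geometric mean of

$$\left(1.2^{9}\times 3.0\right)^{1/10}\approx 1.32$$

versus 1.2 without the outlier - one off-the-charts subtest noticeably lifts the headline score.)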

The biggest red flag is their claim that, in Spec Int, the M1 Max is equivalent to the 11980, which ... really shouldn't be the case. Anandtech found the M1 Max was 37% higher in Spec Int. So is the M1 Max score low for some reason ... or the 11980 score high? That could dramatically change where the M1 Max sits relative to the new Alder Lake CPUs.
 

Adarna

Suspended
Jan 1, 2015
685
429
Yup. According to the graph, Intel seems to claim their 12th gen is more power efficient too. We'll see how accurate that graph turns out to be once real-world benchmarks are out.
Correct me if I am wrong, but Intel's power curve shows superior performance per watt compared to Apple silicon?

I'd love to see Anandtech take a stab at this.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Correct me if I am wrong, but Intel's power curve shows superior performance per watt compared to Apple silicon?

I'd love to see Anandtech take a stab at this.
That's what Intel's graph claims, yes. The rest of us have ... doubts. ;)
 

Adarna

Suspended
Jan 1, 2015
685
429
That's what Intel's graph claims, yes. The rest of us have ... doubts. ;)
I'd believe that power curve if they were on a die shrink smaller than 5nm. As they're still on 2x(?) that size, it must be down to better engineering or materials?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
They used Spec Int 2017. So not a terrible choice, though it does avoid showing off Apple's massive advantages in Spec Float 2017 ;). Those advantages, to be fair, are specific to memory-bound workloads and are so massive that they skew everything else (though Apple does perform very well in the other float subtests too, those memory-bound ones are almost literally off the charts). I was slightly wrong earlier: Apple does seem to use 35-45 W in SPEC, so it may be fair to compare paper wattage.

The biggest red flag is their claim that, in Spec Int, the M1 Max is equivalent to the 11980, which ... really shouldn't be the case. Anandtech found the M1 Max was 37% higher in Spec Int. So is the M1 Max score low for some reason ... or the 11980 score high? That could dramatically change where the M1 Max sits relative to the new Alder Lake CPUs.

Right. That third line is suspicious.
 
  • Like
Reactions: crazy dave

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
I'd believe that power curve if they were on a die shrink smaller than 5nm. As they're still on 2x(?) that size, it must be down to better engineering or materials?
Well ... all fabrication node size names are BS. I have a longer post about it with references and all that, might even be in this thread somewhere but the short version is:

"Intel 7", what Alder Lake is manufactured on, used to be Intel "10nmESF" or something - i.e. an enhanced version of "Intel 10nm". However, Intel's regular "10nm" node is reckoned to be equivalent to TSMC's "7nm" - thus "Intel 7" might be even the equivalent of TSMC's "7nm+" node. (As an aside: Samsung's "5nm" is also reckoned to be equivalent to TSMC "7nm" node.) There is supposed to be a meaning behind the names in theory (like smallest feature size that could possibly be made), but in practice ... not really.

Thus Intel are thought to really be about one node generation (maybe less) behind the TSMC 5nm node which the M1 is manufactured on, but not two. Intel believes that they will catch up to TSMC's fabs in just a couple of years, while others predict longer. But we'll see. They're certainly not ahead in fabrication though and given 3rd party analysis of Alder Lake and Tiger Lake, definitely not ahead in core design. So their claims here are ... suspect.
 
  • Like
Reactions: Romain_H and souko

Adarna

Suspended
Jan 1, 2015
685
429
Well ... all fabrication node size names are BS. I have a longer post about it with references and all that, might even be in this thread somewhere but the short version is:

"Intel 7", what Alder Lake is manufactured on, used to be Intel "10nmESF" or something - i.e. an enhanced version of "Intel 10nm". However, Intel's regular "10nm" node is reckoned to be equivalent to TSMC's "7nm" - thus "Intel 7" might be even the equivalent of TSMC's "7nm+" node. (As an aside: Samsung's "5nm" is also reckoned to be equivalent to TSMC "7nm" node.) There is supposed to be a meaning behind the names in theory (like smallest feature size that could possibly be made), but in practice ... not really.

Thus Intel are thought to really be about one node generation (maybe less) behind the TSMC 5nm node which the M1 is manufactured on, but not two. Intel believes that they will catch up to TSMC's fabs in just a couple of years, while others predict longer. But we'll see. They're certainly not ahead in fabrication though and given 3rd party analysis of Alder Lake and Tiger Lake, definitely not ahead in core design. So their claims here are ... suspect.
What I like about this is competition.

Previous CEOs were extracting the most value for their shareholders. They weren't pushing the tech forward to benefit me.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
What I like about this is competition.

Previous CEOs were extracting the most value for their shareholders. They weren't pushing the tech forward to benefit me.
In fairness to Pat G's immediate predecessor, Bob Swan, many of the moves you see being completed now were started under him rather than Pat G: Intel's plan to offer foundry services, the Golden Cove and Gracemont cores (the P- and E-cores in Alder Lake), etc. were years in the making. Most of Pat G's effect on Intel will be felt in a couple of years, as he is rapidly expanding foundry capacity and investment (how sustainably is an open question).

However, yes, Intel made many mistakes in the last decade, and stock buybacks instead of re-investment are definitely seen as one of them.
 

Adarna

Suspended
Jan 1, 2015
685
429
In fairness to Pat G's immediate predecessor, Bob Swan, many of the moves you see being completed now were started under him rather than Pat G: Intel's plan to offer foundry services, the Golden Cove and Gracemont cores (the P- and E-cores in Alder Lake), etc. were years in the making. Most of Pat G's effect on Intel will be felt in a couple of years, as he is rapidly expanding foundry capacity and investment (how sustainably is an open question).

However, yes, Intel made many mistakes in the last decade, and stock buybacks instead of re-investment are definitely seen as one of them.
Was Skylake under Swan? This architecture was the reason Apple started developing Apple silicon
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Was Skylake under Swan? This architecture was the reason Apple started developing Apple silicon

It depends on what you mean ... Swan was CEO from 2018 to 2021. Before that he had been CFO (chief financial officer) since 2016. Apple didn't start the Mac transition to Apple silicon because of Skylake (which was a success when it launched in 2015) but rather because of Intel's inability to move on from the Skylake microarchitecture (Alder Lake's Golden Cove cores represent their first truly new performance microarchitecture since Skylake - 2015 to 2021! o_O) and their foundry woes, which saw them stuck at 14nm for years longer than they were supposed to be (going from beating everyone by miles to being beaten by TSMC and having Samsung catch up). Apple of course has been developing its own SoCs since the A4 in 2010 (taking over the core design I think with the A6? - definitely by the A7 in 2013). While rumors had Apple toying with the idea of putting Apple silicon into Macs and testing it out over subsequent years, I think in interviews Apple executives said they didn't truly start this project until a couple of years ago, like 2018/19.

I have no idea who at Intel is most responsible for all of those failures over those years - if it can even be pinpointed like that. It is entirely possible some of it is due to Swan. But I was just pushing back against the post hoc ergo propter hoc fallacy that I've seen some people, even professional reviewers who should know better, engage in: that because Alder Lake and Intel's IDM 2.0 program launched after Pat Gelsinger took the reins, he gets the credit for them, when each would've taken years of planning and development, much of it under Swan.
 
Last edited:
  • Like
Reactions: JMacHack

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
Are there GPUs on these intel things?
Find an AL core image here. To me, it looks like the GPU must be the block to the left of the P-cores (I suspect the L2 is the big block to the right of the E-cores). AIUI, as iGPUs go, it is pretty low-end compared to the M-series, so they probably just do not really use it in tests, lest it push the watts through the roof.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Why did Anandtech not use the Intel compiler? It seems a little unfair.

The only way Intel can make Tiger Lake perform the same as the M1 Max is to a) use aggressive auto-vectorization in ICC AND b) selectively choose those parts of the benchmark suite where the performance difference is smaller in the first place and which benefit from such auto-vectorization. At that point it becomes a benchmark of compiler implementations, not a benchmark of the CPUs. Besides, ICC is not widely used to begin with.
 
  • Like
Reactions: Stratus Fear

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
“Torture the numbers enough and you can make them say anything.”

I think Intel could honestly just say "we have the fastest* CPU, bar none" and have better marketing than this BS about power efficiency. It works for NVIDIA: the top-tier GPU sells the rest on brand recognition alone, and they don't talk about power efficiency at all.

Hell it’s like they’re painting a big target on their back when 3rd party tests come out and people clown them again.

It is indeed very embarrassing. They are posting graphs that contradict every third-party review out there. That said, there is little doubt that ADL can reach higher performance numbers than the M1 Max - by using the top part of its power curve and leveraging the fact that it has more cores. But Intel's power efficiency claims are likely total BS. Unless mobile ADL is somehow 5x more power efficient than desktop ADL.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,204
What? In that chart the M1 Max is consuming 25-35W max ... while Intel is starting from there.
Maybe you mean that at 35W the Intel one is a little better.
Again, I would wait, as we all did with the M1 SoC, to see the real-world tests and performance under pro apps and so on.
I've kind of lost faith in charts made by Intel and Samsung, or in any Windows laptop's battery life claims.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
The only way Intel can make Tiger Lake perform the same as the M1 Max is to a) use aggressive auto-vectorization in ICC AND b) selectively choose those parts of the benchmark suite where the performance difference is smaller in the first place and which benefit from such auto-vectorization. At that point it becomes a benchmark of compiler implementations, not a benchmark of the CPUs. Besides, ICC is not widely used to begin with.


Intel claims ICC scores >40% higher on SPECint than standard Clang/LLVM, which would almost perfectly explain these results. So yup, this is all compiler shenanigans. ICC is giving all the Intel processors a big boost.
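Rough back-of-the-envelope (treating Anandtech's 37% and Intel's claimed >40% as exact, which is obviously hand-wavy):

$$\frac{1.40\ \text{(ICC uplift on the Intel side)}}{1.37\ \text{(M1 Max lead under Clang, per Anandtech)}}\approx 1.02$$

i.e. the compiler swap alone is just about enough to make the 11980 look "equivalent" to the M1 Max in SPECint.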
 
Last edited:

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
We have detailed SPEC results for M1 Max and the top desktop Alder Lake. M1 is 10% slower in single core SPECint while consuming 14x less power. And it’s 40% slower in SPEC-int multi while consuming 4x less power.
Can you post a link to these numbers? The two results seem contradictory. Why would relative power efficiency be ~5 times worse for the M1 in multicore vs single core?
I can't find M1 power consumption numbers during SPEC ST tests, but I'd be surprised if they were 14x lower than the 25-29W reported by Anandtech for Alder Lake.
 
Last edited:

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
The API a particular piece of code uses often can't simply be changed on a whim (some benchmarks do have multiple APIs, often with varying degrees of optimizations applied to each, but most don't). So indeed you are testing the performance of the API as much as the hardware. And some GPU benchmarks do use generic APIs like Vulkan, but that can have pros and cons too. GPU performance testing is thus even more full of caveats than CPU testing for that reason. It depends on what you view as the purpose of benchmarking
GPU benchmarks should use native API for each GPU. Would you consider the results of an OpenCL-based benchmark for the Apple GPU or an Nvidia GPU?

Intel claims ICC scores >40% higher on SPECint than standard Clang/LLVM, which would explain these results. Thus this is all compiler shenanigans.
I hope that Anandtech uses the best compiler for each processor next time.

Has a third party confirmed Intel's claims? Phoronix has run benchmarks with the AMD compiler but not with the Intel compiler.
 

jido

macrumors 6502
Oct 11, 2010
297
145
GPU benchmarks should use native API for each GPU. Would you consider the results of an OpenCL-based benchmark for the Apple GPU or an Nvidia GPU?


I hope that Anandtech uses the best compiler for each processor next time.

Has a third party confirmed Intel claims? Phoronix has run benchmarks with the AMD compiler but not with the Intel compiler.
Intel compiler, single node: $1,499+

Not many people use that compiler in real applications.
 

UBS28

macrumors 68030
Oct 2, 2012
2,893
2,340
Intel could be interesting for the upcoming Mac Pro, where heat and power efficiency are not an issue and you don't have to deal with software compatibility issues. It just works.

For laptops and tablets, ARM is better in all benchmarks, but you have to deal with software headaches.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
GPU benchmarks should use native API for each GPU. Would you consider the results of an OpenCL-based benchmark for the Apple GPU or an Nvidia GPU?

Full warning: we're going deep into the weeds here.

An API choice and a compiler choice are not synonymous; that's apples and oranges. A program is written in the API it is written in, and that's that. If a program is written in OpenCL, then that's what it's written in. If it's written in CUDA, that's what it's written in. If it's written in Metal, that's what it's written in. Some programs may support multiple APIs of course, but even that doesn't alleviate all the issues (see the list below). In contrast, I can take SPEC and compile it with any C++/Fortran compiler that supports my architecture.

To answer your question: yes, I would definitely accept an OpenCL test on an Nvidia or AMD GPU. If I'm trying to make general claims about hardware performance, then I wouldn't necessarily accept one on an Apple GPU, because the API is deprecated there (I'm told there technically isn't even a driver; it's a compatibility layer, like MoltenVK). In contrast, Nvidia and AMD both still actively support it. However! If your program is written in OpenCL and you want to gauge how it will run on an Apple GPU, then yeah ... the OpenCL GPU benchmark is probably more relevant than a Metal one.

And to turn your question around, what exactly would be the "native" compute API for an AMD GPU? It isn't OpenCL or the Vulkan/DirectX shading languages - those are cross-hardware. It isn't Metal, though an AMD GPU will run Metal on macOS. It obviously isn't CUDA. AMD has tried at various points to make a native language, but their current compute solution is in practice supported by almost none of their own GPUs, including the newest ones (which is extremely frustrating). Just for kicks, Intel will be adding oneAPI, based on SYCL I believe, and it is also open/cross-hardware.

So yes, this absolutely means that for a GPU you are never actually testing "pure hardware capability". You are testing:

1) the properties of the graphics/compute engine
 a) if the application has multiple APIs, the optimization and work that was put into those different APIs for this application
 b) any application optimizations that were put in for that particular GPU
2) the optimization of the API/drivers for the GPU (if it will run at all)
3) finally, the capability of the GPU hardware itself

But getting back to what we're actually discussing: CPU benchmarks. You still have some of this, especially with low-level assembly code or architecture-specific code like vectorization, but it is overall less of an issue because you don't have different APIs and you aren't interacting with drivers. But changing compilers for each processor re-introduces additional variance: what you end up testing is the different compilers, not the hardware. For instance, ICC tends to be very aggressive with vectorization relative to standard Clang/LLVM and GCC - so that's what you would actually be testing ... not the hardware. And compiling with Xcode just means compiling with Clang, not some optimized version of LLVM specifically written for Apple hardware. That's why this:

I hope that Anandtech uses the best compiler for each processor next time.

would in fact be a bad thing. The point of scientific testing is to try to control variables, not add more back in. For GPUs you don't have much choice - a program is written in the API(s) it's written in. For CPU benchmarks you do have a choice not to reintroduce variables, and so you should not do this. You want to compare the same code to the same code as much as possible. For different architectures this is impossible in the strictest sense, of course, as the machine code generated for each arch will by necessity be different, but at least you aren't adding different compilation strategies into the mix on top of everything else.

Make no mistake, what Intel did (well, sort of: they used ICC for AMD too) and what you are suggesting is great for marketing slides, but it would make for poor benchmarking from Anandtech, whose job is to give readers an actual point of comparison for how fast their programs are likely to run - which is the point of review sites in the first place.
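To make the compiler-variance point concrete, here's a minimal sketch - not Anandtech's or Intel's actual build setup, and the flags below are illustrative assumptions - showing how the exact same C source can produce quite different "benchmark" numbers depending on the toolchain:

```c
/* dot.c - same source, different "benchmark results" depending on the toolchain. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* A simple reduction loop of the kind aggressive auto-vectorizers love. */
static double dot(const float *a, const float *b, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += (double)a[i] * (double)b[i];
    return acc;
}

int main(void) {
    const size_t n = 1u << 24;  /* ~16M elements */
    float *a = malloc(n * sizeof *a);
    float *b = malloc(n * sizeof *b);
    if (!a || !b) return 1;
    for (size_t i = 0; i < n; i++) {
        a[i] = (float)(i % 7);
        b[i] = (float)(i % 5);
    }

    clock_t t0 = clock();
    double r = dot(a, b, n);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("dot = %.1f in %.3f s\n", r, secs);

    free(a);
    free(b);
    return 0;
}

/* Illustrative builds (assumed flags; results vary by machine and compiler version):
 *   clang -O2 dot.c -o dot_baseline
 *   clang -O3 -ffast-math -march=native dot.c -o dot_fast   (reassociation lets the reduction vectorize)
 *   icx   -O3 -xHost dot.c -o dot_icx                       (oneAPI compiler, if installed)
 * Same source, same CPU - which binary you call "the benchmark" changes the score. */
```

Run the resulting binaries on the same machine and you may well get noticeably different timings; none of them says anything new about the silicon, only about how aggressively each toolchain optimized the loop. That's exactly the variance a cross-platform comparison wants to hold constant, not maximize per vendor.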

This link, while old, also explains it well:

 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Can you post a link to these numbers? The two results seem contradictory. Why would relative power efficiency be ~5 times worse for the M1 in multicore vs single core?
I can't find M1 power consumption numbers during SPEC ST tests, but I'd be surprised if they were 14x lower than the 25-29W reported by Anandtech for Alder Lake.

So there's a bit of an oddity here: Anandtech reported two sets of numbers for SPEC ST tests - when testing in Linux it was the 25-29W you mention. However, they also looked at power consumption in Windows running SPEC's POV-Ray across different core configurations, and there Windows was reporting roughly 55/70W for (estimated) core/package power with a single core loaded. I don't know how to explain that; it doesn't seem right.


Listed in red, in this test, all 8P+8E cores fully loaded (on DDR5), we get a CPU package power of 259 W. The progression from idle to load is steady, although there is a big jump from idle to single core. When one core is loaded, we go from 7 W to 78 W, which is a big 71 W jump. Because this is package power (the output for core power had some issues), this does include firing up the ring, the L3 cache, and the DRAM controller, but even if that makes 20% of the difference, we’re still looking at ~55-60 W enabled for a single core. By comparison, for our single thread SPEC power testing on Linux, we see a more modest 25-30W per core, which we put down to POV-Ray’s instruction density.

In contrast, the M1 is at ~5W/10W for core/package power in ST SPEC, give or take (a touch higher in POV-Ray as that's a heavy test). I think @leman took 71/5 ~= 14, though it should be 55/5 ~= 11 as that's the core-to-core comparison. Still 11x ... however, that's not what Linux reports for the same test, and what Windows is reporting here seems bonkers even for an Intel core. So ... I'm not sure.
 
Last edited:
  • Like
Reactions: jeanlain

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
How do you know that?

It seems I can download the Intel compiler for free from https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html

Intel has started pushing oneAPI as their counter to CUDA, and ICC as their counter to nvcc and Nvidia's upcoming HPC compilers. This is a relatively recent change, and there are still licensed versions with support and so forth. Not sure about commercial vs personal use, but I think it's probably still free. Outside of HPC, ICC is actually not that common, and even in HPC I'd wager GCC is still more common - maybe even Clang is too - though ICC is definitely used. BTW, I don't have numbers to back any of this up; this is just my experience from talking with people, being in scientific computing, and watching dev talks from a variety of fields. So take that for what you will. This may change, as Intel really seems hell-bent on taking on Nvidia in HPC and may push this much more.
 
Last edited: