
bobcomer

macrumors 601
May 18, 2015
4,949
3,699
We're talking about different things here. Costs going down is good, but that doesn't mean good software. For example, cheaper software that accomplishes the same results 20% slower can be great for a company of 500, but can ultimately be more expensive for a company of 2.
I don't disagree with that, but the cost to rewrite it either way is huge and most likely can't get approved. The best I can do in such a situation is selective modifications to fit the task a little better. The computers don't create content or make money in any other way; they can only cost more or less, and less is always what gets approved.
For me and my work, Windows is actually more expensive, when I take into account how many hours I gain working on a Mac.
That's good to hear for you, you have what you need. Windows fills the exact same role for me, it saves time and money.

But that's beside the point - we're talking about quality. Even if Windows is more cost efficient, that doesn't mean it's good in terms of quality. You said Windows is quite good lately - all I said is: no, I don't see that it's any different lately. I wasn't talking about its price.
You're being awfully subjective there, and all I'll say back is I have the opposite opinion.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Same for me. I live in CentOS at work.
CentOS? I do a bit of Linux at work too for very specialized tasks, plus one user PC (an engineer's) running Fedora. And I always have a couple of different builds available in VMs. Great for security and network testing...
 

jjcs

Cancelled
Oct 18, 2021
317
153
CentOS? I do a bit of Linux at work too for very specialized tasks, plus one user PC (an engineer's) running Fedora. And I always have a couple of different builds available in VMs. Great for security and network testing...
Yep. CentOS 7 (basically free RHEL), mostly. Good, stable, and capable enough. More so in my line of work than Mac OS, as far as commercial software is concerned. Some things, like Tecplot, are available, but not everything ... unlike on Linux.

I'm an engineer myself. My Mac use was just at home and for personal research use. Work wouldn't pay the Apple tax for less capability and the premium for Mac Pros is insane.
 
  • Like
Reactions: bobcomer

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
[Attached image: intel-laptop-9-2.jpg]


Is Intel claiming that its chip could be more efficient than M1?

Source: Ars Technica article https://arstechnica.com/gadgets/202...s-bring-up-to-14-cores-to-high-end-portables/
Intel benchmarks info: https://edc.intel.com/content/www/us/en/products/performance/benchmarks/mobile/
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Man, these graphs get more vague with every presentation. I’m guessing they would like people to believe at face value that the new processors are more performant (metrics in fine print) at 35 watts than the M1 Max, which is a bold claim, and that performance keeps increasing all the way to 75 watts.

Note: in the slide they have the 11980HK drawing 65 watts, which, when tested by Anandtech, drew 89 watts. I’m guessing the “watts” on the bottom axis are nominal (Intel-rated) wattages.
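The gap between the rated and measured figures is easy to quantify; a quick sketch using just the two numbers mentioned above (65 W Intel-rated vs the 89 W Anandtech measured):

```python
# Rated (nominal) vs measured package power for the i9-11980HK,
# using only the figures cited in this thread.
rated_w = 65.0      # Intel's rated wattage shown on the slide
measured_w = 89.0   # what Anandtech measured under load

overdraw = (measured_w / rated_w - 1) * 100
print(f"Measured draw exceeds the rated figure by {overdraw:.0f}%")  # ~37%
```

So if the bottom axis is in nominal wattages, the real-world curve could sit well to the right of where the slide puts it.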

Call me dubious about Intel’s claims. Their track record on power efficiency hasn’t been good, and they have let their processors draw nearly double their rated wattage in the past.

Are these CPUs capable of 35 watts? Maybe at idle or light load. I suspect that once they go full throttle they’ll go nuclear, though.

(Also I have to note that they left out 10th gen on this slide, probably because 11th gen was a performance regression, ha)

Also SoC power? I’m guessing that term is gonna be co-opted by Intel
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230

So they claim that the M1 Max's performance in the SPECint 2017 test is essentially equivalent to the i9-11980HK's. However, Anandtech measured the M1 Max as 37% faster than the latter. The only difference is that Intel used the Intel compiler, ICC, to compile the code for the Intel chips, which would almost certainly improve the Intel score, but even then I'm skeptical of a ~40% improvement.


So yeah ... I'm skeptical about their claims.

They also don't report the total SPEC score or SPECfp ... to be fair, the M1 Max has scores on some individual floating-point subtests that are absolutely bonkers.
 
  • Like
Reactions: satcomer and souko

Adarna

Suspended
Jan 1, 2015
685
429

Romain_H

macrumors 6502a
Sep 20, 2021
520
438
I think Intel's appealing to a different use case.

The use case of a laptop user who wants a laptop to move from one power plug to another
Yup. According to the graph, Intel seems to claim their 12th gen is more power efficient too. We'll see how accurate that graph turns out to be once real-world benchmarks are out
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Yup. According to the graph, Intel seems to claim their 12th gen is more power efficient too. We'll see how accurate that graph is once real-world benchmarks are out

The graph is already contradicted by 3rd party benchmarks:

 
  • Like
Reactions: souko and Romain_H

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
So they claim that the M1 Max's performance in the 2017 Spec Int test is essentially equivalent to the i9-11980HK. However, Anandtech measured the M1 Max as 37% faster than the latter. The only difference is that Intel used the Intel compiler, ICC.
Why did Anandtech not use the Intel compiler? It seems a little unfair.

The graph is already contradicted by 3rd party benchmarks
How could that be? Intel uses Apple's benchmark data.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Why did Anandtech not use the Intel compiler? It seems a little unfair.


How could that be? Intel uses Apple's benchmark data.

Depends on your definition of fair. Anandtech uses the same compiler (LLVM/GFortran) and the same flags (-Ofast) on every CPU to normalize that out, and they use compilers that people actually tend to use. Intel's marketing picks the "best case" compiler for each chip, but it's less clear what that "best case" means in practice. It's why you generally don't trust first-party benchmarks and wait for 3rd-party ones to come out.

Intel only sort of uses Apple's data - they measured the score of the M1 Max themselves and then used Apple's statements to fill in the rest of the graph.

"Apple M1 Max performance is estimated based on public statement made by Apple on 10/18/2021 and measurements on Apple M1 Max 16" 64GB RAM Model A2485."

However, the contradiction is that Anandtech measured both the M1 Max and the i9-11980HK, and the score for the former was 37% higher than the latter's, yet in this graph they are equivalent. Yes, ICC would improve the Intel score, but again I have trouble believing it makes SPECint 37% faster.

Like Apple's, Intel's marketing slide only compares relative performance and doesn't give absolute scores. That makes it difficult to know how they compare to 3rd-party benchmarks. So we just have to wait, but that's a big red flag.
 
Last edited:
  • Like
Reactions: souko and Xiao_Xi

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Guys, read the fine print. “M1 Max performance is estimated based on public claims made by Apple”. Case closed. We have detailed SPEC results for the M1 Max and the top desktop Alder Lake. The M1 is 10% slower in single-core SPECint while consuming 14x less power, and 40% slower in multi-core SPECint while consuming 4x less power.

It’s all a load of crap as usual.
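Taking the "10% slower at 14x less power" and "40% slower at 4x less power" figures above at face value, they translate directly into performance-per-watt ratios; a quick sketch:

```python
# Relative perf/W of the M1 vs the top desktop Alder Lake, using only the
# relative figures quoted in this thread (not absolute SPEC scores).
def perf_per_watt_ratio(relative_perf: float, relative_power: float) -> float:
    """How many times better the M1's perf/W is, given its score and power
    expressed as fractions of the Alder Lake chip's."""
    return relative_perf / relative_power

single = perf_per_watt_ratio(0.90, 1 / 14)  # 10% slower, 14x less power
multi = perf_per_watt_ratio(0.60, 1 / 4)    # 40% slower, 4x less power
print(f"single-core: {single:.1f}x the perf/W")  # 12.6x
print(f"multi-core:  {multi:.1f}x the perf/W")   # 2.4x
```

Either way you slice it, the efficiency gap dwarfs the raw-performance gap.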
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Guys, read the fine print. “M1 Max performance is estimated based on public claims made by Apple”. Case closed. We have detailed SPEC results for the M1 Max and the top desktop Alder Lake. The M1 is 10% slower in single-core SPECint while consuming 14x less power, and 40% slower in multi-core SPECint while consuming 4x less power.

It’s all a load of crap as usual.
They do claim they measured it themselves too, but that just raises more questions ... as the relative performance of the 11980HK and the M1 Max isn't anywhere close to what we've seen from other sites.

"Apple M1 Max performance is estimated based on public statement made by Apple on 10/18/2021 and measurements on Apple M1 Max 16" 64GB RAM Model A2485."
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Guys, read the fine print. “M1 Max performance is estimated based on public claims made by Apple”. Case closed. We have detailed SPEC results for the M1 Max and the top desktop Alder Lake. The M1 is 10% slower in single-core SPECint while consuming 14x less power, and 40% slower in multi-core SPECint while consuming 4x less power.

It’s all a load of crap as usual.
“Torture the numbers enough and you can make them say anything.”

I honestly think Intel could just say “we have the fastest* CPU, bar none” and have better marketing than this BS about power efficiency. It works for NVIDIA: the top-tier GPU sells the rest on brand recognition alone, and they don’t talk about power efficiency at all.

Hell, it’s like they’re painting a big target on their back for when 3rd-party tests come out and people clown them again.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
They do claim they measured it themselves too, but that just raises more questions ... as the relative performance of the 11980HK and the M1 Max isn't anywhere close to what we've seen from other sites.

"Apple M1 Max performance is estimated based on public statement made by Apple on 10/18/2021 and measurements on Apple M1 Max 16" 64GB RAM Model A2485."
It says n-copy of SPEC2017 C/C++ integer benchmarks. In the Anandtech review there were some tests that were heavily core-count dependent, where the gap was smaller or in Intel’s favor. Maybe they leaned on those?

Yes, the ICC would improve the Intel score, but again I have trouble believing that it makes Spec Int 37% faster.
Well, in the infamous Stockfish thread, the M1 gained much more than that from compiler optimizations. Maybe there are some new changes to ICC that Intel made after Anandtech tested the M1 Max.

EDIT: never mind I’m an idiot, i read the dates wrong, and even then it wouldn’t have mattered.
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
It says n-copy of SPEC2017 C/C++ integer benchmarks. In the Anandtech review there were some tests that were heavily core-count dependent, where the gap was smaller or in Intel’s favor. Maybe they leaned on those?


Well, in the infamous Stockfish thread, the M1 gained much more than that from compiler optimizations. Maybe there are some new changes to ICC that Intel made after Anandtech tested the M1 Max.

EDIT: never mind I’m an idiot, i read the dates wrong, and even then it wouldn’t have mattered.

I think the n-copy part means they ran n simultaneous copies of the suite (a throughput-style measurement, like SPECrate) and the composite is the geometric mean of the subtest results. Perfectly reasonable.
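For reference, SPEC composites really are geometric means of the per-subtest ratios. A minimal sketch with made-up subtest numbers (not real SPEC results) showing why that choice matters:

```python
import math

# Hypothetical per-subtest ratios (NOT real SPEC data), just to show how a
# SPEC-style composite is aggregated: geometric mean, not arithmetic mean.
subtest_ratios = [5.2, 7.9, 6.4, 8.8, 4.1]

geo_mean = math.prod(subtest_ratios) ** (1 / len(subtest_ratios))
arith_mean = sum(subtest_ratios) / len(subtest_ratios)

# The geometric mean damps the influence of a single outlier subtest,
# which is why SPEC uses it for its composite scores.
print(f"geometric mean:  {geo_mean:.2f}")
print(f"arithmetic mean: {arith_mean:.2f}")
```

With these numbers the geometric mean (~6.24) comes out below the arithmetic mean (6.48); a single bonkers subtest can't carry the composite.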

In the Stockfish thread, those were optimizations to the program code itself to make it run faster on the M1. In this instance, the program code is stable and what changes is the compiler, which takes the C/C++/Fortran code and turns it into assembly machine code. I might be wrong, but I've heard LLVM produces somewhat slower assembly than ICC for Intel processors (ICC, as you might expect, is heavily optimized for them); my impression was that the gap is on the order of 3-10%, not 37%. Maybe I'm wrong.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I think Intel could honestly just say “we have the fastest* CPU, bar none.” And have better marketing than this bs about power efficiency.
Intel claims:
"12th Gen Intel Core i9-12900HK is the fastest mobile processor ever."
"12th Gen Intel Core i9-12900HK is the highest performing mobile processor ever."

Anandtech uses the same compiler (LLVM/GFortran) and same flags (OFast) on every CPU to normalize that out and use compilers that people actually tend to use.
I would expect Anandtech to use the best compiler for each processor. If GPU benchmarks use the best API for each GPU, why wouldn't CPU benchmarks use the best compiler for each CPU?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Man, these graphs get more vague with every presentation. I’m guessing they would like people to believe at face value that the new processors are more performant (metrics in fine print) at 35 watts than the M1 Max, which is a bold claim, and that performance keeps increasing all the way to 75 watts.

Note: in the slide they have the 11980HK drawing 65 watts, which, when tested by Anandtech, drew 89 watts. I’m guessing the “watts” on the bottom axis are nominal (Intel-rated) wattages.

Call me dubious about Intel’s claims. Their track record on power efficiency hasn’t been good, and they have let their processors draw nearly double their rated wattage in the past.

Are these CPUs capable of 35 watts? Maybe at idle or light load. I suspect that once they go full throttle they’ll go nuclear, though.

(Also I have to note that they left out 10th gen on this slide, probably because 11th gen was a performance regression, ha)

Also SoC power? I’m guessing that term is gonna be co-opted by Intel

Are there GPUs on these intel things?
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
I would expect Anandtech to use the best compiler for each processor. If GPU benchmarks use the best API for each GPU, why wouldn't CPU benchmarks use the best compiler for each CPU?
Not really equivalent. The API a particular piece of code uses often can't simply be swapped on a whim (some benchmarks do have multiple API backends, often with varying degrees of optimization applied to each, but most don't). So you are indeed testing the performance of the API as much as the hardware. Some GPU benchmarks do use portable APIs like Vulkan, but that has pros and cons too. GPU performance testing is thus even more full of caveats than CPU testing for that reason. It depends on what you view as the purpose of benchmarking:

1) What is the max performance possible?
2) What are reasonable performance expectations for code yet to be written?
3) What is the actual performance on actual production code used in the wild right now?

Anandtech's testing aims for 2) and 3); their choice of compiler and settings reflects that. If you vary the compiler, then you're testing the choice of compiler as well as the processor. That said, even when you hold the compiler fixed, it can produce better- or worse-optimized assembly for different architectures like ARM vs x86. Nothing is ever perfect.

I should note that I was wrong: the performance delta between LLVM and ICC used to be quite large, larger than I thought ... but emphasis on "used to be". The references to that gap are years old, and Intel has since adopted LLVM as ICC's backend.


So now I have no idea what performance deltas there might be between Intel's ICC and standard LLVM if any.

EDIT: I should've read more carefully: not all of the optimizations were upstreamed to LLVM, so ICC still boasts a major performance increase on Intel processors. Intel claims >40% higher SPECint than standard Clang/LLVM, which would explain these results. So this is all compiler shenanigans.
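The arithmetic checks out under that claim; a sketch (the 1.37 and 1.40 factors come from the figures quoted in this thread, everything else is illustrative):

```python
# If, with the same compiler (LLVM), the M1 Max scores 37% higher than the
# i9-11980HK, but ICC lifts the Intel chip's SPECint by ~40% over LLVM,
# the two end up roughly tied, which is what Intel's slide shows.
intel_llvm = 1.00                # 11980HK score with LLVM (normalized)
m1_max = intel_llvm * 1.37       # Anandtech: M1 Max 37% faster
intel_icc = intel_llvm * 1.40    # Intel's claimed ICC uplift

ratio = m1_max / intel_icc
print(f"M1 Max vs ICC-compiled 11980HK: {ratio:.2f}x")  # ~0.98x, i.e. a wash
```

So the "essentially equivalent" slide and the 37% Anandtech gap can both be true at once; the compiler choice absorbs the difference.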
 
Last edited:

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
GPU performance testing is thus even more full of caveats than CPU testing for that reason.
GPU ****ery is the stuff I love the most. I love GCN and Vega for that reason, “how is this number crunching monster so garbage at raster graphics?” It sent me down a rabbit hole.
 
  • Like
Reactions: crazy dave