
throAU

macrumors G3
Feb 13, 2012
9,198
7,346
Perth, Western Australia
I think too many people put excessive emphasis on Cinebench scores. 99% of actual user workloads won't even tax the system in the same way as CB, so it's really just a stress test with numbers.
Definitely.

But most people don't know it isn't representative, and it gives whichever reviewer uses it an easy number to generate and compare.
 
  • Like
Reactions: MRMSFC

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
Definitely.

But most people don't know it isn't representative, and it gives whichever reviewer uses it an easy number to generate and compare.
Cinebench is pretty good for testing throttling. It definitely shouldn't ever be the only benchmark people use to get a general idea, but it's a good way to get an idea of sustained performance.

Geekbench + Cinebench seem to be the main two benchmarks people recognize and use the most for reviews. I feel like they're both decent benchmarks, but they're not really enough to give an entirely representative picture.
 

Ethosik

Contributor
Oct 21, 2009
8,141
7,119
Didn't Meltdown and Spectre fixes cause a performance degradation too? This just keeps happening for Intel!
 

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
Cinebench is pretty good for testing throttling. It definitely shouldn't ever be the only benchmark people use to get a general idea, but it's a good way to get an idea of sustained performance.

Geekbench + Cinebench seem to be the main two benchmarks people recognize and use the most for reviews. I feel like they're both decent benchmarks, but they're not really enough to give an entirely representative picture.

The best use of Cinebench I've seen is on the JayzTwoCents channel. When he runs CB, he will also have HWMonitor running to track system temps and potential throttling of the system being tested. To me, it's a far more useful and informative approach than just running Cinebench on its own. I have done something similar on my MBP, just using iStat Menus instead of HWMonitor.
 
  • Like
Reactions: ArkSingularity

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
The reason the 12th-gen+ Intel Core processors don't have AVX (and thus aren't affected by Downfall) is that they were the first to introduce separate E-cores. According to https://www.makeuseof.com/what-is-avx-512-why-intel-killing-it/, with Alder Lake (12th gen):

"While the P-cores use the Golden Cove microarchitecture, the E-cores use the Gracemont microarchitecture. This difference in architectures prevents the scheduler from working correctly when particular instructions can run on one architecture but not on the other. In the case of the Alder Lake processors, the AVX-512 instruction set is one such example, as the P-cores have the hardware to process the instruction, but the E-cores do not. Due to this reason, the Alder Lake CPUs do not support the AVX-512 instruction set. That said, AVX-512 instruction can run on certain Alder Lake CPUs' where Intel has not physically fused them off. To do the same, users have to disable the E-cores during BIOS."

According to https://www.anandtech.com/show/18975/intel-unveils-avx10-and-apx-isas-unifying-avx512-for-hybrid-architectures-, this will be corrected with AVX10:

"The most significant and fundamental change introduced by AVX10 compared to the previous AVX-512 instruction set is the incorporation of previously disabled AVX-512 instruction sets in future examples of heterogeneous core designs, exemplified by processors like the Core i9-12900K and the current Core i9-13900K. Examining the core concept of AVX10 it signifies that consumer-based desktop chips will now have full AVX-512 support."

As expected, I couldn't find any info about whether Downfall affects AVX10.

Further, while Intel's current consumer chips dodged this bullet, that's not the case for its Xeons, which do have AVX-512: https://wccftech.com/intel-cpus-witness-downfall-in-performance-after-downfall-vulnerability-mitigations-applied/:

"Moving on to the benchmarks, the Xeon Platinum 8380 was observed in various instances, with the old "390" and the new "3a5" microcodes. As predicted, the processor saw a performance decline in all scenarios. In OpenVKL, the performance drop was recorded at 6%, while in OSPRay 1.2, it reached 34%. AI workloads oversaw a vast drop, with applications such as Neural Magic DeepSparse 1.5, which was expected given that the HPC workloads were predicted to drop."

Plus there's AMD. It does implement AVX-512 even in its consumer chips, and is thus affected by Inception, which appears to be the AMD analog of Downfall: https://www.pcworld.com/article/2033369/amds-inception-bug-looks-serious-for-photo-editors-on-ryzen-pcs.html:

"So far, Ryzen gaming doesn’t seem to be affected, with a statistically insignificant 1 percent difference using 3DMark’s “Wild Life” benchmark. Compression using 7Zip demonstrated a 5 percent drop in performance. The time to compile a Linux kernel took 8 percent longer after the microcode was applied....Like Downfall, though, users who work with photography and image-editing apps have reason to be concerned. Though Phoronix’s tests only found a 4 percent decrease using the Darktable RAW photography software, GIMP performance was strongly affected. GIMP, a Photoshop competitor, saw performance plunge by 28 percent using GIMP’s rotate tool. Phoronix noticed a similar 24 percent drop when using the unsharp-mask command as well, and the time to resize an image took 18 percent longer when the microcode patch was applied."

See https://www.phoronix.com/review/intel-downfall-benchmarks for more benchmark details.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,518
19,665
The best use of Cinebench I've seen is on the JayzTwoCents channel. When he runs CB, he will also have HWMonitor running to track system temps and potential throttling of the system being tested. To me, it's a far more useful and informative approach than just running Cinebench on its own. I have done something similar on my MBP, just using iStat Menus instead of HWMonitor.

Cinebench exhibits low IPC on Apple Silicon and doesn’t properly utilize the core. It’s not a good stress test for M series.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
They have AVX and AVX2, etc.; just not AVX512
Thanks for the correction. Then why aren't the 12th gen+ Intel core processors affected by Downfall, which targets both AVX2 and AVX-512?


"GDS/Downfall affects the gather instruction with AVX2 and AVX-512 enabled processors. At least the latest-generation Intel CPUs are not affected but Tigerlake / Ice Lake back to Skylake is confirmed to be impacted."
 

galad

macrumors 6502a
Apr 22, 2022
610
492
The issue is in the implementation; Intel had probably already been informed of it and fixed it in time for the 12th gen.
 
  • Like
Reactions: throAU

leman

macrumors Core
Oct 14, 2008
19,518
19,665
Thanks for the correction. Then why aren't the 12th gen+ Intel core processors affected by Downfall, which targets both AVX2 and AVX-512?

The exploits target not the ISA itself but the defects in a specific implementation. The newer Intel CPUs must be different enough so that the exploit doesn't affect them.
 
  • Like
Reactions: throAU

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
Cinebench exhibits low IPC on Apple Silicon and doesn’t properly utilize the core. It’s not a good stress test for M series.
It chaps my ass that people trot out Cinebench scores for this reason. That and many don’t even know what it tests specifically and how it’s related to performance. Hell, before AMD introduced Zen 1, it was mostly Geekbench that was the de facto benchmark.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,665
It chaps my ass that people trot out Cinebench scores for this reason. That and many don’t even know what it tests specifically and how it’s related to performance. Hell, before AMD introduced Zen 1, it was mostly Geekbench that was the de facto benchmark.


Not quite sure what it is you are trying to say? My main problem with Cinebench is that it is widely used as an estimate of general-purpose CPU performance. This is nonsensical, as it only benchmarks the SIMD subsystem and as such cannot predict how well the CPU will perform on a wide range of popular real-world workloads. It kind of works on x86, but only because all recent x86 CPUs share the same design philosophy, and improvements in the SIMD subsystem often go along with improvements in other areas. But even on x86 it massively overestimates the impact of SMT and of large core counts in general-purpose workloads. I am not surprised that it's the most popular benchmark these days, as it produces excellent scores for recent x86 designs. Everybody loves high numbers.

On Apple Silicon, Cinebench is practically useless. It suffers from known software optimization issues on ARM and as a consequence achieves poor hardware utilisation on Apple CPUs. This makes it unsuitable as a stress test, and only poorly suited to evaluating the SIMD performance of Apple platforms (and entirely useless as a general-purpose benchmark).
 

Arxr

macrumors member
May 8, 2023
39
16
The exploits target not the ISA itself but the defects in a specific implementation. The newer Intel CPUs must be different enough so that the exploit doesn't affect them.
Intel says that 12th Gen and newer chips, like Alder Lake and Raptor Lake, come with Intel's Trust Domain eXtension or TDX which isolates virtual machines (VMs) from virtual machine managers (VMMs) or hypervisors, hence isolating them from the rest of the hardware and the system.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
Cinebench exhibits low IPC on Apple Silicon and doesn’t properly utilize the core. It’s not a good stress test for M series.

I said that was the best use of CB I've seen - I made no claims regarding its applicability or accuracy on Apple Silicon.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
Not quite sure what it is you are trying to say? My main problem with Cinebench is that it is widely used as an estimate of general-purpose CPU performance. This is nonsensical, as it only benchmarks the SIMD subsystem and as such cannot predict how well the CPU will perform on a wide range of popular real-world workloads. It kind of works on x86, but only because all recent x86 CPUs share the same design philosophy, and improvements in the SIMD subsystem often go along with improvements in other areas. But even on x86 it massively overestimates the impact of SMT and of large core counts in general-purpose workloads. I am not surprised that it's the most popular benchmark these days, as it produces excellent scores for recent x86 designs. Everybody loves high numbers.

On Apple Silicon, Cinebench is practically useless. It suffers from known software optimization issues on ARM and as a consequence achieves poor hardware utilisation on Apple CPUs. This makes it unsuitable as a stress test, and only poorly suited to evaluating the SIMD performance of Apple platforms (and entirely useless as a general-purpose benchmark).
I didn't know this. That makes it rather impressive that Apple Silicon still manages to score as well as it does on Cinebench.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
what about efficiency gains?
Likely nothing substantial to speak of this generation. It's the same microarchitecture as existing Raptor Lake chips (which is itself very similar to Alder Lake, with very few microarchitectural changes), and it's built on the same "Intel 7" process.

Meteor Lake and Arrow Lake are supposed to come next year, and both of these will be much more exciting. They will be on "Intel 4", and Arrow Lake is supposed to come with some pretty substantial IPC gains, if the rumors are anything to go by.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,665
I didn't know this. That makes it rather impressive that Apple Silicon still manages to score as well as it does on Cinebench.

In case you are interested, here are some patches Apple has submitted to Intel's Embree library (the ray-tracing library Cinebench uses). They resulted in 7-15% improved performance on tested scenes. These patches are not yet integrated into Cinebench.

To be clear, current x86 CPUs will be faster than current Apple CPUs for this type of workload even with perfect optimizations, because Intel has a more capable SIMD subsystem than Apple. The SIMD throughput per clock is comparable (slightly favoring Intel, depending on the mix of operations), but Intel has higher clocks and more cache throughput. This is a domain where x86 should be consistently 20-25% faster. Of course, that could change if Apple increases the number of FP units or makes them wider, but that is not a cheap thing to do.
 
  • Like
Reactions: ArkSingularity

throAU

macrumors G3
Feb 13, 2012
9,198
7,346
Perth, Western Australia
Thanks for the correction. Then why aren't the 12th gen+ Intel core processors affected by Downfall, which targets both AVX2 and AVX-512?
Because Intel updated the core in other ways: they fixed the problem in the AVX hardware, or in whatever other part of the CPU is made vulnerable when running AVX.
 

ChrisA

macrumors G5
Jan 5, 2006
12,917
2,169
Redondo Beach, California
Inefficient? Do you start packing your suitcase the moment the taxi arrives to take you to the airport or do you do it the evening before? Speculation is all about the efficient use of available resources instead of standing there doing nothing until the last possible moment.



Hardware is a leaky abstraction. Can’t write fast software without understanding and utilizing how hardware works. That’s why there are all these O(N) algorithms that end up slower than quadratic in practice.

Is this really even a hardware bug? I'd bet the CPU is using microcode. If so, this is a microcode bug, and microcode is just software.

I remember doing this by hand, years ago, on a CDC 6600 mainframe computer. We wrote in assembly language back then. Maybe we would write a multiply, then an add, and then a conditional branch. You might have to wait a dozen clock cycles before the branch could even start, so we always tried to do "something", maybe fetch a word from memory that would only be needed if the branch was taken. This was "speculative execution." The idea was in common use in 1964; "everyone" did this. The CDC 6600 was a RISC machine (back in the early '60s). As smart as we assembly language programmers thought we were, the FORTRAN compiler could usually beat us, as it could consider thousands of possible ways to generate code and use the one that ran fastest.

This REALLY does speed up short loops, and we just can't not do this. Intel hides it under the ISA, but CDC handed the job to the programmer (or the compiler).

But it looks like the long-term fix is to go back to using RISC and letting the compiler figure this out. I think CDC got it right 60 years ago. Yes, 60 years ago. (No, I am not THAT old. The computer was a near antique when I was working with it.)
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,346
Perth, Western Australia
But it looks like the long-term fix is to go back to using RISC and letting the compiler figure this out. I think CDC got it right 60 years ago. Yes, 60 years ago. (No, I am not THAT old. The computer was a near antique when I was working with it.)

The RISC processors are also doing speculative execution these days, because you have to for performance.

Intel tried letting the compiler figure it out with their VLIW architecture, Itanium (a.k.a. "Itanic").

It flopped. Whether that's down to Intel's project management or something else is open to debate, but performance was underwhelming (even on native software) and it had little native software. AMD's x86-64 killed it.
 
Last edited:

throAU

macrumors G3
Feb 13, 2012
9,198
7,346
Perth, Western Australia
And Intel's 14th gen CPUs are showing only 1-4% performance increase over 13th gen.


This (5% or thereabouts) has been pretty typical for Intel over the past 10-15 years, really, if you benchmark things without using whatever new instruction set they introduced for some niche application.

14th gen is basically just slightly better-binned 13th gen (no new instructions, minor clock bump), so this is no surprise.
 
  • Like
Reactions: Homy

leman

macrumors Core
Oct 14, 2008
19,518
19,665
Is this really even a hardware bug? I'd bet the CPU is using microcode. If so, this is a microcode bug, and microcode is just software.

I remember doing this by hand, years ago, on a CDC 6600 mainframe computer. We wrote in assembly language back then. Maybe we would write a multiply, then an add, and then a conditional branch. You might have to wait a dozen clock cycles before the branch could even start, so we always tried to do "something", maybe fetch a word from memory that would only be needed if the branch was taken. This was "speculative execution." The idea was in common use in 1964; "everyone" did this. The CDC 6600 was a RISC machine (back in the early '60s). As smart as we assembly language programmers thought we were, the FORTRAN compiler could usually beat us, as it could consider thousands of possible ways to generate code and use the one that ran fastest.

This REALLY does speed up short loops, and we just can't not do this. Intel hides it under the ISA, but CDC handed the job to the programmer (or the compiler).

But it looks like the long term fix is to go back to using RISC and letting the compiler figure this out. I think CDC got it right, 60 years ago. Yes 60, years ago. (No, I am not THAT old. The computer was a near antique when I was working with it.)

It’s not really a bug. It’s just implemented in a way that can be exploited. These are probabilistic attacks that exploit the fact that different code paths can take different time or consume varying amount of energy.

And RISC is just as vulnerable.
 
  • Like
Reactions: MRMSFC and throAU