The Puget Systems person I spoke to *did* dig into the settings used during their manufacturing process and said they specifically set the BIOS to the i9-9900K's default TDP. He said they don't ship the "Puget Spirit" system with the BIOS configured above the default TDP.
This can be confusing: which "default TDP" do they mean -- Intel's 95W spec for the i9-9900K, or the motherboard vendor's default settings? I asked Puget specifically about the 95W default for the i9-9900K, and his answer was in that context.
I think they used the smaller single-fan Noctua NH-U12S. Noctua's spec for the LGA-1151 socket used by the i9-9900k is 95W TDP max:
https://noctua.at/en/tdp-guide_copy
Motherboard vendors often configure their products to overclock or disable TDP limitations by default. Noctua discusses this here:
https://noctua.at/en/how-can-i-dete...lt-and-deactivate-this-automatic-overclocking
Just because the mobo vendor does this doesn't mean the PC system vendor leaves it that way. A vendor like Dell or HP has to support millions of machines and they don't put a Noctua NH-D15 in their business-class PCs.
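If you want to check what a given box is actually configured for rather than take anyone's word for it, here's a minimal sketch that reads the package power limits (PL1/PL2) on Linux via the intel_rapl powercap driver. The sysfs paths assume a single package and a reasonably recent kernel, and reading them may require root; none of this is specific to Puget or any vendor.

```python
# Minimal sketch: read the configured package power limits (PL1/PL2) on Linux
# via the intel_rapl powercap driver, to see whether the board actually
# enforces the 95W spec. Paths assume a single package; adjust as needed.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package 0

def read_watts(name: str) -> float:
    """Read a microwatt value from a RAPL sysfs node and return watts."""
    return int((RAPL / name).read_text()) / 1_000_000

if __name__ == "__main__":
    pl1 = read_watts("constraint_0_power_limit_uw")  # long-term limit (PL1)
    pl2 = read_watts("constraint_1_power_limit_uw")  # short-term limit (PL2)
    print(f"PL1 (sustained): {pl1:.0f} W")
    print(f"PL2 (burst):     {pl2:.0f} W")
    if pl1 > 96:
        print("PL1 is above Intel's 95W spec -- the board has raised or removed the limit.")
```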
Regardless of what any PC manufacturer says, if Cinebench R15 scores over 2,000 on an i9-9900K, the CPU is probably not held to Intel's 95W TDP default. If the R15 score is around 1,700, it is probably running at the 95W TDP default.
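That rule of thumb is easy to express as a tiny sketch. The thresholds below come straight from the numbers above and are approximate; they only illustrate the heuristic, not any official Intel or Maxon guidance.

```python
# Rough rule of thumb: classify whether an i9-9900K is likely running at
# Intel's 95W default based on its Cinebench R15 multi-core score.
# Thresholds are approximate and only illustrate the heuristic above.
def likely_power_config(r15_score: int) -> str:
    if r15_score >= 2000:
        return "power limits likely raised or removed (sustaining well over 95W)"
    if r15_score >= 1600:
        return "consistent with Intel's 95W TDP default"
    return "unusually low -- possibly thermal throttling or heavy background load"

for score in (2050, 1700, 1450):
    print(score, "->", likely_power_config(score))
```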
I concur with all points highlighted, and those of your previous post.
I guess what the real issue comes down to is something we haven't seen before. It's my understanding that, until recently, all processors could reach and sustain their turbo frequencies within their stated TDP, plus or minus a few percent, so we never faced this issue. The 9900K, left to do what it wants to do, will clearly pull well over 95 watts.
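You can see that for yourself by measuring actual package power under an all-core load. Here's a minimal sketch, again assuming Linux with the intel_rapl powercap driver (reading the energy counter may require root); run it while Cinebench or another all-core load is going.

```python
# Minimal sketch: sample the RAPL package energy counter twice while an
# all-core load is running and report the average package power.
# Assumes Linux with the intel_rapl powercap driver; may require root.
import time
from pathlib import Path

ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")
MAX_RANGE = Path("/sys/class/powercap/intel-rapl:0/max_energy_range_uj")

def package_power(interval_s: float = 5.0) -> float:
    start = int(ENERGY.read_text())
    time.sleep(interval_s)
    end = int(ENERGY.read_text())
    if end < start:  # counter wrapped around
        end += int(MAX_RANGE.read_text())
    return (end - start) / 1_000_000 / interval_s  # uJ -> J -> W

if __name__ == "__main__":
    print(f"Average package power: {package_power():.1f} W")
```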
Now, there are two approaches to dealing with this. On the one hand, by restricting the i9 (below the TDP, mind you) and not cooling it adequately, Apple is effectively sending Intel the message: "No, we're not going to tolerate this mislabeling of your processors." Apple has always throttled its processors to some degree, since it's always pushing the thermal envelope, but this (I believe) is the first time it has throttled away this much of a chip's potential.
AnandTech's comprehensive testing shows that at 95 watts, between 8% and 28% of performance is left on the table, but Apple is throttling the i9 to more like 80–85 watts. Apple is sending a powerful message to Intel. And, frankly, if they're working to develop their own processors, they have every right to take a jab at Intel. I'm sure Apple's processors will blow us all out of the water once we hear what they're capable of.
On the other hand, one could look at the situation and say: okay, times change, but the TDP rating hasn't, because TDP is set for different tiers of the processor market. On that point, manufacturers such as Puget had this to say about TDP:
My personal opinion - and I think it aligns pretty closely with our company policy, though I wouldn't want to speak for that without consulting other folks here - is that reaching the turbo speeds prescribed by the CPU manufacturer (either Intel or AMD) is not overclocking, so long as voltage is left at default settings.
In other words, I don't care about the TDP
We use CPU coolers which can handle heat far in excess of the TDP on the 9900K, and other models, even for extended periods of time. Because of that, what I want to see is processors reaching and maintaining the turbo clock speeds appropriate for however many cores are currently active. Thermal throttling as a last-ditch protection is fine and good (in case a fan fails, for example) but I don't want to see my processors throttling because of some artificial power draw limitation.
To me, actual overclocking is when you run part of a CPU (the core clock, the memory controller clock, etc) at a speed above what the manufacturer has rated it for. For example, using memory above 2666MHz on the 9900K - or setting it to run at 5.0GHz turbo across all cores, rather than when just 1-2 cores are under load. Overvolting is similar, but instead of changing clock speeds it involves increasing voltage to the CPU - and it is often required in order to enable overclocking to succeed. Both of those push a CPU beyond what the manufacturer has rated it for, though, whereas allowing higher wattage operation (at default clocks and voltages) isn't exceeding any performance specs, it is just allowing more heat to be generated. As long as that is handled responsibly - with a good CPU cooler and plenty of airflow through the system - then I think it is just fine and the "best practice" in my opinion.
Granted, since that response is somewhat dated, I'll be reaching out to them for their current position on the topic. Which side of the field is right, I guess time will tell.
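In the spirit of that Puget quote (what matters to them is whether the chip reaches and holds its rated turbo, not the power number), here's a minimal sketch that reads per-core frequencies from the Linux cpufreq sysfs interface under load. The 4.7 GHz all-core turbo figure for the i9-9900K is my assumption; substitute the rated value for whatever CPU you're checking, and note that scaling_cur_freq can lag the true instantaneous clock slightly.

```python
# Minimal sketch: check whether all cores are holding the rated all-core turbo
# under load, using Linux cpufreq sysfs. The 4.7 GHz all-core turbo for the
# i9-9900K is an assumption; substitute your CPU's rated value.
from pathlib import Path

ALL_CORE_TURBO_KHZ = 4_700_000  # assumed i9-9900K all-core turbo (4.7 GHz)

def core_freqs_khz():
    for node in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        f = node / "cpufreq" / "scaling_cur_freq"
        if f.exists():
            yield node.name, int(f.read_text())

if __name__ == "__main__":
    for cpu, khz in core_freqs_khz():
        status = "OK" if khz >= ALL_CORE_TURBO_KHZ * 0.98 else "below rated turbo"
        print(f"{cpu}: {khz / 1_000_000:.2f} GHz  ({status})")
```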
I am seriously confused by this entire thread. Are people saying the i9's performance is horrible and the i5 is better? Or is the complaint still that it doesn't reach a sustained 5 GHz turbo boost?
The i9 in the iMac throttles significantly more than the i5. That said, both seem to land at roughly equivalent thermals, and the i9 is still shown to outperform the i5 overall. However, depending on the workload and how the i9 is loaded (and thus throttled), it may end up at lower effective clock speeds than an equivalently loaded i5. Since some applications benefit more from higher clock speed, the i5 may edge it out in those cases.
In other words, while the i9 should always be faster than the i5, there may be use cases where it isn't, because it loses roughly 80% of its potential clock speed boost versus roughly 20% for the i5. Time and benchmarks will continue to paint a clearer picture.
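As a back-of-the-envelope illustration of those ~80% vs ~20% figures, here's a tiny sketch of the arithmetic. The base/turbo clocks below are my assumptions (i9-9900K at 3.6/5.0 GHz, i5-9600K at 3.7/4.6 GHz), and the linear "fraction of boost lost" model is only illustrative; substitute the actual parts and measured clocks for the iMac configurations in question.

```python
# Back-of-the-envelope illustration of the "~80% vs ~20% of boost lost" point.
# Model: effective clock = base + (1 - fraction_of_boost_lost) * (turbo - base).
# Clock figures are assumptions, not measurements.
def effective_clock(base_ghz: float, turbo_ghz: float, boost_lost: float) -> float:
    return base_ghz + (1.0 - boost_lost) * (turbo_ghz - base_ghz)

i9 = effective_clock(3.6, 5.0, 0.80)   # ~3.88 GHz
i5 = effective_clock(3.7, 4.6, 0.20)   # ~4.42 GHz
print(f"i9 effective clock: {i9:.2f} GHz")
print(f"i5 effective clock: {i5:.2f} GHz")
# In a lightly threaded, clock-bound workload the heavily throttled i9 can end
# up at a lower clock than the i5, even though it has more cores.
```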