
Bug-Creator

macrumors 68000
May 30, 2011
1,783
4,717
Germany
Some power consumption creep is expected as the technology matures. For example, early GPUs only consumed 40-50W max, but with the advent of gaming and later GPGPU there was a need for bigger and more powerful devices.

Nope, nope, and nope again.

The Voodoo3 is listed at 15W, and a 200MHz Pentium is about the same.
The M1 Max is around 30W, so about the same power draw with much more compute.

That's what I call progress.
 

Bug-Creator

macrumors 68000
May 30, 2011
1,783
4,717
Germany
They should overclock their processors a bit and care a little bit less about thermals.

Those cores were designed to run optimally at those clocks. Gaining just 10% more clock (without changing the design) might require twice the power, for all we know.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
The ‘E’ core designation is not used the same way Apple uses it. These are not meant to be phone ‘standby’ or watch CPU cores.

They are aimed more at space efficiency than at maximizing energy savings. Those are two different goals (somewhat related in their outcome on performance, but different).
One thing I had also thought this meant is that Apple E cores can do the same as Apple’s P cores, just more efficiently. So, with a light workload, a customer wouldn’t notice the diff if it was all E cores. However, for Intel, they reach E cores mainly by removing parts of what makes P cores… well, P. So, an Intel chip with all E cores wouldn’t be able to run legacy software that has not been rewritten to understand what these new, less capable cores do. Instead of “route this command to any available core” it would be “route this command to an available P core only”.

And this is why you only get the best performance from the new chips using the latest Windows… they’ve moved part of the scheduler to the OS to help Intel, but old software can still break if routed to an E core.
 

unrigestered

Suspended
Jun 17, 2022
879
840
I think Apple is pushing too much for thermals.

Don’t worry: Apple doesn’t actually care too much about thermals, otherwise it wouldn’t allow 102 degrees C for M1 MBAs, and I think it’s even 108 degrees for the M2 for “normal” operation.
Even the MBPs are set to prefer running at close to 100 degrees when pushed, rather than ramping up the fan speed too much.
Most Windows laptops try to keep the temps below 80 degrees, which is better for longevity.
They achieve this by ramping up fan speeds to very (annoyingly) audible levels, which is what Apple doesn’t want; Apple prefers frying your components at the maximum tolerable temperatures instead, which might still be OK-ish in the short term, but at the risk of decreasing maximum lifetime by a good bit.
 
  • Haha
Reactions: jdb8167 and Juraj22

Mirachan_007

macrumors member
May 7, 2021
71
39
Hiroshima
Suddenly those PowerBook G5 mockups actually make sense for Intel's forthcoming processors.

[Image: PowerBook G5 mockup]
This would be great but hard to carry
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Really it is smart of Apple to not really talk about clocks as it neatly sidesteps such comparisons.

Apple is the only company I know that shows efficiency curves instead of just peak performance, which is much more useful. Of course, they hide details behind obscure tests, which is less useful. Still, I like their marketing much better than Intel's.


I think Apple is pushing too much for thermals.

For laptops: it's absolutely phenomenal and they have to keep doing it.
For desktops (e.g. iMac, Mac Studio, Mac Pro): it's OK but also a bit irrelevant at the same time. They should overclock their processors a bit and care a little bit less about thermals.

I agree. But this is a consequence of going mobile-first. M1 has no vertical scalability whatsoever, let’s see how things will be going forward.

However, for Intel, they reach E cores mainly by removing parts of what makes P cores… well, P. So, an Intel chip with all E cores wouldn’t be able to run legacy software that has not been rewritten to understand what these new, less capable cores do. Instead of “route this command to any available core” it would be “route this command to an available P core only”.

Where did you get this idea from? The only feature Intel’s E-cores don’t support is AVX-512, which was never properly supported on Intel consumer platforms anyway. They are perfectly capable of running any modern x86-64 code.

Don’t worry: Apple doesn’t actually care too much about thermals, otherwise it wouldn’t allow 102 degrees C for M1 MBAs, and I think it’s even 108 degrees for the M2 for “normal” operation.
Even the MBPs are set to prefer running at close to 100 degrees when pushed, rather than ramping up the fan speed too much.
Most Windows laptops try to keep the temps below 80 degrees, which is better for longevity.
They achieve this by ramping up fan speeds to very (annoyingly) audible levels, which is what Apple doesn’t want; Apple prefers frying your components at the maximum tolerable temperatures instead, which might still be OK-ish in the short term, but at the risk of decreasing maximum lifetime by a good bit.

There is no empirical evidence whatsoever that running laptop chips at high temperatures meaningfully affects expected lifespan. Let’s stop spreading these myths already. Besides, your definition of thermals is of very little use as it doesn’t convey any relevant information. From the usability perspective “thermals” are fan noise, ability to maintain performance under load and chassis (external) temperature. There is no reason whatsoever to care about the internal temperature of the chip as long as it’s operating as expected.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Where did you get this idea from? The only feature Intel’s E-cores don’t support is AVX-512, which was never properly supported on Intel consumer platforms anyway. They are perfectly capable of running any modern x86-64 code.
Thanks, just looked for the information I’d glossed over previously and it appears to be more related to the E cores being seen as “another system”, so software written to prevent running in a VM, or attempting a form of copy protection (as old games would), wouldn’t run. The solution in some cases, if the system allows it, is to disable the E cores.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Thanks, just looked for the information I’d glossed over previously and it appears to be more related to the E cores being seen as “another system”, so software written to prevent running in a VM, or attempting a form of copy protection (as old games would), wouldn’t run. The solution in some cases, if the system allows it, is to disable the E cores.

Ah, you are talking about the anti-cheat/DRM utilities? Those are a dumpster fire anyway, not really Intel’s fault that they don’t work.
 
  • Like
Reactions: Unregistered 4U

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
There is no empirical evidence whatsoever that running laptop chips at high temperatures meaningfully affects expected lifespan. Let’s stop spreading these myths already. Besides, your definition of thermals is of very little use as it doesn’t convey any relevant information. From the usability perspective “thermals” are fan noise, ability to maintain performance under load and chassis (external) temperature. There is no reason whatsoever to care about the internal temperature of the chip as long as it’s operating as expected.
The temperature sensors on the M2 are undocumented. As far as I can tell there is no consensus on what sensors match up with the M1 sensors or if they do at all. At least one open source developer was convinced that the M2 idled at 51 °C because one of the sensors reported that number at all times. Apparently this temperature reading is a calibration value—PMU tcal and PMU2 TR0Z.

Here is a list of the HID SMC sensors read from IOHIDEventSystemClient.
[Screenshot: list of HID SMC sensor readings]


And then there are the SMC key/value pairs you get by iterating over "AppleSMC" using IOKit. I find 111 keys that could reasonably be temperatures if you just look for anything that starts with a "T" and has a value at or above ambient temperature.

Anyone telling you that the M2 temperatures are higher than the M1 would need to explain how they know this. If it is just using an existing commercial or open source temperature sensor, I wouldn't be very confident that the temperatures are actually measuring the same thing between the two SoCs.
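
For anyone who wants to poke at this themselves, here is a minimal Swift sketch of that filtering heuristic. The readAllSMCKeys() helper is purely hypothetical; the actual IOKit/"AppleSMC" plumbing is undocumented and not shown.

```swift
import Foundation

// Hypothetical helper (assumption): returns every SMC key/value pair obtained by
// iterating the "AppleSMC" service via IOKit. The raw SMC protocol is undocumented,
// so the plumbing is omitted and an empty dictionary is returned as a stand-in.
func readAllSMCKeys() -> [String: Double] {
    return [:]
}

// Heuristic from the post above: treat any key starting with "T" whose value is at
// or above ambient temperature as a candidate temperature sensor.
func candidateTemperatureKeys(ambient: Double = 20.0) -> [(key: String, celsius: Double)] {
    return readAllSMCKeys()
        .filter { $0.key.hasPrefix("T") && $0.value >= ambient }
        .sorted { $0.value > $1.value }
        .map { (key: $0.key, celsius: $0.value) }
}

for sensor in candidateTemperatureKeys() {
    print("\(sensor.key): \(String(format: "%.1f", sensor.celsius)) °C")
}
```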
 
  • Like
Reactions: Analog Kid

venom600

macrumors 65816
Mar 23, 2003
1,310
1,169
Los Angeles, CA
The power consumption here is nothing compared to a tumble dryer.

Those have dedicated circuits on home electrical panels. One outlet, one circuit. The average home circuit can handle 1500W... and they are usually split across multiple outlets in a room (or across more than one room in older homes).

The Nvidia 4090 will reportedly draw upwards of 800W under load. Add this chip and we are at 1250W. That's just for two components. Add in the motherboard, storage, RAM, cooling (especially liquid cooling), whatever RGB lighting effects someone might want, and other add-in boards, and it's easy to see that you could need a 2000W power supply. Not to mention whatever the monitor and external devices consume.

We are literally at a point where a single computer from a single plug on one outlet can potentially draw more than enough under load to trip the circuit breaker in most homes.
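
To put rough numbers on that, here is a back-of-the-envelope sketch assuming a typical North American 15 A / 120 V branch circuit and the wattages floated in this thread:

```swift
// Rough math, assuming a 15 A / 120 V branch circuit (typical in North America).
let breakerAmps = 15.0
let mainsVolts = 120.0
let circuitWatts = breakerAmps * mainsVolts      // 1800 W hard limit
let continuousWatts = circuitWatts * 0.8         // ~1440 W under the 80% continuous-load rule

// Wattages as floated in this thread (assumptions, not measurements).
let cpuWatts = 350.0             // this chip
let gpuWatts = 800.0             // rumored 4090 peak
let restOfSystemWatts = 150.0    // board, RAM, storage, cooling, lighting (rough guess)

let systemWatts = cpuWatts + gpuWatts + restOfSystemWatts
print("PC ≈ \(Int(systemWatts)) W vs. ≈ \(Int(continuousWatts)) W continuous budget")
// Prints: PC ≈ 1300 W vs. ≈ 1440 W continuous budget
// Add a monitor and anything else sharing the circuit and you are brushing the breaker's limit.
```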
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
Those have dedicated circuits on home electrical panels. One outlet, one circuit. The average home circuit can handle 1500W... and they are usually split across multiple outlets in a room (or across more than one room in older homes).

The Nvidia 4090 will reportedly draw upwards of 800W under load. Add this chip and we are at 1250W. That's just for two components. Add in the motherboard, storage, RAM, cooling (especially liquid cooling), whatever RGB lighting effects someone might want, and other add-in boards, and it's easy to see that you could need a 2000W power supply. Not to mention whatever the monitor and external devices consume.

We are literally at a point where a single computer from a single plug on one outlet can potentially draw more than enough under load to trip the circuit breaker in most homes.
Okay, that is insane, especially in apartments or other places where it is difficult to add extra circuits. And most of those places may have wiring well set up for kitchen/bathroom appliances, but non-kitchen/bathroom rooms tend to have been designed on the assumption that you'd have as few circuits and as many plugs as the electrical code allows.

I was a Windows guy in the days of hotburst and I don't understand how we seem to have landed at something worse than those days all of a sudden. Unless, and I guess this is the core point, only absolutely crazy people buy these things: the normal high-end GPU is the 4060/4070, the normal high-end CPU is some i7 with a ~100W peak TDP, etc., and this thing is just the super-hot-rod-craziness equivalent of Alpina and Brabus cars.
 

pshufd

macrumors G4
Original poster
Oct 24, 2013
10,146
14,573
New Hampshire
I think Apple is pushing too much for thermals.

For laptops: it's absolutely phenomenal and they have to keep doing it.
For desktops (e.g. iMac, Mac Studio, Mac Pro): it's OK but also a bit irrelevant at the same time. They should overclock their processors a bit and care a little bit less about thermals.

I'm fine with their focus on thermals, even on the desktop.
 
  • Like
Reactions: tmoerel

pshufd

macrumors G4
Original poster
Oct 24, 2013
10,146
14,573
New Hampshire
The focus on thermals also forces them to actually create ever more performant processors, for real… instead of depending on jacking up the power draw.

Look at the weather all over the world. Parts of China have had record stretches of 40 degree (C) weather. Europe has had hot weather, as has the US. India has had hot weather as well. Who wants to spend money on air conditioning when it's hot outside? In China, there have been power cuts ordered for foreign companies, including Apple and Tesla, which may impact their production. The power cuts are so that households can have air conditioning.

We have some parts of the US where water levels in lakes and rivers are low and the areas depend on hydroelectric power. Same in other parts of the world.

Back in the 1970s when natural resources were more scarce, we actually had PSAs on saving electricity, natural gas, and oil. We had a lot of it cheap for so long that we thought it our right. I like my 27 inch iMac but it will eventually get replaced with Apple Silicon.


 
  • Like
Reactions: Unregistered 4U

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I've read that they are going to make a mobile line with only efficiency cores. It should be interesting to see what the efficiency is like.

Pretty good chance that is aimed at least as much at low cost as it is at mobile. Mobile here means low-end Chromebooks and netbook-like Windows laptops. These are Celeron and Pentium class products. Going to see lots of placements in upper-end home NAS units and low-power-consumption servers.



This is a long way from the phone or watch sense of mobile.

If Qualcomm or MediaTek threw a halfway serious, affordable tablet/laptop chip at these kinds of products, they'd be on thin ice. These will do OK against phone chips being pressed into service in a Chromebook, more so because they don't sacrifice too much for efficiency, rather than chasing the lowest power consumption metrics.

The current N series is capped at 4 cores, so going to 8 would be a step up for the low-power server roles.


Unlikely to see Turbo clocks as high as in the top end of the mainstream Gen 12 SoC lineup, much less the extreme, way-outside-the-efficient-design-range clocks at the top end of the Gen 13 lineup.
 

arvinsim

macrumors 6502a
May 17, 2018
823
1,143
Look at the weather all over the world. Parts of China have had record stretches of 40 degree (C) weather. Europe has had hot weather, as has the US. India has had hot weather as well. Who wants to spend money on air conditioning when it's hot outside? In China, there have been power cuts ordered for foreign companies, including Apple and Tesla, which may impact their production. The power cuts are so that households can have air conditioning.

We have some parts of the US where water levels in lakes and rivers are low and the areas depend on hydroelectric power. Same in other parts of the world.

Back in the 1970s when natural resources were more scarce, we actually had PSAs on saving electricity, natural gas, and oil. We had a lot of it cheap for so long that we thought it our right. I like my 27 inch iMac but it will eventually get replaced with Apple Silicon.


I am glad that Apple Silicon is focusing on efficiency.

Hope that AMD and Intel can catch up.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
Those have dedicated circuits on home electrical panels. One outlet, one circuit. The average home circuit can handle 1500W... and they are usually split across multiple outlets in a room (or across more than one room in older homes).

The Nvidia 4090 will reportedly draw upwards of 800W under load. Add this chip and we are at 1250W. That's just for two components. Add in the motherboard, storage, RAM, cooling (especially liquid cooling), whatever RGB lighting effects someone might want, and other add-in boards, and it's easy to see that you could need a 2000W power supply. Not to mention whatever the monitor and external devices consume.

We are literally at a point where a single computer from a single plug on one outlet can potentially draw more than enough under load to trip the circuit breaker in most homes.
Lol, 800 watts? Try 1200 if you have dual 16-pin connectors. The current Kingpin 3090 Ti can pull 1275W (it has 2 of the new PCIe 5 power connectors). Though that is an AIB card; the FE card (from Nvidia) is rated for 450W. It is likely the FE 4090 will have 1 PCIe 5 connector, so it would be capped at 600W.

But yes, your point does stand; these components take a lot of power these days. The folks getting the highest of high-end parts don't care, though.
 
  • Wow
Reactions: pastrychef

neinjohn

macrumors regular
Nov 9, 2020
107
70
I remember reading an interview on AnandTech with the head of the overclocking department at Intel and thinking it was the most important job at the company since AMD woke up. Those kinds of stock overclocks are insane.
 
  • Like
Reactions: diamond.g

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
One thing I had also thought this meant is that Apple E cores can do the same as Apple’s P cores, just more efficiently. So, with a light workload, a customer wouldn’t notice the diff if it was all E cores.

Technically I don't think that is true. It isn't documented well, but it appears a P core complex has a 'hidden' AMX (Apple matrix) co-processor in it. However, Apple doesn't expose access to that processor to 'normal' applications (or even most of their own software). You can't get to it except through Apple's Accelerate framework code. The Accelerate library has a "slower" way of doing the same problems, and if something is run the 'slow' way enough it will probably get promoted to a P core anyway.
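
For illustration, this is roughly what "going through Accelerate" looks like from an app's point of view; a minimal sketch, with no claim about whether any particular call is actually dispatched to AMX:

```swift
import Accelerate

// Tiny matrix multiply through the Accelerate framework (vDSP). There is no public,
// instruction-level access to AMX; library calls like this are the sanctioned path,
// and whether a given call actually lands on an AMX unit is up to Apple.
let a: [Float] = [1, 2,
                  3, 4]   // 2x2, row-major
let b: [Float] = [5, 6,
                  7, 8]   // 2x2, row-major
var c = [Float](repeating: 0, count: 4)

vDSP_mmul(a, 1, b, 1, &c, 1, 2, 2, 2)   // C = A × B
print(c)                                 // [19.0, 22.0, 43.0, 50.0]
```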

Apple's core complexes are different too, but the transition of loads between them is cleaner because the differences are carefully managed.

Intel has a relatively much messier difference. There is some evidence that Intel "glued" their P and E cores together late, after they had been feature-designed separately. So when queried in certain ways they present as two distinct x86_64 APIs. Technically that is true. The P cores natively (at least for Gen 12) have documented AVX-512 instructions (they are fully accessible in the Xeon SP (Sapphire Rapids) implementations of that baseline design). For Gen 12 (Alder Lake) they were turned off somewhat late in the rollout process. Somewhere inside of Intel there was an internal battle on whether some Gen 12 configurations (without E cores) would have it turned on and others turned off (or even perhaps whether to push some relatively large hackery into OS kernels for an awkward "stop and restart" failover to P cores). There were some hidden hooks in the UEFI/BIOS to turn AVX-512 on/off. Intel has now just completely fused it off in the current stepping iterations of the Gen 12 processor packages.

However, for Intel, they reach E cores mainly by removing parts of what makes P cores… well, P. So, an Intel chip with all E cores wouldn’t be able to run legacy software that has not been rewritten to understand what these new, less capable cores do. Instead of “route this command to any available core” it would be “route this command to an available P core only”.

Not really. There are differences, but not this (when the P cores are properly set up). The E cores were just a substantively different approach to the design. It does leave out AVX-512. But what Intel largely did was take Skylake (Gen 7, Xeon W-2100, Xeon W-6100) and make it much smaller. They brought AVX2 and some AI/ML stuff for enhanced SIMD but left out AVX-512 to save lots of space. It is designed for lower Turbo clocks and really focused on die space efficiency.

It isn't trying to be the 'old school' Atom processor. At the Architecture Day intro in 2021, Intel had lots of comparisons to Skylake.

"...
When comparing 1C1T of Gracemont against 1C1T of Skylake, Intel’s numbers suggest:

  • +40% performance at iso-power (using a middling frequency)
  • 40% less power* at iso-performance (peak Skylake performance)
*'<40%' is understood to mean 'below 40% of the power'

When comparing 4C4T of Gracemont against 2C4T of Skylake, Intel’s numbers suggest:

  • +80% performance peak vs peak
  • 80% less power at iso-performance (peak Skylake performance)
We pushed the two Intel slides together to show how they presented this data.
...."


So they are chasing performance, just not willing to sacrifice lots of area to do it. That's why it was aimed at Intel 4 (the old 7nm). Some of the area wins were going to come just from a smaller fab process. But the design was on a flexible deployment baseline so it could 'fall back' to Intel 7 (Enhanced SuperFin '10nm').

They took SMT out because it has a die space consumption overhead (in addition to some issues if you don't implement it securely). With a smaller core, if they want a higher thread count they can just throw more cores into the SoC. (This core was always supposed to also go into 20+ core special-market server chips, something similar to the Xeon D class.) Note in the above that with just one of these cores on, there is a bigger power drop relative to Skylake than with 4 cores on. Once you get up into double-digit Gracemont/E core counts it isn't a huge power saver. It is better (enough to be significantly helpful), but not huge.


The P (Golden Cove) cores are coupled as much to the Xeon SP lineup constraints as to the mid-to-high range desktop (and high-range laptop) ones. Bigger area (at higher prices) for more features and higher Turbo ranges with "as much as you need" wall socket power. It is not as clear this was supposed to be on Intel 4 when they initially scoped it out. (It would have helped to control the die area consumption problem.)


Reportedly Gen 13 (Raptor Lake) has some modified Redwood Cove (**) cores. It wouldn't be surprising if the stuff that is completely fused off in Gen 12 P cores is just removed. Why didn't Intel do that in the first round? Probably because they didn't have the time or resources while running around chasing a bunch of crazy forest fires across the product line, plus internal strife (e.g., desktop Rocket Lake (Gen 11) was likely a wasted-effort misadventure over the long term).


And this is why you only get the best performance from the new chips using the latest Windows… they’ve moved part of the scheduler to the OS to help Intel, but old software can still break if routed to an E core.

Intel didn't move the scheduler to the OS. The scheduler is/was in the OS. What they have done is provide the OS scheduler with more concrete, quantifiable data so it can do its job better with less "guessing".
Again, not too different from what Apple does. Apple has a few cases where the OS scheduler can push some of the juggling overhead to the processor, but it is still more a matter of bubbling up good, informative metrics to the OS scheduler to get better allocations of resources.

Userland software should provide hints and/or suggestions about where threads should go. The OS scheduler should be doing the actual scheduling work, though, because doing the job right requires the aggregate picture. Apps only have a picture of what they are doing.
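
On the Apple side, the usual hint mechanism is quality-of-service classes. A minimal sketch of what that looks like from userland (the queue labels are made up for illustration):

```swift
import Dispatch

// Userland only tags the work with a quality-of-service class; macOS decides whether
// it runs on a P core or an E core based on the whole-system picture.
let maintenance = DispatchQueue(label: "com.example.indexer", qos: .utility)
let frontline   = DispatchQueue(label: "com.example.render",  qos: .userInteractive)

maintenance.async {
    // latency-tolerant background work (happy to land on E cores)
}
frontline.async {
    // latency-sensitive work the user is actively waiting on (should favor P cores)
}
```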

As pointed out in another response, there were/are problems with DRM mechanisms and 'too brittle for their own good' apps that freak out if all the CPU cores don't present 100% identical info. That code is crufty anyway. That kind of code shouldn't hold back processor package evolution.


Edit: (**) Intel code name bingo tripped me up. :) Raptor Lake has Raptor Cove cores. Redwood Cove is for Gen 14 and a bigger step change. The "lake" and "cove" matching indicates the core adjustment is specific to that incremental change. Architecture-documentation-wise, Raptor Cove and Golden Cove are the same. How much it is a marketing name change (trying to indicate more progress than there is) and how much an implementation difference (better design libraries, security fixes, bug fixes, etc.) we'll see.
 
Last edited:
  • Like
Reactions: Unregistered 4U

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Technically I don't think that is true. It isn't documented well, but it appears a P core complex has a 'hidden' AMX (Apple matrix) co-processor in it. However, Apple doesn't expose access to that processor to 'normal' applications (or even most of their own software). You can't get to it except through Apple's Accelerate framework code. The Accelerate library has a "slower" way of doing the same problems, and if something is run the 'slow' way enough it will probably get promoted to a P core anyway.

There are AMX units in both P-core and E-core complexes. The units in E-cores are smaller and have lower throughput.

 
  • Like
Reactions: Unregistered 4U

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
Those have dedicated circuits on home electrical panels. One outlet, one circuit. The average home circuit can handle 1500W... and they are usually split across multiple outlets in a room (or across more than one room in older homes).

The Nvidia 4090 will reportedly draw upwards of 800W under load. Add this chip and we are at 1250W. That's just for two components. Add in the motherboard, storage, RAM, cooling (especially liquid cooling), whatever RGB lighting effects someone might want, and other add-in boards, and it's easy to see that you could need a 2000W power supply. Not to mention whatever the monitor and external devices consume.

We are literally at a point where a single computer from a single plug on one outlet can potentially draw more than enough under load to trip the circuit breaker in most homes.
What homes? Average homes support up to 6000W or even more, depending on the country/city/house. Maybe I misunderstood you.
If only 1500W were supported... you turn on 1 high-end PC + 1 1200 BTU AC and you are done.
But still, a PC that draws upwards of 1200W is insane for the times we are living in.
 
Last edited:

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
350W CPU? Yeah, that's exactly what Mother Nature needs right now when a lot of electricity still comes from non-renewable sources…
Seems silly to complain about a CPU using the equivalent of three incandescent light bulbs for actual computing tasks, when people are burning a million times that amount ‘mining’ useless crypto like Bitcoin.
 