We don't know why.

Or, in our post-modern / post-structuralist world: nothing is real, everything is relative, and reality, therefore, is an infinite panorama of individualistic woe interminably engaged in stupefying, hair-splitting protestations of grief while remaining wilfully ignorant of anything vaguely reasoned or reasonably useful.

Well . . . , void of any usefulness beyond promoting our own idiosyncratic, onanistic, solipsistic, isolated irrelevancies to one another, that is!


i.e. Agent Smith was right:

 


I've no idea what this means or how it relates to my comment. Can I ask you to strip out the poetry and pop culture references and state it more plainly?
 
:), I was just commenting on how this, and many other threads, tend to spiral around a particular individual's penchant for maintaining singular arguments / POVs, contrary to the time and effort many put into explaining the various details.

In the end, the discussion appears not to be about product A or B, this or that observation, better analysis, worse outcomes or whether we're ascertaining an objective skew on the reality of a particular circumstance etc . . . It's about us (humans), and how we 'want' to see things in a particular way, often and ultimately, for no other reason than we chose it.

i.e. As Agent Smith argues, even if we were presented with the most attainably perfect outcome, it would still fall short in our idiosyncratic, abstract reasoning. Why? Because we didn't choose it!

Nietzsche's last man.
 
The A17 is in all cases more efficient at a given clock. That is, for any frequency X, the A17 will use less energy than an A16.
Does A17 run at the same clock frequency as A16 in daily tasks? If yes, A17 is more efficient. If no, it may or may not. For instance, it is less efficient in Geekbench.
 
It's more efficient, but it can consume more power. That's why the 15 PM doesn't have more battery life in some reviews, and that's why some people are complaining about heat and battery life.
Finally we are approaching agreement.

What you say may well be true now, but that says nothing about the A17, and everything about how Apple chooses to use it. They may well introduce an efficiency mode in a point release of the OS this year, which will save some power. Or they may retune it to run at the A16's clock except when it's running in "game mode" (like the Mac), and may or may not expose a control for that (which amounts to the same thing, but looks different PR-wise). Or they may just not care, because it's still pretty awesome as-is. After all, most people don't buy potato phones no matter how good the battery life is, unless they can't afford anything better.

Or hey, maybe we'll all remember that there is a low-power mode already implemented in the phone, and we can always turn that on if we're in a battery panic...

Does A17 run at the same clock frequency as A16 in daily tasks? If yes, A17 is more efficient. If no, it may or may not. For instance, it is less efficient in Geekbench.
I think you're making the same mistake. You're making claims about the A17's efficiency, but what you really are talking about is the iPhone 15's efficiency. Those are two separate things- though both are interesting and worth remarking on.
 
You're making claims about the A17's efficiency, but what you really are talking about is the iPhone 15's efficiency. Those are two separate things- though both are interesting and worth remarking on.
I think the comparison should be made at the frequency of the SoC running the daily tasks.

Anyway, if you compare SoCs at the same clock frequency, you also have to compare performance at that frequency. So the improvement between A17 and A16 is the improvement in IPC, not the increase in Geekbench score. How much has IPC increased and how much has power consumption decreased?
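To put a rough frame on that question (just a sketch with placeholder numbers, since neither the IPC gain nor a like-for-like power figure has been established in this thread): performance scales with IPC × frequency, so the IPC ratio is the score ratio divided by the clock ratio.

```python
# Back-of-the-envelope split of a generational gain into IPC vs. clock.
# All inputs are hypothetical placeholders, not measured A16/A17 figures.

def ipc_ratio(score_ratio: float, freq_new_ghz: float, freq_old_ghz: float) -> float:
    """Perf ~ IPC * frequency, so IPC ratio = score ratio / clock ratio."""
    return score_ratio / (freq_new_ghz / freq_old_ghz)

# e.g. a 10% benchmark score gain alongside a 3.5 -> 3.8 GHz clock bump
print(ipc_ratio(1.10, 3.8, 3.5))  # ~1.01, i.e. most of the gain came from clock
```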
 
Finally we are approaching agreement.

What you say may well be true now, but that says nothing about the A17, and everything about how Apple chooses to use it. They may well introduce an efficiency mode in a point release of the OS this year, which will save some power. Or they may retune it to run at the A16's clock except when it's running in "game mode" (like the Mac), and may or may not expose a control for that (which amounts to the same thing, but looks different PR-wise). Or they may just not care, because it's still pretty awesome as-is. After all, most people don't buy potato phones no matter how good the battery life is, unless they can't afford anything better.

I'm not sure I'm understanding the argument... The A17 is more efficient. It can do the same work for less energy. So the fact it can do more work for more energy isn't any kind of problem. If it's getting hot, or draining the battery faster it's because you're getting more from it.

If you're doing the same thing at a higher clock rate, then that thing will complete more quickly. Energy is power × time, so it would draw more power for less time, and the more efficient chip will consume less energy overall.

Or am I missing the point of the argument...
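To put numbers on the energy-is-power-times-time point (purely illustrative wattages and durations, not measurements of any real chip): over a fixed window, a chip that draws more power but finishes sooner and then idles can still come out ahead on energy.

```python
# Race-to-sleep arithmetic with made-up numbers: energy = power * time.
# A chip that burns more watts while active can still use less energy
# over a fixed window if it finishes the task sooner and then idles.

def window_energy(active_w: float, task_s: float, idle_w: float, window_s: float) -> float:
    """Joules consumed over the window: active phase plus idle remainder."""
    return active_w * task_s + idle_w * (window_s - task_s)

slower = window_energy(active_w=4.0, task_s=1.00, idle_w=0.1, window_s=2.0)
faster = window_energy(active_w=5.0, task_s=0.75, idle_w=0.1, window_s=2.0)
print(slower, faster)  # 4.1 vs 3.875 J: the higher-power chip wins if it's efficient enough per task
```

The catch, as the replies below get into, is that this only works out if the extra power doesn't grow faster than the time shrinks.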

I think you're making the same mistake. You're making claims about the A17's efficiency, but what you really are talking about is the iPhone 15's efficiency. Those are two separate things- though both are interesting and worth remarking on.

That's my point when I say we don't know why we're hearing reports of higher power drain (and also many reports of no higher power drain). There are many, many reasons beyond core efficiency that could cause this. First and foremost: software bugs. Is it sleeping as expected between workloads? Is it properly powering down unused components? Is it running the correct code, or spinning in a loop unchecked? Is the heat coming from something completely unrelated to the A17 in the system?
 
If you're doing the same thing at a higher clock rate, then that thing will complete more quickly. Energy is power × time, so it would draw more power for less time, and the more efficient chip will consume less energy overall.
You need to take special care in that calculation: the time shrinks linearly with frequency, but the power consumption does not rise linearly, as the fitted curves show.
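Concretely (a sketch under the textbook DVFS assumption that power grows roughly with the cube of frequency because voltage has to rise with clock; real silicon won't follow this exactly): if time per task scales as 1/f while power scales as f³, energy per task scales as f².

```python
# Why "finishes faster" doesn't automatically mean "uses less energy".
# Assumes P ~ f^3 (P ~ C*V^2*f with V roughly tracking f) and t ~ 1/f,
# so energy per fixed task E = P*t ~ f^2. Illustrative only; the two
# frequencies are taken from this thread purely as example inputs.

def energy_ratio(f_high_ghz: float, f_low_ghz: float) -> float:
    """Energy-per-task ratio between two operating points of the same core."""
    return (f_high_ghz / f_low_ghz) ** 2

print(energy_ratio(3.78, 3.2))  # ~1.40: ~40% more energy per task at the higher clock
```

Which of those two effects dominates in practice is exactly what the measured power/frequency curves are for.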
 
Curiosity question: Why are M1/M2 numbers 'scattered' on the graph? There seems to be more noise in those numbers. Is that because there's likely more going on in the background of MacOS compared to iOS?

The iPhones have less thermal headroom, so the core will gradually decrease its performance as the test is running to manage the heat and power consumption. That’s why you see a wider range of frequencies on A-series.

M-series has more thermal headroom and can more easily maintain the frequency/power at a steady level. The clusters correspond to different models (e.g. M2/M2 Pro/M2 Max) which are all clocked slightly differently as well as single-/multi-core frequency points.
 
For many, the A17's competition is the A16, not Qualcomm's latest SoC. So they may be disappointed if the A17 is less efficient than the A16.

That might be so, however I am more interested in discussing empirical findings and what they might mean for the platform rather than the emotional state of people I don't know.

How do you know that A17 is more efficient than A16 at reduced frequency? Are you comparing A17 at reduced frequency with A16 at maximum frequency or both at reduced frequency?

It’s in the graphs I posted. I have enough samples to show that A17 consumes a bit less power at the same frequency over a portion of the frequency range. Of course, one should keep in mind that these are maximal values - real world use might be more complex due to power gating.
 
That might be so, however I am more interested in discussing empirical findings and what they might mean for the platform rather than the emotional state of people I don't know.



It’s in the graphs I posted. I have enough samples to show that A17 consumes a bit less power at the same frequency over a portion of the frequency range. Of course, one should keep in mind that these are maximal values - real world use might be more complex due to power gating.
These tests are also for simple integer math routines. Other components of the core may see different degrees of efficiency improvements. It would be interesting to run the monitoring at the same time as a more extensive benchmark suite like Geekbench.
 
You need to take special care in the calculation if the time is linear in frequency and the consumption is not as shown in the fitted curves.
Sure, but if they have the ability to scale the clock but are running it at a higher clock without a benefit in efficiency, then it's simply a tuning problem, which I'd consider a bug. Apple has made the point in the past that race to sleep is an important part of their power management, though, so even if the A17 is pulling a hair more power during the workload, overall system energy can be presumed to be lower.
 
If they have the ability to scale the clock but are running it at a higher clock without a benefit in efficiency, then it's simply a tuning problem, which I'd consider a bug.
What if the increase in IPC was not enough and the SoC team had to increase the clock frequency to make it more marketable? AMD/Intel/Nvidia tend to "overclock" their products to win benchmarks for marketing purposes.

I have enough samples to show that A17 consumes a bit less power at the same frequency over a portion of the frequency range.
Is it so strange that a new SoC on a new node is more efficient than the previous generation in the clock frequency range it uses? Are there examples where this has not happened?
 
Is it so strange that a new SoC on a new node is more efficient than the previous generation in the clock frequency range it uses? Are there examples where this has not happened?

Why would it be strange?
 
We don't know why.
Finally we are approaching agreement.

What you say may well be true now, but that says nothing about the A17, and everything about how Apple chooses to use it. They may well introduce an efficiency mode in a point release of the OS this year, which will save some power. Or they may retune it to run at the A16's clock except when it's running in "game mode" (like the Mac), and may or may not expose a control for that (which amounts to the same thing, but looks different PR-wise). Or they may just not care, because it's still pretty awesome as-is. After all, most people don't buy potato phones no matter how good the battery life is, unless they can't afford anything better.

Or hey, maybe we'll all remember that there is a low-power mode already implemented in the phone, and we can always turn that on if we're in a battery panic...


I think you're making the same mistake. You're making claims about the A17's efficiency, but what you really are talking about is the iPhone 15's efficiency. Those are two separate things- though both are interesting and worth remarking on.
It's easy to know: take an A15 and an A17. The A15 clocks at 3.2 GHz and the A17 at 3.78 GHz. You play a game, both run it at 60 fps, and you get lower power consumption on the A17. We all know that; the graph makes sense.

The thing is, that's not the only use case on a phone. You load an app: the A17 will boost to 3.78 GHz, throwing efficiency away, while the A15 boosts to 3.2 GHz and loads the app two tenths of a second later. If you track the power consumption, the A17 will have consumed more energy for the same job (loading an app). If your usage is mostly burst loads, like launching apps, switching between apps, loading a web page, or opening an Instagram or Twitter profile, your A17 will consume more power than the A15 or A16. That's why we're reading about people with heat and battery problems on the 15 Pro, and why there are reviews where the 15 PM loses to the 14 PM in battery life. I'm sure they increased the voltage a lot to get to that 3.78 GHz frequency. If you could configure the A17 to max out at 3.2 GHz, you'd have more battery life than a 13 PM or 14 PM in every scenario, constant load and burst load alike. It's like having two cars: one has lower fuel consumption on the highway, but you floor it at every acceleration. If your driving has lots of traffic stops, you end up with higher fuel consumption.
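Here is the burst-load arithmetic from the post above as a sketch, with made-up wattages and durations (not measurements of the A15 or A17); the point is only that a shorter, hotter boost can cost more energy per launch if the power penalty outweighs the time saved.

```python
# Hypothetical app-launch burst, per the "traffic stops" analogy above.
# All numbers are invented for illustration, not A15/A17 measurements.

def launch_energy(boost_w: float, boost_s: float, idle_w: float, window_s: float) -> float:
    """Energy for one app launch inside a fixed window, then idle."""
    return boost_w * boost_s + idle_w * (window_s - boost_s)

a15_like = launch_energy(boost_w=4.5, boost_s=1.2, idle_w=0.1, window_s=3.0)
a17_like = launch_energy(boost_w=7.0, boost_s=1.0, idle_w=0.1, window_s=3.0)
print(a15_like, a17_like)  # 5.58 vs 7.2 J: the faster burst costs more energy in this scenario
```

Whether that is what actually happens depends on where each chip sits on its measured power/frequency curve, which is what the graphs earlier in the thread try to pin down.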
 
Is it so strange that a new SoC on a new node is more efficient than the previous generation in the clock frequency range it uses? Are there examples where this has not happened?
Plenty of examples. Look no further than Intel: when they move to a new node, the top frequencies are often much lower than what they were able to obtain on the last, now fine-tuned node. This will also happen with Meteor Lake; there's no way they will hit close to 6 GHz on the new node. It will take some time and tweaking to the design.

Remember that the whole architecture/pipeline has a frequency target during design; you can't just arbitrarily increase clock speed after the fact for mass production.
 
BTW, Meteor Lake is targeted at laptops and low power, so the point is that they can't just increase the clock speed to 6.0 GHz because it's a new node and, voilà, have a desktop winner. That is not how things work.
 
One example is enough. What are the CPU names, so I can check?
Pentium 4s were notorious for this. They were less efficient than Pentium IIIs at a given clock speed, but that was more than just a node shrink.

The NetBurst architecture they used was designed to deliver high performance at very high clock speeds (5 GHz or so), but they were never able to reach those speeds for technical reasons; performance suffered, and the line was known for its high power draw. The Core series of CPUs that followed was based on the Pentium III as a consequence.
 
One example is enough. What are the CPU names, so I can check?
Intel did better than I remembered, but the Rocket Lake CPUs hit 5.3 GHz, while the Alder Lake i9-12900K hit 5.2 GHz. My main point being that that node is now going to hit a little over 6 GHz with the 14900K. The advance is probably a mix of design tweaks and a more mature node.

Anyway, I would be shocked if Meteor Lake hit north of 6.0 GHz out of the gate.
 
Pentium 4s were notorious for this. They were less efficient than Pentium IIIs at a given clock speed,
Do you mean Pentium 4's Willamette and Pentium III's Coppermine?

Although both were made at 180 nm, Willamette looks like a promising candidate.

It's funny how clueless Intel was at the time.
Realistically speaking, we should be able to see NetBurst based processors reach somewhere between 8 – 10GHz in the next five years before the architecture is replaced yet again. [...] Obviously this 8 – 10GHz clock range would be based on Intel’s 0.07-micron process that is forecasted to debut in 2005. These processors will run at less than 1 volt, 0.85v being the current estimate.
 
The merit is to be more efficient at the clock frequency at which SoCs run daily tasks.

That's the theory. In practice we've seen a steady increase in the power consumption of CPUs over the years as the industry pursues higher performance.
 
That's the theory. In practice we've seen a steady increase in the power consumption of CPUs over the years as the industry pursues higher performance.
CPUs, GPUs, even memory and storage. And the Raspberry Pi 5 comes with a heatsink and fan now.

It makes my jaw drop.
 