According to this site, it's only 10% faster, and they used Geekbench. The GPU is 20% faster, but Apple added an extra GPU core, so it's not really a one-to-one comparison.


If only a 10% performance gain at the same power consumption is what you expected from the 3nm chip, then you are easy to please.

Reviewers of the iPhone 15 Pro also expected a lot more from the 3nm chip, so I'm not the only one.
TSMC has said all along that N3 is either 10% better performing than N5P at the same power, or uses 25% less power at the same performance. This is exactly what the benchmarks are seeing. People expecting more are expecting too much.
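As a rough sanity check of that claim, here is what the +10% process-only figure would predict, using the approximate Geekbench scores quoted elsewhere in this thread (~2500 for the A16, ~2900 for the A17 Pro). A sketch, not a measurement:

```python
# Rough sanity check of TSMC's N3 claim against the approximate
# Geekbench 6 single-core scores quoted in this thread.
# These scores are thread hearsay, not controlled measurements.
a16_score = 2500            # approximate A16 score (from this thread)
a17_score = 2900            # approximate A17 Pro score (from this thread)
n3_perf_gain = 0.10         # TSMC: +10% performance at the same power

expected = a16_score * (1 + n3_perf_gain)
print(f"process-only expectation: ~{expected:.0f}")        # ~2750
print(f"observed gain: {a17_score / a16_score - 1:.1%}")   # ~16.0%
# The observed ~16% exceeds the ~10% process-only figure, which is
# consistent with some design (IPC/clock) gain on top of the shrink.
```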
 
Nice cherry-picking. Not only are you cherry-picking an outdated Intel chip, the comparison doesn't even make sense.

To judge the 3nm performance, you need to compare it to the 5nm chip of the previous Apple Silicon version. And it doesn’t look impressive.

I expected a lot more from the 3nm die shrink.
Did you expect more from a 4nm die shrink?

Because I fail to see why 2900 in a PHONE power envelope is somehow worse than 2500 in a LAPTOP power envelope...
 
We are discussing the efficiency/performance of the A17 on N3. This person is running 3DMark and measuring a battery-percentage drop over that time. That's fine as a rough measurement of phone efficiency, but it doesn't yield much information about the efficiency of the A17 unless we know what's happening to the other components.
It is, however, a fine methodology for rebutting the claims that "iPhone 15 Pro is a disaster"...
Which is what half the posts here (and more on the broader internet) are claiming.
 
It is, however, a fine methodology for rebutting the claims that "iPhone 15 Pro is a disaster"...
Which is what half the posts here (and more on the broader internet) are claiming.
To be honest I'm not sure how this relates to my point. The person I replied to stated their Snapdragon 870 was more efficient than the A17, based on their method of running a benchmark while measuring the drop in battery percentage. I stated that isn't the way to measure the A17's efficiency.

EDIT: turns out I misread the original post. Apologies.
 
It’s in the chart I posted at the start of this thread… for performance you can check out Geekbench
How do you know the power consumption of both SoCs during the Geekbench run? What numbers did you use to get 25%?
 
doing something slower always uses less energy, but no-one is interested in 1Hz CPUs.
Not sure it matters here, but that's not strictly true... It's certainly not true down to the 1Hz clock rate (which I realize was an exaggeration for effect).

The logic is leaking while it's running, so there's a constant power term in addition to the dynamic term and that constant tends to get larger as process geometry shrinks. So there's a point where running slower saves dynamic power but the benefit is lost to leakage.

This is all the more true at the system level, where finishing faster might mean turning off the display sooner, but that's beyond what we can account for in this discussion...
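To make the leakage argument concrete, here is a toy model of the energy needed to finish a fixed task as a function of clock frequency, with a V²-scaled dynamic term and a leakage term integrated over the runtime. Every constant below is invented purely for illustration:

```python
# Toy model: energy to finish a fixed task vs. clock frequency.
# Dynamic energy per cycle scales ~V^2, and V must rise with f;
# leakage is a (roughly) constant power drawn for the whole runtime.
# All constants are invented purely for illustration.

def energy_per_task(f_ghz, cycles=1e9):
    v = 0.6 + 0.1 * f_ghz                 # crude V/f curve (made up)
    e_dyn_per_cycle = 1e-9 * v ** 2       # ~C*V^2 per cycle (made up)
    p_leak = 0.3 * v                      # leakage grows with V (made up)
    runtime = cycles / (f_ghz * 1e9)      # seconds to finish the task
    return cycles * e_dyn_per_cycle + p_leak * runtime

for f in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"{f:>3} GHz: {energy_per_task(f):.3f} J")
# At low f the leakage term dominates (long runtime); at high f the
# V^2 dynamic term dominates. The minimum-energy point is in between,
# which is why "slower always uses less energy" isn't strictly true.
```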
 
How do you know the power consumption of both SoCs during the Geekbench run? What numbers did you use to get 25%?

My stress test measures power consumption running a demanding workload at a given frequency. Power consumption running Geekbench or anything else intensive at matching frequency will be roughly the same. I did verify this using powermetrics on my Mac, and I see no reason why other Apple hardware will behave any differently.

P.S. Apple Silicon does feature fine-grained power gating to save energy, but a demanding workload should tax just enough subsystems that this won’t change the results too much.
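For anyone who wants to reproduce that kind of spot check on their own Mac, something along these lines should work. powermetrics is Apple's built-in tool and needs sudo; check `man powermetrics` on your system, since flags can vary between macOS releases:

```python
# One way to spot-check CPU power on a Mac while a benchmark runs,
# via Apple's powermetrics tool (requires sudo; macOS only).
import subprocess

out = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power",
     "-i", "1000",   # 1000 ms between samples
     "-n", "10"],    # take 10 samples, then exit
    capture_output=True, text=True, check=True,
).stdout

# Pull out the reported power lines for a quick look.
for line in out.splitlines():
    if "Power" in line:
        print(line.strip())
```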
 
Nice work! Did you choose a degree-four polynomial by doing a log-log plot and finding you got a good fit to a straight line with a slope of four? Would you be willing to share this data? I'd like to play with some curve fitting myself.

If you do, let me know how closely it hews to our f^2.15 estimate of yore near the operating point. 😉

"In theory, theory and practice agree. In practice, they don't."
 
My stress test measures power consumption running a demanding workload at a given frequency. Power consumption running Geekbench or anything else intensive at matching frequency will be roughly the same. I did verify this using powermetrics on my Mac, and I see no reason why other Apple hardware will behave any differently.
I guess you used 6.5W for the M2 and 5W for the A17, so you get around 25%: (6.5 - 5)/6.5 * 100 ≈ 23%.

So, assuming the A17 Pro consumes 5W at 3.8GHz and scores 2900 points and the A16 consumes 4W at 3.5GHz and scores 2500 points, the increase in consumption is greater than the increase in points on Geekbench.
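Spelled out, that comparison (using the figures assumed above, which are assumptions, not measurements) looks like this:

```python
# Points-per-watt at peak, using the figures assumed in the post
# above (assumptions from this thread, not measurements).
a17_pts, a17_w = 2900, 5.0
a16_pts, a16_w = 2500, 4.0

print(f"A17 Pro: {a17_pts / a17_w:.0f} pts/W")   # 580
print(f"A16:     {a16_pts / a16_w:.0f} pts/W")   # 625
# On these numbers the A16 does more work per watt at peak: the
# +25% power increase outpaces the +16% score increase.
```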
 
1) Zen uses larger transistors, as does any design that is striving for higher frequency. Apple gets its IPC wins from using lots of transistors, which means density and smaller transistors.

2) What do you think critical path IS? Why can't I run critical path faster? Because critical path cycle time is determined by the sum of the switching times of the sequence of transistors that make up the critical path!
You can see some discussion of these elements here: https://www.realworldtech.com/fo4-metric/
Although I think *everyone* would agree that 6 to 8 FO4 is insanely low: you do much better overall by using a slightly longer cycle length that allows for at least some degree of superscalar/OoO/speculation smarts, taking the IPC win over the frequency loss.
Thanks for your response.

1) I didn't realize this. Time to do more reading.

2) Right, but not knowing that switching time was variable meant I thought it was all down to depth + wiring delay. Anyway, thanks for the link, that was a useful refresher on stuff I'd forgotten too many years ago.
 
According to this site, it's only 10% faster, and they used Geekbench. The GPU is 20% faster, but Apple added an extra GPU core, so it's not really a one-to-one comparison.


If only a 10% performance gain at the same power consumption is what you expected from the 3nm chip, then you are easy to please.

Reviewers of the iPhone 15 Pro also expected a lot more from the 3nm chip, so I'm not the only one.
Then they are as ill-informed as you.

Expectations for a pure die shrink would be +12-15% clock at equivalent power. [Edit: actually less, since the A16 is on N4, not N5, so perhaps Apple actually hit that number, whatever it is.] But Apple put in substantial design work and got a bit less than that. That leaves you with two choices about what to believe:

1) You believe that Apple is so stupid that they invested a ton in design work to get worse results.

2) You believe that Apple is clever and looks to the future and across its entire product line. That their lead in phone CPU is so substantial that they can afford to give up a little in pure efficiency in order to build a core that can scale to much higher clocks, given power and thermal headroom. In other words, in a Mac.

We won't know for sure which of those two things is true until the M3s ship in a desktop configuration (Studio and/or Pro), though the 14/16" Pro laptops will probably give us strong indications. But based on past performance, I'd bet real money on option #2.
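For what it's worth, you can check the clock side of that claim against the approximate peak clocks quoted earlier in this thread (~3.5 GHz for the A16, ~3.8 GHz for the A17 Pro):

```python
# Quick check of the clock-gain claim, using the approximate clocks
# quoted elsewhere in this thread (not official specifications).
a16_ghz, a17_ghz = 3.5, 3.8
print(f"clock gain: {a17_ghz / a16_ghz - 1:.1%}")  # ~8.6%
# That lands below the +12-15% pure-shrink band, fitting the edit
# above: some of the budget went into IPC/scalability, not clock.
```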
 
So, assuming the A17 Pro consumes 5W at 3.8GHz and scores 2900 points and the A16 consumes 4W at 3.5GHz and scores 2500 points, the increase in consumption is greater than the increase in points on Geekbench.

Yes, all the data so far suggests that the A17 is slightly less efficient than the A16 at its peak and slightly more efficient than the A16 when operating at reduced frequencies.

For discussion of this, I'd like to refer you to the excellent post by @Confused-User just above this one.
 
To be honest I'm not sure how this relates to my point. The person I replied to stated their Snapdragon 870 was more efficient than the A17, based on their method of running a benchmark while measuring the drop in battery percentage. I stated that isn't the way to measure the A17's efficiency.
I said otherwise. Please, the only thing you do is tell people they are wrong, with no argument; you don't even know what a burst load is. There is more to chip efficiency than a graph.
 
That their lead in phone CPU is so substantial that they can afford to give up a little in pure efficiency in order to build a core that can scale to much higher clocks, given power and thermal headroom.
For many, the A17's competition is the A16, not Qualcomm's latest SoC. So they may be disappointed if the A17 is less efficient than the A16.

all the data so far suggests that the A17 is slightly less efficient than the A16 at its peak and slightly more efficient than the A16 when operating at reduced frequencies.
How do you know that A17 is more efficient than A16 at reduced frequency? Are you comparing A17 at reduced frequency with A16 at maximum frequency or both at reduced frequency?
 
I said otherwise.
I read your post again and indeed you did say otherwise. I misread it initially and I am incorrect in stating that. Apologies.
Please, the only thing you do is tell people they are wrong, with no argument; you don't even know what a burst load is. There is more to chip efficiency than a graph.
This, however, is not a fair summation of my posts.
 
Not sure it matters here, but that's not strictly true... It's certainly not true down to the 1Hz clock rate (which I realize was an exaggeration for effect).

The logic is leaking while it's running, so there's a constant power term in addition to the dynamic term and that constant tends to get larger as process geometry shrinks. So there's a point where running slower saves dynamic power but the benefit is lost to leakage.

This is all the more true at the system level, where finishing faster might mean turning off the display sooner, but that's beyond what we can account for in this discussion...
Only if you implement the logic in fast semiconductor transistors...
If you want low enough energy, you can use much less leaky (and slower) materials or, hell, ratchet-and-pawl style micro-mechanical movements that toggle "occasionally" based on environmental noise :)

This is of course a general principle. The same holds true for your car. It will be more gas efficient at low speeds (but no-one cares!) and if you want absolute efficiency, to hell with speed, you switch to considering very different sorts of designs from a car...
 
For many, the A17's competition is the A16, not Qualcomm's latest SoC. So they may be disappointed if the A17 is less efficient than the A16.


How do you know that A17 is more efficient than A16 at reduced frequency? Are you comparing A17 at reduced frequency with A16 at maximum frequency or both at reduced frequency?
Look at the **** curves! Consider, e.g., the chart attached below:
At every frequency, the A17 curve draws less power (to do the same level of work) AND has slightly higher IPC (ie is in fact doing work at a slightly higher rate for the same frequency).

The complaint people appear to have (it's hard to tell when most of the people chiming in have no idea what they are trying to say beyond "Apple sux") is that at the end of the curve, the A17 curve pushes higher in frequency and power.
i.e., JUST LIKE the A16 curve did relative to the A15. And JUST LIKE the A15 curve did relative to the A14.

But if there's one constant in internet discussion, it's that people feel they have ZERO obligation to look at any sort of data or history before throwing out their opinions...

[Attached chart: CPU power vs. frequency curves for recent A-series chips]
 
For many, the A17's competition is the A16, not Qualcomm's latest SoC. So they may be disappointed if the A17 is less efficient than the A16.
The A17 is in all respects superior (see below).
How do you know that A17 is more efficient than A16 at reduced frequency? Are you comparing A17 at reduced frequency with A16 at maximum frequency or both at reduced frequency?
Because he gathered data points at various frequencies. The A17 is in all cases more efficient at a given clock. That is, for any frequency X, the A17 will use less energy than an A16.

IIRC, it will also do slightly more work as it has a very modest (small single-digit) IPC gain iso-clock.
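A small illustration of what an iso-clock comparison like that means. The numbers below are invented; the real IPC and power deltas come from the measured curves, not this sketch:

```python
# Iso-clock comparison sketch: same frequency, the newer core does
# slightly more work per cycle (IPC) at slightly lower power.
# All numbers below are invented for illustration.
f_ghz = 3.0
a16 = {"ipc": 1.00, "watts": 4.0}   # normalized IPC (made up)
a17 = {"ipc": 1.03, "watts": 3.6}   # ~3% IPC gain (made up)

for name, chip in (("A16", a16), ("A17", a17)):
    perf = chip["ipc"] * f_ghz
    print(f"{name}: perf={perf:.2f}, perf/W={perf / chip['watts']:.2f}")
# At the same clock the A17-like core wins on both axes, which is why
# "less efficient at peak" and "more efficient iso-clock" can coexist.
```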
 
Only if you implement the logic in fast semiconductor transistors...
If you want low enough energy, you can use much less leaky (and slower) materials or, hell, ratchet-and-pawl style micro-mechanical movements that toggle "occasionally" based on environmental noise :)

This is of course a general principle. The same holds true for your car. It will be more gas efficient at low speeds (but no-one cares!) and if you want absolute efficiency, to hell with speed, you switch to considering very different sorts of designs from a car...
Ok, so maybe I shouldn't have been so timid about leaving the scope of the data presented...

In every one of those cases, there is energy being expended while you're waiting for your slow process to complete. If you're going to go into micro-mechanics and internal combustion, we may as well account for the metabolic energy consumed while you're waiting for your technology to respond. Your computer runs on a coin cell, but you've burned 3 pizzas waiting for your calculation to complete.

My point here is simply that dynamic power isn't the only power sink in a system. Optimal efficiency can sometimes be found by increasing processing speed. This is the whole reason for race-to-sleep ("hurry up and wait") processing models.
 
The A17 is in all respects superior (see below).

Because he gathered data points at various frequencies. The A17 is in all cases more efficient at a given clock. That is, for any frequency X, the A17 will use less energy than an A16.

IIRC, it will also do slightly more work as it has a very modest (small single-digit) IPC gain iso-clock.
It's more efficient, but it can also consume more power. That's why the 15 Pro Max doesn't have more battery life in some reviews, and that's why some people are complaining about heat and battery life.
 