
Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
I really do wonder what Apple is doing with the M3 lineup. The M3 doesn't have twice the GPU cores of the A17 Pro. The M3 Pro has 2 extra efficiency cores, while the M3 Max has 4 extra performance cores but lacks those 2 extra efficiency cores. I'm so confused.
Chips are tools optimized for the job to be done, not for being exactly double or half of the next item in the lineup. Look at the requirements and limitations of the devices a chip goes into, and its optimal core count becomes obvious. What would you want from an entry-level MacBook Pro, and what from the maxed-out fastest MacBook Pro? You wouldn't go to BMW and demand that the engines going into the M3, M4, M5 and M6 form a neat lineup when compared with each other. Instead, the size and weight class of each car informs that decision.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Chinese reviewers are on fire!
WOW!

My dream scenario actually came true, basically. I wanted the iPhone to be a game console when plugged into a monitor (with power delivery) and paired with a Bluetooth controller. I dreamt about this 5-6 years ago.

I wish Apple would make this kind of support official and sell an official Apple-branded controller to make this setup more legit.
 
  • Like
Reactions: T'hain Esh Kelch

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
I wish Apple would make this kind of support official and sell an official Apple-branded controller to make this setup more legit.
I have no idea where these wishes for an Apple controller are coming from. Their support for all the major console control systems makes these setups as legit as possible (they even sell PS5 controllers in their store). However, if they aren't informing customers that those controllers can be used to play games on their phones, then they probably should be doing more of that.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
I have no idea where these wishes for an Apple controller are coming from. Their support for all the major console control systems makes these setups as legit as possible (they even sell PS5 controllers in their store). However, if they aren't informing customers that those controllers can be used to play games on their phones, then they probably should be doing more of that.
Yes, I'm well aware that Apple has good support for third-party controllers.

What I'm asking is for Apple to not only support it but actively endorse it. Making their own game controller would send that signal to both gamers and developers.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I have no idea where these wishes for an Apple controller are coming from. Their support for all the major console control systems makes these setups as legit as possible (they even sell PS5 controllers in their store). However, if they aren't informing customers that those controllers can be used to play games on their phones, then they probably should be doing more of that.
A game controller from Apple is coming. They have been working on it for quite some time now.

And it's not only for the iPhone; it's mainly for the Apple Vision Pro.
 
  • Like
Reactions: Chuckeee

T'hain Esh Kelch

macrumors 603
Aug 5, 2001
6,475
7,410
Denmark
WOW!

My dream scenario actually came true, basically. I wanted the iPhone to be a game console when plugged into a monitor (with power delivery) and paired with a Bluetooth controller. I dreamt about this 5-6 years ago.

I wish Apple would make this kind of support official and sell an official Apple-branded controller to make this setup more legit.
They heavily promoted the iPhone as an AAA console during the presentation, and it is now 27 years since they last released a game controller, so they have already achieved this! And even then, they made a big thing out of 3rd-party controller support, sooo..
 

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
How does 8W of power draw less than 5W of power?


Smaller node means that you can LOWER THE VOLTAGE at the same frequency. It does not mean that 5W of power on a 3 nm process will magically draw less than 5W on a 5 nm process. You people clearly do not understand how energy works.

5W of power is 5W of power regardless of process node.

The standard formula for calculating wattage is W = V × A. Assuming the amperage remains the same, a lower voltage results in a lower wattage. Measuring the power draw of the SoC itself only tells you a small part of the story, because you also have to account for the other components, such as the display, cellular modem, WiFi, camera system, Taptic Engine, etc., all of which also draw power from the battery.
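(To make the arithmetic concrete, here is a minimal sketch of W = V × A in Python; the voltage and current figures are invented for illustration, not measured A16/A17 values:)

Code:
# P = V * I: at a fixed current, power scales linearly with voltage.
# Both the current and the voltages below are hypothetical.

def power_watts(volts: float, amps: float) -> float:
    """Electrical power in watts."""
    return volts * amps

amps = 6.0  # assume the amperage stays the same (hypothetical)
for volts in (0.85, 0.75):
    print(f"{volts:.2f} V x {amps:.1f} A = {power_watts(volts, amps):.2f} W")

# 0.85 V x 6.0 A = 5.10 W
# 0.75 V x 6.0 A = 4.50 W  -> lower voltage, same current, lower wattage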

Apple has increased the maximum frequency while also increasing the maximum power draw of the SoC, which results in thermal throttling over longer periods of time.

Lower voltage at the same frequency means lower power draw while doing the same framerate-limited work as the previous generation. That results in a snappier feel when gaming on this phone compared to the previous gen, but it is caused by greater UNDERUTILIZATION of the components rather than by higher power efficiency.

Interesting how you equate lower voltage to lower power draw here, just after claiming the opposite in the first section.

Thermal throttling is not guaranteed when an SoC runs at max frequency. If the device can siphon enough heat away from the SoC, TJMax never comes into play. Furthermore, that max frequency is not something most devices run at continuously.
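(A toy sketch of the control loop that paragraph describes, with invented constants rather than Apple's actual thermal parameters:)

Code:
# Toy thermal-throttling loop: the clock stays at max while cooling keeps
# die temperature under TJMAX; otherwise frequency steps down.
# All constants are invented for illustration.

TJMAX_C = 100.0
MAX_GHZ, MIN_GHZ, STEP_GHZ = 3.7, 1.0, 0.1

def next_freq(current_ghz: float, die_temp_c: float) -> float:
    """One control step: throttle down when hot, recover when cool."""
    if die_temp_c >= TJMAX_C:
        return max(MIN_GHZ, current_ghz - STEP_GHZ)
    return min(MAX_GHZ, current_ghz + STEP_GHZ)

# If heat is siphoned away fast enough, the temperature never reaches TJMAX
# and the clock simply stays pinned at MAX_GHZ: no throttling at all.
freq = MAX_GHZ
for temp in (92.0, 97.0, 101.0, 103.0, 96.0):
    freq = next_freq(freq, temp)
    print(f"temp={temp:.0f}C -> freq={freq:.1f} GHz")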

But over a longer period of time, the power draw, and the battery drain, will be higher than the previous generation.


And lastly: if the A16 CPU was limited to 5W and the A17 Pro is limited to 8W, that is a 60% increase in power draw over the previous generation, for 10% higher performance.

Power consumption has NEVER had a direct correlation to performance in ANY CPU lineup, so this entire argument is meaningless at best, and outright misleading on its face.
 
  • Like
Reactions: Adult80HD

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
The standard formula for calculating wattage is W = V × A. Assuming the amperage remains the same, a lower voltage results in a lower wattage. Measuring the power draw of the SoC itself only tells you a small part of the story, because you also have to account for the other components, such as the display, cellular modem, WiFi, camera system, Taptic Engine, etc., all of which also draw power from the battery.



Interesting how you equate lower voltage to lower power draw here, just after claiming the opposite in the first section.

Thermal throttling is not guaranteed when an SoC runs at max frequency. If the device can siphon enough heat away from the SoC, TJMax never comes into play. Furthermore, that max frequency is not something most devices run at continuously.



Power consumption has NEVER had a direct correlation to performance in ANY CPU lineup, so this entire argument is meaningless at best, and outright misleading on its face.
1) If a smaller node allows you to use a lower voltage at the same frequency, how come the A17 Pro has a higher CPU peak power draw than the A16 if, supposedly, the amperage has not changed?

2) If you can run the GPU at 800 MHz at either 0.5V or 0.8V, you will use less power at 0.5V. It also means your hardware has higher potential but is bound by external limits, as in the case of Genshin Impact: a framerate lock at 60 fps. You can achieve those 60 fps on the A16 with 5 GPU cores at 800 MHz at 0.8V (hypothetically), and on the A17 Pro with the same 5 GPU cores at 800 MHz at 0.5V. That means one GPU core sits inactive.
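(For intuition, a minimal sketch of the standard dynamic-power relation P ≈ C·V²·f; the capacitance and voltages are made up, not measured A16/A17 figures:)

Code:
# Dynamic switching power scales roughly as P = C * V^2 * f.
# C (effective switched capacitance) and the voltages below are hypothetical.

def dynamic_power(c_farads: float, volts: float, freq_hz: float) -> float:
    """Approximate dynamic switching power in watts."""
    return c_farads * volts**2 * freq_hz

C = 1.0e-8   # hypothetical effective capacitance, farads
f = 800e6    # 800 MHz, same frequency in both cases

for v in (0.8, 0.5):
    print(f"{v:.1f} V @ 800 MHz -> {dynamic_power(C, v, f):.2f} W")

# 0.8 V @ 800 MHz -> 5.12 W
# 0.5 V @ 800 MHz -> 2.00 W   (same work, (0.8/0.5)^2 = ~2.6x less power
#                              from the voltage drop alone)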

Secondly: Apple increased the maximum frequency from 3.5 to 3.7 GHz, and the maximum CPU power draw from 11 to 14W. A sustained load on the CPU results in much heavier thermal throttling because of the higher power draw.

3) Power consumption always has a direct correlation to CPU performance within a particular CPU architecture lineup.

The Core i3-12100 has a 58W TDP with a 4C/8T config.
The Core i3-12100T has a 35W TDP with a 4C/8T config.

One has a 3.5 GHz base clock, the other a 2.2 GHz base clock (at rated TDP).
 

picpicmac

macrumors 65816
Aug 10, 2023
1,239
1,833
Is there such a thing as a scientific test for battery life?
Of course there are, but they don't mean what you may want them to mean, e.g. IEC 62133-2:2017.

But what you mean is not really battery life; it's system runtime between recharges.

Batteries are safety hazards, and how much a Li-ion battery can be charged, and how quickly it can be charged and discharged, are things engineers worry about.

How much energy any given product is allowed to store is therefore determined by safety (and hence by battery regulations like the one above).

So whatever people are reporting about their new iPhone 15 Pro phones is a result of the entire system design and its safety constraints, not necessarily of a given chip (e.g. the A17).

This is why I wouldn't recommend trying to draw conclusions about a specific chip's energy use from casual, non-rigorous "testing" of an entire system (e.g. an iPhone).

To put it another way: iPhone 15 Pro battery life depends on more than a specific chip.
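(A back-of-the-envelope sketch of that point; every number below is invented for illustration, not a measurement:)

Code:
# Average whole-system power = battery energy / runtime. The SoC is only one
# of several consumers, so two phones with different displays/modems can show
# different "battery life" even with identical chips. Numbers are hypothetical.

battery_wh = 17.3   # hypothetical battery capacity, watt-hours
runtime_h = 8.0     # hypothetical screen-on runtime, hours

avg_system_w = battery_wh / runtime_h
print(f"Average system draw: {avg_system_w:.2f} W")   # ~2.16 W

# Hypothetical breakdown of that average draw:
display_w, modem_w, other_w = 0.9, 0.4, 0.3
soc_w = avg_system_w - (display_w + modem_w + other_w)
print(f"Implied average SoC share: {soc_w:.2f} W")    # ~0.56 W

# A modest change in display or modem power alone shifts total runtime
# noticeably, with no change at all in the chip.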


 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
For anyone who can answer: how do we know the increased power consumption comes from the CPU cores and not, say, from the redesigned GPU cores?

For example: let's say the CPU actually increased in efficiency, but those gains were negated by the massively changed GPU.

There are rumors that ray tracing was shelved on the A16 because it was too power-hungry, so it may be that any CPU efficiency gains were offset by a more power-hungry GPU.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
For anyone who can answer: how do we know the increased power consumption comes from the CPU cores and not, say, from the redesigned GPU cores?

For example: let's say the CPU actually increased in efficiency, but those gains were negated by the massively changed GPU.

There are rumors that ray tracing was shelved on the A16 because it was too power-hungry, so it may be that any CPU efficiency gains were offset by a more power-hungry GPU.

Here you have an almost exact explanation of what is happening with the new SoC in terms of power, clocks and performance.

The 3 nm process is not bad; it's most likely that Apple is no longer able to eke out every last bit of efficiency that the process makes available.

Apple is ALREADY losing perf/watt, and ultimate performance, to Qualcomm GPUs. Keep this in mind, guys. And the Qualcomm GPU is on a 5 nm process, remember. So it's not because of the physical process; it's because of Apple's sub-par physical design, caused by the talent drain.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
3) Power consumption always has a direct correlation to CPU performance within a particular CPU architecture lineup.

The Core i3-12100 has a 58W TDP with a 4C/8T config.
The Core i3-12100T has a 35W TDP with a 4C/8T config.

One has a 3.5 GHz base clock, the other a 2.2 GHz base clock (at rated TDP).

Frequency, not power, is the primary effect there. The 'cost' of higher frequency is higher power draw, but the performance is not coming from pushing more power into the chip; the chip simply ends up drawing more power.

It isn't just 'same architecture', either. The same baseline architecture on a 175mm² die with, say, 6 cores versus a 230mm² die with 10 cores and a bigger GPU will also draw more power: billions more transistors draw incrementally more power than billions fewer. If the architecture is not borked, those additional cores buy more aggregate performance. But again, the power comes from 'more' (die, frequency, etc.) and is the secondary effect.

It is secondary because the architecture/implementation determines just how much more power gets consumed. There are impacts around frequency that are coupled to the design/implementation. Grossly applying more power doesn't necessarily get you the same amount of increased performance (or any performance at all, if you're far down the diminishing-returns zone for that fab process's characteristics).
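(A minimal sketch of that frequency/power trade-off, assuming performance scales roughly linearly with frequency while voltage must rise alongside it; all frequency/voltage pairs are invented:)

Code:
# Assume perf ~ f, and P ~ C * V^2 * f, with V forced upward as f rises.
# The frequency/voltage pairs below are hypothetical, for illustration only.

C = 1.0e-8  # hypothetical effective switched capacitance, farads

points = [  # (freq_hz, volts): voltage must rise to sustain higher clocks
    (3.0e9, 0.80),
    (3.5e9, 0.90),
    (3.7e9, 1.00),
]

base_f, base_v = points[0]
base_p = C * base_v**2 * base_f
for f, v in points:
    p = C * v**2 * f
    print(f"{f/1e9:.1f} GHz: perf x{f/base_f:.2f}, power x{p/base_p:.2f}")

# 3.0 GHz: perf x1.00, power x1.00
# 3.5 GHz: perf x1.17, power x1.48
# 3.7 GHz: perf x1.23, power x1.93  -> the last few % of clock cost the most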
 
Last edited:

Retskrad

macrumors regular
Apr 1, 2022
200
672
I don't see how anyone can watch new iPhones having bad battery life two years in a row now (14 Pro and 15 Pro) and still say Apple's chip division hasn't lost most of its best chip designers. I watched MKBHD's podcast today, and even they commented that the iPhone 15 Pro Max has worse battery life than the iPhone 14 Pro Max, which already had poor battery life. The A17 Pro is so inefficient that it peaks like an M-series chip! This is unacceptable from Johny Srouji's team.
 
  • Like
Reactions: huge_apple_fangirl

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Chuckle, MaxTech regurgitating someone else's charts for ad clicks doesn't materially say much. Max Tech's understanding is mostly about "mo money, mo money, mo money" via getting folks to click his stuff.
caribbeanblue

macrumors regular
May 14, 2020
138
132

Here you have an almost exact explanation of what is happening with the new SoC in terms of power, clocks and performance.

The 3 nm process is not bad; it's most likely that Apple is no longer able to eke out every last bit of efficiency that the process makes available.

Apple is ALREADY losing perf/watt, and ultimate performance, to Qualcomm GPUs. Keep this in mind, guys. And the Qualcomm GPU is on a 5 nm process, remember. So it's not because of the physical process; it's because of Apple's sub-par physical design, caused by the talent drain.
I feel it's still too early to say whether this is an industry-wide or an Apple-specific problem, because it's not like their competitors on the mobile and x86 sides are light-years ahead of Apple at this point. People have been saying for 2 years now that Apple's improvements have slowed down and that they suffer from brain drain, but we're still not at a point where anyone has caught up with even the M1 in all aspects. Apple has been steadily increasing E-core performance by 20-30% every year for at least 5 years, and that didn't stop this year with the A17. The P-core improvements have been smaller, relying mostly on frequency increases rather than microarchitectural changes, but the cores did receive slight changes to enable those frequency increases, and they're still really close to, if not beating, Intel and AMD in single-threaded performance on both mobile and desktop, even with the huge power draw of those chips.

They're still the leader in the areas they were good at 3 years ago, and they're continuing to preserve their leadership in those areas while giving people even higher peak performance, and most of the time increasing perf/watt at lower wattages.

If there's anything close to an inflection point, I think it'll happen in 2024, when both Zen 5 and Intel's architectures could match Apple in single-thread IPC and go well beyond them with much faster frequencies (to claim the bragging rights of the fastest 1C performance on the market), instead of merely matching them at much higher frequencies as they do currently, and have equal feature sets. But we'll still have to see if they catch up in things like idle and moderate-use power draw on mobile, where Apple's advantage isn't small, and battery life (you will definitely have to turn off Turbo Boost and PBO on Intel and AMD, of course).
 
  • Like
Reactions: souko

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Chuckle, MaxTech regurgitating someone else's charts for ad clicks doesn't materially say much. Max Tech's understanding is mostly about "mo money, mo money, mo money" via getting folks to click his stuff.
So just because Vadim is commenting on the charts, the charts are wrong?
I feel it's still too early to say whether this is an industry-wide or an Apple-specific problem, because it's not like their competitors on the mobile and x86 sides are light-years ahead of Apple at this point. People have been saying for 2 years now that Apple's improvements have slowed down and that they suffer from brain drain, but we're still not at a point where anyone has caught up with even the M1 in all aspects. Apple has been steadily increasing E-core performance by 20-30% every year for at least 5 years, and that didn't stop this year with the A17. The P-core improvements have been smaller, relying mostly on frequency increases rather than microarchitectural changes, but the cores did receive slight changes to enable those frequency increases, and they're still really close to, if not beating, Intel and AMD in single-threaded performance on both mobile and desktop, even with the huge power draw of those chips.

They're still the leader in the areas they were good at 3 years ago, and they're continuing to preserve their leadership in those areas while giving people even higher peak performance, and most of the time increasing perf/watt at lower wattages.

If there's anything close to an inflection point, I think it'll happen in 2024, when both Zen 5 and Intel's architectures could match Apple in single-thread IPC and go well beyond them with much faster frequencies (to claim the bragging rights of the fastest 1C performance on the market), instead of merely matching them at much higher frequencies as they do currently, and have equal feature sets. But we'll still have to see if they catch up in things like idle and moderate-use power draw on mobile, where Apple's advantage isn't small, and battery life (you will definitely have to turn off Turbo Boost and PBO on Intel and AMD, of course).
ARM designs are already ahead of Apple's design in some respects.

x86 will catch up with Apple in no time. Lunar Lake will be Intel's first M1-type product, and MTL-P will also bring massive efficiency gains while delivering very decent peak performance within that very good efficiency envelope.

Strix Point from AMD will be the first true M3 Pro competitor because of its robustness, and, for the first time in a very long time, the first product with a 256-bit DDR memory bus on a mainstream platform.

And all of this is just the beginning.
 

APCX

Suspended
Sep 19, 2023
262
337
So just because Vadim is commenting on the charts, the charts are wrong?

ARM designs are already ahead of Apple's design in some respects.

x86 will catch up with Apple in no time. Lunar Lake will be Intel's first M1-type product, and MTL-P will also bring massive efficiency gains while delivering very decent peak performance within that very good efficiency envelope.

Strix Point from AMD will be the first true M3 Pro competitor because of its robustness, and, for the first time in a very long time, the first product with a 256-bit DDR memory bus on a mainstream platform.

And all of this is just the beginning.
So we're taking Intel and AMD marketing as the truth now? Interesting.

Also, Geekerwan has left doubts over their power consumption figures; they don't directly disclose how they ascertain them.

It might be worth coming back to boast after Intel and AMD have surpassed Apple, not before. Otherwise you end up looking like those who have made the same proclamations for the past few years.
 
Last edited:

Macintosh IIcx

macrumors 6502a
Jul 3, 2014
627
613
Denmark

Here you have an almost exact explanation of what is happening with the new SoC in terms of power, clocks and performance.

The 3 nm process is not bad; it's most likely that Apple is no longer able to eke out every last bit of efficiency that the process makes available.

Apple is ALREADY losing perf/watt, and ultimate performance, to Qualcomm GPUs. Keep this in mind, guys. And the Qualcomm GPU is on a 5 nm process, remember. So it's not because of the physical process; it's because of Apple's sub-par physical design, caused by the talent drain.
You do realize that he is just commenting on the same material from the Chinese reviewer that we have already been through several days ago, right?
 
  • Like
Reactions: souko

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
I find it hilarious that the figure used to criticise the A17 Pro's efficiency is the one where you can see it delivering A16 Bionic levels of performance at ~20% less wattage. Everyone seems to be focusing on the rightmost point of the graph, but...

[Attached graph: A17 Pro vs A16 Bionic performance/power curves]


So that particular graph alone seems to indicate that the A17 Pro is in fact significantly more efficient than the A16 Bionic (for the same performance, 11.2W -> 9.4W is a 16% decrease in power consumption), with the only caveat that the A17 Pro is allowed to go further to the right along the performance/power curve, which naturally makes it less efficient than at lower frequencies... as with any processor.
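(The percentage is easy to sanity-check; the two wattages below are the ones read off the chart above:)

Code:
# Iso-performance comparison: same score, different power draw.
a16_w, a17_w = 11.2, 9.4
print(f"Power saving at equal performance: {(a16_w - a17_w) / a16_w:.1%}")
# Power saving at equal performance: 16.1%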
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I find it hilarious that the figure used to criticise the A17 Pro's efficiency is the one where you can see it delivering A16 Bionic levels of performance at ~20% less wattage. Everyone seems to be focusing on the rightmost point of the graph, but...

[Attached graph: A17 Pro vs A16 Bionic performance/power curves]

So that particular graph alone seems to indicate that the A17 Pro is in fact significantly more efficient than the A16 Bionic (for the same performance, 11.2W -> 9.4W is a 16% decrease in power consumption), with the only caveat that the A17 Pro is allowed to go further to the right along the performance/power curve, which naturally makes it less efficient than at lower frequencies... as with any processor.
5650 pts - 9.4W of power draw.
6200 pts - 14W of power draw.

That is 4.6W higher power draw, or ~49% more power for a 10% performance increase.

It's NOT the same node. It's a brand-new node, another step in physical design. To increase performance by 10% on the same node, you have to increase power draw by 23%.

And you actually have lost perf/watt with the new node. This is likely not because of the physical process itself, but because of the brain drain that affects what Apple is capable of doing with PDKs (Process Design Kits).

P.S. The N3 process was supposed to bring 35% higher efficiency at the same performance, so again: Apple has not maxed out the physical design possible on this node if they achieved only 16% lower power at the same performance, with an updated CPU architecture.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,199
Qualcomm will probably lose perf/watt too when they move to 3 nm.
I wonder when that will be, because if it's in 2025... I guess the A18 will be even better.
 

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
And you actually have lost perf/watt with the new node. This is likely not because of the physical process itself, but because of the brain drain that affects what Apple is capable of doing with PDKs (Process Design Kits).
Hmm, maybe I should have made thicker lines. Please see the revised graph below:

[Attached graph: same performance/power chart with thicker annotation lines]


5650 pts - 9.4W of power draw.
6200 pts - 14W of power draw.

That is 4.6W higher power draw, or ~49% more power for a 10% performance increase.
It's called the performance/power curve because performance is not linear with power. What are you trying to say with this? You can also say the exact opposite if you move to the left of the curve:

5650 pts - 9.4W of power draw
2400 pts - 1.8W of power draw

That is 7.6W less power draw, i.e. 80% lower power, with performance dropping to ~42% of the original. Apple has been saved! Rejoice!

And you actually have lost perf/watt with the new node. This is likely not because of the physical process itself, but because of the brain drain that affects what Apple is capable of doing with PDKs (Process Design Kits).
To have lost perf/watt with the new node, the A17 Pro's performance curve would have to dip below the A16 Bionic's at some point. But that doesn't happen at any point on the graph. The A16 Bionic can't score higher than the A17 Pro at any power. The opposite is true:
- At any given power, the A17 Pro scores more points than the A16 Bionic.
- At any given performance, the A17 Pro uses less power than the A16 Bionic.

The graph just extends further to the right. If the A16 Bionic could be clocked higher, it would use more than 14W and still wouldn't reach 6200 points.
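(That "one curve strictly above the other" argument can be written as a simple check; the sample points below are invented, monotonic stand-ins for the two measured curves, not Geekerwan's data:)

Code:
# Each curve maps power (W) -> score. If a17(p) > a16(p) for every power p
# where both are defined, the A17 curve dominates: better perf at equal power,
# and (equivalently, for monotonic curves) less power at equal perf.
# These sample points are hypothetical, not the measured chart data.

a16 = {2.0: 2600, 5.0: 4300, 9.4: 5650, 11.2: 5900}
a17 = {2.0: 2900, 5.0: 4700, 9.4: 5900, 11.2: 6050}

dominates = all(a17[p] > a16[p] for p in a16)
print("A17 curve above A16 at every shared power point:", dominates)  # True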
 