From a technical point of view, he is guessing. He says:
8, 18, 32, 64 for binned GPU cores
10, 20, 40, 80 for max GPU cores
He might be guessing some of these, but he definitely did find an M3 Pro with 18 GPU cores.
Chips are tools optimized for the job to be done, not for being the exact double or half of the next item in the lineup. Look at the requirements and limitations of the devices a chip goes into, and its optimal core count should become obvious. What would you want from an entry-level MacBook Pro, and what from the maxed-out fastest MacBook Pro? You wouldn't go to BMW and demand that the engines going into the M3, M4, M5 and M6 form a neat lineup when compared with each other. Instead, the size and weight class of each car informs this decision.

I really do wonder what Apple is doing with the M3 lineup. The M3 doesn't have double the GPU cores of the A17 Pro. The M3 Pro has 2 extra efficiency cores, but the M3 Max has 4 extra performance cores and doesn't have the 2 extra efficiency cores. I'm so confused.
WOW! Chinese reviewers are on fire!
YouTuber Plays Resident Evil Village On iPhone 15 Pro Using External Monitor & DualSense Controller, Results Usher A New Age Of Mobile Gaming
A user connected his iPhone 15 Pro to an external monitor via USB-C and paired his DualSense to play Resident Evil Village on a bigger screen. (wccftech.com)
I have no idea where these wishes for an Apple controller are coming from. Their support for all major console control systems makes these as legit as possible (they even sell PS5 controllers on their store). However, if they aren't informing customers of the possibility of using these to play games on their phones, then they probably should be doing that more.
Yes, I'm well aware that Apple has good support for third-party controllers.
Game controller from Apple is coming. They have been working on it for quite some time now.
You got a source for that?
And it's not only for the iPhone, but mainly for the Apple Vision Pro.
They heavily promoted the iPhone as an AAA console during the presentation, and it is now 27 years since they released their game controller, so they have already achieved this! And even then, they also made a big thing out of third-party controller support, so...
My dream scenario actually came true. Basically. I wanted the iPhone to be a game console when plugged into a monitor (with power delivery) and a Bluetooth controller. I dreamt about this 5-6 years ago.
I wish Apple would make this kind of support official and sell an official Apple-branded controller to make this setup more legit.
How much less power does 8W draw compared with 5W?
A smaller node means that you can LOWER THE VOLTAGE at the same frequency. Not that 5W of power on a 3nm process will magically draw less power than 5W on a 5nm process. You people clearly do not understand how energy works.
5W of power is 5W of power regardless of process node.
Apple has increased the maximum frequency while also increasing the maximum power draw of the SoC, which results in thermal throttling over longer periods of time.
Lower voltage at the same frequency means lower power draw while doing the same work, e.g. in a game frame-rate-capped at previous-generation levels. That results in a snappier feel when gaming on this phone compared with the previous generation, but it is caused by greater UNDERUTILIZATION of the components rather than by higher power efficiency.
But over a longer period of time, the power draw, and the battery drain, will be higher than on the previous generation.
And lastly: if the A16 CPU was limited to 5W of power and the A17 Pro is limited to 8W, that is a 60% increase in power draw over the previous generation, for 10% higher performance.
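A quick sanity check of those numbers, taking the 5W/8W peak-power claims and the ~10% performance gain at face value (none of these are official figures):

    # Assumed figures from the post above, not official Apple specs.
    a16_peak_w, a17_peak_w = 5.0, 8.0   # claimed peak CPU power draw (W)
    perf_ratio = 1.10                   # claimed A17 Pro vs A16 performance

    power_increase = (a17_peak_w - a16_peak_w) / a16_peak_w
    perf_per_watt = perf_ratio / (a17_peak_w / a16_peak_w)

    print(f"peak power increase: {power_increase:.0%}")  # -> 60%
    print(f"peak perf/W vs A16:  {perf_per_watt:.2f}x")  # -> 0.69x

By these numbers, peak perf/watt would be about 31% worse; whether the peak point is representative of typical use is exactly what the replies below dispute.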
1) If a smaller node allows you to use lower voltage at the same frequency, how come the A17 Pro has a higher CPU peak power draw than the A16, if, supposedly, the amperage has not changed?

The standard formula for calculating wattage is W = V x A. Assuming the amperage remains the same, a lower voltage will result in a lower wattage. Measuring the power draw of the SoC itself only tells you a small part of the story, because you also have to account for the other components such as the display, cellular modem, WiFi, camera system, Taptic Engine, etc., all of which also draw power from the battery.
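For what it's worth, W = V x A is correct but incomplete for a CPU: in the usual first-order CMOS model, switching power is P ≈ C·V²·f, and current I = P/V scales with both voltage and frequency, so the "same amperage" assumption rarely holds across a frequency bump. A rough sketch, where the ~3.46 GHz and ~3.78 GHz clocks are the widely reported A16/A17 Pro figures and everything else is invented for illustration:

    # First-order CMOS dynamic power: P ~ C_eff * V^2 * f.
    # Current I = P / V = C_eff * V * f, so amperage changes with V and f.
    def dynamic_power(c_eff, volts, freq_hz):
        return c_eff * volts**2 * freq_hz

    # Made-up effective capacitance and voltages, purely illustrative:
    p_a16ish = dynamic_power(1.2e-9, 0.85, 3.46e9)  # ~3.0 W
    p_a17ish = dynamic_power(1.2e-9, 0.90, 3.78e9)  # ~3.7 W
    print(f"power ratio from V and f alone: {p_a17ish / p_a16ish:.2f}x")  # ~1.22x

A ~9% clock bump plus a ~6% voltage bump gives ~22% more power before any node-related savings, which is how peak draw can rise even on a more efficient process.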
Interesting how you equate lower voltage to lower power draw here just after claiming the opposite in the first section.
Thermal throttling is not something that's guaranteed when a SoC is running at max frequency. If the device can siphon enough heat away from the SoC, TJMax doesn't come into play. Furthermore, that max frequency is not something many devices continually run at.
Power consumption has NEVER had a direct correlation to performance in ANY CPU lineup, so this entire argument is meaningless at best, and outright misleading on its face.
Of course there are, but they don't mean what you may want them to mean. E.g. IEC 62133-2:2017.

Is there such a thing as a scientific test for battery life?
For anyone who can answer: how do we know the increased power consumption is from the CPU cores and not, say, from the redesigned GPU cores?
For example, let's say the CPU actually increased in efficiency, but those gains were negated by the massively changed GPU.
We have rumors that ray tracing was shelved on the A16 because it was too power-hungry, so it may be that any CPU efficiency gains were offset by a more power-hungry GPU.
3) Power consumption always has a direct correlation to CPU performance within a particular CPU architecture lineup.
The Core i3-12100 has a 58W TDP with a 4C/8T core config.
The Core i3-12100T has a 35W TDP with a 4C/8T config.
One has a 3.5 GHz base clock, the other a 2.2 GHz base clock (at rated TDP).
Here you have an almost exact explanation of what is happening with the new SoC in terms of power, clocks and performance.
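Taking the TDP and base-clock figures quoted above at face value, you can see how nonlinear the clock/power tradeoff is even within one architecture:

    # Same Alder Lake architecture, different power limits (figures as quoted above):
    parts = {"i3-12100": (58.0, 3.5), "i3-12100T": (35.0, 2.2)}  # (TDP W, base GHz)
    for name, (tdp_w, base_ghz) in parts.items():
        print(f"{name}: {base_ghz / tdp_w * 1000:.0f} MHz per watt at rated TDP")
    # -> i3-12100:  ~60 MHz/W
    # -> i3-12100T: ~63 MHz/W

Note that the lower-clocked part gets slightly more clock per watt: cutting TDP by ~40% only costs ~37% of base clock, the same move-along-the-curve effect being argued about for the A17 Pro.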
And you're being generous in your assessment of his technical knowledge.

Chuckle, MaxTech regurgitating someone else's charts for ad clicks doesn't materially say much. Max Tech's understanding is mostly around "mo money, mo money, mo money" via getting folks to click his stuff.
I feel it's still too early to say whether this is an industry-wide or an Apple-specific problem, because it's not like their competitors on both the mobile and the x86 side are light years ahead of Apple at this point. People have been saying for 2 years now that Apple's improvements have slowed down and that they have brain drain, but we're still not at a point where anyone has caught up with even the M1 in all respects. Apple has been steadily increasing E-core performance by 20-30% every year for at least 5 years, and that didn't stop this year with the A17. The P-core improvements have been smaller, relying mostly on frequency improvements rather than microarchitectural ones, but the cores did receive slight changes to enable those frequency increases, and they're still really close to, if not beating, Intel and AMD in single-threaded performance on both mobile and desktop, even with the huge power draw of those chips.
Here you have an almost exact explanation of what is happening with the new SoC in terms of power, clocks and performance.
The 3nm process is not bad; it's most likely that Apple is no longer able to eke every last bit out of that process to gain every possible efficiency improvement available.
Apple is ALREADY losing perf/watt, and ultimate performance, to Qualcomm GPUs. Keep this in mind, guys. And Qualcomm's GPU is on a 5nm process, remember. So it's not because of the physical process; it's because of Apple's sub-par physical design, because of talent drain.
So just because Vadim is commenting on the charts, it makes the charts wrong?
ARM designs are already ahead of Apple's design in some respects.
They're still the leader in the areas they were good at 3 years ago, and they're continuing to preserve their leadership in those areas while giving people even higher peak performance, and most of the time increasing perf/watt at lower wattages.
If there'll be anything close to an inflection point, I think it'll happen in 2024, when both Zen 5 and Intel's architectures could match Apple in single-thread IPC and go well beyond them with much faster frequencies (to claim the bragging rights of the fastest single-core performance on the market), or merely match them overall through much higher frequencies, as they do currently, while reaching feature parity. But we'll still have to see whether they catch up in things like idle and moderate-use power draw on mobile, where Apple's advantage still isn't small, and battery life (you will definitely have to turn off Turbo Boost and PBO on Intel and AMD, of course).
So we're taking Intel and AMD marketing as the truth now? Interesting.
ARM designs are already ahead of Apple's design in some respects.
x86 will catch up with Apple in no time. Lunar Lake will be Intel's first M1-type product, and MTL-P will also bring massive efficiency gains while delivering very decent peak performance within that very good efficiency envelope.
Strix Point, from AMD, will be the first true M3 Pro competitor, because of its robustness and because, for the first time in a very long time, it will be a mainstream-platform product with a 256-bit DDR memory bus.
And all of this is just the beginning.
You do realize that he is just commenting on the same material from the Chinese reviewer that we have already been through several days ago, right?
I find it hilarious that the figure used to criticise the A17 Pro's efficiency ("5650 pts - 9.4W of power draw") is the one where you can see it delivering A16 Bionic levels of performance at ~20% less wattage. Everyone seems to be focusing on the rightmost point on the graph, but...
[Attachment: annotated performance/power curves for the A17 Pro and A16 Bionic]
So that particular graph alone seems to indicate that the A17 Pro is in fact significantly more efficient than the A16 Bionic (for the same performance, 11.2W -> 9.4W is a 16% decrease in power consumption). With the only caveat that the A17 Pro is allowed to go further to the right along the performance/power curve, which naturally causes it to be less efficient than at lower frequencies... as with any processor.
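In code form, using the values read off that chart (chart-derived approximations, not official numbers):

    # Iso-performance comparison at the ~5650-pt score (read off the chart):
    a16_w, a17_w = 11.2, 9.4   # watts each chip needs for the same score
    saving = (a16_w - a17_w) / a16_w
    print(f"power saved at equal performance: {saving:.0%}")  # -> 16%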
Hmm, maybe I should have made thicker lines. Please see the revised graph below:

And you actually have lost perf/watt with the new node. This is likely not because of the physical process itself, but the brain drain that affects what Apple is capable of doing with PDKs (Process Design Kits).
It's called the performance/power curve because performance is not linear with power. What are you trying to say with this? You can also say the exact opposite if you move to the left of the curve:

5650 pts - 9.4W of power draw.
6200 pts - 14W of power draw.
4.6W higher power draw, which results in ~49% higher power for a 10% performance increase.
To have lost perf/watt with the new node, it'd mean that the performance curve of the A17 Pro is at some point under the performance curve of the A16 Bionic. But that doesn't happen at any point in the graph. The A16 Bionic can't score higher than the A17 Pro at any power. The opposite is true.
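Both readings of the chart can be reproduced from the same few data points (again, the chart-derived numbers quoted in this thread, not official figures):

    # Two A17 Pro operating points plus the A16 iso-performance point:
    a17_low  = (5650, 9.4)    # (score, watts) - left side of the curve
    a17_high = (6200, 14.0)   # (score, watts) - rightmost point
    a16_iso  = (5650, 11.2)   # A16 Bionic at the same score as a17_low

    def pts_per_watt(score, watts):
        return score / watts

    print(f"A17 low:  {pts_per_watt(*a17_low):.0f} pts/W")   # -> ~601
    print(f"A17 high: {pts_per_watt(*a17_high):.0f} pts/W")  # -> ~443
    print(f"A16 iso:  {pts_per_watt(*a16_iso):.0f} pts/W")   # -> ~504

Sliding right along its own curve costs the A17 Pro about 26% in pts/W, but at matched performance it still beats the A16's ~504 pts/W, so which chip looks "more efficient" depends entirely on the operating point you pick.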