This is super cool. But what I mean is that these projections could theoretically go on forever, when in reality we know the scaling does not continue indefinitely.
If we control for TDP by keeping it constant in our comparisons, what do you reckon an M3 or M3 Max CPU would top out at in terms of frequency? If I read your graph properly, at 10 W per thread we should expect at least 4.5 GHz?

I'd expect the M3 series to hit somewhere between 4.2 and 4.5 GHz with peak per-core power consumption in the ballpark of 10 watts. Whether that's what we'll get, we'll have to wait and see.
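To make the extrapolation concrete, here is a minimal sketch of the kind of curve fit being discussed: dynamic power scales roughly with f·V², and since voltage rises with frequency, a cubic P = a·f³ + b is a reasonable toy model. The sample points below are invented for illustration, not the actual A17 measurements from this thread.

```python
def fit_cubic(points):
    """Least-squares fit of P = a*f**3 + b over (frequency, power) pairs."""
    xs = [f ** 3 for f, _ in points]
    ys = [p for _, p in points]
    n = len(points)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def freq_at_power(a, b, watts):
    """Invert P = a*f**3 + b: the frequency a given power budget allows."""
    return ((watts - b) / a) ** (1 / 3)

# (GHz, watts) pairs - made up for illustration
samples = [(2.0, 1.6), (2.5, 2.6), (3.0, 4.1), (3.5, 6.0), (3.78, 7.2)]
a, b = fit_cubic(samples)
print(f"predicted clock at a 10 W/core budget: {freq_at_power(a, b, 10.0):.2f} GHz")
```

With these made-up coefficients the 10 W budget lands in the low-4 GHz range, which is the sense in which the graph supports the guess above.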
 
So the whole 15 Pro overheating fiasco is not true?

When one talks about iPhone Pro overheating, what do they mean exactly? Is it

- high temperatures while charging? (I wouldn’t know about it, my iPhone seems to be fine)
- the fact that the phone gets warm under demanding load and reduces the frequency to compensate (this is true and I don’t see anything wrong about it)
- something else?
 
I'd expect the M3 series to hit somewhere between 4.2 and 4.5 GHz with peak per-core power consumption in the ballpark of 10 watts. Whether that's what we'll get, we'll have to wait and see.
Thanks! 🙏 I wanted to make sure I was interpreting it right.
 
When one talks about iPhone Pro overheating, what do they mean exactly? Is it

- high temperatures while charging? (I wouldn’t know about it, my iPhone seems to be fine)
- the fact that the phone gets warm under demanding load and reduces the frequency to compensate (this is true and I don’t see anything wrong about it)
- something else?
This is the problem with all the talk about throttling/heating/overheating.

The actual *meaning* of terms gets lost entirely as people start substituting “gets warm” with “overheating” or “it’s warm so it must be throttling”.
 
This is the problem with all the talk about throttling/heating/overheating.

The actual *meaning* of terms gets lost entirely as people start substituting “gets warm” with “overheating” or “it’s warm so it must be throttling”.

Yeah, that’s why I try to avoid the term “throttling” whenever possible. People tend to load it up with too much emotional connotation.
 
I'd expect the M3 series to hit somewhere between 4.2 and 4.5 GHz
I expect the same (at best).

If we're (very) lucky, an M3 Max Studio might go up to 4.5 GHz.

I think 4.3 GHz for an M3 Pro mini is already quite optimistic;
4.2 GHz is maybe more likely, and the base M3 likely less, imo.

I would also expect them to widen the gap between a base mini and a Max Studio.
My most optimistic guess: 4.1 to 4.5 GHz (at best), base M3 to M3 Max Studio.

In fact, I'd be quite happy with 4.3 GHz! Whether it's an M3 Pro mini or an M3 Max Studio that reaches it, we'll see.
Still, less would be a major disappointment.
4.5 GHz on an M3 Studio would be fantastic! Knock on wood.


Thanks @leman for your work here!


Personally I think Apple is, and HAS to be, planning towards the future!
I think we are just on the verge of a new era.
They need to plan further ahead, thinking through all the "possible" tasks and workloads we'll see coming over the next few years.
So my strong guess is that the main parameter they're focusing on is a roughly linear development curve towards the future, rather than just delivering to us now.
 
This is the problem with all the talk about throttling/heating/overheating.

The actual *meaning* of terms gets lost entirely as people start substituting “gets warm” with “overheating” or “it’s warm so it must be throttling”.

And "it must be throttling" with "bad design". The proper way to think about throttling is that it lets the system deliver the maximum performance it can at any given moment, accounting for the current conditions. If it's not designed to throttle, it's leaving performance on the table....
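As a sketch of that framing: a throttle governor is just a feedback loop that clocks up whenever there is thermal headroom and clocks down when over the limit. The temperature limit, frequency range, and step size below are invented for illustration, not Apple's actual governor parameters.

```python
TEMP_LIMIT = 95.0                # deg C junction limit (illustrative)
FREQ_MIN, FREQ_MAX = 0.6, 3.78   # GHz, an A17-like range

def next_freq(freq_ghz, temp_c, step=0.06):
    """One governor tick: clock down when over the limit, clock up when
    there is clear headroom, hold inside a small dead band in between."""
    if temp_c > TEMP_LIMIT:
        return max(FREQ_MIN, freq_ghz - step)
    if temp_c < TEMP_LIMIT - 2.0:
        return min(FREQ_MAX, freq_ghz + step)
    return freq_ghz

print(next_freq(3.0, 80.0))   # headroom: clock up one step
print(next_freq(3.0, 96.0))   # over the limit: clock down one step
print(next_freq(3.0, 94.0))   # inside the dead band: hold
```

The point of the dead band is exactly the one made above: the loop hunts for the highest clock the conditions allow rather than pinning a fixed cap.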
 
I'd expect the M3 series to hit somewhere between 4.2 and 4.5 GHz with peak per-core power consumption in the ballpark of 10 watts. Whether that's what we'll get, we'll have to wait and see.
4267 MHz would put it right in line with LPDDR5X’s I/O bus clock. I could see that scaling to 4800 MHz for bursts which is also a multiple of LPDDR5X’s memory array clock (533 MHz).
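For what it's worth, the arithmetic in that post checks out. A quick sketch, taking the LPDDR5X figures stated above as given (array clock = 8533/16 ≈ 533 MHz; these are the post's numbers, not independently verified here):

```python
ARRAY_CLK_MHZ = 8533 / 16  # LPDDR5X memory array clock, ~533.3 MHz

def near_multiple(freq_mhz, base_mhz, tol=0.01):
    """True if freq sits within tol of an integer multiple of base."""
    ratio = freq_mhz / base_mhz
    return abs(ratio - round(ratio)) < tol

for cpu_mhz in (4267, 4800):
    ratio = cpu_mhz / ARRAY_CLK_MHZ
    print(f"{cpu_mhz} MHz = {ratio:.3f} x array clock, "
          f"aligned: {near_multiple(cpu_mhz, ARRAY_CLK_MHZ)}")
```

4267 MHz lands on 8x the array clock (it is the I/O clock) and 4800 MHz on 9x, which is the alignment the post is pointing at.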
 
First of all, @leman, thanks so much for all this work. You're starting to answer some questions that most others don't even know to ask.

I don’t think it works that way. Also, there are significant performance differences on some SPEC subtests showing that A17 indeed has more integer execution units. You won’t see this in my test since I run a dumb loop with a divide instruction. It would be interesting to use a more complex workload, but I don’t know what would be suitable without bloating the app. Maybe some random number generation or crypto.
It definitely doesn't work that way. Maybe it's been done somewhere sometime but in general, it's one process per wafer. "Chips" that use multiple processes like the AMD Ryzen/Epyc, or the forthcoming Intel mobile chip, are MCMs (more than one chip).

By the way, to put this into perspective, here is the predicted curve for A15 using the same method as I used for the A17, with M2 data overlaid on top of it. Observe how the A15 data offers a reasonable prediction for M2, and how much faster this curve climbs than the A17 one (provided in grey for comparison) once we get beyond 3.5 GHz.
[...]
View attachment 2282500

You say A15 in your text (which I suspect is accurate) but the chart says A14. Can you clarify?

Yep, it makes more sense to scale your desktop tech down to the mobile needs (assuming you can deliver the needed power efficiency of course) than to scale your mobile tech to desktop needs (where you will quickly run out of steam as Apple did with A14...)

Can you cite any evidence for that assertion? AFAIK, we have only one successful transition in either direction, ever, which is... Apple CPU cores. (Arguably their GPUs too.) Intel failed miserably after many years of effort. Nobody else has even tried. Apple's "running out of steam" was still a smashing success.

Actually, if you expand your view to include server chips, we have more examples of ARM growing into server space (Amazon, etc.). There's still no instance ever of desktop/server tech growing down into mobile.
 
4267 MHz would put it right in line with LPDDR5X’s I/O bus clock. I could see that scaling to 4800 MHz for bursts which is also a multiple of LPDDR5X’s memory array clock (533 MHz).
Interesting but I'm not clear that that matters at all. This isn't a Zen chip. With the SLC sitting between RAM and everything on the chip, RAM timing may be totally irrelevant. Can anyone who actually knows about this (@name99?) say?
 
You say A15 in your text (which I suspect is accurate) but the chart says A14. Can you clarify?

Typo, sorry.


Can you cite any evidence for that assertion? AFAIK, we have only one successful transition in either direction, ever, which is... Apple CPU cores. (Arguably their GPUs too.) Intel failed miserably after many years of effort. Nobody else has even tried. Apple's "running out of steam" was still a smashing success.

Actually, if you expand your view to include server chips, we have more examples of ARM growing into server space (Amazon, etc.). There's still no instance ever of desktop/server tech growing down into mobile.

I was just thinking out loud, might have gotten carried away a bit. What I mean is that CPUs are usually designed for a certain range. If one focuses on low performance/low power, I would assume that scaling up might be difficult (we can see this in earlier Atom designs). But a higher-performing design can be clocked down (of course, not always).

What made Apple a bit unique is that they focused on high performance/low power, which gave them a nice foundation. But the N5 generation appears to be severely limited in its peak power consumption, which is not optimal for the desktop. We will see if N3 changes this.

BTW, AMD's current designs could make decent smartphone chips. Zen 4 happily drops down to 2-3 watts and challenges the performance of mid-range ARM cores at that power consumption. I think the main reasons we don't see this more often are business-related.
 
This is very interesting data. However what do you mean by
"- A17 does use significantly more power than the previous A-series in the usual operational range"?

Obviously at any particular frequency, A17 is lower than any predecessor.
So everything hinges on the issue of what counts as "usual operational range"? How did you determine that?

I'm not criticizing you! I just think that the obvious attempt to get the FULL range of points is not the same thing as capturing the most FREQUENT points. If it were, then almost every point would be in the E-only power/frequency region, very few in the high frequency P-region...


It does seem to be the case that Apple is ALLOWING the P cores to go higher than before. Why might they do that?
It's hard to be sure (especially when most of the data we've seen so far is by people who don't think like engineers and never investigate carefully) but I could imagine the following sequence of thoughts:

- prior iPhones have been used as phones, meaning there's a low cap to how much work one expects to do in one "burst" of computing, and there was no point in even thinking about allowing frequency/power to go beyond this point

- with USB-C that somewhat changes. One can now imagine at least some users using their phones as game consoles, getting power from the USB connection as they simultaneously display on screen. Under these circumstances, it's not crazy to let power go higher (still, of course, maintaining limits that make sense, based on thermals, capacitors, and so on). Zealous gamers can even use external cooling, if they want, to reduce the thermal impact.

- once we accept point 2, the question then becomes, what should we do on battery? And that's a judgement call. Do you want to protect the zealous gamers from their own folly by limiting how fast they can drain the battery? Or do you say they know what they are doing, they can buy chargers and external battery packs, and let them go crazy?

I suspect Apple, via telemetry, will monitor how this plays out. If they see an "unacceptable" level of rapid battery drain, maybe in a future OS update they will limit high frequency to bursts of a few seconds, UNLESS you are on external power?
 
So no efficiency gains from A16 to A17, performance gains came directly from the frequency increase, and the process node gains went to the GPU and other modules?
That's not quite true. The graph is noisy enough that one can't really say that.
(a) If what you were saying were true, we should be able to see A14 as clearly different from A17. But A14 and A17 fall (for the purposes of the noisiness of this graph) on the same line. Or are you going to suggest that from A14 to A17 there has been zero improvement in IPC?
The best you can really do is consider upper and lower lines. Better IPC means more of the points congregate closer to the "lower" (ie lower and to the right) line, and even by eye it looks like A14 points are more towards the upper line; A17 points more towards the lower line.

(b) Simply increasing frequency without dropping IPC requires constant new smarts. If Apple doesn't give us more IPC over the next few designs, I'll be sad! But I don't think it's true that there is no more IPC to be gained, OR that Apple's current team don't know how to get there.
I think it's more the case that every design is a compromise. Right now Apple's MOST IMPORTANT task is to deal with scalability, to satisfy the highest end customers who might consider ditching them for a 64 core AMD design with kick-ass nVidia GPU.
To that end, I expect every aspect of "this year's" (ie the A17 and M3 designs) was considered with that in mind. Get right the things that matter at the high end (and are essentially invisible on a phone!) like the new coherency protocol, GPU work distribution, larger (or more) CPU clusters as I described, VM support, probably TLB support for very large RAM sizes (eg large page support). Add in to the CPU what was "easy" or ready to go (wider, higher GHz) but nothing more than that. Apple aren't dumb (in spite of the internet crowd who feel the SoC team is now a collection of dribbling morons); they are surely well aware of the points I keep stressing, like how simply adding more resources (eg wider) without changing algorithms, has limited value.
But THIS YEAR is not about optimizing the algorithms (with the time and risk that takes), it's about scaling up to match a large AMD+nVidia system; and phones clearly will benefit the absolute least from that work...

I mock Intel occasionally (and the crazier Intel supporters frequently!) but honestly I'm sympathetic to the choices they have made with MTL, for the same reasons.
MTL is (IMHO) a poor underlying design direction, but given that IS the direction you have chosen, the decision that the cores are essentially identical to the Raptor Lake cores is the same sort of risk management. Focus on the part that is tricky, and MATTERS MOST for the overall strategic direction (in Intel's case, getting all the chiplet to chiplet communication, clocking, and power balancing correct) and leave the difficult core improvements to next year.
 
When we talk about rumors or leaks, it’s kinda difficult to point towards an official source, because obviously neither Apple nor TSMC are going to make those internal details official. You’d have to trust the leakers that periodically report those details that later blogs like MacRumors or 9to5mac use to write their articles.
The issue is less "source" than consilience – does the claim fit with everything else we know?
For example I would assume "catastrophic" yields means something like Apple cannot provide iPhones in the desired quantities. Do we see any evidence of that?
The release dates and country tiers basically match previous years, with sell-outs, if anything, more muted than in previous years. Prices are not out of line with what we would expect. There's no attempt to salvage huge numbers of poor chips (for example, releasing the Pro Max with 6 GPU cores and the Pro with 5 GPU cores).

So I call BS because there's zero ACTUAL evidence for the claim, merely a whole lot of people thinking it would be very convenient for their belief systems if the claim were true.
 
I mock Intel occasionally (and the crazier Intel supporters frequently!) but honestly I'm sympathetic to the choices they have made with MTL, for the same reasons.
MTL is (IMHO) a poor underlying design direction, but given that IS the direction you have chosen, the decision that the cores are essentially identical to the Raptor Lake cores is the same sort of risk management. Focus on the part that is tricky, and MATTERS MOST for the overall strategic direction (in Intel's case, getting all the chiplet to chiplet communication, clocking, and power balancing correct) and leave the difficult core improvements to next year.
As infinitely mockable as Intel defenders are, I have to say that Intel is doing well with the hand they’ve been dealt.

If cmaier’s posts are anything to go by, x86 is a bear to make efficient, and there’s so much technical debt built up over fifty years that it must make Intel’s senior engineers’ lives difficult. Let alone that they let AMD leapfrog them with Zen.
 
Typo, sorry.
*Which one*???
I was just thinking out loud, might have gotten carried away a bit. What I mean is that CPUs are usually designed for a certain range. If one focuses on low performance/low power, I would assume that scaling up might be difficult (we can see this in earlier Atom designs). But a higher-performing design can be clocked down (of course, not always).
You know, that sounds right intuitively, but... we still have no examples of that ever happening.
What made Apple a bit unique is that they focused on high performance/low power, which gave them a nice foundation. But the N5 generation appears to be severely limited in its peak power consumption, which is not optimal for the desktop. We will see if N3 changes this.
I think you know this already, but almost everyone talking about N5 and N3 is in error. We hear that N3 is limiting efficiency, or N5 limited clocks, but that's crap. N5 didn't limit clocks on the M2; Apple's design did. The same exact process is approaching 6 GHz in AMD cores. This isn't a failure of process *or* design. It's a choice. Almost certainly, all the bitching and moaning about N3 likewise misses the mark.
BTW, ARM's current designs could make decent smartphone chips. Zen 4 happily drops down to 2-3 watts and challenges the performance of mid-range ARM cores at that power consumption. I think the main reasons we don't see this more often are business-related.
You mean AMD designs. Yes, true. But you'd still need an E core. I don't think even the Zen 4c core is up (down) to that, though I can't say for sure.
 
This is very interesting data. However what do you mean by
"- A17 does use significantly more power than the previous A-series in the usual operational range"?

Obviously at any particular frequency, A17 is lower than any predecessor.
So everything hinges on the issue of what counts as "usual operational range"? How did you determine that?

Yeah, that’s a good point. I am only testing the system behavior at peak loads (non-trivial time spent keeping the threads busy), so when I talk of the “usual operational range” what I really mean is the frequency range you encounter under this type of use. I don’t make any claims about the power state or CPU frequency for “normal” interactive apps which spend most of their time sleeping.

I think I already commented somewhere here that I don’t consider the higher power consumption of A17 at its peak frequency concerning in a phone. Sustained peak loads are rather rare on a smartphone; everyday stuff is about energy consumption. And even if peak power is higher, the average energy use will likely stay the same or lower. And if you do have a sustained workload (e.g. gaming), it has already been demonstrated that the SoC will settle at a power level that offers a good combination of performance and efficiency.
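That settling behavior falls out of the physics: with a roughly cubic power curve, energy per unit of work (P/f) climbs steadily with clock speed, so a sustained load is best served well below peak. A toy sketch, with invented coefficients rather than measured A17 figures:

```python
A, B = 0.12, 0.7  # invented coefficients for P(f) = A*f**3 + B (watts, f in GHz)

rows = []
for f_ghz in (2.0, 2.5, 3.0, 3.5, 3.78):
    watts = A * f_ghz ** 3 + B
    j_per_gcycle = watts / f_ghz  # W / (Gcycles/s) = joules per gigacycle
    rows.append((f_ghz, watts, j_per_gcycle))
    print(f"{f_ghz:.2f} GHz: {watts:5.2f} W  {j_per_gcycle:.2f} J/Gcycle")
```

With these numbers the energy cost per gigacycle more than doubles between 2 GHz and peak clock, which is why short bursts can afford the top of the curve while sustained gaming settles lower down.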
 
The issue is less "source" than consilience – does the claim fit with everything else we know?
For example I would assume "catastrophic" yields means something like Apple cannot provide iPhones in the desired quantities. Do we see any evidence of that?
The release dates and country tiers basically match previous years, with sell-outs, if anything, more muted than in previous years. Prices are not out of line with what we would expect. There's no attempt to salvage huge numbers of poor chips (for example, releasing the Pro Max with 6 GPU cores and the Pro with 5 GPU cores).
So I call BS because there's zero ACTUAL evidence for the claim, merely a whole lot of people thinking it would be very convenient for their belief systems if the claim were true.

Prices are not out of line because…

TSMC Not Charging Apple for Defective 3nm Chips Ahead of iPhone 15 Pro Introduction


And yeah, the poor yields have been referenced many times from many different sources… luckily, they apparently have enough chips for the 15 Pro and 15 Pro Max, but still…

TSMC Struggling to Make Enough 3-Nanometer Chips for Apple


I’m not making anything up, I’m just repeating what I’ve been reading for almost a year, on this and other sites, about the exclusivity deal for this N3B process between TSMC and Apple. I personally prefer to wait for a more streamlined process like N3E, and I hope the next M3 chips are based on this new process and the A18 architecture. Otherwise… I’ll have to wait for the M4s.

But hey! That’s just me. Anyone who wants to get their N3B-based devices is free to do so.
 
*Which one*???

I just had another look and you are right. I plotted the wrong data over the predicted curve ^^ Thanks for noticing this, I fixed the plot. The curve was correct though.

I think you know this already, but almost everyone talking about N5 and N3 is in error. We hear that N3 is limiting efficiency, or N5 limited clocks, but that's crap. N5 didn't limit clocks on the M2; Apple's design did. The same exact process is approaching 6 GHz in AMD cores. This isn't a failure of process *or* design. It's a choice. Almost certainly, all the bitching and moaning about N3 likewise misses the mark.

Yes, absolutely! When I talk about Apple's N5 or N3 I mean Apple's family of designs using that process. E.g. Apple's N5 is Firestorm and its refinements (A15, A16).

You mean AMD designs. Yes, true. But you'd still need an E core. I don't think even the Zen 4c core is up (down) to that though I can't say for sure.

Yes, AMD. I should really type slower, sorry. Agree on the E-core, that requires a completely different design. BTW, my data also has a lot of info on E-core usage; might be interesting to look at that as well. I'll try to carve out some time in the next few days.

P.S. We should switch names :)
 
Prices are not out of line because…

TSMC Not Charging Apple for Defective 3nm Chips Ahead of iPhone 15 Pro Introduction


And yeah, the poor yields have been referenced many times from many different sources… luckily, they apparently have enough chips for the 15 Pro and 15 Pro Max, but still…

TSMC Struggling to Make Enough 3-Nanometer Chips for Apple


I’m not making anything up, I’m just repeating what I’ve been reading for almost a year, on this and other sites, about the exclusivity deal for this N3B process between TSMC and Apple. I personally prefer to wait for a more streamlined process like N3E, and I hope the next M3 chips are based on this new process and the A18 architecture. Otherwise… I’ll have to wait for the M4s.

But hey! That’s just me. Anyone who wants to get their N3B-based devices is free to do so.
I was going to say that the problem is not that you're making things up, but that your brain is disengaged when you're reading. But in truth, that's unfair to you. There's a tremendous amount of crap being published by people who should know better - and even more by people whose bosses should have known better than to let them write about things they're clueless about. If you're not deep in the industry, or at least remarkably well informed, how can you have a working bs sensor?

In short: The thing about charging for defective chips is crap, based on a misunderstanding about how TSMC sells its products. Apple is either buying complete wafers at a certain price per wafer regardless of die status, or they're buying defect-free dies at a higher unit price. That's exactly like everyone else. How that balances out is between Apple and TSMC, and Apple had the whip hand to some extent because nobody else significant was willing to pay for the process. There was no exclusivity, and as it turned out, there was adequate supply.

Speaking of which, that MR story about N3 supply was speculative... and it's also half a year out of date.
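To put rough numbers on the two pricing models described above, here is a back-of-envelope sketch. Every figure (die count, yield, wafer price, per-die price) is invented for illustration; actual TSMC terms are confidential.

```python
dies_per_wafer = 620   # plausible for a ~100 mm^2 die on a 300 mm wafer
yield_rate = 0.70      # fraction of dies that come out working

wafer_price = 17_000   # $ per wafer, defective dies included (invented)
good_die_price = 40    # $ per known-good die (invented)

# Under wafer pricing, the effective cost of a working die rises as yield falls
via_wafer = wafer_price / (dies_per_wafer * yield_rate)
print(f"wafer pricing:    ${via_wafer:.2f} per good die")
print(f"good-die pricing: ${good_die_price:.2f} per good die")
```

Under assumptions like these the two models price a good die about the same; which side carries the yield risk is exactly the part that gets negotiated between foundry and customer.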
 
No efficiency gains from A16 to A17... but I think it's much more interesting to compare A17 to M2. A16 was a one-time design aimed at optimising performance and efficiency in a mobile phone, and Apple tweaked the structure sizes and used an optimised N4 node to get there. I am getting more and more convinced that A17 P-cores instead are developed for the desktop and essentially continue where A16 stopped, but with a wider frequency range in mind.

There was some preliminary die analysis showing that the A17 cores shrunk in size compared to previous designs, so a cost factor could also be in play.
Where do you predict the M3 and M3 Pro/Max chips will be, based on how many cores the M-series tend to have compared to the A-series?
 