Great work.
But I don't see why Apple would push the power higher than they did for the M2. According to the curves, they could have gone to 4 GHz on the Mac Studio, but they didn't. If they use the same wattage, we can expect 4 GHz for the M3, but not more.
 
The A14/M1 and A15/M2 were not designed for those clock frequencies. The A17 appears to be, and that’s what you’re failing to pick up.
 
I don’t understand people snubbing the N3B process on this forum. If the die passes its tests, it will/should function as designed. Are folks lying awake worrying about the many EUV layers?

Apple having the resources to design for an interim node is awesome. It put N3 in our hands, and it is likely providing valuable telemetry (and $) that contributes to the N3* evolution. Seems like a partnership to me.
 

Well, for one, if they don’t push the power higher, the Mac won’t be any faster than the iPhone. And that would be weird. And second, pushing toward higher frequency/power is the only way to make the bigger Macs more enticing to the user. Basically, fixing the areas where the M1/M2 fall short.

But of course you are right that we simply don’t know, and the curves are not hard evidence. The picture I’ve posted might as well be an artifact of the fit. Maybe the M3 will have the same 0.5 GHz lead over the A17 as previous chips had over their respective A-series, and that’s it.
 

Some people seem to have expected that moving to N3 would double the battery life. How exactly that worked in their heads, I don’t know. So far, the improvements we observe are pretty much in line with what TSMC has promised over the N5 process.
 
I don’t know about others, but let me tell you how it worked in my head. Do you remember previous process nodes? Do you remember jumps like the one from the 10nm A11 to the 7nm A12? Or the jump from the 7nm A13 to the 5nm A14? Well, I expected that kind of performance/efficiency jump with 3nm. And I still expect it, from next year’s N3E process.

As I said in another thread, maybe we’re hitting a wall where the improvements will be modest until TSMC switches to GAAFET, but honestly I expected more from this A17 Pro, both in performance and, especially, in efficiency.

I’m okay with people being happy with their iPhone 15 Pro, but one would expect to be able to express opinions freely, just like in any other thread. I myself was pretty hyped about the switch to the 3nm process and, honestly, I don’t think it is representative of previous “big jumps” in manufacturing technology.
 

But we did get the same kind of performance/efficiency jump, didn't we?

A11 to A12 improved GB6 single by ~ 250 points
A13 to A14 improved GB6 single by ~ 400 points (that was a big one, as A14 got a bunch more execution units)
A16 to A17 improved GB6 single by ~ 300 points (A17 also got a bunch more execution units over previous gen, but you are running into diminishing returns here...)

It's just that if you express them as percentages, they look less impressive as the scores get higher. I think @Andropov posted a nice chart somewhere showing that GB scores for the last dozen or so iterations have been increasing linearly.
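This percentage effect is easy to see with a quick sketch. The scores below are illustrative round numbers chosen to mimic the point deltas above, not exact GB6 results:

```python
# Roughly linear absolute gains read as shrinking percentage gains
# once the baseline grows. Scores are illustrative, not real GB6 data.
scores = {"A11": 1100, "A12": 1350, "A13": 1600, "A14": 2000,
          "A16": 2600, "A17": 2900}
for old, new in [("A11", "A12"), ("A13", "A14"), ("A16", "A17")]:
    delta = scores[new] - scores[old]
    pct = 100 * delta / scores[old]
    print(f"{old} -> {new}: +{delta} points (+{pct:.0f}%)")
```

With these numbers, a +250 jump early on is a bigger percentage than a +300 jump later, even though the absolute gain is similar.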

And regarding efficiency, sure, the A17 uses more power than Apple's N5 CPUs, but Apple's N5 CPUs also use more power than the 7nm ones. In my tests the A13 tops out at 2.8 watts, considerably lower than later designs.

As I said on another thread, maybe we’re hitting a wall where the improvements will be modest until TSMC switches to GAA FET

Or we might be hitting a wall with IPC and modern CPU architectures, so the main way to improve performance would be increasing the power consumption.

Or, as I speculate, the A17 is designed with desktop needs in mind, hence the slightly higher power consumption (and the chip itself can be pushed much further, to top-class desktop performance). But then, what am I saying: the A17 already delivers top-class desktop performance.
 
Yeah, thank you for your detailed and respectful answer; I agree. I didn’t look at the specific scores, but if those are the improvements, then there’s a comparable increase, even if it comes at the expense of higher power consumption. Maybe, as you say, these CPUs are designed as desktop-class and are expected to be the foundation of the M3 SoC family. We’ll see.
 
Technically, the A16-to-A17 transition is from N4 to N3. Yes, yes, I know that N4 is an N5 derivative, but N4 did bring performance and efficiency improvements over N5, so it is important to compare the A17 to the A15 if your goal is comparing N5 to N3.
 

I know, but I didn’t want to be accused of cherry picking the more favorable comparison point :)
 
Uhm.
N3B, arriving three years after N5 (one year delayed), is a very weak node transition compared to what node transitions used to yield. So folks who don’t have their fingers on the pulse of lithographic technology were/are bound to have unrealistic expectations.
But even delayed, there are obviously reasons why it was abandoned by TSMC and not adopted by anyone but Apple. Exactly what those reasons are is a bit foggy even where lithography folks hang out, but rest assured that TSMC has little reason to develop a dead-end node that they can’t find customers for, and Apple would have preferred the node to be available according to the original plan, with their designs carrying forward smoothly onto tweaked versions of the node.

It definitely didn’t go to plan.

Today, the properties of a new commercial node are close enough to its predecessor’s that it is difficult to assess, from the small performance deltas reported by the foundry and the specifics of a particular chip implementation (and the error bars in the measurement approach), whether the node performs to plan. But its rejection by foundry and customers is sure to have had good reason: you don’t create an orphan process node, nor do customers lose a year in time to market, for nothing.
 
Well, for one, if they don’t push the power higher the Mac won’t be any faster than the iPhone. And that would be weird. And second, pushing towards the higher frequency/power is the only way to make bigger Mac’s more enticing to the user. Basically, fixing the areas where M1/2 fall short.
You're saying that since Apple clocked the A17 Pro to use more power than the A15/A16, they may do the same for the M3 compared to the M2. That's possible, but it remains to be seen.
 
You're saying that since Apple clocked the A17 Pro to use more power than the A15/A16, they may do the same for the M3 compared to the M2. That's possible, but it remains to be seen.

No, what I am saying is that limiting the CPU core to a 5-6 watt peak (as the M1/M2 do) is not using the full capability of a desktop chassis. For example, even the 14” Pro would easily handle a 10-12 watt single-core peak with no increase in noise, etc., and the extra power should result in a decent performance improvement. And on a stationary desktop, you have even more thermal headroom. There is just no good reason to limit a workstation CPU to those kinds of power levels. In high-performance laptops, Apple is currently competing against CPUs that peak at 20-25 watts single-core, and on the desktop even 30-35 watts. Even with their massive IPC and node advantage, it’s not a fight Apple can win at 5 watts. But if they design a core that can relax the power restrictions, they should be able to easily outperform the others without really suffering any adverse effects.
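As a rough sketch of why that thermal headroom matters: dynamic power scales roughly as C·V²·f, and since voltage has to rise with frequency near the top of the curve, power grows closer to f³. The wattage figure and the cubic exponent below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope dynamic power scaling: P ~ C * V^2 * f.
# Near the top of the V/f curve, V rises roughly with f, so P ~ f^3.
# All numbers here are illustrative assumptions, not measurements.
def scaled_power(p_base, f_base, f_new, exponent=3.0):
    """Estimate power at f_new GHz from a known (f_base, p_base) point."""
    return p_base * (f_new / f_base) ** exponent

# Assume a hypothetical core drawing 5.5 W at 3.5 GHz:
print(scaled_power(5.5, 3.5, 4.0))   # ~8.2 W
print(scaled_power(5.5, 3.5, 4.5))   # ~11.7 W
```

Under these assumptions, a 10-12 watt single-core budget is roughly what a push toward 4.5 GHz would cost, which is why the extra chassis headroom is interesting at all.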
 
I agree, but Apple did just that with the M1/M2. I'm not sure why they should do something different now. According to the curves you made, clocking an M2X higher in a desktop would have resulted in nice gains. But Apple uses the same clock speed throughout.
 

Yes, and that’s the second part of the argument. It makes some sense to view the A14 and its refinements as mobile-first designs, which only target a particular (low) power range and are unable to break out of it. Now the question is whether the A17 has similar constraints or whether they have been relaxed. I think there are a few interpretations here:

- Apple ran into a wall and had to increase the wattage to improve the speeds
- Apple designed the new u-arch to run at higher power and performance
- N3 has problems and has to be driven with a higher voltage than expected
- etc.

I don’t think the evidence we have so far is enough to rule out any of these. My heart goes to option two, of course, but that’s because it’s what I want to see and because I believe in Apple’s ability to execute.

What we can quite confidently claim, however, is that Apple’s N3 design offers considerable improvements in both power consumption and performance compared to Apple’s refined N5 design. What we see here is exactly what TSMC has promised.
 
What's the first desktop to help prove/disprove these interpretations?

A new, 25th anniversary iMac with active cooling, prior to year-end? With best-in-class performance?
 
How do you know?
You can't just pump more voltage into a chip and have it get indefinitely faster!

There are at least two constraints:
- at some point the transistors cannot physically switch faster than a certain rate, regardless of voltage. For all you and I know, Apple is operating the M1 and M2 maximum frequencies close to that rate.
- beyond a certain voltage, electromigration and other effects start to damage the chip, initially at a slow rate (slow enough that the chip won't die before it's no longer of interest to the user; but at high enough voltages, fast enough that people will notice)

Apple DOES use different clock rates across the M2 family, but not dramatically different ones. Always staying away from the above two problems, but in each case (phone, M, Max) making slightly different tradeoffs between power/heating and performance.
(A15 ~3.23 GHz, M2 ~3.5 GHz, M2 Max ~3.6 GHz)

As I have pointed out, the power/performance tradeoff for the iPhone could be a little different given the new use cases that USB-C opens up. If so, we may (possibly...) find that the M3 and M3 Max are not AS highly clocked relative to the A17 maximum frequency as some claim to be seeing. We'll just have to see.
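For reference, the clock figures quoted above work out to fairly modest percentage steps. A quick sketch using those same numbers:

```python
# Percentage clock deltas across the A15/M2 family,
# using the approximate GHz figures quoted in this thread.
clocks_ghz = {"A15": 3.23, "M2": 3.50, "M2 Max": 3.60}
base = clocks_ghz["A15"]
for chip, f in clocks_ghz.items():
    print(f"{chip}: {f} GHz (+{100 * (f / base - 1):.1f}% vs A15)")
```

So the M2 Max sits only about 11-12% above the A15, consistent with the "slightly different tradeoffs, same constraints" reading.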
 
Oh FFS.
TSMC has a long history of creating "orphan" nodes. 20nm was this way, as was 10nm.
These are not failures; they are an inevitable part of being cautious. You design the new node as best you can, but as soon as you see where it's problematic, you make the appropriate adjustments. At some point you converge on something that's about as optimized as a certain degree of effort allows, and that optimized setup becomes a long-lived node.

If every new process you create is *perfect* at the start, you're not being aggressive enough; just as, if every new process is broken at the start, you're clearly being too aggressive.
TSMC looks to me to be pretty much exactly where it should be. No outright disasters, but every so often a stretch attempt that's a little too ambitious and needs to be tweaked from the original design.
 
I expect the same (at best).

If we're (very) lucky, an M3 Studio Max might go up to 4.5 GHz.

I think 4.3 GHz for an M3 Pro mini is already quite optimistic.
4.2 GHz is maybe more likely. The base M3 most likely less, imo.

I would also expect them to increase the difference between a base mini and a Studio Max.
My most optimistic guess: 4.1-4.5 GHz (at best), base M3 to M3 Studio Max.

In fact, I'd be quite happy with 4.3 GHz! Whether it's an M3 Pro mini or an M3 Studio Max that reaches it, we'll see.
Yet less would be a major disappointment.
4.5 GHz on an M3 Studio would be fantastic! Knock on wood.


Thanks @leman for your work here!


Personally, I think Apple is planning, and HAS to plan, toward the future!
I think we are just on the verge of a new era.
They need to plan further ahead, thinking through all the "possible" tasks and workloads we'll see coming in the next years.
So my strong guess is that the main parameter they're focusing on is more of a linear development curve toward the future, rather than just delivering to us now.
To give a little bit of context: the M1’s single-core boost frequency was clocked 6.8% higher than the A14’s, from 3000MHz to 3204MHz. The M1 Pro and above didn’t clock much higher than that (3230 vs. 3204MHz, +0.8%). The M2 widened the gap with the A15 to 8%, from 3230MHz to 3490MHz. The M2 Pro and above started clocking even higher, at 3.68GHz vs. 3.49GHz. So they've already been starting to unlock more performance on the Mac side of things.
The A17 increased the single-core boost clock by 26% over the A14, from 3000MHz to 3780MHz (17% over the A15's 3230MHz). If the improvements from the A16 and A17 stack up for the M3, whose predecessor was based on the A15, I can see the M3 Pro and above clocking in at 4.3GHz, if not higher (and 4.1GHz for the M3). Though maybe this wider dynamic power range is only for the iPhone, but I find that very hard to believe.
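Those percentage figures are easy to sanity-check with the MHz numbers quoted in this thread:

```python
# Sanity-checking the quoted single-core boost clock increases.
# MHz figures are the ones cited in the thread.
steps = [
    ("A14 -> M1",    3000, 3204),
    ("M1 -> M1 Pro", 3204, 3230),
    ("A15 -> M2",    3230, 3490),
    ("M2 -> M2 Pro", 3490, 3680),
    ("A14 -> A17",   3000, 3780),
    ("A15 -> A17",   3230, 3780),
]
for label, old_mhz, new_mhz in steps:
    print(f"{label}: +{100 * (new_mhz / old_mhz - 1):.1f}%")
```

Note that the 17% figure only comes out relative to the A15's 3230MHz; relative to the A14's 3000MHz, the A17's jump is 26%.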
That's not true: the M2 Pro and above clock 5.4% higher than the base M2, which is different (hopefully a trend) from what happened with the M1.
 
So, our gut feelings tell us both the same ;)

I would expect to see a wider jump in frequency from "Pro" to "Studio Max" with the M3 vs. what we had with the M2.
The question might be whether they are able to bring something like the aforementioned "Extreme", or the "Ultra", able to go even further ahead at full steam.

My personal takeaway is that it might be even more important to wait for the M3 Studio than I already thought. Even if I would tend to opt for an M3 Pro (in case they continue with the "mini" vs. "Studio" lines).
The Studio will allow for more cooling, which might become more of a factor with the coming M3.
 
My expectation is that M3 will be about more than effective clock speed. I expect (hope for) some architectural magic out of Apple at the high end (MPs and Studios). We will see.
 
You suggest that "if they don’t push the power higher the Mac won’t be any faster than the iPhone", but it seems to me that as we move to the high end there is a lot more to performance than just clocks. Architectural changes, multiple chips, interaction with the UMA RAM, etc.
 

There are different ways to quantify performance of course. Here I am talking about single-core CPU performance. Limiting a CPU core to smartphone-level power consumption on a desktop computer is leaving potential performance on the table, no matter how one looks at it.
 
It's very interesting to me that you mention these two arguments specifically, and since it's you, my reaction is "what am I missing?" instead of "gee, that's dumb", so I will appreciate any enlightenment on offer.

1) Is switching speed really likely to be an issue? My understanding was that they are using the same process for the A15/A16 as AMD is for their latest Zens, and that the transistors are fundamentally the same. And AMD is hitting well north of 5 GHz. Is there more to it than that?
2) Why didn't you mention critical paths in the logic? I was under the impression that these tend to be the real clock limits on most designs. No?
 