
Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
The M1 chip was based on A14, and the M2 chip in the Air appears to be based on the A15. The N3 process chips released in 2023 aren't going to be A15-based.

So, again, it won't make sense to call them M2.
Assuming Apple uses TSMC 3nm for the upcoming 14"/16" MBP, what would be less strange to the customer, realizing that Apple skips the M2 SoC for the MBP or that the performance jump in MBPs is much greater than in MBAs?

If Apple would name the hardware by the year they release them instead of with the SoC they use, they wouldn't have this problem.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
Assuming Apple uses TSMC 3nm for the upcoming 14"/16" MBP, what would be less strange to the customer, realizing that Apple skips the M2 SoC for the MBP or that the performance jump in MBPs is much greater than in MBAs?

If Apple would name the hardware by the year they release them instead of with the SoC they use, they wouldn't have this problem.
No one is going to be confused by Apple offering a more advanced chip in 2023 on the MBP than what they offered on the Air in 2022. Indeed, they're going to expect it. What would seem strange would be if, a year after offering the Air (a low-end product) with an M2, they offer the high end MBP with...yes, still an M2. Hunh?

Plus it's such bad marketing to offer what really is a significantly more advanced chip and still give it the same name.

And technically, it's bad nomenclature. You want to create confusion? Take two entirely different generations of chips, built on different processes, and with different microarchitectures, and give them the same name: "Wait, are we talking about the 2022 M2, or the 2023 M2?...."
 
Last edited:
  • Like
Reactions: hans1972

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
No one is going to be confused by Apple offering a more advanced chip in 2023 on the MBP than what they offered on the Air in 2022. Indeed, they're going to expect it. What would seem strange would be if, a year after offering the Air (a low-end product) with an M2, they offer the high end MBP with...yes, still an M2. Hunh?

Plus it's such bad marketing to offer what really is a significantly more advanced chip and still give it the same name.

And technically, it's bad nomenclature. You want to create confusion? Take two entirely different generations of chips, built on different processes, and with different microarchitectures, and give them the same name: "Wait, are we talking about the 2022 M2, or the 2023 M2?...."

Now that Apple's released an iteration of "M" chips, it's still difficult to glean any pattern in their cadence :). Whatever we try to infer will be subject to the (hopefully rare) effects of the pandemic.

I too have faith that Apple will pull out all the stops to ship 3nm M chips as soon as they can (2023). I mean, what good is it to be first at that trough if you're going to sit on those chips for months? I do think Apple may skip "Pro/Max/Ultra" chips in any given year because the volume sold of those systems isn't as high as systems with the base chip (iPad Pro/iPad Air/Macbook Air/Macbook Pro/iMac/???/iPadMini Pro) - so maybe those are on a different cadence? I'm just happy Intel/AMD/Samsung/Qualcomm/Google are all flush with enough cash to keep competing in this market.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
And technically, it's bad nomenclature. You want to create confusion? Take two entirely different generations of chips, built on different processes, and with different microarchitectures, and give them the same name: "Wait, are we talking about the 2022 M2, or the 2023 M2?...."
Agree. The 14"/16" MBP should use a more powerful SoC than the MBA (with a different name), but most customers shouldn't care whether that more powerful SoC uses TSMC 5nm or TSMC 3nm.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Except the N3-based chips, in addition to being built on a different process from the N4P used for the M2 Air, are also going to have a different microarchitecture. I.e., both "tick" and "tock".

First, the M2 isn't likely on N4P. If they are based on the A15 then it is N5P.

An absolute hard technological requirement coming from where?

There is no "need to". There are some marketing "would be nice to"s, but need? No.

The recent rumours are that the "M2 Pro and Max" are 12 CPU cores (which could easily just be 8+4) and <40 GPU cores (so they could use the same 10-core GPU building block that the M2 has). Just shrink that.

It would be a substantially smaller die for both variants ----> can make more chips from substantially fewer wafers.

[ Does Apple have to use more wafers? If N3 takes several weeks longer to come out of the fab pipeline, that can be offset by getting more dies per wafer from the wafers that appear at the end of that slower pipeline. If you get 80 instead of 100 wafers out of the pipeline in a fixed amount of time, but get the same number of usable dies, then why not? It is the number of dies, not wafers, that goes into products. And if the M2 systems are projected to sell even more units than the M1 generation, then again you need even more workable dies. So making more per wafer makes sense. ]
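A toy back-of-the-envelope with made-up numbers (none of these are real TSMC throughput or yield figures) shows how fewer wafers can still mean the same die output:

```python
# All numbers here are hypothetical, purely to illustrate the trade-off.
dies_per_wafer_old = 100   # usable dies per wafer on the older node (assumed)
dies_per_wafer_n3 = 125    # ~25% more usable dies per wafer after a shrink (assumed)

wafers_old = 100           # wafers out of the fab in some fixed time window
wafers_n3 = 80             # slower N3 pipeline: 20% fewer wafers in that window

# Products ship dies, not wafers -- the totals are what matter.
print(wafers_old * dies_per_wafer_old)  # 10000
print(wafers_n3 * dies_per_wafer_n3)    # 10000 -- same dies from fewer wafers
```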

The performance gap from the M1 versions can come from the 2 extra E cores, the 16 -> 20 and 32 -> 38 GPU core count increases, the NPU boost (that the baseline M2 has), and a better uncore (maybe pick up DisplayPort 2.0, which wouldn't necessarily trigger a "microarch generation shift" bump to the generation number. The Pro/Max had ProRes and the plain M1 didn't; that wasn't a generation number shift. Neither would adding AV1, at least decode, to the Pro/Max). N3 would also give them more wiggle room for some single-threaded drag-racing clock bumps. All of that would be enough for an "oh look, way faster" over the Intel Macs (even much of the entry Mac Pro 2019 on some tasks), and incrementally better than the M1 variants.

Shrinking the Max is critical if they want to fit an Ultra package inside of 1x reticle. So "just shrink" means they can actually do the product versus not; "just smaller" is a clear 'win' there. Being able to even make the product is the critical step. (If they don't go N3 for the M2 Max, then there's a decent chance there is no Ultra directly derived from the standard Max die.) Using N5P, as with the A15 and M2, would slightly "suck" because you get a more expensive die out of it. As the dies get bigger, that N5P process factor sucks more and more: on a 400mm^2 die, a 20% bloat is another 80mm^2, which is about the size of an average A-series chip implementation. That can be real money down the drain in the aggregate if you're going to sell millions of these. Let alone being blocked from using the necessary package technology.
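The area math in that paragraph can be sanity-checked directly (the 400 mm² figure and 20% penalty are the post's own rough numbers, not measured die sizes):

```python
die_area_n3 = 400.0          # rough Max-class die area on N3, mm^2 (post's figure)
n5p_area_penalty = 0.20      # assumed ~20% area bloat for staying on N5P

die_area_n5p = die_area_n3 * (1 + n5p_area_penalty)
extra_silicon = die_area_n5p - die_area_n3
print(extra_silicon)         # 80.0 mm^2 -- roughly a whole A-series chip of silicon
```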



On N3, will the floorplan need to get rejiggered? Probably. Will they have to use some different implementation techniques in some places? Probably. Do the instruction set semantics have to change because of using N3? No; absolutely not. The software/firmware stack on top doesn't "have to" see any difference whatsoever. That is the whole reason to have an abstraction barrier: so it doesn't have to. If the software stack sees the same thing, then it is the same generation.


The M1 chip was based on A14, and the M2 chip in the Air appears to be based on the A15. The N3 process chips released in 2023 aren't going to be A15-based.

An A15 microarchitecture on N5P or on N3 would still be an A15. It would be additional work beyond the normal path evolution (N5 and N3 need different design rules, but porting to N3 is not impossible). It is a lot less work than tweaking the microarchitecture and the process node at the same time.

AMD has a Ryzen 6000 APU on TSMC N6. Zen 3 came out on N7, yet there are still Zen 3 cores in the Ryzen 6000. You can try to get bogged down in "but AMD changed from Ryzen 5000 to 6000, so the number had to change", but that skips over the fact that AMD did not change the number on "Zen": both have Zen 3.
AMD is also going to roll out the "Mendocino" entry-level APU with Zen 2 CPU cores and RDNA 2 GPU cores on TSMC N6. Yes, going from N7 -> N6 is an easier design rule port.


Apple can bubble up whichever marketing number they want to the package. Apple also doesn't have 6-8 packages per subset of the product line (not much empty space left in the 5000 numbering scheme, so jump to a 6000 scheme to roll out another 6-8 packages). The 8CPU-8GPU is an M2 and the 8CPU-10GPU is an M2. It's quite illustrative that Apple is not doing the same package numbering scheme that AMD/Intel use. In part, that's because they have about an order of magnitude fewer package types to sell. They're not trying to sell everything to everybody, so they don't need that many numbers churned each generation. If the microarchitecture is a '2', then put a '2' on the package ID.
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
Except the N3-based chips, in addition to being built on a different process from the N5P used for the M2 Air, are also going to have a different microarchitecture. I.e., both "tick" and "tock".

The M1 chip was based on A14, and the M2 chip in the Air appears to be based on the A15. The N3 process chips released in 2023 aren't going to be A15-based.

So, again, it won't make sense to call them M2.
In theory it should be based off the A16 and the newer process node, should said process be mature enough to achieve high enough yields.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
First, the M2 isn't likely on N4P. If they are based on the A15 then it is N5P.
That was a brain fart on my part. When I wrote "5 nm N4P", I should have written "5 nm N5P". Thanks for catching that. I've corrected my posts.
Shrinking the Max is critical if they want to do a Ultra package inside of 1x reticle.
cmaier mentioned shrinking may also be critical if they want to add hardware ray tracing and fit inside the reticle, because he's heard RT can take up a lot of space.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
That was a brain fart on my part. When I wrote "5 nm N4P", I should have written "5 nm N5P". Thanks for catching that. I've corrected my posts.

No problem.

cmaier mentioned shrinking may also be critical if they want to add hardware ray tracing and fit inside the reticle, because he's heard RT can take up a lot of space.

I suspect N3 for RT would be more critical for relatively smaller dies, as opposed to a reticle-busting one (or some kind of largish die combo) that was trying to be an AMD 7900 or Nvidia 4090 'killer'.

Apple is probably in deeper "trouble" trying to power their AR/VR headset solely from a limited-battery SoC, but at 4K per eye. MetalFX will help (render 2K to produce 4K), as will foveated rendering (maybe put most of the rays through just the 1K part of the image you're actually looking at, and cut back on ray tracing where you can't see detail). Super tight, low-range power constraints, but they want to do high resolution. A much smaller die that is more skewed to the specialized problem will help. (N3 is helpful so that a limited amount of RT-specialized fixed-function logic doesn't get squeezed out due to space limitations.)
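Rough pixel-budget arithmetic behind the upscale-plus-foveation idea (using standard 16:9 "4K"/"2K" pixel counts as a stand-in; actual headset panels will differ):

```python
target_pixels = 3840 * 2160    # "4K" target per eye (assumed panel resolution)
rendered_pixels = 1920 * 1080  # "2K" actually shaded before a MetalFX-style upscale

# Upscaling alone cuts the shaded-pixel count by this factor, before
# foveation trims the ray budget further in the periphery.
print(target_pixels / rendered_pixels)   # 4.0
```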



Apple is still squeezing RT performance out of what they got.


Apple helped get a 30% increase out of Blender Cycles just by reorganizing the work and the working-set memory layout, in part because there were memory-throughput bottlenecks. RT hardware (for much faster compute) isn't going to produce the results that all the Nvidia hardware-RT lust is focused on when there is no huge aggregate bandwidth increase to feed it. It is still LPDDR5(something) RAM.

Whereas if they bring more data closer to the compute units they have, then they can get a performance uplift by avoiding the more distant levels of the memory hierarchy. How Apple has balanced out the memory hierarchy is substantively different from what Nvidia has done, so the impact that hardware RT would bring isn't necessarily the same.

Bringing more data closer is useful outside of ray tracing also. There is more synergistic "bang for the buck" if it helps multiple stages of computation in the display pipeline. Those are probably going to get higher transistor-budget priority in the GPU core complexes than something relatively narrow. RT wouldn't necessarily be low priority, but it wouldn't be surprising if it didn't make the first cut on N3 implementations (for Macs; the AR/VR-focused SoC probably has different priorities).

I haven't gotten through all of my WWDC to-do list, but I'm not seeing a "bet the whole farm on ginormous GPU hardware" roadmap so far. There is lots more "work harder to effectively go more embarrassingly parallel to line up with our approach to memory" than "revolution coming". There are lots of iterative improvements and attempts to help folks catch up to what they have put out so far. (E.g., what's going on with the hiccups of the Intel Alchemist rollout, where drivers and GPU optimizations for that specific implementation are the major stumbling blocks; Nvidia's somewhat "lost generation" of RTX 2000-series RT hardware, a similar issue where the software wasn't at the same level; Metal isn't taking full advantage of RDNA 2 RT support, and RDNA 3 is about to be out in 6 months or so; etc.)
 