
gojilla

macrumors newbie
Original poster
Sep 7, 2022
1
0
The A16 is 4nm; does this change the rollout schedule for 3nm? I thought we would be seeing 3nm in 2023, but I don't think we have ever seen a process size used for only 1 year (4nm for only 1 year).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
The A16 is 4nm; does this change the rollout schedule for 3nm?

No. Years ago, N4 was targeted to reach High Volume Manufacturing (HVM) status in 1H 2022. N3 was likewise targeted, years ago, at HVM status in 2H 2022. Anyone doing rational A16 planning would have picked N4 over N3.

The iPhone SoC needs to have HVM turned on somewhere in the April-June time frame to meet the entirely arbitrary (from a technology standpoint) deadline of shipping iPhones in volume by late September. Anything on the "2H of year XXXX" list goes on the 'no fly list' for that design.

At this point Apple has more non-phone SoCs on iterative design schedules than it has phone SoCs (M series, Watch, and reportedly an AR/VR SoC). So there is lots of stuff that Apple could 'land' on N3 in the first half of 2023 before the A17 could pick it up in the April-June time frame. If N3 has even longer bake times, that would be the March-May time frame.

The plain, iPhone-targeting A-series doesn't always have to go first on a new process. The A10X (the contemporary "big die" SoC) was on 10nm before the A11 got it.

TSMC fab process evolution cycles don't happen on precise 12-month iterations. Moore's law was an 18+ month cycle. Falling off of Moore's law, the cycles are only going to get longer, not shorter, and landing on exactly 24 months is extremely unlikely too. The fundamental issue is that Apple's date for the phone has little to do with the technology's natural iterations. It is just marketing hocus pocus. From time to time the two are just not going to synchronize.


I thought we would be seeing 3nm in 2023, but I don't think we have ever seen a process size used for only 1 year (4nm for only 1 year).

There is no real rule that the plain A-series has to get stuff 'first'. That is the underlying fallacy here. Let go of that and there are no 'problems' here at all. Apple rolls out N3 to whatever is most appropriate at the time that N3 is ready to go. Done.

Rolling out N3 on a "large" die with a kitchen sink of stuff with different design objectives is actually a far better match for the new, unique properties of N3 (higher density, and FinFlex letting different transistor design objectives meet on a single die) than cranking out yet another modemless phone SoC for a huge-screen phone.

Or, if needed, a very small SoC to go into an even smaller form factor with higher power and multiple-screen computational demands, all constrained by a limited battery.
 
  • Like
Reactions: Tagbert and ingambe

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Just FYI: TSMC's marketing terminology is confusing but, according to this TSMC announcement, N4 and N4P (what the A16 uses) are both still 5 nm:

[Attached image: TSMC process-family chart]

https://pr.tsmc.com/english/news/2874
 
  • Like
Reactions: Tagbert

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
There is no real rule that the plain A-series has to get stuff 'first'. That is the underlying fallacy here. Let go of that and there is no 'problems' here at all. Apple rolls out N3 to whatever is most appropriate at the time that N3 is ready to go. Done.
Agreed. Indeed, if you're just at the start of production for a new process (as is the case for the N3), it makes sense for lower-volume products to get the new process first (giving the process time to ramp up in production before it's directed to the higher-volume products, i.e., the iPhones).

Conversely, it makes sense for the products with the simplest chips (the iPhones) to get the new microarchitecture first, and then for Apple to roll it out to successively more complicated chips*: M#, M# Pro/Max, M# Ultra—and, lastly, the M# "Extreme" (as it has been called here on MR, though I'd prefer M# Garuda). That's what they've been doing thus far.

*That's because I envision the designers starting by laying out the A# chip microarchitecture and then, once that's completed, tackling the larger ones, rather than designing them all at once. Specifically, I assume the understanding they get by completing the A# chip microarchitecture informs their design of the M#, which in turn informs their design of the M# Pro/Max, and so on. But maybe that's not how it works....
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Just FYI: TSMC's marketing terminology is confusing but, according to this TSMC announcement, N4 and N4P (what the A16 uses) are both still 5 nm:

People are going a long way to confuse themselves. The "N5 family" is not N5. In the former, N5 is used as an adjective of another noun ('family'). There is a set of implementations that are compatible at the basic design-rule level. That doesn't mean the cell libraries used in each member of that family are exactly the same on the implementation side. However, at the higher design-rule level, designs are relatively inexpensively portable to other members of the family. (Not so cheap as just selecting the N4 'button' and reprinting, but far more affordable than the changes going from N7 -> N5 or 16nm -> N7.)

The other members of the N5 family have specialized (optimized) differences that do bring substantive benefits to a design (and possibly some negative trade-offs). TSMC customers can save money by iterating inside the same family for a generation or two before moving on to the far more expensive design costs of a major node shift.

There are very real, substantive changes one can make to a 3D transistor that go beyond just measuring the one-dimensional width of a subcomponent.
 
  • Like
Reactions: uller6

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Agreed. Indeed, if you're just at the start of production for a new process (as is the case for the N3), it makes sense for lower-volume products to get the new process first (giving the process time to ramp up in production before it's directed to the higher-volume products, i.e., the iPhones).

Not all low volume is the same. A car maker that needs 800 wafers of relatively cheap car backup-camera processors is not a good candidate for a pipe cleaner.

You select as a pipe cleaner a die that can afford to take the hit of lots of defective dies. Either it is a relatively low-volume die with a very high price tag (and high margins to pay for more 'dead' defective silicon), or it is a much lower-priced die (with decent margins) whose run rate will be so long that it will amortize that 'dead die' cost over one, two, or three orders of magnitude more dies.

1,000 dies with a $1,500 profit margin bring in the same amount as 100,000 dies with a $15 profit margin.

TSMC gets paid for the wafer whether zero dies come out working or not. For an N5 (and up) wafer that is around $17K. The more defective dies out of the wafer, the more 'defect die' overhead each of the working dies has to pay for. A $17K wafer that produces 10 working dies has a cost basis of $1,700/die. At 300 working dies per wafer, that drops to ~$57. If those 300 dies are being sold to end users at $47 each... that isn't a good pipe cleaner (unless there is some humongous cost recovery later when the cost structure shifts). In the 10-working-dies-per-wafer case, if the profit margin is $2K per chip, then you are still making $300 profit per die. That works as a pipe cleaner (both TSMC and the customer are making money; the customer could be making lots more, but the customer is not paying for TSMC's temporary yield issues).
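The arithmetic above fits in a few lines. A minimal sketch; the ~$17K wafer price and the margin figures are the post's own illustrative numbers, not actual TSMC pricing:

```python
# Back-of-envelope wafer economics: the customer pays for the whole
# wafer regardless of how many dies on it actually work.

def cost_per_working_die(wafer_price: float, working_dies: int) -> float:
    """Defective dies spread the wafer cost over fewer good ones."""
    return wafer_price / working_dies

WAFER_PRICE = 17_000  # approximate N5-class wafer price cited in the post

early = cost_per_working_die(WAFER_PRICE, 10)    # big die, poor early yield
mature = cost_per_working_die(WAFER_PRICE, 300)  # small die, mature yield

print(f"early yield:  ${early:,.0f}/die")   # $1,700/die
print(f"mature yield: ${mature:,.0f}/die")  # ~$57/die

# A chip with $2,000 of margin headroom absorbs the $1,700 early cost
# basis and still nets ~$300/die: a viable pipe cleaner. A die that
# sells for $47 against a ~$57 cost basis is underwater.
```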


If the volume is too low, the wafer run rate will be too small to generate statistically significant data for process improvements. If one customer comes in, orders 4 wafers, and then disappears for 2 months, that isn't going to help much. A sustained, extended feedback loop is what helps clean the 'pipe', not a short burst like pushing an actual 'pig' through a real physical pipeline.



Conversely, it makes sense for the products with the simplest chips (the iPhones) to get the new microarchitecture first, and then for Apple to roll it out to successively more complicated chips*: M#, M# Pro/Max, M# Ultra—and, lastly, the M# "Extreme" (as it has been called here on MR, though I'd prefer M# Garuda). That's what they've been doing thus far.

First, that is different from 'pipe cleaning'. That is more about which one gets to tape out first and gets the first set of extremely-low-volume first-silicon dies for verification testing. Second, SoCs are far more modular than that. Currently the P-core complex consists of 4 cores clustered around a common L2 cache area. Getting to more P cores just means putting more 4-core complexes onto the design. Each group of cores doesn't really work significantly differently from the others; it is largely just more. (There are extremely narrow things, like interrupt vectors, that need to get longer with more cores/complexes, but that can be done in a significantly modular fashion too if you plan ahead in the design.)

The P (and E) clusters can also be 'chopped'. If planned ahead of time, it should be straightforward to create a 2-core P/E complex from a working 4-core complex, or to scale up from a working 2-core to a 4-core complex.
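As a toy illustration of that modularity (all names, core counts, and cache sizes here are hypothetical; this is not Apple's actual design flow), scaling up or chopping amounts to changing how many copies of a verified cluster block get instantiated, not redesigning the block:

```python
# Toy model: an SoC assembled from verified, reusable core clusters.
from dataclasses import dataclass

@dataclass
class CoreCluster:
    cores: int   # cores sharing one L2
    l2_mb: int

@dataclass
class SoC:
    name: str
    p_clusters: list
    e_clusters: list

    @property
    def p_cores(self) -> int:
        return sum(c.cores for c in self.p_clusters)

P4 = CoreCluster(cores=4, l2_mb=16)  # a verified 4-core P complex
E4 = CoreCluster(cores=4, l2_mb=4)   # a verified 4-core E complex

phone = SoC("phone SoC (hypothetical)", [P4], [E4])
big   = SoC("'Max'-class SoC (hypothetical)", [P4, P4], [E4])

print(phone.p_cores, big.p_cores)  # 4 8
```

The "chop" direction works the same way: swap in a `CoreCluster(cores=2, ...)` instance where the full 4-core complex doesn't fit the budget.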

They likely have major sections of the SoC blocked off, with tons of design verification and bug testing done on those individual blocks before putting all of that together. Get your parts working, then put the working parts together into a SoC and test that (where ideally you find mainly integration 'bugs', not deep-seated problems in the individual building blocks).


There are more "uncore" parts in an Ultra than in a plain A-series. Those can be worked on in parallel because the SoC has a modular infrastructure.

So lots of this is done all at once, in a fashion very similar to the way the design of the A-series is pipelined: multiple SoC generations in R&D stages at the same time.


*That's because I envision the designers starting by laying out the A# chip microarchitecture and then, once that's completed, tackling the larger ones, rather than designing them all at once. Specifically, I assume the understanding they get by completing the A# chip microarchitecture informs their design of the M#, which in turn informs their design of the M# Pro/Max, and so on. But maybe that's not how it works....

Actively managing the complexity EARLY in the design process leads to better outcomes. If you have working (highly correct), functional 'lego' building blocks, you can build stuff much more easily than if you're trying to build bigger things out of brand-new (less tested), even bigger blocks (with lots of redundant functionality).

The A-series and M-series sharing the same building 'blocks' saves time, effort, and money. And you can't really do extremely highly shared blocks if you completely ignore the eventual contexts where the blocks are going to need to go.
 
  • Like
Reactions: conmee and ingambe

jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
The A16 is 4nm; does this change the rollout schedule for 3nm? I thought we would be seeing 3nm in 2023, but I don't think we have ever seen a process size used for only 1 year (4nm for only 1 year).
Rumor is that the M2 Pro/Max chips will be 2nd Gen 5N process. However, that was due to TSMC having 3nm capacity booked for Intel. That booked capacity is gone since Intel released it, so there is a chance Apple could do 3nm, especially considering that the M2 chips run hotter.
 

altaic

macrumors 6502a
Jan 26, 2004
711
484
Rumor is that the M2 Pro/Max chips will be 2nd Gen 5N process. However, that was due to TSMC having 3nm capacity booked for Intel. That booked capacity is gone since Intel released it, so there is a chance Apple could do 3nm, especially considering that the M2 chips run hotter.
Sounds like a BS rumor to me.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Rumor is that the M2 Pro/Max chips will be 2nd Gen 5N process.

5N? N5. Second-gen N5 --> N5P. N5P reached high-volume-manufacturing status back in 1H '21 (over a year ago). Apple already shipped that for the A15 and M1 Pro/Max back in 2021. Doing it again on the M2 versions would not buy them much at all. The availability would be there, but that is not the latest HVM-status process in the "N5 family".

However, that was due to TSMC having 3nm capacity booked for Intel. That booked capacity is gone since Intel released it, so there is a chance Apple could do 3nm, especially considering that the M2 chips run hotter.

Pragmatically, no. If Intel canceled in the last 3-6 months, that is pragmatically too late to retarget the larger M2 dies. The baseline design of the M2 was likely laid down years ago. Doing an 'easy' retarget would mean a quarter or two of delay; targeting something 'hard' like N3 would be on the long side of that at best. It isn't like setting the photocopier to 60% reduction, hitting the "go" button, and having stuff pop out the other side.

If Apple planned 2 years ago for N3 for an M2 "Max" or M2 "desktop", then that would be fine. They would have the design ready, but perhaps not the wafer-start allocation to provision a higher-volume SoC (they were sharing reserved starts with Intel, so probably picked a lower-volume product SoC, or a product with lower SoC demand). There is more to Mac production than just the SoC. Even if they could get much higher run rates on an N3 SoC, that doesn't mean they can get more RAM and other secondary chips to flesh out the rest of the system at the "last minute" either.

Targeting the largest M2 "Max-class" die at N4 really doesn't dig much into its issues as a really 'chubby' chiplet. As a monolithic die it is OK. Doubling and quadrupling it into a package brings some unnecessary overhead if they do not shift to N3. Decent chance Apple was looking to get smaller when designing two or more years ago.


Intel pulling out of N3 wafer slots, if done early enough, could help the N4/N5P/N4P lines. But it is probably not going to change Apple's flow much at all in the relatively short term. TSMC is also likely to bring N3E online faster to soak up the wafer-usage gap (presuming N3E is easier to make).
 

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
That comment doesn't really make any sense.
The user thinks that Tim will not adopt 3nm in 2023 and will squeeze this "4nm" for a longer time for more profit margin. So it makes sense, but I don't agree with him. I am pretty sure next year will be the year of 3nm for Apple... iPhones (at least the iPhone Pro lineup) and some Macs too.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
the user thinks that Tim will not adopt 3nm in 2023 and will squeeze this "4nm" for a longer time for more profit margin
Oh, I understood the comment; it's simply that it makes no sense to think that. If milking a certain process were what Tim Cook was known for, then Apple wouldn't have the reputation of getting the latest nodes (nor would they have the kind of relationship they do with TSMC).

Just seemed brainless is all.
 
  • Like
Reactions: headlessmike

EugW

macrumors G5
Jun 18, 2017
14,890
12,859
"Milking" the old process often doesn't make sense from a cost perspective either. One big advantage of a new process is significantly higher efficiency, which can benefit many things, not the least of which is battery size. Even chip cost can go down over the life of the chip, since die sizes on newer processes are smaller.

An A17 on 5nm would likely be huge, which would be costly by 5nm standards and would cause all sorts of design problems related to heat and power utilization.
 
  • Like
Reactions: Tagbert

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
I was just reading this AnandTech article (https://www.anandtech.com/show/17452/tsmc-readies-five-3nm-process-technologies-with-finflex)

In that article they talk about the differences between N3 and N3E:

N3
- Will ship in 2H22
- 1.7x density vs. N5
- Worse yields than N3E

N3E
- Will ship in 1H23
- 1.6x density vs. N5
- Better yields than N3
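A quick back-of-envelope shows how those density and yield figures trade off. The density factors are from the article; the die size and the yields are pure assumptions (the article only says N3E yields better, without numbers):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Crude gross-die estimate ignoring edge losses and scribe lines."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

N5_DIE_AREA = 120.0  # hypothetical N5 die size, mm^2

# Density factors (vs. N5) from the article; yields are assumed.
for name, density, assumed_yield in [("N3", 1.7, 0.55), ("N3E", 1.6, 0.75)]:
    area = N5_DIE_AREA / density
    good = dies_per_wafer(area) * assumed_yield
    print(f"{name}: ~{area:.0f} mm^2/die, ~{good:.0f} good dies/wafer")
```

Under these made-up yields, N3E's better yield more than makes up for its slightly lower density; that is exactly why a low-volume, high-margin product is the natural home for N3.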

On top of that, there's another article that shows TSMC is 'only' starting 1,000 wafers a month on N3. (Noted here: https://forums.macrumors.com/thread...duce-1-000-n3-wafers-per-month-in-q4.2355262/)

What struck me was that N3 is denser than N3E. Given all that, doesn't it point to Apple using N3 for the soon-to-be-released Mac Pro? A low-volume, in-need-of-the-highest-density SoC?

Then use N3E for future Mac chips: M3/Pro/Max/Ultra?
 