
dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
This is a massive oversimplification. Empirical data shows that fast degradation of modern silicon chips only starts at temperatures over 120C. It's no coincidence that pretty much every manufacturer sets 100-105C as the highest safe operating temperature.




Apple has been routinely running their chips (be it Intel or Apple Silicon) at 100C for at least a decade. They are still regarded as one of the most reliable brands out there.

There is no downside to it, really. Quite the opposite: trying to keep the chip at low temperatures will cost you either performance or size/weight (or both).

While that has been the case for decades at this point, one has to wonder whether we will see those max temps drop as the industry moves to smaller and smaller processes, especially once they are measured in angstroms instead of nanometers.
 
  • Like
Reactions: Populus

tenthousandthings

Contributor
May 14, 2012
276
323
New Haven, CT
Hi! I find this thread really interesting. Please excuse me if I’m not speaking with as much knowledge as some of you.

Given that the latest rumors place the first M3 devices well into next year, when chips manufactured on the new N3E process will be available… do you think the M3 will be based on the A17 architecture and use the N3B node? Or will it instead use the N3E node and the new architecture that we will see later in the year with the A18?
If Apple follows the same development pattern as A14/M1 and A15/M2, then the M3 will use the N3B node.

One thing that I don't think has been mentioned in this long and complicated thread (look starting around #559 or so for the main discussion of what 3nm means) is that N3P will be an optical shrink of N3E and not N3 [April 2023]. Thus the name change from N3 to N3B, to avoid confusion. A lot of speculation has resulted from this unusual move by TSMC, but I think it is wrong to jump to the conclusion that Apple and TSMC are unhappy with N3B (formerly N3). It's more that N3B is what Apple wanted. It's not for everyone. TSMC presented two overview papers about this, outlining the differences between N3B and N3E. The partnership between TSMC and Apple is real. There are trade secrets involved. It has been disruptive since the A8 and iPhone 6 (2014). It is a mistake to dismiss the importance of this alliance.

Anything is possible and M3 could be on N3E. But I'd be surprised (not that that would be surprising). This will all converge back together for N2, when the TSMC transistors change over to GAA field-effect transistors (TSMC is calling these "Nanosheet" transistors). Thus, both N3B and N3E/N3P are the last FinFET-based process nodes.
 

Populus

macrumors 603
Aug 24, 2012
5,942
8,412
Spain, Europe
But the GAA 2nm process won't be ready until 2025, and I wouldn't expect 2nm products until at least 2026, with a hypothetical M5.

I didn't dismiss the relationship between Apple and TSMC, but I think the N3B will be a short-lived process, just like the 10nm of the A11 was, that's all.

Now that Kuo and Gurman are pointing towards 2024 as the release year of the M3 Macs, I think it's likely that the M3 will be based on the N3E process.

Following the reasoning of M1 being based on A14 and M2 being based on A15 (which is true), why wouldn’t the M3 be based on the A16 architecture and the “4nm” (5nm++) process?

At this point it’s just speculation but I like to share my thoughts.
 
  • Like
Reactions: tenthousandthings

tenthousandthings

Contributor
May 14, 2012
276
323
New Haven, CT
But the GAA 2nm process won't be ready until 2025, and I wouldn't expect 2nm products until at least 2026, with a hypothetical M5.

I didn't dismiss the relationship between Apple and TSMC, but I think the N3B will be a short-lived process, just like the 10nm of the A11 was, that's all.

Now that Kuo and Gurman are pointing towards 2024 as the release year of the M3 Macs, I think it's likely that the M3 will be based on the N3E process.

Following the reasoning of M1 being based on A14 and M2 being based on A15 (which is true), why wouldn’t the M3 be based on the A16 architecture and the “4nm” (5nm++) process?

At this point it’s just speculation but I like to share my thoughts.
Oh, sorry, I didn't mean to suggest that you had dismissed that. It wasn't clear (at least to me) that you thought M3 would be N3E; I thought you were just asking a question about which. So that critique was aimed at other comments in this thread and elsewhere, not yours.

Interesting you should mention the A11 -- TSMC 10nm was also used for the A10X, indeed it was the first consumer 10nm product, and the only time Apple and TSMC have changed process nodes midstream, from 16nm for the A10 to 10nm for the A10X. 10nm is the last node that isn't broken down into named generations like N7 and N7P, so I don't know if there are refinements in the "10nm" process used for A10X versus A11. There might well be.

To answer your (rhetorical) question about A14/M1 and A15/M2, I'd argue that the M1 (released only a month after the A14) is the outlier, an artifact of the transition to Apple silicon in Macs. The A-to-M cadence is likely to be closer to A15-to-M2 going forward, with nine months between A15 and M2. So maybe six months between A17 in September 2023 and M3 in, say, March 2024? So having the M3 come out, say, next month using the A16 architecture on N4P would be, let's see, fourteen months after the A16. That doesn't fit the A15-to-M2 model:

[Note: I don't track all devices, just those I think illustrate how the product line is the driver here, not the silicon.]

A8 (September 2014) iPhone 6 [TSMC 20nm]
A8X (October 2014) iPad Air 2 [TSMC 20nm]

A9 (September 2015) iPhone 6S :: iPad 5 [TSMC 16nm]
A9X (November 2015) iPad Pro 1 [TSMC 16nm]

A10 Fusion (September 2016) iPhone 7 :: iPad 6 :: iPad 7 [TSMC 16nm]
A10X Fusion (June 2017) iPad Pro 2 [TSMC 10nm]

A11 Bionic (September 2017) iPhone 8, iPhone X [TSMC 10nm]

A12 Bionic (September 2018) iPhone XS :: iPad Air 3 :: iPad 8 [TSMC 7nm gen1 "N7"]
A12X Bionic (October 2018) iPad Pro 3 [TSMC N7] (7-core GPU)
A12Z Bionic (March 2020) iPad Pro 4 :: macOS Developer Transition Kit [TSMC N7] (8-core GPU)

A13 Bionic (September 2019) iPhone 11 :: iPad 9 [TSMC 7nm gen2 "N7P"]

A14 Bionic (October 2020) iPhone 12 :: iPad Air 4 :: iPad 10 [TSMC 5nm gen1 "N5"]
M1 (November 2020) Mini :: iMac :: iPad Pro 5 :: iPad Air 5 [TSMC N5]
M1 Pro/Max (October 2021) MacBook Pro [TSMC N5]
M1 Ultra (March 2022) Mac Studio (also M1 Max) [TSMC N5]

A15 Bionic (September 2021) iPhone 13 :: iPhone 14 [TSMC 5nm gen2 "N5P"]
M2 (June 2022) MacBook Air :: iPad Pro 6 :: Mini (also M2 Pro) [TSMC N5P]
M2 Pro/Max (January 2023) MacBook Pro [TSMC N5P]
M2 Ultra (June 2023) Mac Studio (also M2 Max) :: Mac Pro [TSMC N5P]

A16 Bionic (September 2022) iPhone 14 Pro :: iPhone 15 [TSMC 5nm gen4 "N4P"]

A17 Pro (September 2023) iPhone 15 Pro [TSMC 3nm gen1 "N3B"]
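
Putting rough numbers on the cadence argument above, here is a minimal Swift sketch; the dates come from the list, and the two M3 scenarios are purely hypothetical:

import Foundation

// Month gaps between an A-series chip and a follow-on release, using the
// release months from the list above. The two "M3" dates are hypothetical
// scenarios from the discussion, not announcements.
let cal = Calendar(identifier: .gregorian)

func monthsBetween(_ a: DateComponents, _ b: DateComponents) -> Int {
    cal.dateComponents([.month], from: cal.date(from: a)!, to: cal.date(from: b)!).month!
}

let a14 = DateComponents(year: 2020, month: 10)
let m1  = DateComponents(year: 2020, month: 11)
let a15 = DateComponents(year: 2021, month: 9)
let m2  = DateComponents(year: 2022, month: 6)
let a16 = DateComponents(year: 2022, month: 9)
let a17 = DateComponents(year: 2023, month: 9)

print("A14 -> M1:", monthsBetween(a14, m1), "month")                                            // 1
print("A15 -> M2:", monthsBetween(a15, m2), "months")                                           // 9
print("A16 -> Nov 2023:", monthsBetween(a16, DateComponents(year: 2023, month: 11)), "months")  // 14
print("A17 -> Mar 2024:", monthsBetween(a17, DateComponents(year: 2024, month: 3)), "months")   // 6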
 
Last edited:

Populus

macrumors 603
Aug 24, 2012
5,942
8,412
Spain, Europe
Right, according to that timeline, a hypothetical M3 based on a "4nm" A16 would have been released in the first half of 2023, but it wasn't, and now the M3 is expected to be released during 2024, replicating the delay we saw between the A15 and the M2, now with the A17 and the M3.

Also, there's another theory that says the A17 is what the A16 should have been from the beginning (with rumors that ray tracing ran too hot on the A16 GPU, and maybe the A16 was expected to be built on this first N3 node but that node suffered significant delays). According to this theory, there wouldn't be any reason to release an A16-based M3, because the A16 and the A17 were, theoretically, two steps of the originally planned A17.

So yes, I think there's a chance the M3 will be based on the A17; however, I still think it's likely it will be built on the N3E process node.

And yes, the example of the A11 is interesting (I have an iPhone 8 and I still love it despite it not being my main iPhone anymore), because as you said, Apple introduced the 10nm process earlier, on the A10X SoC, an equivalent of the current M-series SoCs. Thus, there's precedent for using a more advanced node first on a bigger SoC, in this case the iPad Pro SoC.
 

caribbeanblue

macrumors regular
May 14, 2020
138
132
A16 Bionic (September 2022) iPhone 14 Pro :: iPhone 15 [TSMC 5nm gen4 "N4P"]
I think A16 is on N4, not N4P, as I think N4P only started to be available for high volume products in the first half of 2023. The Snapdragon 8 Gen 2 is supposedly on N4P, and 8 Gen 3 will be on N4P as well because nobody except Apple is interested in N3B and N3E isn't ready until the second half of 2024.
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
988
Reviews of the new Intel desktop generation (Raptor Lake Refresh) are now out, and holy cow, peak power draw was measured by AnandTech at nearly 430W! Meanwhile, top-end performance gains range from 1.5 to 6% over the previous gen.

An M3 with a bunch of A17-type cores with their "terrible" "inefficient" performance is looking pretty good right now. So weird (and aggravating) that we'll probably have to wait until next year to see it.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
About "heat kills" chips:

I believe that's true within certain broad parameters. Do you know what they are? Resistance is inversely related to temperature in semiconductors (unlike metals), AFAIK, while leakage current goes up. Are there other effects as well?

You are asking the wrong person. I have no clue about any of these things :) I'm just quoting the few papers I know are available on the topic, as well as what happens in the industry.


While that has been the case for decades at this point, one has to wonder whether we will see those max temps drop as the industry moves to smaller and smaller processes, especially once they are measured in angstroms instead of nanometers.

Or maybe with new materials and manufacturing processes the permissible temperatures will even increase. I don’t think one can guess these things that easily.
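
For a rough feel of the leakage point raised in the quoted question, here is a minimal sketch assuming the common rule of thumb that subthreshold leakage roughly doubles for every ~10 °C rise; the doubling interval is an assumption for illustration, not a figure for any particular chip or node:

import Foundation

// Relative leakage versus a 25 °C baseline under the "doubles every ~10 °C"
// rule of thumb. Real silicon behaviour varies with process and voltage,
// so treat the numbers as illustrative only.
func relativeLeakage(atCelsius t: Double, reference: Double = 25.0,
                     doublingInterval: Double = 10.0) -> Double {
    pow(2.0, (t - reference) / doublingInterval)
}

for t in stride(from: 25.0, through: 105.0, by: 20.0) {
    print(String(format: "%3.0f C -> %6.1fx leakage vs 25 C", t, relativeLeakage(atCelsius: t)))
}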
 
  • Like
Reactions: caribbeanblue

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Reviews of the new Intel desktop generation (Raptor Lake Refresh) are now out, and holy cow, peak power draw was measured by AnandTech at nearly 430W! Meanwhile, top-end performance gains range from 1.5 to 6% over the previous gen.

An M3 with a bunch of A17-type cores with their "terrible" "inefficient" performance is looking pretty good right now. So weird (and aggravating) that we'll probably have to wait until next year to see it.
Yea RL Refresh is super disappointing. It is also just a stop gap because Meteor Lake-S is late. Folks are hoping Intel isn't falling back to the 9-11th gen poor performance upgrades.
 

tenthousandthings

Contributor
May 14, 2012
276
323
New Haven, CT
I think A16 is on N4, not N4P, as I think N4P only started to be available for high volume products in the first half of 2023. The Snapdragon 8 Gen 2 is supposedly on N4P, and 8 Gen 3 will be on N4P as well because nobody except Apple is interested in N3B and N3E isn't ready until the second half of 2024.
A16 is now thought to be N4P. I learned of it via Wikipedia, but the underlying source is TechInsights: https://www.techinsights.com/products/dfr-2209-801

They specialize in what they call "reverse-engineering" -- they examine the silicon used in shipping products. I would rate this as likely, but not confirmed -- it's information from a single source. But it's not surprising Apple would get production from this process before anyone else. A-series silicon is often (always?) the first consumer product in its generation. TSMC originally announced "second-half 2022" for N4P production and it appears they hit that target.
 
  • Like
Reactions: caribbeanblue

name99

macrumors 68020
Jun 21, 2004
2,410
2,321
A8 (September 2014) iPhone 6 [TSMC 20nm]
A8X (October 2014) iPad Air 2 [TSMC 20nm]

Until the A9, Apple used Samsung. The A9 was fabbed by both TSMC and Samsung (with the usual idiot-driven faux scandals about how one chip was worse than the other).

A9X and subsequent were TSMC only.
 

tenthousandthings

Contributor
May 14, 2012
276
323
New Haven, CT
Until the A9, Apple used Samsung. The A9 was fabbed by both TSMC and Samsung (with the usual idiot-driven faux scandals about how one chip was worse than the other).

A9X and subsequent were TSMC only.
A8/A8X were TSMC only. A7 was the last Samsung-only chip.

The A9 dual sourcing was about FinFET transistor architecture, which was being introduced at that time. Samsung was the first to ship (for their Galaxy S6), so they had already proven they could do it. The A9, on the other hand, was TSMC’s first consumer product to use FinFET. So Apple hedged their bet, understandably. But the relationship with TSMC began with the A8.

[Edit to add that there was also a legal dispute over FinFET between Samsung and TSMC. I don’t know what it was about, but that’s likely another reason for the dual sourcing.]
 
Last edited:

name99

macrumors 68020
Jun 21, 2004
2,410
2,321
A8/A8X were TSMC only. A7 was the last Samsung-only chip.

The A9 dual sourcing was about FinFET transistor architecture, which was being introduced at that time. Samsung was the first to ship (for their Galaxy S6), so they had already proven they could do it. The A9, on the other hand, was TSMC’s first consumer product to use FinFET. So Apple hedged their bet, understandably. But the relationship with TSMC began with the A8.

[Edit to add that there was also a legal dispute over FinFET between Samsung and TSMC. I don’t know what it was about, but that’s likely another reason for the dual sourcing.]
You're right!
All that old history gets so fuzzy! We end up only remembering the interesting stuff (i.e., the A9 dual sourcing).
 
  • Like
Reactions: tenthousandthings

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
I think A16 is on N4, not N4P, as I think N4P only started to be available for high volume products in the first half of 2023. The Snapdragon 8 Gen 2 is supposedly on N4P, and 8 Gen 3 will be on N4P as well because nobody except Apple is interested in N3B and N3E isn't ready until the second half of 2024.
According to TechInsights, A16 is "manufactured with TSMC's N4P FinFET process technology."

 
  • Like
Reactions: caribbeanblue

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Yea RL Refresh is super disappointing. It is also just a stop gap because Meteor Lake-S is late.

Meteor Lake S isn't late... it is cancelled. Intel is on track to replace the gap-fill with a desktop Arrow Lake solution sometime in 2H 2024.

That no socketed ML is coming was covered about a month ago...





Intel is going to deploy some of the laptop Meteor Lake configurations into a 'desktop' package, but that is mainly going to be about desktop all-in-ones and budget-to-midrange offerings, aiming at the bulk of desktops sold, not the upper 25-35 percentile.

The Intel ML-S socket activity is most likely laying the groundwork for socket-compatible Arrow Lake S options to fall into after a substantive delay (the RL-R will fill the gap for about a year).


Folks are hoping Intel isn't falling back to the 9-11th gen poor performance upgrades.

Intel is getting Intel 4, Intel 3, 20A, and 18A so close together by doing some limited-scope releases. Intel 4 doesn't have the very-high-performance library options, so Meteor Lake was not going to be anyone's overclocking dream. Meteor Lake in the CPU P-core realm is mainly a 'tick' (shrink) of Raptor Lake.

The fab processes are on a bit of a 'tick/tock' model also: Intel 4 (focus on the shrink), Intel 3 (focus on substantive library changes), then 20A and 18A rinse and repeat. Intel isn't trying to tackle 3-4 dimensions of complexity all at once.

Arrow Lake-S may not be on an Intel fab at all. (Laptop Arrow Lake has an Intel 20A component, but there is no requirement that Intel make both desktop and laptop with exactly the same stuff.) Laptop packages/SoCs are the biggest volume. If Intel plugs the desktop hole with TSMC N3B (or N3E), then it isn't a relatively huge volume. There will be more than plenty of work for Intel fabs to crank out wafers over time.
 
  • Like
Reactions: Chuckeee

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Meteor Lake S isn't late... it is cancelled. Intel is on track to replace the gap-fill with a desktop Arrow Lake solution sometime in 2H 2024.
Yeah I forgot that they canceled MTL-S. Very interested in seeing how the SoC Tiles they plan to use turn out. Could be a game changer for them, or maybe not haha.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Yeah I forgot that they canceled MTL-S. Very interested in seeing how the SoC Tiles they plan to use turn out. Could be a game changer for them, or maybe not haha.

There is some non-zero chance that the desktop variant may not be tiled at all. If the iGPU is relatively as small as the legacy desktop Intel iGPUs, Thunderbolt is missing, and some substantive PCI-e lane provisioning is kicked off into the PCH chipset, then there isn't a whole lot of upside to tiling TSMC N3B tiles together in a package (if every tile is on exactly the same fab process, they could be merged). If it is all the same stuff and it aggregates to 200-250mm^2 of die area, why bother?

They'd get incrementally better yields with smaller dies, but would then have more expensive 3D die packaging overhead to deal with. If you want to chop up a 300, 400, 500+ mm^2 die, the trade-off is very likely worth it. But at some point, with a small enough die, it is likely going to be a bigger 'pain' than an asset.
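
To put toy numbers on that yield-versus-die-size trade-off, here is a minimal sketch using a simple Poisson yield model; the defect density is an illustrative guess, not a foundry figure, and the extra packaging cost/loss for the tiled case is ignored:

import Foundation

// Poisson yield model: yield = exp(-D * A).
// D = 0.001 defects/mm^2 (0.1/cm^2) is a made-up illustrative value.
let defectsPerMm2 = 0.001

func dieYield(_ areaMm2: Double) -> Double { exp(-defectsPerMm2 * areaMm2) }

for area in [150.0, 250.0, 500.0] {
    let monolithic = dieYield(area)
    // Two half-size tiles: any two good tiles can be paired after test,
    // so the effective yield per assembled product is the single-tile yield.
    let twoTiles = dieYield(area / 2)
    print(String(format: "%3.0f mm^2: monolithic %4.1f%%, two half-size tiles %4.1f%%",
                 area, monolithic * 100, twoTiles * 100))
}

The gap is large for a 500+ mm^2 class die and fairly small in the 150-250 mm^2 range, which is the 'why bother' point.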

It significantly depends upon how much Intel 'bet the farm' on relying on dGPUs for AL-S systems in general. (Up until this generation, AMD had zero iGPU on their mainstream Ryzen desktop offerings.)

The laptop volume demands are pretty likely going to swamp Intel's 3D packaging capacity (and the other stuff that is lined up for it). They could have had an active "Plan B" that took desktop off that more strategically critical path.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
There is some non-zero chance that the desktop variant may not be tiled at all. If the iGPU is relatively as small as the legacy desktop Intel iGPUs, Thunderbolt is missing, and some substantive PCI-e lane provisioning is kicked off into the PCH chipset, then there isn't a whole lot of upside to tiling TSMC N3B tiles together in a package (if every tile is on exactly the same fab process, they could be merged). If it is all the same stuff and it aggregates to 200-250mm^2 of die area, why bother?
Isn't Intel adding an AI/ML block to their CPUs like Apple (and to a lesser extent AMD) has? Not that they would need tiles for that, it just seemed like it would make different configurations easier on themselves, especially with limited fab capacity.
 

Populus

macrumors 603
Aug 24, 2012
5,942
8,412
Spain, Europe
I wanted to add a bit of food for thought and speculation: the A17 Pro has an AV1 hardware video decoder. Do you think the M3 will have just the AV1 hardware decoder, or could it have both an AV1 hardware encoder and decoder? This could be really useful if Handbrake added an AV1 VideoToolbox option to use hardware encoding with this codec.
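
As an aside, on a Mac you can already ask VideoToolbox whether hardware decode exists for a given codec; a minimal Swift sketch (it assumes the kCMVideoCodecType_AV1 constant is present in the SDK you build against):

import VideoToolbox

// Quick probe of hardware decode support on the local machine. On anything
// earlier than an A17/M3-class SoC, the AV1 line is expected to be "no".
let codecs: [(String, CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("HEVC",  kCMVideoCodecType_HEVC),
    ("AV1",   kCMVideoCodecType_AV1),
]

for (name, type) in codecs {
    print(name, VTIsHardwareDecodeSupported(type) ? "hardware decode: yes" : "hardware decode: no")
}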

PS: I don’t really understand why it took Apple so long to include AV1 hardware decoding on their chips. It could have been included with the A15/M2 or even with the A14/M1. I guess Apple likes to include new features at their own pace…
 
  • Like
Reactions: Chuckeee

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Isn't Intel adding an AI/ML block to their CPUs like Apple (and to a lesser extent AMD) has? Not that they would need tiles for that, it just seemed like it would make different configurations easier on themselves, especially with limited fab capacity.

The limited fab capacity is more a limited number of EUV machines. Intel just doesn't have them and cannot spend their way to rapidly correct that. They are produced at a relatively slow rate by just one supplier worldwide. If you don't have several... you are just plain stuck. Intel hit the 'snooze' button for an extra two years and now they are in a hole they can't get out of for a substantial amount of time.

There are some other contributing factors, such as logic detaching from I/O and SRAM/cache in the ability to shrink going forward, but Intel is really primarily being pushed by that.

The discrete GPU track is off on TSMC fab processes. That is primarily what is dragging the iGPU tile there. And that is mainly due to volume (Intel did not intend Arc to be a horrible sales product; if it had been decently successful they would have needed TSMC fabs to make it).


As for the AI/ML inference: they really can't treat it like the pit that AVX-512 fell into, where it is just present in fragments across the lineup. The VPU/AI/ML facility is on the 'SoC' tile. That tile crosses all the configurations. For example, an Intel slide:


[Intel slide: "MTL AI Deck for May 2023 press brief", slide 10]


https://www.anandtech.com/show/1887...-lake-vpu-block-lays-out-vision-for-client-ai

From the same article:

" ... Specifically, the block is derived from Movidius’s third-generation Vision Processing Unit (VPU) design, and going forward, is aptly being identified by Intel as a VPU. ..."
This isn't some new, untried tech. It is tech that Intel just did not leverage in a synergistic fashion very well in earlier generations.

Note the '(all SKUs)' in the VPU portion of the slide. The two LP E-cores, the VPU, and the memory controller are all core elements of the SoC tile and are basically uniform across all the SKUs. If you changed the number and type of memory controllers, then you would probably be changing the socket pin count or package pad count.

If Intel is trying to attract software developers to put semi-custom AI code into apps, then the AI inference stuff pretty much has to appear in as many systems as possible to get widespread, common adoption.

[Intel slide: "AI Deep Dive", slide 20]


OpenVINO will dispatch to CPU/GPU/NPU (VPU) as appropriate. Similar with DirectML. If you don't have the NPU/VPU option, then there are 'problems' if you need to do very responsive inference.

If Windows is going to have an OS-standard ML layer, then not having an NPU to leverage it makes about as much sense as not having a GPU to leverage DirectX (the GPU layer). x86, Arm... any SoC package that is targeting Windows needs one. Intel really doesn't have a choice about whether to put one in going forward, at least for Windows-targeted CPU SKUs.

The tiles are more aimed at the potentially problematic areas. First, a GPU that is trying to push into a new zone. It is aimed at discrete and will be dialed back a bit for use as an iGPU. Intel basically needs that to catch up to AMD (and Apple). The GPU drivers for their new baseline GPU architecture have had lots of issues. Any of that which would require hardware adjustments (there does seem to be an Alchemist+) would be eased by having a tile that could be 'respun' without having to ripple into other parts of the package (and dies).

The CPUs are also going through some major 'replumbing' and possible outsourcing pressures. Intel didn't 'bet' on buying EUV machines, so the CPU cores had to be ready to be outsourced in the 2023-24 timeframe if Intel had not reversed course on actually spending money to credibly keep up with fab process evolution. However, if Intel could pull the CPU tile onto an Intel fab process, they could save face on making at least one of the critical components of the package.

According to rumors, Arrow Lake's P cores are dumping SMT/Hyper-Threading. But it also isn't going to have Intel's alternative of "rentable units". So there is risk in making that change. If you toss in the additional risk of an Intel 3 or 20A process that might also have problems, there is a decent chance that for desktop Intel just skipped that and went with a presumably more predictable TSMC option. So it was 'ease of outsourcing' at least as much as 'configurability' in play there.

If Meteor Lake is decent, then Intel has more time to come up with a fixed, in-sourced Arrow Lake for laptop packages. However, the better Meteor Lake is tuned for laptops, the less it will be tuned for desktops. So that would be an area where they would want to go with the 'initial primary target' of TSMC for the solution so as to not lose time. If there were hiccups in taking out SMT... even more so.

It is configurability in that the tiles/chiplets can progress at whatever reasonable speed they need. So if Arrow Lake needs to reuse Meteor Lake's GPU tiles, then just use the older ones. If the GPU tiles happen to progress faster... then perhaps newer ones in small GPU dies (where the bandwidth pressure increase is probably smaller).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I wanted to add a bit of food for thought and speculation: the A17 Pro has an AV1 hardware video decoder. Do you think the M3 will have just the AV1 hardware decoder, or could it have both an AV1 hardware encoder and decoder? This could be really useful if Handbrake added an AV1 VideoToolbox option to use hardware encoding with this codec.

Probably just a decoder. If Apple is dragging their feet on AV1, then pushing encoding to the next-generation uncore would be in keeping with their modus operandi so far.

Everything that 'used up' the transistor budget an AV1 encoder would have needed on the A17 Pro die is just going to increase in spades for bigger dies, with even bigger Apple GPU core (and hardware ray tracing) increases as well: more CPU cores, more TB controllers... more display controllers... lots more of everything that wants to 'eat up' the transistor budget.

TSMC N3B doesn't bring a major shrink to I/O or SRAM/cache, so those are still approximately as big as they were, again soaking up lots of the die area budget.

Apple has given priority to ProRes in its space allocation. That helps to squeeze out AV1.

Do you really think Apple is shooting to be the "king of Handbrake" systems? Probably not.

PS: I don’t really understand why it took Apple so long to include AV1 hardware decoding on their chips. It could have been included with the A15/M2 or even with the A14/M1. I guess Apple likes to include new features at their own pace…

Compression from the cameras that Apple has embedded in their systems matters more than other people's cameras. That is very likely primarily it. Apple chose their homegrown ProRes for that. AV1 is competing for 'leftover' transistor budget after Apple does their primary focus subsystems of Apple CPU, Apple NPU, Apple GPU, Apple security/SSD, Apple image processing, Apple I/O (including memory)... whatever is left over after they 'eat' is for AV1.

TSMC N3 brings an increase in budget where all of those groups can take a substantive transistor budget allocation increase and there are still 'leftovers'.

TSMC N3E isn't really going to help much there. SRAM/cache is going to get bigger. That means fewer 'leftovers' in die area right there. There is likely going to be pressure to make an improvement in CPU, GPU, and/or NPU, so fewer 'leftovers' likely there also.

It wouldn't be surprising if that feature waited until Apple gets past the first two iterations of the N3 variations (i.e., N3P or N3S, or even later).

Netflix, YouTube, and other big-time video consumption streams just need AV1 decode at a bulk number of clients (not uploads).




 
  • Like
Reactions: Confused-User

Confused-User

macrumors 6502a
Oct 14, 2014
852
988
Compression from the cameras that Apple has embedded in their systems matters more than other people's cameras. That is very likely primarily it. Apple chose their homegrown ProRes for that. AV1 is competing for 'leftover' transistor budget after Apple does their primary focus subsystems of Apple CPU, Apple NPU, Apple GPU, Apple security/SSD, Apple image processing, Apple I/O (including memory)... whatever is left over after they 'eat' is for AV1.
There may well be some truth to that, but also possibly not. ProRes and AV1 are for very different target markets.

Until recently, nobody had hardware encode for AV1 in their CPU chips. I haven't been paying close attention to this, so I may be mistaken, but I think only the most recent AMDs with integrated GPUs have that. I don't think Apple skipping AV1 encoding is going to be a major issue for most people. (That said, with an Intel or AMD chip, you can always add a dGPU that supports it, and with Apple you don't have that option, so it's not entirely unimportant.)
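
For anyone who wants to check their own machine, VideoToolbox can also enumerate the encoders it exposes; a small sketch (what shows up depends entirely on the machine and OS version; today the list is expected to contain H.264/HEVC/ProRes entries but no AV1):

import VideoToolbox

// List every video encoder VideoToolbox reports, with its codec type.
// An AV1 hardware encoder would appear here if Apple ever ships one.
var listOut: CFArray?
if VTCopyVideoEncoderList(nil, &listOut) == noErr,
   let encoders = listOut as? [[String: Any]] {
    for enc in encoders {
        let name  = enc[kVTVideoEncoderList_DisplayName as String] as? String ?? "?"
        let codec = enc[kVTVideoEncoderList_CodecType as String] as? UInt32 ?? 0
        print(String(format: "0x%08x", codec), name)
    }
}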

I'm also not sure how long-lived AV1 is going to be. It's still not that common, and h.266 is around the corner.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
PS: I don’t really understand why it took Apple so long to include AV1 hardware decoding on their chips. It could have been included with the A15/M2 or even with the A14/M1. I guess Apple likes to include new features at their own pace…

I think these are mostly policy reasons. Apple was backing HEVC (H.265) and opposing Google's VP9 for many years. It seems that they have since abandoned their uncompromising stance on these matters and are now open to supporting more codecs.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I think only the most recent AMDs with integrated GPUs have that. I don't think Apple skipping AV1 encoding is going to be a major issue for most people.
I thought AV1 streaming was becoming popular with gamers, which is why Nvidia, AMD and Intel have incorporated AV1 encoders in their latest GPUs.

It seems that they have since abandoned their uncompromising stance on these matters and are now open to supporting more codecs.
It is a bit strange that Apple has taken so long to adopt AV1, as it has been a member of the consortium controlling AV1 development since 2018.
 
  • Like
Reactions: Populus

Populus

macrumors 603
Aug 24, 2012
5,942
8,412
Spain, Europe
Probably just a decoder. If Apple is dragging their feet on AV1, then pushing encoding to the next-generation uncore would be in keeping with their modus operandi so far.
Many thanks for your thorough reply; you worded it very well.

I very much agree with your reasoning, especially having known Apple for decades and how they behave. But also because, as you said, there's not a lot of use for AV1 encoding, while there are a lot of applications for AV1 decoding: 1) it's a low priority when it comes to die space management, 2) Apple is not after the "Handbrake diehards" who compress big files of high-resolution video, and 3) it's always Apple policy to hold back new features to make future chips more appealing. I think you call it "breadcrumbing" in English, but I'm not a native speaker so I'm not sure.

I'm also not sure how long-lived AV1 is going to be. It's still not that common, and h.266 is around the corner.
Now I'm curious about h.266. Is it really around the corner? From what I see, most people keep using h.264, and the tests I've done with h.265 don't show a big improvement. I find it a bit disappointing; that's why all my hopes were on a standard like AV1.
 