
Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
Maybe I missed something, but AFAIK TSMC is the only fab even close to mass-producing 3nm, and that capacity has all been pre-booked by Apple.
Intel has reportedly booked a lot of TSMC's 3nm production capacity and will use it to manufacture some of their CPU dies while their own manufacturing unit tries to catch up to TSMC:


But there are rumors that TSMC has problems with their 3nm process, so some of their customers may experience delays:

 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Intel’s got two significant marks against them.
1. The performance of their latest chips is achieved partly due to Windows. If you’re not running the latest version of Windows 11, you’re not going to see that performance. With Apple Silicon, macOS and Linux benefit similarly. They’re already at the point where they’re leaning on the OS; how much MORE will Windows have to take up the slack for Intel?
2. They’ll always have to deal with a large decode step for every instruction they execute, while an instruction written for Apple Silicon runs on hardware designed to execute it directly. There’s not really a lot Intel can do about that; it’s the beast they’ve built, and now they have to carry it forward to maintain backwards compatibility.

Those things are going to stand between Intel and parity (power/performance) with Apple for the long term.
 

Fragment Shader

macrumors member
Oct 8, 2014
33
37
Toronto
Intel’s got two significant marks against them.
1. The performance of their latest chips is achieved partly due to Windows. If you’re not running the latest version of Windows 11, you’re not going to see that performance.
I'm not sure what this means. Windows 11 simply allocates threads to the E-cores when they're a better fit; it's not some magical performance unlocker. It's like saying the M1 'relies' on macOS to deliver its performance; I mean, every OS has to do that to use its architecture effectively.

The decoder issue is a big thorn though.
 

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
Intel’s got two significant marks against them.
1. The performance of their latest chips is achieved partly due to Windows. If you’re not running the latest version of Windows 11, you’re not going to see that performance. With Apple Silicon, macOS and Linux benefit similarly. They’re already at the point where they’re leaning on the OS; how much MORE will Windows have to take up the slack for Intel?
Windows 10 and older run just fine on Alder Lake. Thread scheduling may be less optimal, but in practice there is little perceptible difference. Besides, Intel is also contributing support for their "Thread Director" to the Linux kernel.

2. They’ll always have to deal with a large decode step for every instruction they execute, while an instruction written for Apple Silicon runs on hardware designed to execute it directly. There’s not really a lot Intel can do about that; it’s the beast they’ve built, and now they have to carry it forward to maintain backwards compatibility.
Modern ARM CPUs rely on microcode much like x86 CPUs do. Decoding and branch prediction are less complex due to fixed-length instructions, but compared to the other blocks on a modern CPU the instruction decoder is very small regardless.

The power efficiency of ARM CPUs is mostly because ARM came from the mobile side and historically had a focus on power efficiency, while x86 CPU designers focused more on raw power. But now we are seeing increasingly powerful ARM CPUs and increasingly efficient x86 CPUs. Both segments have their uses, which is why e.g. ARM has the N- and V-series CPUs and Intel is splitting their product lines into two distinct core types.
 
Last edited:

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
I'm not sure what this means. Windows 11 simply allocates threads to the E-cores when they're a better fit; it's not some magical performance unlocker. It's like saying the M1 'relies' on macOS to deliver its performance; I mean, every OS has to do that to use its architecture effectively.

The decoder issue is a big thorn though.
But there IS a performance difference between Win 11 and 10 in some tests, between 4 and 14% depending on the app. When Intel’s talking about being “faster” than Apple Silicon by 4%, it kinda matters. :)
 
  • Like
Reactions: JMacHack

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Windows 10 and older run just fine on Alder Lake. Thread scheduling may be less optimal, but in practice there is little perceptible difference. Besides, Intel is also contributing support for their "Thread Director" to the Linux kernel.
It’s anywhere between 4 and 14% depending on the app. It’s not a lot for now, and maybe Microsoft’s fine helping Intel out, for now. But Microsoft’s overtures in an ARM-ish direction could mean that some future iteration of Microsoft won’t be as readily helpful.

Modern ARM CPUs rely on microcode much like x86 CPUs do. Decoding and branch prediction are less complex due to fixed-length instructions, but compared to the other blocks on a modern CPU the instruction decoder is very small regardless.
I remember reading that Intel’s decoder was 20% of the chip, but that may have been when chips were smaller. The same size decoder on a larger chip would, of course, be a lower percentage. Their decoder is undoubtedly more complex than Apple’s though, and that will always mean more work per instruction.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Oh I don't disagree, that's more or less my entire point. Apple made monstrous chips that the rest of the industry was afraid to make, and proved that they could still be power efficient.
It's a mistake to think of this in those terms. Nobody was afraid, or needed Apple to prove that this strategy was possible. They already knew; it's a simple tradeoff study. Their problem is that they aren't willing to pay the literal price: monster chips that trade a higher transistor count for efficiency are more expensive to build.

There's a huge number of complex reasons why the PC industry went down the route it did, and why it's hard to get from where they are to where Apple is, but none of them have much to do with fear of the unknown.
 
  • Like
Reactions: JMacHack

vladi

macrumors 65816
Jan 30, 2010
1,008
617
The M1 Ultra's 2xGPU will not be anywhere close to a 3090 in cross-platform apps that support both CUDA and Metal.

The current M1 Max GPU trails the 3080 Ti by 400% in Octane render. Yes, it's four times slower. As a matter of fact, it's just a tad slower than a 1080 Ti.

Octane render scales nearly perfectly (99.9%) when you feed it multiple GPUs, with no loss whatsoever. That would mean the Ultra's 2xGPU option will still be slower than a 3080 Ti; I'd predict it to be at least 100% slower.
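
Back-of-the-envelope, taking those figures at face value (rough numbers, not a benchmark):

```python
# Sanity check of the scaling claim, using the quoted figures at face value:
# the M1 Max is said to be ~4x slower than a 3080 Ti in Octane, and Octane
# is said to scale almost perfectly when a second GPU is added.
m1_max_vs_3080ti = 1 / 4     # relative render speed of one M1 Max GPU
multi_gpu_scaling = 0.999    # claimed near-perfect scaling for a second GPU

m1_ultra_vs_3080ti = m1_max_vs_3080ti * (1 + multi_gpu_scaling)
print(m1_ultra_vs_3080ti)    # ~0.5, i.e. roughly half a 3080 Ti ("100% slower")
```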

But that is not the main concern with M1 GPUs. The biggest problem Apple is facing on the GPU side is that very few apps are actually willing to bring their GPU workflow to the M1. Keyshot still doesn't support GPU rendering on the M1. A poster in another thread mentioned Vectorworks and how the M1 Ultra will finally lift Vectorworks' performance. I'm sorry, but it will not, because Vectorworks has been a poorly optimized x86 app and it will remain as such on Metal or DirectX. How do I know? I work in it a few times a month, and it still needs a lot of time to spit out ugly renders on a Titan RTX paired with a Threadripper 3.
 
  • Like
Reactions: macsforme

elvisimprsntr

macrumors 65816
Jul 17, 2013
1,052
1,612
Florida
I just pre-ordered the next Intel "M1 Ultra counter" Xeon, but I don't know if I can handle the power consumption.
My electrical panel will probably shut down when I drive that thing to the max.
Intel recommends a 2.4-kilowatt power supply. I guess when the lights in your city go down, you can curse me, because I turned on the Intel counter-Xeon.
Does that include the electricity for the 30,000 BTU chiller you need to cool the [redacted] thing?
 
  • Like
Reactions: MayaUser

elvisimprsntr

macrumors 65816
Jul 17, 2013
1,052
1,612
Florida
Intel has reportedly booked a lot of TSMC's 3nm production capacity and will use it to manufacture some of their CPU dies while their own manufacturing unit tries to catch up to TSMC:


But there are rumors that TSMC has problems with their 3nm process, so some of their customers may experience delays:

Let Intel spend all their coin$ working out the process bugs.
 

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
It’s anywhere between 4 and 14% depending on the app.
And that was measured how? Which apps?

It’s not a lot for now, and, you know, maybe Microsoft’s fine helping Intel out, for now. But, that just means that Microsoft’s overtures in an ARM-ish direction could mean that some future iteration of Microsoft won’t be as readily helpful.
Sure. Microsoft will sabotage the platform that runs approximately 99.9% of Windows installations. :p

Whenever new CPUs come out, the major operating systems add support for their new features, usually with help from the CPU manufacturers. That happens all the time. Support for the M1 didn't magically appear in macOS either. In the case of Alder Lake, Intel worked with Microsoft for months, maybe years, to prepare Windows for the new heterogeneous architecture.

I remember reading that Intel’s decoder was 20% of the chip, but that may have been when chips were smaller. The same size decoder on a larger chip would, of course, be a lower percentage. Their decoder is undoubtedly more complex than Apple’s though, and that will always mean more work per instruction.
Here's what Jim Keller recently had to say about it:

"For a while we thought variable-length instructions were really hard to decode. But we keep figuring out how to do that. You basically predict where all the instructions are in tables, and once you have good predictors, you can predict that stuff well enough. So fixed-length instructions seem really nice when you're building little baby computers, but if you're building a really big computer, to predict or to figure out where all the instructions are, it isn't dominating the die. So it doesn't matter that much."
 
  • Like
Reactions: Fragment Shader

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
This would be a genius design if so, because power consumption doesn't scale linearly with clock speed (double the clock, more than double the power).
Actually, if everything else stays the same, increasing the clock frequency increases power consumption linearly. Unfortunately, raising the clock usually also means raising the supply voltage to compensate for noise, IIRC, and power scales quadratically with voltage. That's where most of the extra power goes when you increase the clock frequency.
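
As a rough sketch of that rule of thumb (the standard dynamic-power approximation P ≈ C * V^2 * f, with made-up normalized numbers):

```python
# Rough dynamic-power model for CMOS logic: P ≈ C * V^2 * f.
# Illustrative, normalized numbers only; real chips also have leakage, etc.

def dynamic_power(cap: float, volts: float, freq: float) -> float:
    return cap * volts**2 * freq

base = dynamic_power(1.0, 1.0, 1.0)

# +20% clock at the same voltage: power rises roughly linearly (+20%).
freq_only = dynamic_power(1.0, 1.0, 1.2)

# +20% clock that also needs ~10% more voltage to stay stable: the V^2 term
# makes power climb much faster than the clock did.
freq_and_volt = dynamic_power(1.0, 1.1, 1.2)

print(freq_only / base)       # 1.2
print(freq_and_volt / base)   # ~1.45
```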
 

huge_apple_fangirl

macrumors 6502a
Aug 1, 2019
769
1,301
Intel’s got two significant marks against them.
1. The performance of their latest chips is achieved partly due to Windows. If you’re not running the latest version of Windows 11, you’re not going to see that performance.
Ah, you mean the way Apple designs its software and silicon to work together?
With Apple Silicon, macOS and Linux benefits similarly.
What? No they don’t. The M1 benefits macOS only. A hacked-together project from Hector Martin is certainly interesting, but it is not Linux support. Unless you’re talking about a VM? Either way, the M1 is tied to macOS way more than Alder Lake is to Windows 11.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Ah, you mean the way Apple designs its software and silicon to work together?
No, in the way that Windows 11 has a scheduler designed to improve the performance of Alder Lake systems.
I’d guess that Linux folks will be modifying their schedulers to deal with the issue, but as of now, if a user wants the best performance from their Alder Lake system, Windows 11 is the way to get it.
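
A toy sketch of the idea (purely illustrative Python; this is not how Windows 11's Thread Director integration or macOS's scheduler actually work internally): the OS uses per-thread hints to steer work to the right core type.

```python
# Hypothetical, simplified hybrid-core scheduling: steer latency-sensitive
# threads to P-cores and background threads to E-cores.
from dataclasses import dataclass
from enum import Enum

class QoS(Enum):
    USER_INTERACTIVE = "user-interactive"  # e.g. UI thread, game loop
    BACKGROUND = "background"              # e.g. indexing, backups

@dataclass
class Thread:
    name: str
    qos: QoS

def pick_core_type(thread: Thread) -> str:
    # A scheduler that doesn't know about core types (an older OS on a hybrid
    # chip) would just grab any idle core; this one uses the QoS hint.
    return "P-core" if thread.qos is QoS.USER_INTERACTIVE else "E-core"

for t in (Thread("render", QoS.USER_INTERACTIVE), Thread("indexer", QoS.BACKGROUND)):
    print(f"{t.name} -> {pick_core_type(t)}")
```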
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
It's a mistake to think of this in those terms. Nobody was afraid, or needed Apple to prove that this strategy was possible. They already knew; it's a simple tradeoff study. Their problem is that they aren't willing to pay the literal price: monster chips that trade a higher transistor count for efficiency are more expensive to build.

There's a huge number of complex reasons why the PC industry went down the route it did, and why it's hard to get from where they are to where Apple is, but none of them have much to do with fear of the unknown.

Intel has had to learn this lesson before. With the Pentium 4, they lengthened the pipeline to 31+ stages by the time Prescott arrived, in the hope of ever-higher clock speeds. What they learned is that clock speed isn't everything. The Pentium D proved that point when it turned the computer into a frying pan (of course, that was Intel trying to catch up to AMD, who had brought the first dual-core CPU to market).

The solution looks pretty much obvious in hindsight: put more cores in place, make them more efficient (in terms of IPC), and clock them lower (so the voltage can be reduced and the power budget permits a decent core count on the chip). Intel learned this lesson years ago, came out with the Pentium M, and later based the Core 2 series on it (which of course led to Sandy Bridge, and the rest is history). Intel was very much on a roll for a while, and that change in direction served them well enough for Apple to switch from PowerPC to Intel back in the day. But when their 14nm woes began, they went back to ratcheting clock speeds higher, and now Intel and AMD are both pushing turbo clocks past 5 GHz again. They can roughly match the single-threaded performance of the M1 now, but at MUCH higher power consumption.
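
To put rough numbers on that tradeoff (completely made-up figures, using the usual P ≈ C * V^2 * f approximation): a wider, slower design can match throughput at a fraction of the power.

```python
# Made-up numbers, purely to illustrate the wide/slow vs. narrow/fast tradeoff.

def throughput(cores: int, ipc: float, ghz: float) -> float:
    return cores * ipc * ghz

def power(cores: int, volts: float, ghz: float, cap: float = 1.0) -> float:
    return cores * cap * volts**2 * ghz   # rough dynamic-power model

designs = {
    "narrow/fast": dict(cores=8,  ipc=1.0, ghz=5.2, volts=1.35),
    "wide/slow":   dict(cores=10, ipc=1.3, ghz=3.2, volts=0.95),
}

for name, d in designs.items():
    print(name,
          "throughput:", round(throughput(d["cores"], d["ipc"], d["ghz"]), 1),
          "power:", round(power(d["cores"], d["volts"], d["ghz"]), 1))
# Both land at ~41.6 "throughput units", but the wide/slow design burns well
# under half the power in this toy model.
```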

To date, nobody besides Apple has really taken the approach Apple took, with the level of performance Apple achieved. If people had thought it was possible (and practical) to stuff this kind of performance into such a small power envelope, Intel and AMD would almost certainly have gone for it, but Apple redefined what people thought was practical. Everyone's reaction was "Holy heavens, what kind of black magic is this?", and now ARM itself, AMD, and Intel are all scrambling to catch up.

And of course I'm not arguing that everyone actually thought it was truly impossible. Of course not, but obviously a lot of people didn't see it as practical (at the very least from a profit standpoint) in the consumer market before the M1. Apple changed that. They moved the goalposts by miles in one blow. Huge die sizes that were previously characteristic of $7,000 Xeons are now something competitors have to think about bringing to the consumer market of everyday i7s, and that changes the game considerably across the industry.

In short, Apple simply raised the bar. It's not unlike the days when we thought 4.7-inch screens on phones were "large". Companies went that route for a while, until Apple realized the obvious answer was to make an all-screen phone. Apple almost certainly wasn't the first company to think of doing it, but they were the first to actually succeed at it. And once they did, everyone else quickly followed suit.

Were these companies "afraid" to do it before? That's debatable. It's speculation, and I won't defend speculation as indisputable fact. But the reality is that none of them actually did it. Apple did. And once they did, they clearly changed the game industry-wide.
 
Last edited:

huge_apple_fangirl

macrumors 6502a
Aug 1, 2019
769
1,301
No, in the way that Windows 11 has a scheduler designed to improve the performance of Alder Lake systems.
I’d guess that Linux folks will be modifying their schedulers to deal with the issue, but as of now, if a user wants the best performance from their Alder Lake system, Windows 11 is the way to get it.
And macOS 11 shipped with a scheduler for M1's big.LITTLE design... what's the difference?
 

KingOfPain

macrumors member
Jan 8, 2004
31
17
Modern ARM CPUs rely on microcode much like x86 CPUs do.

The power efficiency of ARM CPUs is mostly because ARM came from the mobile side and historically had a focus on power efficiency, while x86 CPU designers focused more on raw power.
Do you have a source for the claim that ARM CPUs use microcode? Because so far I haven't been aware of that.

But your second claim is definitely wrong, because the first ARM processors were designed by Acorn Computer for their Archimedes desktop computers (back then ARM stood for "Acorn RISC Machine"). It was just a coincidence that the CPUs turned out very power efficient and thus became of interest to the embedded sector.
 
Last edited:

Fragment Shader

macrumors member
Oct 8, 2014
33
37
Toronto
No, in the way that Windows 11 has a scheduler designed to improve the performance of Alder Lake systems.
I’d guess that Linux folks will be modifying their schedulers to deal with the issue, but as of now, if a user wants the best performance from their Alder Lake system, Windows 11 is the way to get it.
Again, I just don't get the point of contention here. Of course a new CPU will require scheduler changes in an OS to take full advantage of said CPU.

The argument that this is a potential problem for Intel because Microsoft will stop bothering to optimize their OS for x86 and focus on ARM Windows going forward, an OS variant that currently has a fraction of a single percent of uptake, is bizarre. Sure, perhaps in a hypothetical future decades from now? But what's the point of even speculating that far out.
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Intel has not been shy about criticizing Apple Silicon for the past 18 months. Have they said anything about the M1 Ultra?

Are we going to get the Alder Lake Ultra?
They'll probably run commercials of someone roughly poking at the Studio trying to get it to respond as a touch device. Then they'll claim it's crap because it doesn't fold in half like a Yoga. Then they'll knock over the display, saying they can't do anything on it.

:rolleyes:
 
  • Like
Reactions: spiderman0616