
MRMSFC

macrumors 6502
Jul 6, 2023
371
381
lol, A17 Pro may be close to the 12900K, but that's not an Apple Silicon advantage. It still doesn't change the fact that Apple needs 3nm to compete with old Intel 7 or TSMC 7nm, which is what you are trying to say.
So if Intel gets down to 3nm, their CPUs will magically consume 500% less power?

Am I understanding you correctly?
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
The gateway, hmm. Anyway, I'm arguing in just as good a faith as you are, we're just looking at different things. A single core is no gateway to performance, a computer is NEVER just doing a single thing. The best gateway to performance is having multiple cores, even slower cores are okay if there's enough of them, and a good OS. I never look at single core performance when I'm buying a machine, or optimizing software to run faster, or basically anything else I do. I'm not a researcher or whatever you do, I'm an IT generalist (manager), I care about my users getting their work done on time, and that's pretty much it.

Ok, I will try to explain. Let's talk about server CPUs, and let's ignore the questions of memory/caches etc. for the moment, let's just talk about the CPU performance. What is your goal when designing an excellent server CPU? You have two constraints: die area (larger dies get prohibitively expensive to manufacture) and power dissipation (the more power you draw the harder it is to cool down your chip). So you want to maximise the performance while minimising the die area occupied by the CPU and keeping the power consumption within a certain range. It's a balancing/optimisation problem.

Let's look at die area first. An Intel Golden Cove core is about 5.7 mm². An AMD Zen4 core is 2.56 mm², and the compacted Zen4c is a very impressive 1.43 mm². Apple's A15 P-core is 2.55 mm², essentially identical to AMD's (same process). Now, which microarchitecture will allow you to build a server CPU with the largest number of cores?

Of course, the number of cores is not the only parameter. Zen4c is small and AMD can fit a lot of them onto a single chip, but it's also slower, so maybe Intel can overcome their area disadvantage by providing better overall performance? Well, let's see. AMD can run Zen4c cores at ~2.2 GHz consuming 2-3 watts each. If we were to quantify the performance of these cores, it would be about ~1250 GB6 points per core. Intel's Golden Cove can't really match this efficiency. Intel runs their cores at a similar frequency (in larger configurations even sub-2 GHz), with a power consumption of 6-10 watts per core (depending on the config). The performance of Intel cores running at those settings is similar to AMD's, likely a bit lower (I estimate around 1100 GB6 points for a Golden Cove running at 2.2 GHz). As you can see, Intel has a very hard time competing here. Their cores are much larger — so they need to spend considerably more money to build a comparable product. Even then, their cores use 2x more power for the same performance, limiting scalability. I hope you can also see why AMD is not using the "big" Zen4 core for their server products — because it would be pointless. They simply don't have the thermal headroom to run these cores at a higher frequency. They know that they will run the cores at under 3 GHz anyway, and the compact Zen4c allows them to pack almost twice as many cores in the same die area. Not to mention that Zen4 would consume more power for the same performance.
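A back-of-the-envelope sketch of that balancing problem, using the per-core figures above. The area and power budgets here are hypothetical round numbers, purely for illustration:

```python
# Back-of-the-envelope core counts under fixed area and power budgets,
# using the rough per-core figures quoted above (illustrative, not benchmarks).

CORES = {
    # name: (die area in mm^2, watts per core near 2.2 GHz, approx GB6 points)
    "Zen4c":       (1.43, 2.5, 1250),
    "Golden Cove": (5.70, 8.0, 1100),
}

def max_cores(area_mm2, watts, area_budget=300.0, power_budget=360.0):
    """How many cores fit before hitting either the area or the power budget?"""
    return int(min(area_budget // area_mm2, power_budget // watts))

for name, (area, watts, gb6) in CORES.items():
    n = max_cores(area, watts)
    print(f"{name:12s}: {n:3d} cores, ~{n * gb6:,} aggregate GB6 points")
```

With these made-up budgets Zen4c hits the power wall at 144 cores while Golden Cove manages only 45, a roughly 3.6x gap in aggregate throughput from the same constraints.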

And that's what I mean by "single core performance is the gateway to multicore performance". More performance per core at the same power gives you a faster overall computer. And this is the main reason why AMD looks much better in the current server war (Intel does have some unique points, like the matrix coprocessors and fast integrated HBM, but they won't matter for the majority of users). You might marvel at the 4000-point GB6 overclocked Intel CPU, but that won't work in a server — it already barely works in an overclocker's lab and requires a steady supply of liquid nitrogen to cool just one core. If you want fast multicore, you'd better have CPU cores that can go fast under extreme thermal constraints. Intel doesn't have that. AMD doesn't have that either (but they get ahead by having more, smaller, slower cores).

What about Apple though? Well, Avalanche (A15/M2) consumes around 6 watts (similar to an Intel server CPU core) running at about 3.5 GHz. This gives it performance close to 2500-2600 GB6 points. That's about 2.5x what Intel can do under similar thermal constraints, and roughly 2x over Zen4c (but Zen4c again uses half as much power). What would a hypothetical Apple server CPU look like if you keep everything else equal? It would have half as many cores as a Zen4c product (2x size difference), consume the same amount of power (again 2x difference), and offer similar performance (yet again 2x difference). But it would also offer 2x higher single-core burst in case you need to run an asymmetric workload.
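Plugging the quoted figures in makes the density/efficiency claim concrete (illustrative numbers from this post, not benchmarks):

```python
# Performance per mm^2 and per watt for the Avalanche-vs-Zen4c comparison,
# using the rough figures from the post.

avalanche = (2.55, 6.0, 2550)   # (die area mm^2, watts, approx GB6 points)
zen4c     = (1.43, 3.0, 1250)

def perf_per_mm2(core):
    area, _, gb6 = core
    return gb6 / area

def perf_per_watt(core):
    _, watts, gb6 = core
    return gb6 / watts

print(f"GB6/mm^2: Avalanche {perf_per_mm2(avalanche):.0f} vs Zen4c {perf_per_mm2(zen4c):.0f}")
print(f"GB6/watt: Avalanche {perf_per_watt(avalanche):.0f} vs Zen4c {perf_per_watt(zen4c):.0f}")
```

Density and efficiency land within about 15% of each other, while Avalanche's single-core number is roughly 2x Zen4c's — which is exactly the trade being described.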

Of course, Apple doesn't build server CPUs and they don't seem to be interested in doing so. The above is just an academic exercise that doesn't bear any relation to existing products. But it's a useful exercise if you want to compare the state of technology. Apple is ahead of the CPU curve because it offers efficiency/density comparable to that of Zen4c while delivering the absolute performance of much more power-hungry cores.

Sources:
- https://locuza.substack.com/p/die-walkthrough-alder-lake-sp-and
- https://www.semianalysis.com/p/zen-4c-amds-response-to-hyperscale
- https://www.semianalysis.com/p/apple-a15-die-shot-and-annotation
 

sunny5

macrumors 68000
Jun 11, 2021
1,837
1,706
So if Intel gets down to 3nm their cpus will magically gain 500% less power consumption?

Am I understanding you correctly?
500% less power is just your assumption and no proof. Still doesn't change the fact that Apple Silicon has lithography advantages over Intel as they are the only one using 3nm vs Intel 7 = TSMC 7nm.
 

R2DHue

macrumors 6502
Sep 9, 2019
292
270
Isn't the answer they have to keep compatibility with a legacy architecture?

Exaaaaaaaactly

Unlike Intel and Microsoft, Apple has an amazing history of transitioning its Mac operating systems, SDKs and core apps to entirely different hardware (CPU) platforms.

For anyone unfamiliar or who are familiar but like nostalgia…

It was one of the “bumpier roads” in the history of Apple transitioning Macs to a different hardware architecture, but it transitioned its Mac platform from the Motorola 68000-series CPU to the Apple-IBM PowerPC RISC CPU — which had a VERY different ISA from the Motorola 68000's (and from the Intel x86 ISA, for that matter).

So Apple had to emulate the Motorola 68000-series hardware in software, or else the entire body of existing Mac software would no longer run for the existing Mac user base and would be rendered obsolete on the new PowerPC Macs running the new PowerPC-based operating system.

FACTOID: in 1992, years BEFORE Apple made the decision to move to the PowerPC processor, there was a project at Apple known internally as “Star Trek” where the Mac’s “System 7” operating system was rewritten to run natively on an Intel x86 PC. It successfully booted up System 7.1 on an Intel 486 PC in 1992, but would have required all Mac software to be recompiled to run on the PC/Intel architecture. For that and many other reasons, “Star Trek” was cancelled.

So, the Mac operating system ran on the Intel x86 PC architecture before the Mac operating system ran on the Intel x86 PC architecture… 🤔

(Technically, it was the Unix/NextStep-based Mac OS X the second time, but…still…)

An interim solution in the migration to PowerPC was software apps written as “FAT binaries,” where the same app would run one “fork” or codebase if it was being run on a 68000 Mac, or it would take the other “fork in the road” to run PowerPC code, if the software was running on a PowerPC-based Mac.

Apple made a lot of this easy in their SDK, but not enough that it didn’t still require special efforts on the part of programmers and Mac software developers. (They didn’t like it.)

The biggest struggle was weaning ISVs off their still-compatible Motorola 68000-series software and incentivizing them to write “clean” (thus fast!) PowerPC-only Mac apps that were no longer compatible with Motorola 68000-based Macs.

But eventually — just like how the Motorola 68000 line of CPUs easily outperformed any other microprocessor on the market at the time, until it didn't — the PowerPC easily outperformed any other microprocessor on the market at the time — until it didn't.

Apple used PowerPC CPUs in Macs from 1994 until 2006, during which time Intel’s CISC x86 CPUs caught up with and surpassed the PowerPC RISC CPU. Chalk Intel’s success up to “brute force” and money.

(There was even a short-lived “moment” in the PowerPC’s history when it ran Windows faster on a Mac in an emulator than Windows ran on its native Intel x86 CPU! Short lived.)

So, after the transition was complete, the PowerPC that initially smoked any Intel CPU experienced growing pains after a few years and couldn’t keep up with ever-faster new Intel CPUs.

And high performance PowerPC processors were ill-suited for laptop Macs because of their large die size, high power requirements and huge heat generation. And the laptop market was really growing at the time.

So…Apple changed again and transitioned the Mac platform to Intel CPUs!

This transition was MUCH smoother than the transition to PowerPC.

Apple made Macs that were so much like Windows PCs, in fact, that Apple’s “Boot Camp” utility allowed you to install Windows natively on your Mac and boot straight into Windows instead of Mac OS. Windows “saw” your Mac as a PC. (Because it was? 🤔…)

But the most recent transition to ARM processor-based, custom Apple-made processors was the smoothest transition yet.

I think a lot of this is owed to Apple adopting Steve Jobs’ “NextStep/OpenStep” Unix operating system that the famous Mac GUI and user interface ran atop.

Mach/Darwin/Berkeley BSD Unix was the non-user-facing, underlying operating system that ran in place of the completely gutted Mac operating system. It’s the GUI layer on top that makes it still work and feel “like a Mac.”

Next’s software engineers stressed “hardware independence” and “hardware abstraction” in software design from the beginning, so that NextStepOS and the OpenStep API were always designed with portability to other platforms front-and-center in their minds. What an amazing team of people.

Astonishingly, Next’s operating system had been successfully ported from the Motorola 68000 CPU to Intel x86 to Hewlett Packard’s HP PA-RISC microprocessor to Sun Microsystems’ SPARC RISC architecture CPU to the Motorola 88000 RISC CPU and to the PowerPC CPU (mainly to impress Apple when Next was pitching it to them).

And before porting “OS X” to ARM, Apple had already ported it from PowerPC to Intel. Steve Jobs always knew this was possible because he’d already done it at Next.

Strict, fastidious adherence to hardware independence and hardware abstraction is why I also believe that Apple’s transition from Intel to ARM/Apple Silicon was the easiest, smoothest migration of the Mac platform to a different CPU yet.

(And btw, NextOS and Mac OS — from OS X on — IS UNIX! It’s not “Unix-like,” as so many like to say. It’s fully POSIX compliant and IS a standard Unix variant!)

Mac OS X remained SO portable that it was the underlying operating system of the very first (ARM-processor-based) iPhone OS — just with an entirely different GUI layer on top of OS X (than the Mac’s GUI) — and has been as portable ever since.

In fact, the eminently portable OS X beats at the heart of iOS, tvOS, watchOS, audioOS (HomePod) and even the upcoming visionOS.

Among the many visionary ideas of the gifted team of individuals at Next were hardware independence, portability and a “Write Once, Run Everywhere” mentality.

And, today, as we commemorate the 12th anniversary of Steve Jobs’ death, all of these people should be remembered by history.

And it’s just so great to see strict adherence to this vision live on at Apple right to this day.
 
Last edited:

picpicmac

macrumors 65816
Aug 10, 2023
1,239
1,833
Strict, fastidious adherence to hardware independence and hardware abstraction is why I also believe that Apple’s transition from Intel to ARM/Apple Silicon was the easiest, smoothest migration of the Mac platform to a different CPU yet.

Amen.

And thanks for explaining to the young'uns how we got to where we are today.
 
  • Love
Reactions: R2DHue

leman

macrumors Core
Oct 14, 2008
19,520
19,671
But the most recent transition to ARM processor-based, custom Apple-made processors was the smoothest transition yet.

Arm64 has been carefully designed to enable smooth transitions from x86. The basic data type sizes and alignments are identical between the two platforms, and that's the major pain point when ensuring that your software works correctly on a new architecture.
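You can see this property on your own machine; here is a quick sketch using Python's ctypes, assuming an LP64 platform (macOS or Linux) on either architecture:

```python
# Check the C fundamental type sizes/alignments on the host. On both
# x86-64 and arm64 under LP64 (macOS, Linux) these come out identical,
# which is why struct layouts survive an x86 -> arm64 port byte for byte.
import ctypes as ct

for name, t in [("short",  ct.c_short),
                ("int",    ct.c_int),
                ("long",   ct.c_long),
                ("void*",  ct.c_void_p),
                ("double", ct.c_double)]:
    print(f"{name:7s} size={ct.sizeof(t)} align={ct.alignment(t)}")
```

On both architectures the sizes print 2, 4, 8, 8, 8 — contrast that with, say, a 32-bit to 64-bit port, where `long` and pointers change size and break serialized data and packed structs.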
 
  • Like
Reactions: R2DHue

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
500% less power is just your assumption and no proof. Still doesn't change the fact that Apple Silicon has lithography advantages over Intel as they are the only one using 3nm vs Intel 7 = TSMC 7nm.
I think you’re being transparently ridiculous and no one here is buying your argument.

Intel uses many times the power on a high-end desktop chip to get the same performance as a phone. That's a fact.

You can either cope with it or not.
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,447
Europe
I hope you can also see why AMD is not using the "big" Zen4 core for their server products — because it would be pointless.
That's a weird thing to say since AMD not only makes server processors with up to 128 Zen 4c cores (Bergamo), they also make server processors with up to 96 Zen 4 cores (Genoa).
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,447
Europe
A single core is no gateway to performance, a computer is NEVER just doing a single thing. The best gateway to performance is having multiple cores, even slower cores are okay if there's enough of them, and a good OS. I never look at single core performance when I'm buying a machine, or optimizing software to run faster, or basically anything else I do. I'm not a researcher or whatever you do, I'm an IT generalist (manager), I care about my users getting their work done on time, and that's pretty much it.
The point of single-core performance isn't whether a computer is doing one or more things, it's whether any and all single things can scale to multiple cores! Unfortunately there are many single things that simply do not scale to multiple, let alone many, threads, and that's why single-core performance is still very important. Also, you can always split the performance of a fast core by letting it run multiple threads (with SMT or with the usual time sharing), but you can only combine the performance of multiple slower cores to run a single task faster when that task scales well to multiple threads.
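Amdahl's law makes that limit concrete; a minimal sketch:

```python
# Amdahl's law: overall speedup on n cores when only a fraction p of the
# work parallelizes. Shows why per-core speed still matters.

def speedup(p, n):
    """p = parallel fraction (0..1), n = number of cores."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 4 cores -> {speedup(p, 4):.2f}x, 64 cores -> {speedup(p, 64):.2f}x")
```

A task that is half serial never reaches 2x no matter how many slow cores you add, whereas doubling single-core speed speeds up the serial half too.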
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,447
Europe
The best gateway to performance is having multiple cores, even slower cores are okay if there's enough of them, and a good OS.
Niagara failed big time. And for user-facing computers the answer is clear, you always want a couple of really fast cores to keep the single-threaded portions, like single web pages and large parts of the user interface, as snappy as possible. There's a reason iPhones have 2 large cores. There's a reason a Mac with an M1 feels just as fast as one with an M1 Max for many "everyday" applications. Sure, we don't want to go back to single processor machines, but we also don't want a Mac with 16 or even 24 E-cores. That would suck for too many things, including everything interactive.
 

sunny5

macrumors 68000
Jun 11, 2021
1,837
1,706
I think you’re being transparently ridiculous and no one here is buying your argument.

Intel uses many times the power to get the same performance as a phone on a high end desktop chip. That’s a fact.

You can either cope with it or not.
And I said you are totally ignoring the lithography and that's the fact: TSMC 3nm vs Intel 7 = TSMC 7nm. What kind of comparison is that? Your logic already failed.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
The point for single-core performance isn't whether a computer is doing one or more things, it's whether any and all single things can scale with multiple cores!
As I've said many times, that's an irrelevant statistic to me, because a computer *never* does only one thing at a time. Even when we had single core computers, once we got true multitasking, we got interrupts for other tasks. And now we have many cores and even better multitasking.

And yes, there are tasks that can't be parallelized, so what? You can still increase your throughput by running that task multiple times -- it's easy. Run 1 and it takes an hour; run 4 at a time and you can get at least 3 done in that same time. That is assuming you have more datasets to process. If you had only one dataset, period, you'd have a point, but that's not any real-world situation I've ever been in. (Just a guess, since we really haven't specified what that task is.)
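The "run 4 at a time" idea can be sketched with a process pool. Everything here is made up for illustration — `crunch()` stands in for the hour-long serial job:

```python
# Throughput vs latency: a task that can't be parallelized internally can
# still be run on several datasets at once via a process pool.
from concurrent.futures import ProcessPoolExecutor

def crunch(dataset):
    # Stand-in for the unparallelizable, hour-long computation.
    return sum(x * x for x in dataset)

if __name__ == "__main__":
    datasets = [range(10_000)] * 4      # four independent inputs
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(crunch, datasets))   # wall time ~1 job, not 4
    print(results)
```

With 4 workers and 4 independent datasets the wall-clock time is roughly that of a single run, even though each individual run is strictly serial.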

One of the jobs I had, I was the developer that optimized for performance of internally built applications and systems, and never once did I just throw new hardware at it (to get faster single cores) and call it a day, and I never needed to either.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
That's a weird thing so say since AMD not only makes server processors with up to 128 Zen 4c cores (Bergamo), they also make server processors with up to 96 Zen 4 cores (Genoa).

Bergamo (Zen4c) is newer and replaces Genoa in areas where compute density is paramount (e.g. cloud computing). Although you are right that AMD has released a new Zen4-based server part (Genoa-X) targeting complex scientific compute — its selling points are large amounts of cache (V-cache) as well as higher peak clocks for asymmetric compute. So yes, I got a bit carried away in my previous post. What I meant is that AMD's server offerings going forward will most likely be based on a compact version of the core to maximise perf/watt and area. Some speciality products (e.g. personal workstations etc.) will still need higher single-core performance, where a compact-only core implementation won't be sufficient.
 
  • Like
Reactions: Basic75

sunny5

macrumors 68000
Jun 11, 2021
1,837
1,706
Why should intel get credit for their inability to match TSMC's process? What a strange opinion.
Then you are saying that to match the performance of the 12900K, which is on inferior Intel 7 = TSMC 7nm, Apple has to use TSMC 3nm, taking advantage of lithography, and that only proves that Apple can't keep up in performance, only in power consumption. Simple logic, as the A17 Pro's performance is close to the Intel 12900K.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
And I said you are totally ignoring the lithography and that's the fact: TSMC 3nm vs Intel 7 = TSMC 7nm. What kind of comparison is that? Your logic already failed.
what does that have to do with anything?
i need computers for my business...on the market i have to choose TSMC vs Intel....it is what the companies are capable of offering me and the others
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Niagara failed big time. And for user-facing computers the answer is clear, you always want a couple of really fast cores to keep the single-threaded portions, like single web pages and large parts of the user interface, as snappy as possible. There's a reason iPhones have 2 large cores. There's a reason a Mac with an M1 feels just as fast as one with an M1 Max for many "everyday" applications. Sure, we don't want to go back to single processor machines, but we also don't want a Mac with 16 or even 24 E-cores. That would suck for too many things, including everything interactive.
I don't know what niagara is at the moment, but I still disagree about a "couple" fast cores, I'd still take a 4 slower core machine over a 2 faster core machine every time. (and I'd take an 8 core machine over that.) As for the iPhone, it also only has a couple of fast cores because of its thermal requirements.

As for a Mac with 16 or 24 e cores, I'd take it if I had a job for it. I don't use Macs to make money.
 

sunny5

macrumors 68000
Jun 11, 2021
1,837
1,706
what does that have to do with anything?
i need computers for my business...on the market i have to choose TSMC vs Intel....it is what the companies are capable of offering me and the others
"You mean the A17 which gets similar single core performance to the 12900k?

That “nowhere near”?"

Someone claimed that the A17 is close to the 12900K, and yet he totally forgot that the A17 uses TSMC 3nm while Intel uses Intel 7, which is 2 gens behind. It's not surprising, since Apple is the only one with the lithography advantage, and yet claiming it's an Apple advantage is a joke.
 

APCX

Suspended
Sep 19, 2023
262
337
Then you are saying that to match the performance of 12900K which is inferior Intel 7 = TSMC 7nm, Apple has to use TSMC 3nm by taking advantage with lithography and it only proves that Apple can't keep up with the performance but power consumption. A simple logic as A17 Pro's performance is close to Intel 12900K.
No. You said that we should ignore the fact that intel fell behind TSMC on process node. Everything else you wrote is a figment of your imagination.

Let’s try and keep this factual.
 

APCX

Suspended
Sep 19, 2023
262
337
"You mean the A17 which gets similar single core performance to the 12900k?

That “nowhere near”?"

Someone claimed that the A17 is close to the 12900K, and yet he totally forgot that the A17 uses TSMC 3nm while Intel uses Intel 7, which is 2 gens behind. It's not surprising, since Apple is the only one with the lithography advantage, and yet claiming it's an Apple advantage is a joke.
So intel is so bad that they are two generations behind TSMC?
 

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
"You mean the A17 which gets similar single core performance to the 12900k?

That “nowhere near”?"

Someone claimed that the A17 is close to the 12900K, and yet he totally forgot that the A17 uses TSMC 3nm while Intel uses Intel 7, which is 2 gens behind. It's not surprising, since Apple is the only one with the lithography advantage, and yet claiming it's an Apple advantage is a joke.
It doesn't matter to us....what matters to us is the present day, not waiting until Intel catches up...for us time is money; for you it's just a little thing to argue about with everyone, that's it
 
  • Haha
Reactions: sunny5

leman

macrumors Core
Oct 14, 2008
19,520
19,671
Someone claimed that the A17 is close to the 12900K, and yet he totally forgot that the A17 uses TSMC 3nm while Intel uses Intel 7, which is 2 gens behind. It's not surprising, since Apple is the only one with the lithography advantage, and yet claiming it's an Apple advantage is a joke.

Lithography advantage of two node steps alone cannot reduce power consumption by a factor of 6. I don't even understand what your argument means. So it's unfair to compare Apple to Intel since the former uses a more advanced node? Fine! Does that mean you're saying that Intel CPUs are obsolete garbage made on an outdated process?
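A rough sanity check on the node argument. The ~30% per-step figure below is an assumption, on the generous end of typical foundry marketing claims for power reduction at equal performance:

```python
# If one full node step cuts power by ~30% at equal performance (a generous,
# vendor-marketing figure), two steps compound to roughly half the power.

per_step = 0.70          # assumed power ratio after one node step
two_steps = per_step ** 2
print(f"power after two node steps: {two_steps:.2f}x of the original")
```

That is about a 2x reduction — nowhere near the ~6x gap in question, so process alone can't account for it.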
 

APCX

Suspended
Sep 19, 2023
262
337
I don't know what Niagara is off the top of my head, but I still disagree about a "couple" of fast cores; I'd still take a machine with 4 slower cores over one with 2 faster cores every time. (And I'd take an 8-core machine over that.) As for the iPhone, it also only has a couple of fast cores because of its thermal requirements.

As for a Mac with 16 or 24 e cores, I'd take it if I had a job for it. I don't use Macs to make money.
How much slower would you accept a multi core machine?
 
  • Like
  • Haha
Reactions: Basic75 and sunny5

sunny5

macrumors 68000
Jun 11, 2021
1,837
1,706
No. You said that we should ignore the fact that intel fell behind TSMC on process node. Everything else you wrote is a figment of your imagination.

Let’s try and keep this factual.
And I said, Apple has to use the most advanced lithography to compete with old tech. You said it yourself: A17 is close to 12900K. 2 gens of difference.

So intel is so bad that they are two generations behind TSMC?
So is it good that Apple has to use tech that's 2 gens more advanced to compete with old tech? Ironic.

You see, your argument failed after all.
 
  • Haha
Reactions: Romain_H

APCX

Suspended
Sep 19, 2023
262
337
And I said, Apple has to use the most advanced lithography to compete with old tech. You said it to yourself, A17 is close to 12900K. 2 gen differences.


So is it good that Apple has to use 2 gen advanced tech to compete with old tech? Ironic.

You see, your argument failed after all.
So you confirmed that Intel is two generations behind and still can’t compete with Apple’s performance per watt?

Or are you saying that using newer technology is a failure? In that case I assume intel won't be moving to a newer node? That would be a failure, right?


I’d be interested in your opinion as to why intel has failed so badly?
 