
leman

macrumors Core
Oct 14, 2008
19,521
19,679
I dunno how long it will take for Intel to dig themselves out of this hole, or if they can, but for the foreseeable future they’ve got a big disadvantage.

They kind of already have. It is fairly clear that Alder Lake will beat Zen 3 in both single-core and multi-core performance. Maybe not efficiently, but that's not something the average PC user seems to be concerned about.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
They kind of already have. It is fairly clear that Alder Lake will beat Zen 3 in both single-core and multi-core performance. Maybe not efficiently, but that's not something the average PC user seems to be concerned about.
I guess I can’t argue with that if we throw out efficiency as a standard. Raw performance is what Intel is focused on.

Though I have to wonder where it will end, with GPUs taking ever more power and at least Intel heading in this direction to keep performance up (I assume AMD will follow suit). When will enough be enough? 5000W power supplies? It seems ridiculous.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I guess I can’t argue with that if we throw out efficiency as a standard. Raw performance is what Intel is focused on.

Though I have to wonder where it will end, with GPUs taking ever more power and at least Intel heading in this direction to keep performance up (I assume AMD will follow suit). When will enough be enough? 5000W power supplies? It seems ridiculous.
While the gaming crowd wants bigger, faster, more power-hungry GPUs, the average Windows user cares even less about that than about power efficiency (in other words, not at all). iGPUs of whatever generation are enough. Even I don't care much. Most Intel PCs don't have dGPUs.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
I guess I can’t argue with that if we throw out efficiency as a standard. Raw performance is what Intel is focused on.

Though I have to wonder where it will end, with GPUs taking ever more power and at least Intel heading in this direction to keep performance up (I assume AMD will follow suit). When will enough be enough? 5000W power supplies? It seems ridiculous.

AMD looks to be taking the same approach in future releases. I think they will always have a performance-per-watt advantage, but I think they're going to crank up PL2 as well.

The rumors I've seen on Zen 5: EPYC with 256 cores at 600 watts. It's a server chip, but it's going to be a monster.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
They kind of already have. It is fairly clear that Alder Lake will beat Zen 3 in both single-core and multi-core performance. Maybe not efficiently, but that's not something the average PC user seems to be concerned about.

In laptops it still matters for the PC crowd. Portability, quiet operation, cool running, good performance on battery, and good battery life are generally appreciated in this form factor beyond Apple as well (with obvious exceptions for certain use cases).
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
In laptops it still matters for the PC crowd. Portability, quiet operation, cool running, good performance on battery, and good battery life are generally appreciated in this form factor beyond Apple as well (with obvious exceptions for certain use cases).

Electricity could become dear again, too.

That's currently the case in China, where they've taken drastic measures to curtail electricity usage. People might want electricity for their cars more than for their PCs in the future.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I guess I can’t argue with that if we throw out efficiency as a standard. Raw performance is what Intel is focused on.

Though I have to wonder where it will end, with GPUs taking ever more power and at least Intel heading in this direction to keep performance up (I assume AMD will follow suit). When will enough be enough? 5000W power supplies? It seems ridiculous.

In laptops it still matters for the PC crowd. Portability, quiet operation, cool running, good performance on battery, and good battery life are generally appreciated in this form factor beyond Apple as well (with obvious exceptions for certain use cases).

I believe Alder Lake will have better sustained performance than Zen3/4 at the same power consumption. At least in benchmarks and other software that scales easily with the core count. Their efficiency cores are an undeniable advantage here.

Golden Cove (the performance core) will probably be significantly faster than Zen 3 at slightly higher power consumption, though. We'll see in a couple of days.

At any rate, it is my opinion that Intel is committing to a subpar architecture. They are focusing on benchmarks rather than flexible hardware. The E-core approach won’t scale in consumer software.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
I believe Alder Lake will have better sustained performance than Zen3/4 at the same power consumption. At least in benchmarks and other software that scales easily with the core count. Their efficiency cores are an undeniable advantage here.

Golden Cove (the performance core) will probably be significantly faster than Zen 3 at slightly higher power consumption, though. We'll see in a couple of days.

At any rate, it is my opinion that Intel is committing to a subpar architecture. They are focusing on benchmarks rather than flexible hardware. The E-core approach won’t scale in consumer software.

My understanding of their deal with TSMC is that they are doing it for the server market, where efficiency does matter. A lot. I don't think single-core performance is anywhere near as important there, and they are probably somewhat worried about competition from more powerful ARM cores now that Apple has demonstrated that it can be done.

The next question is: how fast can the software ecosystem move to ARM? My son's workplace provides Macs to their employees, and his will be their first Apple Silicon laptop. He thinks their important tools have ports, either supported or in beta. So getting Apple Silicon would be a risk, but they don't have a choice right now, unless they want to go to Costco or Best Buy and try to scarf one of the few remaining 16-inch Intel laptops. They're going to have to bite the bullet at some point, though.

They do a lot of work on some large x86 servers but I'd guess that they'd go to ARM if it were cost-effective and their software runs. Power is pretty expensive in New England as is cooling in the summer.
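On the "do the tools have ports" question: one quick check is whether a binary ships an arm64 slice (native on Apple Silicon) or only x86_64 (runs under Rosetta 2). On a Mac, `file` or `lipo -archs` answers this; the sketch below hand-rolls the same idea by parsing just the Mach-O magic and CPU type. The constants come from Apple's `<mach-o/loader.h>` and `<mach/machine.h>`; the function name and return format are my own.

```python
import struct

# Mach-O magic numbers and CPU types (from <mach-o/loader.h> / <mach/machine.h>)
MH_MAGIC_64 = 0xFEEDFACF      # thin 64-bit Mach-O, little-endian on disk
FAT_MAGIC   = 0xCAFEBABE      # universal ("fat") binary, big-endian header
CPU_ARM64   = 0x0100000C
CPU_X86_64  = 0x01000007
NAMES = {CPU_ARM64: "arm64", CPU_X86_64: "x86_64"}

def macho_archs(data: bytes) -> list:
    """Return the architectures found in a Mach-O binary's header bytes."""
    (magic,) = struct.unpack_from("<I", data, 0)
    if magic == MH_MAGIC_64:                      # thin binary: one cputype at offset 4
        (cputype,) = struct.unpack_from("<I", data, 4)
        return [NAMES.get(cputype, hex(cputype))]
    (magic_be,) = struct.unpack_from(">I", data, 0)
    if magic_be == FAT_MAGIC:                     # fat binary: nfat_arch slices,
        (nfat,) = struct.unpack_from(">I", data, 4)  # each fat_arch is 20 bytes
        archs = []
        for i in range(nfat):
            (cputype,) = struct.unpack_from(">I", data, 8 + 20 * i)
            archs.append(NAMES.get(cputype, hex(cputype)))
        return archs
    return []
```

On a Mac you would call it as `macho_archs(open("/usr/bin/true", "rb").read())`; an `arm64` entry in the result means no Rosetta translation is needed.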
 

Ploki

macrumors 601
Jan 21, 2008
4,324
1,560
I really don't understand these discussions. Performance per watt IS performance, because in real-world conditions you don't have infinite cooling available, especially not in a laptop enclosure.

The 12900HK is the unlocked CPU. I doubt the benchmark was done in a power envelope fitting a 16" MacBook; most HK parts performed poorly in MacBooks. I returned the 8950HK because it was ****ing garbage and didn't perform anywhere near as well as benchmarks suggested.

I don't care if the 12900HK can outperform the M1 Max by 5% if it pulls 150W and melts the enclosure. I seriously doubt it could perform anywhere near the leaked benchmark in a 16" enclosure with a dedicated GPU; more likely it would sound like a vacuum cleaner and throttle like ****.
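The enclosure argument can be made concrete with a toy model: what matters in a thermally limited chassis is the score at the power the chassis can dissipate, not the peak score. Everything below is hypothetical for illustration: the chip numbers, the power cap, and the sublinear scaling exponent (a common rule of thumb for voltage/frequency scaling) are all made up, not measurements of any real CPU.

```python
def sustained_score(peak_score: float, peak_watts: float, cap_watts: float,
                    scaling_exponent: float = 0.5) -> float:
    """Estimate the sustained score once power is capped by the enclosure.

    Assumes performance scales sublinearly with power (perf ~ power**k,
    k < 1), so cutting power hurts less than proportionally, but it hurts.
    """
    if peak_watts <= cap_watts:
        return peak_score          # chip never hits the thermal/power limit
    return peak_score * (cap_watts / peak_watts) ** scaling_exponent

# Two hypothetical chips in a hypothetical 60 W laptop chassis:
chip_a = sustained_score(peak_score=100.0, peak_watts=150.0, cap_watts=60.0)
chip_b = sustained_score(peak_score=95.0, peak_watts=40.0, cap_watts=60.0)
# chip_a loses a large chunk of its peak score; chip_b keeps all of it,
# so the "slower" chip on paper wins in this enclosure.
```

The design point: a benchmark run at unconstrained power tells you `peak_score`, but only the capped number describes what the laptop actually delivers.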
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
The next question is: how fast can the software ecosystem move to ARM?

For server stuff? Probably years. It is possible that the process has already started. Servers and supercomputing are the last true bastion of x86, and ARM is starting to chip away at it slowly. We will know for sure before 2025.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
For server stuff? Probably years. It is possible that the process has already started. Servers and supercomputing are the last true bastion of x86, and ARM is starting to chip away at it slowly. We will know for sure before 2025.

My son's field is probably like that. A lot of it is open source, but someone has to pull it and try to build it, and everyone likes to let someone else do it. Software engineers are lazy, also known as efficient.

I imagine some of the dedicated ARM companies have a stake in it (Apple doesn't seem to care), but they might not have the resources to do a lot of porting. Companies like Amazon and Google certainly have incentives.
 

EPO75

Suspended
Oct 12, 2016
162
167
Rotterdam
Apple can't just continue to add more cores without diminishing returns on performance. This was the article that caught my eye originally:

Yes they can; this M1 is A14 tech while they've already released the A15 and are now working on the A16, so there is a LOT to gain (if they want...).

And this topic: this Sunny dude just posts some link from a news site and wants attention? Intel might score better in Geekbench, but I'm sure this gen will not outperform the M1 yet, especially not in the performance-per-watt sector. It will take them some time, and hopefully they get there, as more competition is good.
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Apple is not overly concerned about where ARM heads, because Apple Silicon only uses the ISA (with additions for Apple-unique pieces). The microarchitecture is 100% Apple.
 

x3sphere

macrumors member
Apr 17, 2014
72
46
The problem is performance per watt. Even if it's faster on paper, I seriously doubt Intel's chip will come anywhere close to the M1 in a laptop configuration, due to thermal throttling.
 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
Yes they can; this M1 is A14 tech while they've already released the A15 and are now working on the A16, so there is a LOT to gain (if they want...).

And this topic: this Sunny dude just posts some link from a news site and wants attention? Intel might score better in Geekbench, but I'm sure this gen will not outperform the M1 yet, especially not in the performance-per-watt sector. It will take them some time, and hopefully they get there, as more competition is good.
Excuse me. I did not post those links to get attention.

EDIT: My misunderstanding. Sorry.
 
Last edited:

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
The problem is performance per watt. Even if it's faster on paper, I seriously doubt Intel's chip will come anywhere close to the M1 in a laptop configuration, due to thermal throttling.
We don't know what the power draw will be for sustained performance. I'd be surprised if a laptop using it could sustain that performance on battery alone.
 

theotherphil

macrumors 6502a
Sep 21, 2012
899
1,234
For server stuff? Probably years. It is possible that the process has already started. Servers and supercomputing are the last true bastion of x86, and ARM is starting to chip away at it slowly. We will know for sure before 2025.

ARM already has the supercomputer crown:

“Fugaku is powered by ARM A64FX chips, of which it has 7,630,848 cores. When tested against the HPL supercomputing benchmark, it set a world record of 442 petaflops, and against the high-performance computing artificial intelligence workload (HPC-AI) benchmark it maxed out at 2.0 exaflops, beating the previous record (also held by Fugaku) of 1.4 exaflops set in June 2020. According to Top 500, Fugaku's HPC-AI benchmark was "the first benchmark measurements above one exaflop for any precision on any type of hardware."”


And Amazon AWS is already using its ARM-based Graviton processors:

“AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors. They power Amazon EC2 general purpose (M6g, M6gd, T4g), compute optimized (C6g, C6gd, C6gn), and memory optimized (R6g, R6gd, X2gd) instances, that provide up to 40% better price performance over comparable current generation x86-based instances for a wide variety of workloads, including application servers, micro-services, high-performance computing, CPU-based machine learning inference, video encoding, electronic design automation, gaming, open-source databases, and in-memory caches. They deliver 7x more performance, 4x more compute cores, 5x faster memory, and 2x larger caches.”



And Google is most definitely interested in custom SOC’s, even if they don’t specifically mention ARM: https://cloud.google.com/blog/topics/systems/the-past-present-and-future-of-custom-compute-at-google
 
Last edited:

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
ARM already has the supercomputer crown:

“Fugaku is powered by ARM A64FX chips, of which it has 7,630,848 cores. When tested against the HPL supercomputing benchmark, it set a world record of 442 petaflops, and against the high-performance computing artificial intelligence workload (HPC-AI) benchmark it maxed out at 2.0 exaflops, beating the previous record (also held by Fugaku) of 1.4 exaflops set in June 2020. According to Top 500, Fugaku's HPC-AI benchmark was "the first benchmark measurements above one exaflop for any precision on any type of hardware."”


And Amazon AWS is using its ARM-based Graviton processors:

“AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors. They power Amazon EC2 general purpose (M6g, M6gd, T4g), compute optimized (C6g, C6gd, C6gn), and memory optimized (R6g, R6gd, X2gd) instances, that provide up to 40% better price performance over comparable current generation x86-based instances for a wide variety of workloads, including application servers, micro-services, high-performance computing, CPU-based machine learning inference, video encoding, electronic design automation, gaming, open-source databases, and in-memory caches. They deliver 7x more performance, 4x more compute cores, 5x faster memory, and 2x larger caches.”
  • Power consumption of 30 to 40 MW (cf. K computer: 12.7 MW)
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
The crown doesn’t matter; it’s all about market share. x86 is still over 90%.

I remember when Internet Explorer had 94% market share. Everyone wants to topple the king. Competition is a great thing.

I used to know an EE who did work for Intel. He was an incredible fanboy of the company, and then he got disillusioned as they had misstep after misstep and got into a lot of stuff that had nothing to do with engineering. I think this was around 2008-2012. I don't know what happened to him, but he was pretty angry with the direction of the company back then. Intel had the world and they let it slip away. It's still theirs to lose, though. I've seen it happen with a lot of tech companies.
 

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
My understanding of their deal with TSMC is that they are doing it for the server market, where efficiency does matter.
An interesting twist is that by placing large orders for TSMC 3nm chips, Intel will take away manufacturing capacity from rivals, including AMD and Apple, which are already struggling with supply constraints.


A lot. I don't think single-core performance is anywhere near as important there, and they are probably somewhat worried about competition from more powerful ARM cores now that Apple has demonstrated that it can be done.
It really depends on the workload. And Apple hasn't demonstrated yet that their approach can scale to workstation and server CPUs. One issue where I'm curious to see what they do in this space is memory. On the M1 they benefit a lot from having the RAM integrated in the CPU package (lower latency and higher bandwidth, also a key part of the GPU performance in their unified memory architecture). That's fine on consumer laptops, but it doesn't scale to the hundreds of gigs required for workstations and servers. If they switch to an external interface like DDR5, they will lose that advantage.

The next question is: how fast can the software ecosystem move to ARM?
Well, first they'll have to increase their market share, otherwise nobody will bother. Since the introduction of the first M1 Macs last year the market share of MacOS has actually gone down (probably because the pandemic caused high demand for work laptops, which are dominated by Windows/x86 even more than the consumer market).
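Bandwidth claims like the one above are easy to sanity-check on whatever machine you're sitting at. The sketch below is a deliberately crude probe of effective memory-copy bandwidth using only the standard library; it is not a rigorous benchmark (no cache control, single-threaded, no NUMA awareness), and the function name and defaults are mine.

```python
import time

def copy_bandwidth_gbs(size_mb: int = 128, repeats: int = 4) -> float:
    """Return an approximate memory-copy rate in GB/s (best of `repeats`)."""
    src = bytearray(size_mb * 1024 * 1024)   # large buffer, larger than any cache
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        dst = bytes(src)                     # one full read+write pass over the buffer
        best = min(best, time.perf_counter() - start)
        del dst                              # free the copy before the next pass
    return (size_mb / 1024) / best           # GB copied / best elapsed seconds

print(f"~{copy_bandwidth_gbs():.1f} GB/s effective copy bandwidth")
```

Run on a machine with in-package RAM versus one with socketed DIMMs, the gap this probe shows is the kind of advantage being discussed (keeping in mind it measures a single core's copy rate, not the full memory system).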
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
It really depends on the workload. And Apple hasn't demonstrated yet that their approach can scale to workstation and server CPUs. One issue where I'm curious to see what they do in this space is memory. On the M1 they benefit a lot from having the RAM integrated in the CPU package (lower latency and higher bandwidth, also a key part of the GPU performance in their unified memory architecture). That's fine on consumer laptops, but it doesn't scale to the hundreds of gigs required for workstations and servers. If they switch to an external interface like DDR5, they will lose that advantage.

Well, first they'll have to increase their market share, otherwise nobody will bother. Since the introduction of the first M1 Macs last year the market share of MacOS has actually gone down (probably because the pandemic caused high demand for work laptops, which are dominated by Windows even more than the consumer market).

I have yet to see any indication that they are interested in the server market.

We should see what they do with the Mac Pro - I'm guessing DIMM second-tier RAM.

Intel is getting hit by AMD and ARM, and they just made the desperation move of going to TSMC. That shows you how scared they are of the other players out there.

Apple sold $9 billion in Macs in Q3 which is pretty good for a relatively small product line.
 