
Argon_

macrumors 6502
Nov 18, 2020
425
256
I highly doubt Intel will ever regain that business. Apple's PA Semi acquisition was the nail in the coffin for Intel's dominance of the Mac.

Furthermore, Apple Silicon is so far ahead and so full of Mac-oriented features at this point that it makes no sense to switch back to an Intel CPU.

That's not what I said.

What I said: "The only likely situation I see for Intel regaining Apple's business would be ASi fabbed on an Intel process node"
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
That's not what I said.

What I said: "The only likely situation I see for Intel regaining Apple's business would be ASi fabbed on an Intel process node"
Even worse. Intel has been a joke to the computing world over its failure to reach the next node in its fabs, staying stuck on the same process for over three years. TSMC managed that transition, which is why TSMC is now the gold standard in fabrication versus a once-proud Intel.
 

Argon_

macrumors 6502
Nov 18, 2020
425
256
Even worse. Intel has been a joke to the computing world over its failure to reach the next node in its fabs, staying stuck on the same process for over three years. TSMC managed that transition, which is why TSMC is now the gold standard in fabrication versus a once-proud Intel.
"Seems unlikely at the moment, but not impossible."

Competition benefits us all. If Intel claws its way back, consumers win.
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
"Seems unlikely at the moment, but not impossible."

Competition benefits us all. If Intel claws its way back, consumers win.
Doubt it; Intel is too far behind in fab and process nodes. Although, to be fair, Intel has been rumored to be spinning off its fabs into a standalone unit to win business away from Samsung and GloFo, but it remains to be seen whether that happens.
 
  • Like
Reactions: Argon_

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
[Attached image: perf-trajectory.png]

This graph is a tad suspect in its experimental design. There is a whole string of desktop processors ("K" suffix, for overclocking) and then, at Gen 11 ... a laptop processor. I suppose that is in part due to Apple's claims of "desktop supremacy" for their cellphone SoC. I suspect the other hand-wavy thing here is that Gen 11 on the desktop was a backported (10nm+ -> 14nm) implementation. In part, though, the backslide at Gen 11 is because the chart made an "apples vs. oranges" switch.



Those two curves have substantial fab-process contributions to them. Intel is mostly stuck, while Apple leapfrogs between a process shrink and a substantially bigger die on alternating iterations: either do a shrink, or stay on the same node and go with a bigger die with more stuff on it. If TSMC slows down (which its roadmaps indicate), so will Apple.

There is a lot of stuff Intel processors already had in 2014-2018 (advanced branch-target prediction, bigger register sets, etc.) that Apple only added as the fab process nodes got denser. Apple did catch Intel while Intel was riding a less dense fab node.
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
This graph is a tad suspect in its experimental design. There is a whole string of desktop processors ("K" suffix, for overclocking) and then, at Gen 11 ... a laptop processor. I suppose that is in part due to Apple's claims of "desktop supremacy" for their cellphone SoC. I suspect the other hand-wavy thing here is that Gen 11 on the desktop was a backported (10nm+ -> 14nm) implementation. In part, though, the backslide at Gen 11 is because the chart made an "apples vs. oranges" switch.



Those two curves have substantial fab-process contributions to them. Intel is mostly stuck, while Apple leapfrogs between a process shrink and a substantially bigger die on alternating iterations: either do a shrink, or stay on the same node and go with a bigger die with more stuff on it. If TSMC slows down (which its roadmaps indicate), so will Apple.

There is a lot of stuff Intel processors already had in 2014-2018 (advanced branch-target prediction, bigger register sets, etc.) that Apple only added as the fab process nodes got denser. Apple did catch Intel while Intel was riding a less dense fab node.
"K" processors only have their multiplier unlocked. They aren't superior, except they cost more. They just allow for a better and more fine tuned overclock.
 
  • Like
Reactions: KeithBN and Basic75

Freeangel1

Suspended
Jan 13, 2020
1,191
1,755
I don't think you understand the future and where everything is headed when it comes to computers.

Everything is going to be cloud-based.

As high-speed internet becomes more common and faster, with 6G and beyond, you will run ALL your software from the cloud and through your browser.

The horsepower and graphics power will all be up in the cloud for you to rent or borrow, as will the software.

So, you see, the CPU in your PC or Mac will become less important. It will of course still need to be energy efficient.

If you don't believe me, wait and see.

Most movies, video games, music and books are cloud-based already.
 

Realityck

macrumors G4
Nov 9, 2015
11,414
17,205
Silicon Valley, CA
I don't think you understand the future and where everything is headed when it comes to computers.

Everything is going to be cloud-based.

As high-speed internet becomes more common and faster, with 6G and beyond, you will run ALL your software from the cloud and through your browser.

The horsepower and graphics power will all be up in the cloud for you to rent or borrow, as will the software.

So, you see, the CPU in your PC or Mac will become less important. It will of course still need to be energy efficient.

If you don't believe me, wait and see.

Most movies, video games, music and books are cloud-based already.
What happens when the cloud fails? All those users are hosed. This is akin to the earlier use of thin clients instead of PCs, and of putting multiple user sessions of business software on servers.
 
  • Like
Reactions: KeithBN and Basic75

altaic

Suspended
Jan 26, 2004
712
484
I don't think you understand the future and where everything is headed when it comes to computers.

Everything is going to be cloud-based.

If you don't believe me, wait and see.

Wow, here you're making sweeping 20-30 year predictions, while all I can usually handle are targeted 5-10 year predictions. Good luck with that, Nostradamus!
 
  • Like
Reactions: KeithBN and Basic75

Gnattu

macrumors 65816
Sep 18, 2020
1,107
1,671
Don't we need to use real-world applications to measure CPU perf? Synthetic benchmarks hold no weight in my book.
What kind of real-world application do you want? My M1 compiles Node.js bundles (mostly single-core) faster than my 3900X, has higher per-core throughput on the backend server software I work on for my job, and consumes a fraction of the power. I recently bought an i9-12900, and it still can't outperform my M1 Max in all cases (it does in a lot of cases); for example, on the Python scripts we run every day, the 12900 is even slightly slower.
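If anyone wants to reproduce that kind of comparison, a minimal sketch of how I'd time a single-core job on each machine (the workload function here is just a stand-in, not my actual scripts):

Code:
# Minimal single-core timing harness; run the same script on each machine and compare.
import time

def workload():
    # stand-in for the real job (compile step, Python script, etc.)
    return sum(i * i for i in range(10_000_000))

times = []
for _ in range(5):
    t0 = time.perf_counter()
    workload()
    times.append(time.perf_counter() - t0)

print(f"best of 5: {min(times):.3f} s")  # compare this number across machines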
 

Gnattu

macrumors 65816
Sep 18, 2020
1,107
1,671
Fanboys like that chart since SPECint easily fits within CPU cache and so tends to show overly optimistic performance compared to real-world workloads. And that's only one aspect of the total package, so it doesn't factor in things like slow storage I/O, broken trackpad palm rejection, etc.
For SPECfp, which is very hard to fit within CPU cache (and even causes a DRAM bandwidth bottleneck on Alder Lake systems with DDR4 RAM), the M1 series performs even better.
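To illustrate the cache-vs-DRAM point with a toy example (this is nothing like SPEC itself, just a sketch of why working-set size matters; the array sizes are arbitrary):

Code:
# Toy sketch: effective bandwidth summing an array that fits in cache vs. one that spills to DRAM.
import time
import numpy as np

def effective_gbps(n_elements, passes=20):
    data = np.ones(n_elements, dtype=np.float64)
    t0 = time.perf_counter()
    total = 0.0
    for _ in range(passes):
        total += data.sum()  # streams over the whole array each pass
    elapsed = time.perf_counter() - t0
    return data.nbytes * passes / elapsed / 1e9

print("cache-resident (~0.5 MB):", round(effective_gbps(64 * 1024), 1), "GB/s")
print("DRAM-bound     (~512 MB):", round(effective_gbps(64 * 1024 * 1024), 1), "GB/s")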
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
"K" processors only have their multiplier unlocked. They aren't superior, except they cost more. They just allow for a better and more fine tuned overclock.

This whole thread's theme is about "winning" the single-threaded drag-racing war. An overclocked CPU coupled to a heavy-duty liquid and/or refrigerated cooler is likely going to win the single-threaded drag race. If that is the focus, then they are superior along that dimension of evaluation.

Most laptop dies aren't K-status parts, which again means it is a different die and a different design tuning we're really talking about here. (K dies tend not to have the biggest iGPU coupled to them.)

Upclocking both the memory and the CPU will, at some point, leave Apple's LPDDR system behind.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
My pondering thought is: when will Apple Mac users regret that Apple moved away from Intel? Maybe when Intel is beating Apple in performance per watt, or could it be when Intel has 50% more performant CPUs?
Anyone who is most concerned with raw performance isn't using a Mac and won't ever be using a Mac. That's a given. So Intel having a 50% more performant CPU doesn't matter; people use macOS for macOS. Intel could release a 100% better-performing chip, but it doesn't run macOS, Logic, or FCP.

And anyway, the likelihood of that occurring is small. Unless Intel is willing to break backwards compatibility (they're not), there's so much cruft in Intel's designs that getting more performance isn't tied to a better-performing processor; it's now tied to the OS (Windows 11's scheduler) and to just making the chip tough enough to draw more power. A future Intel chip requiring 500 W is not out of the question at this point.
 
  • Like
Reactions: KeithBN

Danfango

macrumors 65816
Jan 4, 2022
1,294
5,779
London, UK
I don't think you understand the future and where everything is headed when it comes to computers.

Everything is going to be cloud-based.

As high-speed internet becomes more common and faster, with 6G and beyond, you will run ALL your software from the cloud and through your browser.

The horsepower and graphics power will all be up in the cloud for you to rent or borrow, as will the software.

So, you see, the CPU in your PC or Mac will become less important. It will of course still need to be energy efficient.

If you don't believe me, wait and see.

Most movies, video games, music and books are cloud-based already.

That is never going to be the default model for computing, and I say that as a massive cloud proponent and an employee of a SaaS megacorp. The reason is simply that it's impossible to deliver the same user experience remotely, regardless of technology, because performance has non-negotiable constraints tied to physics. On top of that, the browser you clearly love is an incredibly inefficient, unreliable and inconsistent software delivery mechanism.

Apple have it just about right: native software synchronises between itself and their cloud storage.

Your point is actually also wrong regarding movies, video games, music and books.

Movies require native codecs; the majority of the work happens on the client after delivery. Video games: cloud gaming is a niche death march, and it's terrible due to the latency; it only works for some games. Music is the same as video. Books: the 14 GB of PDFs I have here, well, hmm.

One thing I regularly hear is how the pro photography workflow will go to the cloud. I'm sitting in a pub outside a major town here and want to push some stuff around. There is no 4G. There is barely 3G. The pub doesn't have WiFi. Ubiquitous connectivity is not guaranteed anywhere and should never be relied upon. If you think it's everywhere, you haven't travelled outside your tech bubble yet. Occasionally connected is the only viable model.

So I'll take "the CPU doesn't matter" with a pinch of salt, because the ROI for me is mobile compute power with energy efficiency. Apple are seriously winning there. Intel doesn't have a chance either, because the x86 ISA is fundamentally not capable of delivering the power-efficiency gains everyone is betting on. It's flawed in its internal microcode translation, its cycle counts, and its MMU and memory architecture, and they can't change that without making it not x86 any more. It's time to bury the hack-job 1970s ISA, with all the crap and hacks stuck to it, and start again.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
You make it sound like "5nm" and "extremely large caches" are completely decoupled from one another. Pragmatically, they are not. Using a denser fab process allows the cache size to grow larger without causing losses in other areas (e.g., sacrificing core count to get larger caches within a given fixed transistor budget and die size).

True, but I suspect that's the beginning of the story, not the end of it. Apple can afford to make more expensive chips since they are the sole consumer. That's a big advantage as well.

Other vendors are now aggressively increasing their caches etc. because they see that Apple's model works and they feel threatened. It will be interesting to see how these things play out as Intel and AMD move to more advanced nodes.

For example, both AMD and Nvidia are rumored to substantially boost the L2 caches in their next-gen GPUs coming at the end of 2022.

Well, I sure hope so; GPU L2 has traditionally been a joke. I mean, a small L2 worked because of how GPUs operated historically, but AMD has demonstrated that a fast, large cache can dramatically reduce reliance on power-hungry RAM.
But the playing field isn't just laptops. Apple and Intel mostly sell laptops, but that isn't the whole market, especially where "single-threaded top-end performance" is a strong selling point. Apple also was talking 'smack' about how they conquered desktop performance. They can't move the goalposts and crawl back into solely-laptop land without a retreat there.
That's true. I wonder whether Apple has a response here going forward. I guess they could yield the fastest-single-core-desktop crown and instead focus on ultra-compact form factors and multicore performance. Or maybe they have something else planned.

Putting a higher-performance GPU on the same memory bus as a CPU core trying to hit "beat everybody" single-threaded throughput is a double-edged sword. That is why Apple puts a bandwidth cap on the CPU cores. It is a graphical-user-interface operating system, so at some point the GPU 'wins' the limited-bandwidth contest when both sides want "too much".

True. But then again, giving the CPU cores more bandwidth would also mean increasing the bandwidth of the caches, which would probably result in higher power consumption. Apple is quite conservative here. Their L1D, for example, is "only" around 153 GB/s (48 B/cycle at 3.2 GHz). Other CPUs can go much faster. But Apple compensates with larger cache sizes, lower latencies and their cluster architecture with shared/virtual L2 caches. The point is, 200 GB/s to a CPU core cluster is hardly a limitation if you look at the full picture.
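The 153 GB/s figure is just the per-cycle load width times the clock; a quick back-of-the-envelope check using the 48 B/cycle and 3.2 GHz assumed above:

Code:
# Back-of-the-envelope check of the L1D bandwidth figure quoted above.
bytes_per_cycle = 48   # load bandwidth per cycle, as assumed above
clock_hz = 3.2e9       # ~3.2 GHz P-core clock
print(bytes_per_cycle * clock_hz / 1e9, "GB/s")  # ~153.6 GB/s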

In the general desktop market it isn't likely that most buyers are going to be keen to throw modularity out the window for efficiency. Some of the efficiency trade-offs here are tied to markets that Apple is tossing aside. As long as Intel and AMD are shooting for broader market coverage, they will likely continue to make different trade-offs.

Oh, no doubt about it.


That real strength doesn't come for 'free'. It is all integrated... but it is all fixed integration. Again, scoped down to just laptops that is a more reasonable trade-off than it is as you scale up the desktop product space.

One way out might be focusing more on multi-chip technology. Apple clearly has the fundamental tech down; let's see what they can do with it.


Errr... it isn't like Intel and AMD don't have fused on-die solutions. "x86" in and of itself doesn't inhibit heterogeneous compute solutions. Neither AMD nor Intel has "bet the whole farm" on it, but it isn't like they haven't worked on it. (And some of this has to do with operating-system support and security... not CPU core design.)

I think all the big players see MCM as the future. There is a lot of work in this area.
 
  • Like
Reactions: BigMcGuire

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
Don't we need to use real-world applications to measure CPU perf? Synthetic benchmarks hold no weight in my book.
'Synthetic' benchmarks like SPECint2006 are collections of real world applications. There's XML encoding, video compression, pathfinding and C code compilation (among others) in SPECint2006.

This graph is a tad suspect in its experimental design. There is a whole string of desktop processors ("K" suffix, for overclocking) and then, at Gen 11 ... a laptop processor. I suppose that is in part due to Apple's claims of "desktop supremacy" for their cellphone SoC. I suspect the other hand-wavy thing here is that Gen 11 on the desktop was a backported (10nm+ -> 14nm) implementation. In part, though, the backslide at Gen 11 is because the chart made an "apples vs. oranges" switch.

Those two curves have substantial fab-process contributions to them. Intel is mostly stuck, while Apple leapfrogs between a process shrink and a substantially bigger die on alternating iterations: either do a shrink, or stay on the same node and go with a bigger die with more stuff on it. If TSMC slows down (which its roadmaps indicate), so will Apple.

There is a lot of stuff Intel processors already had in 2014-2018 (advanced branch-target prediction, bigger register sets, etc.) that Apple only added as the fab process nodes got denser. Apple did catch Intel while Intel was riding a less dense fab node.
The graph shows the top-performing Intel and Apple CPUs of each generation at the time the graph was made (a month after the release of the A14, I believe) in a widely used, industry-standard benchmark. There's nothing 'suspect' about it. That's been the trend from 2015 to 2021. Apple got a +198% performance increase from 2015 to 2020, while Intel got +41.5% over the same period. How or why that happened was not my point. If you want to bet on both well-established trends changing in opposite directions over the next couple of years, that's fine. It's not impossible for that to happen.
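Put another way, those cumulative figures work out to very different annual rates (my arithmetic from the percentages above, assuming the 2015-2020 span is five yearly steps):

Code:
# Implied compound annual growth from the cumulative gains quoted above (2015-2020).
def annual_rate(total_gain_pct, years=5):
    return ((1 + total_gain_pct / 100) ** (1 / years) - 1) * 100

print(f"Apple: {annual_rate(198.0):.1f}% per year")   # ~24% per year
print(f"Intel: {annual_rate(41.5):.1f}% per year")    # ~7% per year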

What exactly does "performance" mean? Nice numbers, but what "performance" unit would that be?
Read the y-axis label maybe.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Don't we need to use real-world applications to measure CPU perf? Synthetic benchmarks hold no weight in my book.

Which application do you suggest specifically? Modern benchmarking paradigms rely on a selection of real-world tasks and algorithms. The age of truly synthetic microbenchmarks ended 10 years ago.

For example, Geekbench 5 CPU tests include things like C compilation using clang, PDF rendering, compression/decompression, running queries against a SQLite database, basic image processing, HTML DOM manipulation, etc.

 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
M1 is not ahead in performance. In single-threaded performance (SPEC2017, as measured by AnandTech), the fastest non-overclocked Alder Lake desktop chip (the i9-12900K with DDR5) is about 10% faster than the M1 Max (averaging the integer and floating-point results). But that comes at the expense of far higher power consumption. If things continue as they are, Intel will likely stay marginally ahead in performance, but at the cost of much poorer efficiency.

So the interesting question is not whether AS will be able to stay ahead of Intel in performance (it won't), but rather whether AS will be able to maintain its huge lead in efficiency over Intel. And that's really up to Intel.

 
Last edited:

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
Don't we need to use real-world applications to measure CPU perf? Synthetic benchmarks hold no weight in my book.
As others have mentioned, benchmarks like Geekbench and SPEC2017 aren't really synthetic. They each use a suite of real-world tests. Having said that, your broader point is correct—there's no substitute for testing with the applications you actually use, in the way you use them, to determine which system is fastest for your use case.

Thus seeing how a system performs with a "real-world application" isn't going to help you assess a system any more than seeing how it performs with SPEC or GB, unless that "real-world application" is one that you actually use.

Note also that GB and SPEC both test the CPU (GB does have a separate GPU test), and maybe what matters for your throughput is a combination of GPU and CPU.

I would thus think of SPEC2017 and GB as useful starting points. And they provide a standardized basis for comparison.
 
Last edited:

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
M1 is not ahead in performance. In single-threaded performance (SPEC2017, as measured by AnandTech), the fastest non-overclocked Alder Lake desktop chip (the i9-12900K with DDR5) is about 10% faster than the M1 (averaging the integer and floating-point results). But that comes at the expense of far higher power consumption. If things continue as they are, Intel will likely stay marginally ahead in performance, but at the cost of much poorer efficiency.
As things are right now, despite losing by about 10% in ST benchmarks, I feel that M1 has more useful single threaded performance than Alder Lake.

It's 2022, ST isn't ST anymore. We all have tons of background tasks going all the time now.

Intel chips won't run at max turbo frequency unless there is truly just one thread running, and the dropoff in frequency as more cores turn on can be quite severe. People who know anything about benchmarking methodology correctly quit all the software they can before running their benchmark, because you don't want the variance from random competitors for CPU time polluting the results. This helps Alder Lake actually hit its ST peak (max turbo) numbers, but it isn't too representative of the real world, where most people have at least a few browser tabs open all the time.

The other thing I'd cite is that if you start a heavy compute job on 4 cores of your M1 Pro, then tab over to another program that needs 1 core, the 1-core program will run no worse than 94% as fast as it would with nothing else running. By contrast, a top-of-the-line Alder Lake desktop chip has a base frequency that is about 62% of its max-turbo ST frequency.

The ability to run any core at near-peak speed regardless of the load on other cores is why the M1's real-world ST performance is superior. You're never significantly penalized until #active threads >= #cores.
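The two percentages fall straight out of the clock figures. A quick sketch with the frequencies I'm assuming (M1 Pro/Max P-cores at ~3228 MHz single-core and ~3036 MHz with 4+ cores active, per AnandTech's measurements; i9-12900K P-cores at 3.2 GHz base and 5.2 GHz max turbo, per Intel's spec sheet):

Code:
# Ratio of loaded clock to best-case single-core clock, using the frequencies assumed above.
m1_single = 3228   # MHz, M1 Pro/Max P-core, 1-2 cores active (assumed)
m1_loaded = 3036   # MHz, 4+ P-cores active (assumed)

adl_turbo = 5200   # MHz, i9-12900K P-core max turbo (assumed)
adl_base  = 3200   # MHz, i9-12900K P-core base frequency (assumed)

print(f"M1 Pro/Max: {m1_loaded / m1_single:.0%} of peak")   # ~94%
print(f"i9-12900K:  {adl_base / adl_turbo:.0%} of peak")    # ~62%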
 

PsykX

macrumors 68030
Sep 16, 2006
2,745
3,921
I think the answer to this question depends on:
1. Mainly, how well competitors will be able to get their hands on TSMC capacity and screw Apple's contracts 3-5 years from now.
2. A little bit, on how early Apple adopts Armv9.
3. How obsessed Apple is with keeping their insane thermal specs instead of adding more performance to desktop computers (this could actually kill Apple Silicon in the long run).
 
  • Like
Reactions: Danfango