
BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Intel is still 80% of the laptop market, and 78% overall. Despite the erosion.

The point I was making is that, with Intel's current position, things still have to shift quite a bit to put Intel itself in jeopardy and deprive them of the money to fight back, or to shove them out of the CPU market. And my point about the shortage is more that it's unlikely AMD will be able to buy the capacity they'd need to supply even Intel's current customers when there's no capacity to buy, and it's not likely there will be any relief on fab demand for a couple of years. That gives Intel a couple of cycles, and even gives Intel an opportunity if they can actually move on it.

But keep in mind that with the rise of laptops, and shipment numbers of desktops being a third that of laptops, you can't dethrone Intel using desktop chips. AMD could stand to be a bit more aggressive with their laptop chips, IMO. However, they are being plenty aggressive in the server space at least.

Why do you think Intel spent what 10nm fab capacity they had on laptop chips?

My argument isn't that Intel is somehow in a good position. They aren't. It's more that there's no path in the short term to be able to supply CPU demand in the PC space without them representing a majority share of those CPUs. If Intel were to implode in the next couple years, it would likely be a bad thing for the industry in the short term.



Oh, I agree, as I'd rather not be Intel's CEO at the moment.
Intel's market share is about to be a story fit for the ole Lenin.jpeg quote about weeks where decades happen.

It's funny, because IME people love regurgitating the "oh, people have been saying x for years" line [in this case, that ARM is the future for PCs, etc.], and while the purveyors of x may be hasty - and it took longer than I'd have thought wrt Apple - eventually the day comes.


Most importantly, Intel has absolutely nowhere to go but down from here in the server & PC market. We've got Qualcomm (arguably the biggest threat in PCs), Apple (just took a chunk of their premium PC sales), Nvidia (huge server threat), and AMD, who happens to be killing it for x86, though the shortages are an impediment.
Even with 7nm & heterogeneous architectures in 2023 (the latter starts in 2021, though), I don't see how they manage to compete with Qualcomm laptops or Nvidia in servers. The discrepancy in performance under load will be painfully obvious. I assume a lot of this is about wide decoders, but then again, even absent wide architectures, Qualcomm, Apple, et al. make much more efficient chips. Look back at the iPads on 10nm: still great perf/W, much steeper curve. x86 is just... awful for these purposes.
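To make that "steeper curve" point concrete, here's a toy sketch - the curve shapes and constants are entirely made up to illustrate the idea, not measured data:

```python
# Illustrative perf-vs-power curves (constants invented). An efficient
# design delivers most of its performance at low power; a hungrier one
# has to be pushed far up the curve, where perf/W collapses.
def perf(power_w, k, alpha):
    # Diminishing returns: performance grows sublinearly with power.
    return k * power_w ** alpha

for w in (2, 5, 10, 20):
    efficient = perf(w, k=3.0, alpha=0.5)  # strong at low watts
    hungry    = perf(w, k=1.2, alpha=0.7)  # needs watts to keep up
    print(f"{w:>2} W   efficient: {efficient:5.1f}   hungry: {hungry:5.1f}")
```

The efficient design's lead is biggest at low power - exactly the regime laptops live in.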
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Let's remember that ARM <> Apple Silicon. All Apple uses is the ISA - everything else (especially the all-important microarchitecture) is Apple-designed and indeed unique to Apple.

Also, I have to think part of the incredible performance we see is that macOS and iOS are designed to take full advantage of the Firestorm microarchitecture (twice as wide as anyone else's). Firestorm (and therefore the current A-series and M-series) processes at minimum double the number of instructions in parallel per core compared to anything else out there. I would think iOS and macOS are both engineered to exploit this model as fully as possible.
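A toy way to see why width matters (my own sketch: random dependency chains, none of the caches, schedulers, or branch prediction a real pipeline has):

```python
# Toy model: how decode/issue width interacts with instruction-level
# parallelism. Purely illustrative; not a real pipeline simulator.
import random

def cycles_to_retire(deps, width):
    """Each cycle, retire up to `width` instructions whose
    dependencies all retired in earlier cycles."""
    retired = set()
    cycles = 0
    while len(retired) < len(deps):
        cycles += 1
        ready = [i for i, d in enumerate(deps)
                 if i not in retired and all(p in retired for p in d)]
        retired.update(ready[:width])
    return cycles

random.seed(1)
# Half the instructions depend on one random earlier instruction.
deps = [[random.randrange(i)] if i and random.random() < 0.5 else []
        for i in range(1000)]

for width in (4, 8):  # e.g. a 4-wide decoder vs. an 8-wide Firestorm-like one
    c = cycles_to_retire(deps, width)
    print(f"width {width}: {c} cycles, IPC ~ {1000 / c:.2f}")
```

With plenty of parallelism in the stream, the wider machine retires far more per cycle; with long dependency chains, the extra width buys little - which is why software built to exploit the microarchitecture matters.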
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
Intel is still 80% of the laptop market, and 78% overall. Despite the erosion.

The point I was making is that, with Intel's current position, things still have to shift quite a bit to put Intel itself in jeopardy and deprive them of the money to fight back, or to shove them out of the CPU market. And my point about the shortage is more that it's unlikely AMD will be able to buy the capacity they'd need to supply even Intel's current customers when there's no capacity to buy, and it's not likely there will be any relief on fab demand for a couple of years. That gives Intel a couple of cycles, and even gives Intel an opportunity if they can actually move on it.

But keep in mind that with the rise of laptops, and shipment numbers of desktops being a third that of laptops, you can't dethrone Intel using desktop chips. AMD could stand to be a bit more aggressive with their laptop chips, IMO. However, they are being plenty aggressive in the server space at least.

Why do you think Intel spent what 10nm fab capacity they had on laptop chips?

My argument isn't that Intel is somehow in a good position. They aren't. It's more that there's no path in the short term to be able to supply CPU demand in the PC space without them representing a majority share of those CPUs. If Intel were to implode in the next couple years, it would likely be a bad thing for the industry in the short term.



Oh, I agree, as I'd rather not be Intel's CEO at the moment.

Sure, I agree Intel's entrenched position is a huge advantage, and I won't join the chorus of "Intel is DOOOMED" just yet. But I'd say the pandemic actually hasn't helped them much - hence my edit. Sure, notebook sales are up 50% for Intel, but their consumer group CCG's growth was only 8% ... those margins are awful for Intel. Basically they're being relegated to the cheapest segments, and vendors are able to say they'll go AMD or ARM unless Intel cuts prices. Similarly, their server-side business, DCG, also had its margins halved.

Again, this isn't to say that Intel has no chance to turn things around, but even Intel has said their supplies are constrained relative to demand, and now they're opening up their fabs. How will they allocate? If they don't give enough space to their partners, IDM 2.0 will fail. Too much, and they can't produce enough of their own chips. It's a tight balancing act. They also haven't been explicit about exactly what IP Intel will be licensing to others in this new model ...

In terms of prioritizing 10nm for mobile ... it's true laptops are by far the biggest segment, but the rumor is that they also didn't have a choice: 10nm wasn't actually an option for their performance desktop chips - the CPUs were too big and needed too much power, and the node was failing to yield enough quality desktop chips. 14nm was all they could produce those CPUs on.
 

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Let's remember that ARM <> Apple Silicon. All Apple uses is the ISA - everything else (especially the all-important microarchitecture) is Apple-designed and indeed unique to Apple.

Also, I have to think part of the incredible performance we see is that macOS and iOS are designed to take full advantage of the Firestorm microarchitecture (twice as wide as anyone else's). Firestorm (and therefore the current A-series and M-series) processes at minimum double the number of instructions in parallel per core compared to anything else out there. I would think iOS and macOS are both engineered to exploit this model as fully as possible.
I think most here are hip to this, but also, remember that Nuvia graph of perf/W for various ARM OEMs and then AMD/Intel CPUs? It's fairly obvious something is going on in terms of x86 baggage that makes it easier to implement power-efficient chips on ARM's ISA (or the target platform, whatever).
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I think most here are hip to this, but also, remember that Nuvia graph of perf/W for various ARM OEMs and then AMD/Intel CPUs? It's fairly obvious something is going on in terms of x86 baggage that makes it easier to implement power-efficient chips on ARM's ISA (or the target platform, whatever).

x86 adds about a 20% overhead in P/W, all else being equal. Apple has the additional advantage of having very good chip designers, many of whom came from AMD, DEC, and Exponential (and the rest of whom are using methodologies taught by those folks), which is worth another 20%. Then you have software/hardware integration, for whatever that’s worth.
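Back-of-the-envelope, treating those two factors as independent multipliers (a simplification, and the 20% figures are estimates rather than measurements):

```python
# Rough compounding of the two advantages (illustrative arithmetic;
# the 20% figures are estimates, not measurements).
isa_factor    = 1.20   # ~20% perf/W penalty from x86 baggage
design_factor = 1.20   # ~20% from design team/methodology

combined = isa_factor * design_factor
print(f"combined perf/W advantage: {combined:.2f}x")           # 1.44x
print(f"x86 side delivers ~{1 / combined:.0%} of the perf/W")  # ~69%
```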
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
reading all these posts lets me know what a moron i am

You aren’t alone. There are plenty of posts that make me feel that way too. Everyone has different knowledge bases. Read the posts enough, look stuff up, ask questions, and eventually you may not be an expert but you’ll begin to get it.

I mean most of my writings on here represent the mangling of stuff written by much more knowledgeable people - some of whom disagree with each other - that I digested and tried to make sense of given my (limited) knowledge and analytical skills.
 
  • Like
Reactions: quarkysg

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
x86 adds about a 20% overhead in P/W, all else being equal. Apple has the additional advantage of having very good chip designers, many of whom came from AMD, DEC, and Exponential (and the rest of whom are using methodologies taught by those folks), which is worth another 20%. Then you have software/hardware integration, for whatever that’s worth.
Yes, I have no doubt Apple's experience adds to the blow. Interesting - that roughly matches my priors, though I'd have guessed a tad higher re x86's perf/W loss. Do you think this [the perf/W loss] is an internal topic of concern at Intel, or kind of a mere academic concern for now? (Since AMD et al. cannot produce enough & Qualcomm (until now) hasn't done ****)
 

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Also guys, tbf, before we slobber all over Apple's Firestorm cores: Qualcomm's 888 X1 core is on Samsung 7nm instead of TSMC 5nm, and the former is known to be on par with TSMC 7nm. More importantly, Qualcomm has a habit - that they lived up to once again - of cheaping out on the cache and what have you, so the X1 in proper form would be a bit closer to Apple Silicon, at least A13 territory.
 

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Arguably Apple's biggest lead is with the Icestorm cores and the proper scheduling/implementation; the fellas are basically Cortex-A75/A76 tier at this point.

And the M1's integrated GPU is probably my favorite part of the M1. Really did not expect it to be this damn good. Beats Tiger Lake's GPU by a hair, and destroys AMD's "APU" laptop graphics as well (though they have not updated those modules to the Big Navi architecture yet - still Vega, so lol).
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
Also guys, tbf, before we slobber all over Apple's Firestorm cores: Qualcomm's 888 X1 core is on Samsung 7nm instead of TSMC 5nm, and the former is known to be on par with TSMC 7nm. More importantly, Qualcomm has a habit - that they lived up to once again - of cheaping out on the cache and what have you, so the X1 in proper form would be a bit closer to Apple Silicon, at least A13 territory.

Typo: it’s on Samsung 5nm which is comparable to TSMC 7nm. Everything else is accurate. The X1 *should* be better than its implementations have been.

Arguably Apple's biggest lead is with the Icestorm cores and the proper scheduling/implementation; the fellas are basically Cortex-A75/A76 tier at this point.

True, though I'm given to understand that A7x cores could be underclocked to use roughly the same amount of power as an A5x core with much more performance - similar to Icestorm (not quite as good, but similar). The reason Android CPU makers don't do that is that A5x cores are *tiny* - you can fit a lot of them on a chip, so it's cheaper to have more of them. It's a play for perf per area, thus lower silicon cost.
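Rough numbers to show the perf-per-area play (everything here is invented for illustration; real core areas and scores vary by node and implementation):

```python
# Toy perf-per-area comparison. All numbers are made up; actual core
# areas and benchmark scores depend on process node and configuration.
cores = {
    #                  (area mm^2, perf in arbitrary units)
    "A55 (little)":    (0.5, 1.0),   # tiny in-order core
    "A76 downclocked": (2.0, 2.5),   # big core run at low clocks
}
for name, (area, perf) in cores.items():
    print(f"{name:16s} perf/area = {perf / area:.2f}")
# The big core wins outright on performance, but the little core packs
# ~1.6x the performance per mm^2 here -- and perf per mm^2 is what a
# cost-sensitive vendor buying silicon by the wafer optimizes for.
```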
 
  • Like
Reactions: BigPotatoLobbyist

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
I just think people keep pulling the "it's not about ARM or Intel ****ing up, it's that Apple is so {insert superlative}" card, when in practice it very much is about the ARM ecosystem & competitiveness, and it is about Intel's laziness, etc.


Like, does anyone really think an 8-core X1 (or updated X2) CPU on TSMC 5nm wouldn't have a superior perf/W profile to a Zen 4 chip on TSMC 5nm? The AMD Ryzen chip would probably have superior peak performance, but I know which perf/W curve would be steeper.
 

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Typo: it’s on Samsung 5nm which is comparable to TSMC 7nm. Everything else is accurate.



True, though I'm given to understand that A7x cores could be underclocked to use roughly the same amount of power as an A5x core with much more performance - similar to Icestorm (not quite as good, but similar). The reason Android CPU makers don't do that is that A5x cores are *tiny* - you can fit a lot of them on a chip, so it's cheaper to have more of them. It's a play for perf per area, thus lower silicon cost.
Ah, thanks. My bad, I dunno what I was thinking.


And I think part of it is marketing too, tbh. They love being able to shout about 10-core phones or whatever.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Yes, I have no doubt Apple's experience adds some to the blow, interesting on that quote, that roughly matches my priors, though I'd have guessed a tad higher re X86's P/W loss. Do you think this [the p/w loss] is an internal topic of concern at Intel, or kind of a mere academic concern for now? (Since AMD et. al cannot produce enough & Qualcomm (until now) hasn't done ****)
Well, I can never guess what management is thinking, but certainly the engineers know it's a problem. After all, they tried to ditch x86 entirely with Itanium for a good reason (of course, Itanium had its own issues).

If anyone at Intel is thinking long term, it should be a concern. It all comes down to this: there is absolutely no inherent advantage to the x86/64 architecture in modern computers, other than compatibility with software. But there are many disadvantages. And over time, software compatibility becomes less and less of a bulwark: Apple's shown it isn't an issue, more and more apps are ARM-first because they are mobile apps, and Microsoft is sniffing around with Windows on ARM.
 
  • Like
Reactions: BigPotatoLobbyist

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
Ah, thanks. My bad, I dunno what I was thinking.


And I think part of it is marketing too tbh. They love being able to shout about 10 core phones or whatever.

That's part of it, but another part is that they need low-power cores to offload background tasks. To be pedantic, the standard ARM fabric seems to be limited to 8 cores per cluster. So take a top-of-the-line Snapdragon with a single X1 and 3 A78s as performance/mid cores. These are supposed to provide the single-threaded oomph and the bulk of the multithreaded performance when needed. But they use power and are big, so you don't want them cluttered up with menial tasks. Using 4 downclocked A76s, let's say, could net you low-power cores with decent performance - but how many mobile phone tasks need 8 threads anyway? Or 8 high-performing threads, at any rate. Probably few and far between. A76s are still big, and downclocked they are not as good as Icestorm. So you're basically spending a lot of silicon area to accelerate menial background tasks that nobody cares about finishing quickly. Even though Qualcomm leverages its modem business to sell its CPUs, it still has to compete to win contracts. Would any OEM want to spend the extra money to get performance they don't care about, when they could get a much cheaper CPU that has 4 A55s? Probably not. The A55s are *great* for this, and overall multithreaded performance is still pretty damn good. It just could be better, for a cost they aren't willing to pay.

In contrast, Apple controls the whole stack and doesn’t have to care as much about silicon costs. They also want cores that can scale from watches to (soon) workstations. Further, since their cores are better pound for pound, they can get away with a 2+4 configuration on a phone that gives them the flexibility to offload menial tasks to low power cores while also giving them plenty of multithreaded performance.

So part of this is different approaches in design because of very different business models and priorities.
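A cartoon of that trade-off (the task list, speeds, and wattages are all made up; it just shows why menial work goes to the little cores):

```python
# Cartoon big.LITTLE scheduling: latency-sensitive work goes to big
# cores, background chores to efficiency cores. Numbers are invented.
TASKS = [
    # (name,           work units, latency-sensitive?)
    ("UI render",       8,  True),
    ("mail sync",       4,  False),
    ("photo indexing",  6,  False),
    ("game physics",   10,  True),
]

BIG    = {"speed": 4.0, "watts": 4.0}   # Firestorm/X1-class (made up)
LITTLE = {"speed": 1.5, "watts": 0.5}   # Icestorm/A55-class (made up)

total_energy = 0.0
for name, work, latency_sensitive in TASKS:
    core = BIG if latency_sensitive else LITTLE
    seconds = work / core["speed"]
    joules = seconds * core["watts"]
    total_energy += joules
    kind = "big" if latency_sensitive else "little"
    print(f"{name:14s} -> {kind:6s} core: {seconds:4.1f}s, {joules:5.2f}J")
print(f"total energy: {total_energy:.2f}J")
```

Run the background chores on the big cores instead and they finish faster than anyone needs, while burning several times the energy.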
 
Last edited:
  • Like
Reactions: BigPotatoLobbyist

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
It all comes down to this: there is absolutely no inherent advantage to the x86/64 architecture in modern computers, other than compatibility with software.
IMHO, Microsoft has to share equal blame with Intel/AMD on the compatibility front. Windows should have been designed with forward-looking technology adoption in mind, instead of taking the safe and easy path of making Intel/AMD keep backward compatibility.

I suppose Apple has an advantage here, being vertically integrated and having a smaller customer base to manage.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
IMHO, Microsoft has to share equal blame with Intel/AMD on the compatibility front. Windows should have been designed with forward-looking technology adoption in mind, instead of taking the safe and easy path of making Intel/AMD keep backward compatibility.

I suppose Apple has an advantage here, being vertically integrated and having a smaller customer base to manage.
Windows got where it is by enabling backward compatibility for apps almost no matter how old they are. Of course, there's a terrible price to be paid for that.
 
Last edited:

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
I still wonder what the next M-series SoC will be. Will they do an "M1X" with 8 Firestorm and 4 Icestorm cores? Or wait for the next core iteration to do a new SoC? I have to think M1X, simply because they want to get the new MBP 16 out sooner, and it would be late this year before the successor to Firestorm rolls out (along with the iPhone 13).
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
I still wonder what the next M-series SoC will be. Will they do an "M1X" with 8 Firestorm and 4 Icestorm cores? Or wait for the next core iteration to do a new SoC? I have to think M1X, simply because they want to get the new MBP 16 out sooner, and it would be late this year before the successor to Firestorm rolls out (along with the iPhone 13).
I'm pretty sure the next round of Macs will still use the same Firestorm cores as the M1, just more of them. It would be risky to try a newer architecture on the higher-end Macs; the M1's architecture has already been field-tested, so it's a much safer option.

I suspect the entire line of Macs (up to the Mac Pro) will use the same M1 CPU architecture.

The base M1 Macs released last year will likely move to an M2 sometime next year.
 

pasamio

macrumors 6502
Jan 22, 2020
356
297
I think the backwards-compatibility advantage of x86 is overrated: Apple has shown it can run x86-64 code with sufficient performance, Microsoft is getting better at its own x86 translation for Windows on ARM, and in the Linux space native ARM has been a thing for a while now. For applications that are left behind long enough, advances in CPU speed do wonders for various virtualisation technologies as well.

Intel's data centre dominance is being challenged by both AMD and ARM. Apple has given ARM on the desktop a shot in the ARM: being able to run native ARM apps on the desktop will make it even easier to develop for Apple devices, and this might open up a broader market for the Linux ARM devices that do exist but were perhaps relegated to being substandard compared to Intel- or AMD-powered devices.

Intel having its own fabs is a production advantage for the moment, but they're currently producing devices that are being panned in the marketplace and have to be discounted against AMD's chips. Intel faces an increasing amount of headwinds and needs to figure out what its future looks like. However, if their only advantages are backwards compatibility and their own fabs, neither of those seems to be a large moat these days.
 
  • Like
Reactions: BigPotatoLobbyist

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Intel's data centre dominance is being challenged by both AMD and ARM.
I think Intel sees this and is worried. Over time, cloud providers will probably charge a premium for x86/64 servers compared to, say, ARM. This will increase demand for non-x86/64-based cloud solutions and further erode Intel's dominance there. And with the world going green, this pressure will definitely increase over time.
 

alien3dx

macrumors 68020
Feb 12, 2017
2,193
524
Well, I can never guess what management is thinking, but certainly the engineers know it's a problem. After all, they tried to ditch x86 entirely with Itanium for a good reason (of course, Itanium had its own issues).

If anyone at Intel is thinking long term, it should be a concern. It all comes down to this: there is absolutely no inherent advantage to the x86/64 architecture in modern computers, other than compatibility with software. But there are many disadvantages. And over time, software compatibility becomes less and less of a bulwark: Apple's shown it isn't an issue, more and more apps are ARM-first because they are mobile apps, and Microsoft is sniffing around with Windows on ARM.
Ohh damn, Itanium ... haven't heard of that thing in a long time o_O Last time I was building web apps, they worked on Xeon but not on Itanium.
 

robco74

macrumors 6502a
Nov 22, 2020
509
944
Given the performance of the Surface Pro X, I'm not really sure I'd consider ARM competition at present. Another company would need to invest as Apple has in order to create competitive chips. Apple dropped 32-bit support, even on Intel, but MS couldn't do the same even if they wanted to. Even Ubuntu got a lot of pushback when they tried to drop 32-bit support. There are a lot of industrial and financial applications that, for a variety of reasons, can't or won't be updated to run on newer hardware and OSes. There will be a market for x86 chips for the foreseeable future. That might be enough to sustain Intel as a company, but losing the high-performance and enthusiast market to AMD will hurt margins and reputation.

So far, nobody has a real competitor to M1, and M1 is exclusive to Apple. Intel should be grateful that Apple will likely only use their custom silicon for their own devices. Others may try to follow Apple's lead, but that lead is substantial and the number of people who can design custom SoCs with comparable performance is relatively small.

For MS, it may be time to cleave the codebase in two once more: a "Windows Classic" that retains 32-bit backward compatibility, mostly just gets security updates, and remains available for legacy apps; and a new version, maybe even under a completely new name, that is more cutting-edge.
 
  • Like
Reactions: bobcomer

pasamio

macrumors 6502
Jan 22, 2020
356
297
I think Intel sees this and is worried. Over time, cloud providers will probably demand higher premiums for x86/64 servers compared to say ARM. This will increase the demands for non-x86/64 based cloud solutions, and further erode Intel's dominance there. And with the world going green, this pressure will definitely increase over time.
If you can support use cases running on ARM with 20% less overhead, per @cmaier, then there's an immediate power saving or extra performance. Designing their own chips also gives the cloud vendors a further point of differentiation. At the moment you get the same chips from every vendor, though AWS also has its Graviton ARM SKUs. Apple is already introducing its own extensions (neural engine and DSP); the cloud providers could offer their own differentiation factors too. AWS's Graviton already has better encryption and security functionality as a selling point, for example. Perhaps Google introduces its own AI processing functionality on its chips, which applications running in GCP could leverage to run faster.
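Napkin math on what that 20% means at fleet scale (server count, wattage, PUE, and electricity price are all invented round numbers):

```python
# Napkin math: datacenter savings from a ~20% perf/W advantage.
# Every input here is an assumed round number, not real fleet data.
servers       = 100_000
watts_per_box = 400        # assumed x86 server draw under load
pue           = 1.4        # power-usage effectiveness of the facility
usd_per_kwh   = 0.08

x86_kw = servers * watts_per_box / 1000 * pue
arm_kw = x86_kw / 1.20     # same work at ~20% better perf/W

hours_per_year = 24 * 365
saving = (x86_kw - arm_kw) * hours_per_year * usd_per_kwh
print(f"annual electricity saving: ${saving:,.0f}")  # ~$6.5M/yr
```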

What sets Apple apart from Microsoft's attempts at ARM is that Apple fully invested in it: building the tooling to support it and making it very easy to take an existing x86-64 macOS app and run it on Apple Silicon. Microsoft is apparently excited to announce that next year Visual Studio will finally be 64-bit. To me, the message there is that moving that ecosystem hasn't been important for Microsoft - but if your developers can't work natively on the new platform, what hope do you have for adoption? To quote AWS, nothing builds developer tool ecosystems better than volume, and Apple is providing the volume for its ecosystem; Microsoft's lack of support for native ARM development shows that it intrinsically values it less.
 
Last edited:

BigPotatoLobbyist

macrumors 6502
Dec 25, 2020
301
155
Well I can never guess what management is thinking, but certainly the engineers know it’s a problem. After all, they tried to ditch x86 entirely with Itanium for a good reason (of course Itanium had its own issues)

If anyone at Intel is thinking long term, it should be a concern. It all comes down to this: there is absolutely no inherent advantage to the x86/64 architecture in modern computers, other than compatibility with software. But there are many disadvantages. And over time, software compatibility becomes less and less of a bulwark; Apple‘s shown it isn’t an issue, more and more apps are Arm-first because they are mobile apps, and Microsoft is sniffing around with WOA.
Completely agree re: advantages and disadvantages. My main surprise thus far is how many seemed wedded to the idea that the M1 would be in no-man's-land because ARM Windows laptops did so poorly, or because the PPC transition was long. ****ing porting today isn't as difficult as it was then, and frankly Apple's userbase is obviously larger - which is a boon to the transition speed, because third-party developers take note instead of writing off a niche user base.

With MS, what sucks is you've got the chicken-and-egg issue, only exacerbated by MS trying to "hedge their bets," so to speak, between two architectures - which, IMO, if they're not careful, just ends in Chromebooks wiping the floor with them in 5 years, should Google firmly commit to ARM laptops.

Or put another way: by 2030, I have strong doubts that the modal consumer - scaling all the way up to the modal developer - will be using an x86 laptop. Someone has to win, and while Electron apps are cool and all, nothing beats native software for the important utilities or entertainment (see Minecraft going from the Java to the W10 edition - world of a difference, IME).
 