
Will the x86 architecture be fully outdated in 10 years?

  • Yes

    Votes: 38 13.1%
  • No

    Votes: 195 67.2%
  • Possibly

    Votes: 57 19.7%

  • Total voters
    290

Longplays

Suspended
May 30, 2023
1,308
1,158
Considering that the ARM architecture is only getting more popular, I was wondering your guys' opinions on whether x86 will be dead in 10 years.
See how PPC is still vibrant on MR, 17 years after the first Intel Macs shipped in 2006.

macOS support for Intel Macs will likely end by around 2030.
 

theluggage

macrumors G3
Jul 29, 2011
8,011
8,444
Considering that the ARM architecture is only getting more popular, I was wondering your guys' opinions on whether x86 will be dead in 10 years.
Define "dead". Last I looked, Z80 was still around in microcontrollers and such...

x86 and "Wintel" is too pervasive to completely vanish in 10 years, but it's already in decline, and already vastly outnumbered by ARM (yes, that's mostly phones, but todays phones are general-purpose computers that outperform x86 chips from 10 years ago). I think we passed "peak Wintel" around the launch of the iPhone - I'm not giving the iPhone all the credit, but its a good marker for a bunch of interconnected developments such as:
  • The surge in internet use and wide availability of fast broadband
  • Online apps like Google Docs, doing things that MS Office couldn't
  • The rise of Linux as a serious competitor to Windows Server
  • MS and Intel failing to dominate the mobile market (all of that legacy software is useless on a smartphone)
Legacy software will keep the x86 ISA alive for a few years yet, but even that isn't forever, because what the software does will eventually become outdated, as will how it works (e.g. monolithic, single-threaded code, poor security) - plus, ARM and RISC-V can run COBOL just fine! We'll probably see a few spectacular failures caused by old software where the source code was lost and the original developers are pushing up daisies. There's no need for 99% of modern application software to give a wet slap about what architecture it is running on, unless it's doing something bleeding edge (in which case it will be obsolete in a few years anyway). A lot of modern development platforms (MS CLR, Java, Android and, optionally, Xcode) compile to bytecode for a virtual machine/just-in-time compiler anyway, not to mention all the work being done in distributed-as-source "scripting" languages like Python and JavaScript. So, hopefully, the sort of "legacy" apps that grew up in the 80s and 90s, when the only way was Wintel and binary compatibility, will be a shrinking pool.
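
To illustrate the bytecode point, here's a minimal example using CPython's own disassembler (my illustration, not from the post):

```python
# CPython compiles source to bytecode for its virtual machine; the bytecode
# below is identical whether the interpreter runs on x86, ARM or RISC-V --
# only the interpreter binary itself is native code.
import dis

def total(prices, tax=1.2):
    return sum(prices) * tax

dis.dis(total)  # prints stack-machine opcodes; no CPU registers in sight
```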

Meanwhile, emulation, translation and virtualisation are improving - look at how well the majority of stuff runs on Rosetta 2, and even the x86 translation on Windows for ARM is quite capable.

Since legacy software won't evolve much, the point will come when it runs faster on an emulated virtual machine (which also makes it easier to secure).
 
  • Like
Reactions: psychicist

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
“We didn’t get into this mud hole because everything was going great. We had some serious issues in terms of leadership, people, methodology, et cetera that we needed to attack.”
That's CEO/stockholder talk; I'm only interested in the product. I'm not an Intel stockholder.
They're running Intel x86 processors in 16-bit mode?!
Yes. 486s, 386s.

Certainly not a modern Core processor, I'd hope... I'd be curious what fraction of the x86 market this accounts for to be worth carrying this mode forward.
Definitely not a modern processor, most of them are as old as they sound. And I'd say pretty much nil for the current "new" market, it's all stuff that can be had on the used parts market. The way you asked the question, I couldn't answer any other way. I do need 16-bit mode, just not in a modern processor design, and yes, I wouldn't be super worried if we lost it. I'm not convinced it needs to go though.

I like full backwards compatibility. That's my biggest pet peeve about Apple -- their total lack of backwards compatibility. I would never use an Apple machine in a project because of it.

If Intel processors were struggling performance/stability-wise, then yeah, changes would have to be made, but they're not.

Windows already runs on Arm. It used to run on PowerPC, MIPS, Alpha and Itanium. And that was the old Microsoft. There's no reason the new collaborative Microsoft can't run on another architecture in the future. If Microsoft thinks they're losing an edge by being tethered to x86, they'll support alternatives. It'll be a while before they drop x86, but they're not going to let Intel drag them down.
Windows on Arm is currently crap, and that's all I have to go by. They'll never drop x86 as long as people still use them, and that's 90% of the market. There's a LOT of momentum in a market share like that...

I'm not trying to start a dueling-benchmark thread, or argue individual benchmarks and hardware details forever, but between two contemporary machines running an x86 benchmark, the emulated one won handily, all while generating significantly less heat-- even if it's not better in all cases, it's certainly competitive. If the target platform wasn't Arm but was a faster, slimmed-down subset of x86? I have to think it would do even better.
And an XPS 13 isn't the fastest Intel machine by far either. It's an ultralight. And no, I don't want to get into a benchmark war; they're stupid as far as I'm concerned, I hate them, and they provide no info I need other than the relative difference between two similar products that run the same thing. And that's all I'll say about benchmarks.

What it runs and if it runs it adequately is all I'm concerned about.

No, it isn't Apple. I don't know what grounds you have to dismiss the others though. Marvell has been heavily invested in Arm since they bought the StrongARM IP, Qualcomm has the balance sheet to outspend Intel, and Samsung has a bit of both.
The only thing I'm concerned with is that they don't run what I know I need (and what the market I represent needs). They'll be in my phone, and maybe my tablet, but that's it for now.

AMD knows the x86 market inside and out, they have the OEM contacts, and they're willing to take risks to gain market share. I could see them breaking the mold.
Possibly, but they won't throw away x86 backwards compatibility; they know why they got where they are.

There's already a lot of Arm Linux happening, but if Microsoft signaled an intent to broaden their Windows on Arm side hustle? Investment would pour in to Arm based PCs.
They've been signaling that for years, and it hasn't made a difference. It just doesn't sell. Right now there are more ARM Windows machines than ever before; they're never on my to-buy list...
 

mi7chy

macrumors G4
Oct 24, 2014
10,619
11,293
Meanwhile, emulation, translation and virtualisation are improving - look at how well the majority of stuff runs on Rosetta 2, and even the x86 translation on Windows for ARM is quite capable.

That's only something homebodies will endure, wasting time going through permutations of Rosetta 2, then Crossover and Parallels subscriptions, and very often it still doesn't work. No sane company will suffer through that when x64 works on the first try. Someone recently posted about that experience with Verisk Xactimate, which is some basic claims-estimation software.
 
  • Like
Reactions: bobcomer

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Considering that the ARM architecture is only getting more popular, I was wondering your guys' opinions on whether x86 will be dead in 10 years.
I don’t think it will die in 10 years. But it may in 20-30 years.

Heck, mainframes are still getting support contracts. IBM’s POWER series of chips still sells strong. Sun (I think) still makes SPARC chips.

15 years from now x86 could very well be in the same position, and I think that’s likely.

Many companies are already breaking down their monolithic architectures into microservices. .NET can now run on Linux. It isn’t going to happen overnight, or even in 10 years. But the first bit of blood has started dripping. The problem is companies keep doing what they do until there’s a reason to let go.

So yes, x86 will still be alive, but I wouldn’t go as far as to say it will be doing well.

I don’t have a lot of love and respect for Intel. But I do for Pat Gelsinger. If anyone can figure out a way to slow the bleeding, it’s him. It would be ignorant to say, though, that he thinks x86 is going to be here for the next 15-20 years.
 
  • Like
Reactions: psychicist

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
One thing to keep in mind: Rosetta 2 does not emulate anything x86 related, so bobcomer's comments regarding emulation and performance penalties are irrelevant to the discussion at hand. What Rosetta 2 does is translate the x86 code into Apple Silicon code prior to the first launch of the app in question. After that initial conversion, the app runs natively for all intents and purposes. This is different from the original Rosetta, which translated PPC code into x86 code at run time.
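
As a loose illustration of that distinction, here is a toy model (mine, not dmccloud's; real binary translators work on machine code and handle JIT-generated code separately) of where the translation cost is paid:

```python
# Toy model of ahead-of-time vs. run-time binary translation.
# translate() stands in for the expensive x86 -> Arm conversion step.

def translate(block):
    return f"arm::{block}"

translation_cache = {}

def run_rosetta2_style(binary):
    """Translate once, before first launch; later runs reuse the cache."""
    if binary not in translation_cache:
        translation_cache[binary] = [translate(b) for b in binary.split(";")]
    return translation_cache[binary]

def run_rosetta1_style(binary):
    """Translate while the program runs, paying the cost on every run."""
    return [translate(b) for b in binary.split(";")]
```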

I don't think x86 will be dead (or even close to dead) within the next 10 years, let alone 25. That platform has such a grip on the PC market as a whole that it would basically require Intel and AMD to both give up on x86 entirely before it fades into oblivion, but that's where the majority of the market is. With that being said, I can see x86 losing overall market share, especially as Windows on ARM matures and the Mac continues down the Apple Silicon path. Factor in the push towards RISC-V, and the competition will only increase for the x86 platform.

Given all of the discussion over ARM switching up its licensing terms for the majority of its partners (both Apple and Samsung have completely separate agreements which are not affected in the same way Qualcomm, MediaTek, etc. have been), RISC-V might actually become the biggest competitor to x86, although it's far too early to make an assessment of ARM vs RISC-V.
 

theluggage

macrumors G3
Jul 29, 2011
8,011
8,444
That's only something homebodies will endure, wasting time going through permutations of Rosetta 2, then Crossover and Parallels subscriptions, and very often it still doesn't work.
You're complicating it by talking about running Windows software on an M1 Mac - we're talking about the wider future of x86. Crossover is hit and miss even on Intel Macs, and it's actually pretty impressive that anything works on Crossover+Rosetta. For a lot of x86 macOS software, Rosetta 2 "just works" - including things like Reaktor (virtual synth software), which you might not expect to work under emulation.

It's hard to find any recent feedback on the performance of Microsoft's x86 emulation on Windows for ARM running on officially-supported ARM hardware (which only added support for x86-64 translation relatively recently), and it's coloured by the fact that Qualcomm is playing catch-up with Apple, so the currently supported processors are a bit pants. Again, the fact that people have had any significant success running things like x86 Windows games under Parallels on M1 (which MS seems to be 'tolerating' rather than 'supporting') bodes well for the tech.

It may not be a solution today, but it might be going forward, especially when both MS and Intel are starting to cut back on backwards-compatibility.
 
  • Like
Reactions: psychicist

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
Ok, here is a confusing graphic [image not preserved] that links to a rather badly-written page, which I believe is the origin of the claim that I was referencing. The number I heard seems to come from farther down the page, where they say that when the decoder kicks in (when the μop cache cannot be used), it adds something like 4% to the power draw, which does not line up with this graphic.

My interpretation of the graphic is:
  • "uncore" refers to the power draw of the processor support logic (this seems perhaps a bit low)
  • "cores" is the total draw of each of the cores (probably P-cores)
  • "execution units" is part of core draw
  • "instruction decoders" is likewise part of core draw
  • caches (L1,L2,L3) are not part of either core or uncore draw

The graphic would suggest that the decoders add around 18% of the power draw (averaging the ~8% for FP with the ~20% for integer, with the consideration that integer will tend to get much more use than FP, most of the time).
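
For what it's worth, that ~18% falls out of a simple weighted average; the 80/20 integer/FP split below is an assumption for illustration, not a figure from the page:

```python
# Weighted average of the decoder power shares read off the graphic,
# assuming (hypothetically) integer work outweighs FP work 80/20.
fp_share, int_share = 0.08, 0.20
int_weight = 0.80
decoder_share = int_weight * int_share + (1 - int_weight) * fp_share
print(f"{decoder_share:.1%}")  # -> 17.6%, i.e. roughly 18%
```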

Which, having looked at this mess, leaves me curious as to what the reality is. Clearly the small number I heard tossed about is not very accurate, and this source is a very long way from reliable. I apologize for repeating casual hearsay.

It does seem unlikely that an x86 core is actually using a sixth of its power to interpret code, but maybe it really is. Decoding ARM instructions cannot cost more than a fraction of 1% of the power a core uses, though the elaborate trip to one of the execution queues is kind of expensive.

Thanks for the link! I agree, the linked page isn't terribly thorough, but it does link to a few interesting papers that might be more informative once I get a chance to read them. I haven't had time to do much more than look at the pictures in the Hirki paper that your table came from, and this caught my attention because it hits on two topics of this thread:

[Attached chart: CPU package power breakdown from the Hirki paper]

60% of the power, almost 30W in total, is static leakage. They show 10%, or 4.8W, going to the decoders for this benchmark. As it relates to this discussion, that blurs two points together-- static leakage power, which is a process component, and decoder power, which is an ISA component.

If we look only at the dynamic power, the decoders in this benchmark are 25% of the dynamic power. In the other presented benchmark, which exercises the L2 and L3 caches more heavily, the decoders are 3% of the package power and about 10% of the dynamic power.
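
A quick sanity check of those figures (the 48W package number is inferred from the 4.8W = 10% pairing, not stated in the excerpt above):

```python
# Back-of-envelope check of the quoted Hirki numbers.
package_w = 4.8 / 0.10               # ~48 W total package power (inferred)
static_w = 0.60 * package_w          # ~28.8 W static leakage ("almost 30W")
dynamic_w = package_w - static_w     # ~19.2 W dynamic power
print(f"{4.8 / package_w:.0%}")      # -> 10% of package power
print(f"{4.8 / dynamic_w:.0%}")      # -> 25% of dynamic power
```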

So based on these definitions and toy analysis, the x86 ISA is somewhere between 10 and 25% less efficient, which wouldn't shock me either way. I'm not sold yet on the assertion that the decoders are the only relevant power difference, but I've yet to read all the references.

This is all based on a 10-year-old i7-4770 Haswell at 22nm though, so it's hard to round out the system power analysis and draw modern comparisons from the static power data.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
That's CEO/stockholder talk; I'm only interested in the product. I'm not an Intel stockholder.

It would be a rare situation though for a CEO to "bash" their own company and falsely claim to be struggling when they aren't. That's my point. It's not bashing to acknowledge that the company is struggling and they're struggling because their technology isn't positioned where it used to be. You still have parts you can buy for your work, but much of this thread is forward looking.

Yes. 486s, 386s.

Definitely not a modern processor, most of them are as old as they sound. And I'd say pretty much nil for the current "new" market, it's all stuff that can be had on the used parts market. The way you asked the question, I couldn't answer any other way. I do need 16-bit mode, just not in a modern processor design, and yes, I wouldn't be super worried if we lost it. I'm not convinced it needs to go though.

I like full backwards compatibility. That's my biggest pet peeve about Apple -- their total lack of backwards compatibility. I would never use an Apple machine in a project because of it.

If Intel processors were struggling performance/stability-wise, then yeah, changes would have to be made, but they're not.


Windows on Arm is currently crap, and that's all I have to go by. They'll never drop x86 as long as people still use them, and that's 90% of the market. There's a LOT of momentum in a market share like that...


And an XPS 13 isn't the fastest Intel machine by far either. It's an ultralight. And no, I don't want to get into a benchmark war; they're stupid as far as I'm concerned, I hate them, and they provide no info I need other than the relative difference between two similar products that run the same thing. And that's all I'll say about benchmarks.

What it runs and if it runs it adequately is all I'm concerned about.


The only thing I'm concerned with is that they don't run what I know I need (and what the market I represent needs). They'll be in my phone, and maybe my tablet, but that's it for now.


Possibly, but they won't throw away x86 backwards compatibility; they know why they got where they are.


They've been signaling that for years, and it hasn't made a difference. It just doesn't sell. Right now there are more ARM Windows machines than ever before; they're never on my to-buy list...

I think I'd summarize your point, and correct me if I'm missing it, as saying that you see a need to run x86 binaries for quite some time into the future. You don't necessarily need them to run on Intel, you don't need them to run on native x86 hardware with full 16-bit compatibility, but you need them to run quickly and reliably.

I'm working on that same assumption. I don't think I'm misreading the realities of the market. If there is a version of Windows that runs current code faster, cheaper, and just as reliably as a current system, people will buy it. Even if, under the Intel logo, there's a quite different processor running a simplified x86 or something else entirely and translating the code.

I think the difference between our views is just how much gain we think there is to be found in rethinking that low level architecture. If I'm understanding that right, that's a reasonable point for debate.
 
  • Like
Reactions: psychicist

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
One thing to keep in mind: Rosetta 2 does not emulate anything x86 related, so bobcomer's comments regarding emulation and performance penalties are irrelevant to the discussion at hand. What Rosetta 2 does is translate the x86 code into Apple Silicon code prior to the first launch of the app in question. After that initial conversion, the app runs natively for all intents and purposes. This is different from the original Rosetta, which translated PPC code into x86 code at run time.

True, but this is one of those places where words might get in the way of meaning... It's possible people's opinions on the technology are tied to less efficient approaches, but there's still a difference. It's hard to enumerate all the ways one ISA can execute on a non-native platform, but it's a reasonable bet that they're all introducing some sort of inefficiency to the process. At the moment R2 is very close, but probably not quite as fast on the M series as native x86 on a top-performing Intel. I wouldn’t be surprised if that gap closes over time.

I'd also imagine the penalty for running legacy x86 on a modernized reduced-x86 would be much lower though:

Compilers do tend to optimise x86 object code to flow as smoothly as possible, and Intel executives have asserted that this in and of itself is enough to erase the RISC advantage. And it does work pretty well, but if you are optimising out a lot of your neato features, that functionality is just sitting there idling, and the ISA still has to implement it, for logical symmetry. And the best compilers cannot make up for the weaknesses of a half-century-old design ethos. x86 is just going to cost more to run, there is no way around that.

In addition to stripping out the modes that x86S proposes, it seems these "neato features" that aren't being used but still need to be implemented would be natural targets for software emulation/translation. If old code calls them, they get trapped and translated in software. The penalty would be higher than doing the decode to micro-ops in hardware, but it would presumably be rare and only on older code that will benefit from the generational processor improvements regardless.
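
A minimal sketch of that trap-and-translate idea (a toy interpreter of my own, not real x86; the AAA behaviour is simplified to a single masking step):

```python
# Toy trap-and-emulate loop: common ops take the fast path, while rare
# legacy opcodes "trap" into a software handler, the way a reduced-x86
# core might fault on a removed instruction and let software emulate it.

def op_add(state, a, b):
    state["acc"] = state[a] + state[b]

FAST_OPS = {"add": op_add}  # what the hypothetical hardware still implements

def emulate_legacy(state, op):
    if op == "aaa":  # ASCII-adjust-after-addition, a real legacy x86 opcode
        state["acc"] &= 0x0F  # greatly simplified version of what AAA does
    else:
        raise NotImplementedError(f"no software handler for {op}")

def execute(state, op, *args):
    handler = FAST_OPS.get(op)
    if handler:
        handler(state, *args)      # fast, "hardware" path
    else:
        emulate_legacy(state, op)  # slow path: trap into software

state = {"acc": 0, "r1": 9, "r2": 8}
execute(state, "add", "r1", "r2")  # acc = 17
execute(state, "aaa")              # trapped: acc = 17 & 0x0F = 1
print(state["acc"])                # -> 1
```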
 
Last edited:

izzy0242mr

macrumors 6502a
Jul 24, 2009
691
491
My prediction: x86 is here for another 15 years minimum.

ARM chipmakers can't be motivated to truly compete with Apple.

Businesses don't care about power usage (efficiency), and as long as it gets the job done, x86 will be the de facto standard. So many legacy apps are written to run on x86.

Businesses won't even think about upgrading to ARM chips until two things happen: universal and nearly perfect compatibility of x86 apps via ARM computers emulating x86 (like Rosetta) for Windows, and performance that exceeds x86 dollar for dollar.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
It would be a rare situation though for a CEO to "bash" their own company and falsely claim to be struggling when they aren't. That's my point. It's not bashing to acknowledge that the company is struggling and they're struggling because their technology isn't positioned where it used to be. You still have parts you can buy for your work, but much of this thread is forward looking.
It wouldn't be rare; stock price manipulation is common. But like I said, I don't look at that at all. And I will have parts I can buy for the next 20 years at least, maybe a lot longer; I have no worries about Intel. It will take a major paradigm shift to change that, and more power efficiency isn't that. I'm no less forward-looking than you, but I base my opinions on the market, not the hardware details. The market, or what people buy, is the most important thing to me.

I think I'd summarize your point, and correct me if I'm missing it, as saying that you see a need to run x86 binaries for quite some time into the future. You don't necessarily need them to run on Intel, you don't need them to run on native x86 hardware with full 16-bit compatibility, but you need them to run quickly and reliably.
True.

I'm working on that same assumption. I don't think I'm misreading the realities of the market. If there is a version of Windows that runs current code faster, cheaper, and just as reliably as a current system, people will buy it. Even if, under the Intel logo, there's a quite different processor running a simplified x86 or something else entirely and translating the code.
Yes and no. Like I said, market momentum is a big thing; people won't buy something just because it is new, they'll see the differences and balk. For me personally, I would test it, but I couldn't promise you I'd buy it; it would have to be very, very close in capability. And a software-layer emulation isn't going to be that close, it just can't be (on any hardware I see in the next 20 or 30 years, native will always beat it). Now when we get past silicon, maybe, who knows. I know I'll be watching. :)

I think the difference between our views is just how much gain we think there is to be found in rethinking that low level architecture. If I'm understanding that right, that's a reasonable point for debate.
Not really the same; I'm only looking from the software/market side and you're looking from the hardware side. I understand where you're coming from more than before by quite a bit, but the hardware side I'm just not as concerned about. I'm a software guy, always have been.
 
  • Like
Reactions: Analog Kid

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
One thing to keep in mind: Rosetta 2 does not emulate anything x86 related, so bobcomer's comments regarding emulation and performance penalties are irrelevant to the discussion at hand. What Rosetta 2 does is translate the x86 code into Apple Silicon code prior to the first launch of the app in question. After that initial conversion, the app runs natively for all intents and purposes. This is different from the original Rosetta, which translated PPC code into x86 code at run time.

I don't think x86 will be dead (or even close to dead) within the next 10 years, let alone 25. That platform has such a grip on the PC market as a whole that it would basically require Intel and AMD to both give up on x86 entirely before it fades into oblivion, but that's where the majority of the market is. With that being said, I can see x86 losing overall market share, especially as Windows on ARM matures and the Mac continues down the Apple Silicon path. Factor in the push towards RISC-V, and the competition will only increase for the x86 platform.

Given all of the discussion over ARM switching up its licensing terms for the majority of its partners (both Apple and Samsung have completely separate agreements which are not affected in the same way Qualcomm, MediaTek, etc. have been), RISC-V might actually become the biggest competitor to x86, although it's far too early to make an assessment of ARM vs RISC-V.

Adding this here for historical interest, "DIGITAL FX!32: Combining Emulation and Binary Translation":
 

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
Agreed. Intel has typically optimized for uncompromised backwards compatibility and single-thread benchmarks. Apple has optimized for balanced performance in a constrained power envelope. The compromise in the M series is desktop single-thread performance.
The primary reason for Intel's disadvantage in multi-core is that they are still lagging behind TSMC's cutting-edge manufacturing processes. They currently simply can't put as many cores on a chip as e.g. AMD can. If they manage to catch up to TSMC, that will no longer be the case. According to Intel's roadmap, that is supposed to happen in 2025.

The thread topic is whether x86 is outdated, by which the OP meant the fully legacy-compatible x86 as it's always been. I think the answer to that is yes-- the current way Intel does things is the road to ruin. I also think that if they pivot and are able to break with their past, they might be able to keep some modernized version of x86 competitive.
I think you are vastly overestimating the impact that backwards compatibility has on x86 performance. That isn't the primary problem. Simplification would of course make life easier for Intel's designers and testers, but they will not make a radical break. That would be more than reckless given the huge amount of critical legacy software.

If that means they can't be nimble with accelerators and coprocessors, then there's a risk they'll be obsoleted by an array of single vendor market niches.
There aren't many companies that have the resources to mimic what Apple is doing.

Beyond that though, power is always the limitation. Most mainframe and server systems are naturally multithreaded, so benefit more from multicore performance than putting all the power into a less efficient fast core.
And for those kinds of workloads Intel and AMD have developed "Sierra Forest" and "Bergamo".
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
I think you are vastly overestimating the impact that backwards compatibility has on x86 performance. That isn't the primary problem. Simplification would of course make life easier for Intel's designers and testers, but they will not make a radical break. That would be more than reckless given the huge amount of critical legacy software.

It's entirely possible that I'm overestimating the impact, but the choices for why Intel is running so hot are basically process, optimization point, architecture and competence. I don't doubt the competence of their architects. I don't think a generation of process difference, or picking a different point on the power/performance tradeoff curve for the process, accounts for what we're seeing. Some here (not me) seem to think that Intel's process is already at parity with TSMC's; for them, the only remaining answer is architecture.

AMD fabs on TSMC's process, and I know they're able to take the fight to Intel in a few key areas, but I don't think they're matching what Apple has shown either when it comes to thermally efficient computing power.

By the same token, I think you're overestimating the impact of replatforming on critical legacy software. How much do you think is out there and what does it do? If it's so old that the changes break hardware compatibility, it's old enough that the change in underlying hardware performance will hide the inefficiency of ISA translation and emulation.

Those customers might balk at changing architectures even if Intel or Microsoft can assure them of compatibility, but are those customers important enough to Intel and Microsoft that they're willing to burden themselves and everyone else with maintaining all this in their products?

The primary reason for Intel's disadvantage in multi-core is that they are still lagging behind TSMC's cutting-edge manufacturing processes. They currently simply can't put as many cores on a chip as e.g. AMD can. If they manage to catch up to TSMC, that will no longer be the case. According to Intel's roadmap, that is supposed to happen in 2025.
And for those kinds of workloads Intel and AMD have developed "Sierra Forest" and "Bergamo".

It's not just core count, but thermal limits. There's only so much heat you can pull out of the silicon. Smaller works against you if it's still running hot.

Sierra Forest, Intel 18A: it's nice to have on a roadmap, but Intel's problem over the last decade has been an inability to execute to the roadmap and deliver on expectations. Maybe this CEO can do it. I hope so. I don't expect so.

There aren't many companies that have the resources to mimic what Apple is doing.

There don't have to be many. In reality, there only needs to be one other than Apple. Apple won't sell their processors to OEM PC makers, but if someone does then the landscape shifts in a way it hasn't for 20 years.

Qualcomm, Samsung, AMD and Marvell all have the means. Or a fabless unicorn startup somewhere...
 
  • Like
Reactions: psychicist

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
It's hard to directly compare how Intel or AMD approach processor design to how Apple does it with the M-series SoCs. AMD is ahead of Intel in terms of process, since they rely on TSMC instead of trying to run their own foundries. In fact, dropping their foundry business was one of AMD's brightest moves. Intel has started to explore having outside parties fabricate their CPUs because of the issues they have had with yields and moving to smaller process nodes. Intel even went so far as to adopt a big.LITTLE-style design for its recent Core CPUs (starting with 12th gen), similar to what ARM-based systems have used for years. Meanwhile, AMD (for now) is still relying on a single core type, although that might change when AM6 replaces AM5 as their primary platform. What's interesting is that AMD actually beats Intel in performance per watt, despite not running any efficiency cores. Part of that is using a smaller process node than Intel, but part is also inherent to AMD's design philosophy for the Ryzen series as a whole.
 

xraydoc

Contributor
Oct 9, 2005
11,019
5,484
192.168.1.1
Long thread, and I'm just coming in on page 6. Too much to read every post, but I think for many applications, x86 will be dead in a decade. There will be few laptops running x86, I predict. I think most will be running on some kind of ARM variant. Compact all-in-one PCs will likely be on the same trajectory.

Where power consumption doesn't have as much of an impact, like for servers and high-end desktop workstations, then there will probably be x86 variants still being used.

I predict a much more fragmented code-base situation compared to what we're used to on the Mac. While it took really only about two years before we almost never had to think about which apps are M-series native and which are Intel, I think the situation will be a lot messier on the Windows side of things.
 
  • Like
Reactions: psychicist

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
Long thread, and I'm just coming in on page 6. Too much to read every post, but I think for many applications, x86 will be dead in a decade. There will be few laptops running x86, I predict. I think most will be running on some kind of ARM variant. Compact all-in-one PCs will likely be on the same trajectory.

Where power consumption doesn't have as much of an impact, like for servers and high-end desktop workstations, then there will probably be x86 variants still being used.

I predict a much more fragmented code-base situation compared to what we're used to on the Mac. While it took really only about two years before we almost never had to think about which apps are M-series native and which are Intel, I think the situation will be a lot messier on the Windows side of things.

If you look at Microsoft's existing forays into ARM-based software and hardware, it's already messy. Somehow Microsoft took the same mindset as Apple with respect to cutting backwards compatibility for ARM-based systems, but in a completely different fashion. I'm sure some of that can be attributed to Apple being a hardware and software company from day one, while Microsoft has primarily been software first since its inception. With that being said, it's still odd to see them intentionally take a wildly divergent path towards a similar goal as Apple.
 
  • Like
Reactions: psychicist

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
If you look at Microsoft's existing forays into ARM-based software and hardware, it's already messy. Somehow Microsoft took the same mindset as Apple with respect to cutting backwards compatibility for ARM-based systems, but in a completely different fashion. I'm sure some of that can be attributed to Apple being a hardware and software company from day one, while Microsoft has primarily been software first since its inception. With that being said, it's still odd to see them intentionally take a wildly divergent path towards a similar goal as Apple.
Microsoft doesn't really have strong incentives to push for ARM. For Apple it makes a lot of sense because (1) it gives them a unified CPU platform for both mobile devices and computers, and (2) it gives them full control over their vertically integrated hardware/software stack and allows them to customize the CPU for their specific software and target segments. Neither of those apply to MS; the success of Windows is heavily dependent on the PC OEMs, for whom a complete platform change would incur a lot of cost with no obvious business benefit, given that they already dominate the PC market with the current platform.

A player with a stronger incentive might be Qualcomm. They'd love to get a foothold in the low-end PC market. But it's unclear if they can compete with Intel/AMD.
 

Kazgarth

macrumors 6502
Oct 18, 2020
318
834
Microsoft doesn't really have strong incentives to push for ARM. For Apple it makes a lot of sense because (1) it gives them a unified CPU platform for both mobile devices and computers, and (2) it gives them full control over their vertically integrated hardware/software stack and allows them to customize the CPU for their specific software and target segments. Neither of those apply to MS; the success of Windows is heavily dependent on the PC OEMs, for whom a complete platform change would incur a lot of cost with no obvious business benefit, given that they already dominate the PC market with the current platform.

A player with a stronger incentive might be Qualcomm. They'd love to get a foothold in the low-end PC market. But it's unclear if they can compete with Intel/AMD.
The future is VR/AR (VR to replace the desktop, AR glasses to replace mobile phones).

Microsoft has every interest in remaining relevant in both markets, and the only way to do so is to fully adopt ARM.
(They missed out on the mobile-phone era, so they are working overtime not to miss out on the post-mobile-phone era.)

They already have the SQ1/SQ2 ARM chips from Qualcomm. And it will get better once the Nuvia team releases their first silicon.

Edit: Their HoloLens also uses Qualcomm ARM silicon.
 
Last edited:

ditroia

macrumors newbie
Sep 10, 2012
9
0
Adelaide, Australia
My mate is the IT tech for a primary school.
I don't think you can yet get any ARM laptops for under $500 Australian with a 4+ core CPU, 8GB RAM and a 128/256GB NVMe SSD.

For the price of one M1 MacBook Air [at education pricing], you can get 2-3 such laptops.
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
In the mid '80s, Apple introduced the (premium) consumer-level GUI (which was not their own invention per se but a major refinement of the concept) – within a decade, it had become extremely difficult to buy a consumer-level computer that did not come with a GUI. In the mid/late '90s, Apple abandoned the floppy drive and various arcane ports in favor of USB – over the following decade, floppies became increasingly scarce and USB became the de facto standard. In the late aughts, they introduced a simple multi-touch-screen smartphone, and it became the fundamental design principle for nearly all smartphones (it may be possible to obtain a phone with one of those hinky keyboard things, but they are very uncommon).

A few years ago, after a decade-and-a-half flirtation with them, Apple abandoned the mess that is Intel/AMD's x86-64 architecture. Apple has had some product flops, but this move does not look like one of them. This may be the onset of another Apple-driven major trend that could see x86 marginalized – you can still get some of the things Apple has killed over the decades, but it tends to take extra effort.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
In the mid '80s, Apple introduced the (premium) consumer-level GUI (which was not their own invention per se but a major refinement of the concept) – within a decade, it had become extremely difficult to buy a consumer-level computer that did not come with a GUI. In the mid/late '90s, Apple abandoned the floppy drive and various arcane ports in favor of USB – over the following decade, floppies became increasingly scarce and USB became the de facto standard. In the late aughts, they introduced a simple multi-touch-screen smartphone, and it became the fundamental design principle for nearly all smartphones (it may be possible to obtain a phone with one of those hinky keyboard things, but they are very uncommon).

A few years ago, after a decade-and-a-half flirtation with them, Apple abandoned the mess that is Intel/AMD's x86-64 architecture. Apple has had some product flops, but this move does not look like one of them. This may be the onset of another Apple-driven major trend that could see x86 marginalized – you can still get some of the things Apple has killed over the decades, but it tends to take extra effort.

At the time Apple ditched the floppy, that medium was already on the way out as optical media and even external storage devices such as the SyQuest and Iomega Zip drives became more widely accepted, due in large part to their ability to hold significantly more data. USB largely eliminated the need to manually set up peripherals such as mice, printers, and external storage, and even Microsoft jumped on the "Universal Plug and Play" bandwagon once Apple went there with the iMac.

x86 is an entirely different matter though. Given the sheer dominance of x86 in the electronics space, it would take a seismic shift from the PC side of the industry to change gears from x86 based hardware to ARM or RISC-V. Inertia is real, and it would be extremely difficult to change course among the major players on the PC side, such as HP, Dell, and Lenovo. Microsoft themselves would also have to make WoA their priority instead of x86 variants of Windows, and progress on that side of the market has been (arguably) glacially slow from Redmond.
 

huge_apple_fangirl

macrumors 6502a
Aug 1, 2019
769
1,301
A few years ago, after a decade-and-a-half flirtation with them, Apple abandoned the mess that is Intel/AMD's x86-64 architecture. Apple has had some product flops, but this move does not look like one of them. This may be the onset of another Apple-driven major trend that could see x86 marginalized – you can still get some of the things Apple has killed over the decades, but it tends to take extra effort.
Apple was never wed to x86 the way Windows is, though. Apple has always been flexible with chip architecture.
 
  • Like
Reactions: Unregistered 4U