
Will the x86 architecture be fully outdated in 10 years?

  • Yes

    Votes: 38 13.1%
  • No

    Votes: 195 67.2%
  • Possibly

    Votes: 57 19.7%

  • Total voters
    290

spiderman0616

Suspended
Aug 1, 2010
5,670
7,499
As are office computers. They're still the majority in that space even if non-office users don't see them. It's like people around here don't even see 90% of the market...
Many here think the world is much smaller than it really is and that the MacRumors forums = everyone. Even the biggest threads here are more or less a storm in a teacup relative to the number of average Apple customers.
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
At the moment, they're lagging TSMC on process …
No, not really. Intel "N7", the node used for Alder Lake, is about equivalent to TSMC's "N4", the node used for Apple's M2, and a fair bit denser than TSMC's "N5", the M1 node. Apple does not really have a process advantage over Intel, yet their devices use considerably less juice to do the same work.

TSMC is based in Taiwan, Intel is based in California and Samsung is based in Korea: their node numbers have different meanings, almost as though they originate in places that speak different languages.
 
  • Like
Reactions: prefuse07

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
"Fully optimizing" isn't a one-dimensional process. You optimize designs with different goals. Intel has historically optimized for maximum performance and treated mobile as more of an afterthought (which is why they missed the boat in the smartphone market 15 years ago), while the M-series was derived from a mobile device CPU that is tailored for Apple's very specific needs (and makes some compromises elsewhere).

Agreed. Intel has typically optimized for uncompromised backwards compatibility and single thread benchmarks. Apple has optimized for balanced performance in a constrained power envelope. The compromise in the M series is desktop single thread performance.

What I meant, and perhaps wasn't fully expressing, is that experience teaches a lot about where the inefficiencies are in a design. Intel has had more time to squeeze those out and Apple and the Arm world still probably have some fat they can cut. I think Apple, at least, is probably pretty far down the path though...

"Lunar Lake" is Intel's first attempt in a long time to design a mobile-first CPU, and (if they can pull it off on schedule) will also narrow the gap to TSMC's cutting edge manufacturing process.

But it's still vaporware and will have to compete not with where Arm is today but where it will be in a couple years.

True, this potentially allows them to move faster. But (coming back to the thread topic) it also means that their CPUs will remain a single-vendor market niche.

The thread topic is whether x86 is outdated, by which the OP meant the fully legacy-compatible x86 as it's always been. I think the answer to that is yes-- the current way Intel does things is the road to ruin. I also think that if they pivot and are able to break with their past, they might be able to keep some modernized version of x86 competitive.

If that means they can't be nimble with accelerators and coprocessors, then there's a risk they'll be obsoleted by an array of single vendor market niches.

You can't just scale the power and clock frequency if the chip wasn't designed for it.

Of course not. Why would anyone think they could?

The new CEO started this turnaround plan a couple of years ago, and they are pumping enormous amounts of money into it.

Yep. Here's hoping they pull it off, but it will take a while to know, and there's no end of pressure to put a good face on whatever's really happening. Maybe after a long history of delays and missed expectations, they've finally got it together this time.

They are still unrivaled in terms of volumes when it comes to computers. You can't just mingle the mobile device market with the market for servers and PCs. As mentioned earlier, x86 still has somewhere around 90% market share in both segments. The M-series has much, much smaller volumes, which is probably the reason why development since the M1 hasn't been very fast.

I compared iPhones to PCs because those numbers are readily available. Apple makes AS devices beyond just their phones, and Apple is only half or less of the mobile market. There's also a growing presence of Arm in the server room.

But the more important number that you cut out of your quote is the amount of money available for R&D. Both Apple and Qualcomm have Intel overmatched, which doesn't bode well.

An underestimated advantage in favor of x86 is that Intel (and to a lesser extent AMD) is nurturing a big ecosystem of standardized platform components and software to enable them, which makes it possible for a large number of OEMs to enter the market. There is nothing comparable for ARM.

Maybe, and maybe not yet. There's no reason for it if Apple is the only company making Arm PCs. But Apple isn't Intel's competition here-- Qualcomm and Samsung are. The system will follow the processor trends.

Are they though? There is still no ARM CPU that can keep up with their (and AMD's) "big iron" CPUs in terms of raw performance, even though some of the candidates run on a currently still superior TSMC manufacturing process.

Again, the only benchmark the M series lags on is desktop single-thread performance. There's no apparent technical reason for this, given how much power x86 has to burn to match M-series performance.

Beyond that though, power is always the limitation. Most mainframe and server systems are naturally multithreaded, so they benefit more from multicore performance than from putting all the power into a less efficient fast core.

Amazon seems to think Arm is a better bet for at least some of their big iron. Graviton, ThunderX, and Altra are all making headway in the server room.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
No, not really. Intel "N7", the node used for Alder Lake, is about equivalent to TSMC's "N4", the node used for Apple's M2, and a fair bit denser than TSMC's "N5", the M1 node. Apple does not really have a process advantage over Intel, yet their devices use considerably less juice to do the same work.

TSMC is based in Taiwan, Intel is based in California and Samsung is based in Korea: their node numbers have different meanings, almost as though they originate in places that speak different languages.

The numbers started becoming meaningless when the definition of "minimum feature size" and "gate length" became less well defined...

I don't think density tells the whole story. I think the TSMC process is still ahead of Intel at this point. Intel lost too many years at 14 and 10.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
I can't rattle off a list of names, mostly because it seems to be getting hard to Google anything about this. I suspect DEC's lack of success in this area is why it hasn't left a large footprint, either on the modern web or in your memory. However, I was able to find a Byte magazine article on archive.org, written in 1992 by a DEC engineer as an introduction to Alpha, and it has this at the end:


Yeah, earlier in the article they also say it's meant to be licensed.

I'm having the same problem looking for details on some of this stuff. Information on the net gets sparse going back to the 90s and Alpha doesn't make a great search keyword...
 
  • Like
Reactions: mr_roboto

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
I don't think density tells the whole story. I think the TSMC process is still ahead of Intel at this point.
Higher density means shorter leads and thinner gates, which means faster signal propagation with less juice/heat. Unless TSMC burns cleaner, less ragged leads, and/or has better geometry, better materials, or more consistent doping, I find it hard to imagine how Intel's denser process is not an advantage. The link says that iN7 is comparable to a tN4.1.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
Higher density means shorter leads and thinner gates, which means faster signal propagation with less juice/heat. Unless TSMC burns cleaner, less ragged leads, and/or has better geometry, better materials, or more consistent doping, I find it hard to imagine how Intel's denser process is not an advantage. The link says that iN7 is comparable to a tN4.1.
Everything you’re saying is true in a relative sense, but geometry is not the only parameter, so it’s exceedingly hard to make absolute comparisons. At this point I’m willing to assume that at least part of what’s holding x86 back is process-related.
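A rough way to see both points at once is the standard first-order CMOS power approximation (a textbook relation, not a node-specific figure):

```latex
% First-order CMOS power model (textbook approximation):
% dynamic (switching) term plus static (leakage) term
P_{\text{total}} \approx \alpha\, C\, V_{dd}^{2}\, f \;+\; V_{dd}\, I_{\text{leak}}
```

A denser node lowers the switched capacitance C per gate and can allow a lower Vdd, which is where the efficiency win comes from; but leakage, wire parasitics, and how hard the design pushes f are set by other process and design choices, so density alone doesn't settle the comparison.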
 

dmccloud

macrumors 68040
Sep 7, 2009
3,141
1,899
Anchorage, AK
The numbers started becoming meaningless when the definition of "minimum feature size" and "gate length" became less well defined...

I don't think density tells the whole story. I think the TSMC process is still ahead of Intel at this point. Intel lost too many years at 14 and 10.

If Intel were truly as close in terms of process size as some in this thread claim, then their CPUs would require noticeably less power to operate. Yet if anything, Intel (and, to a lesser extent, AMD) have been maintaining if not increasing the power requirements of their CPUs.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,609
8,624
If Intel were truly as close in terms of process size as some in this thread claim, then their CPUs would require noticeably less power to operate. Yet if anything, Intel (and, to a lesser extent, AMD) have been maintaining if not increasing the power requirements of their CPUs.
I think Intel’s processors, due to the requirement to support a LOT of legacy code, can’t really be efficient, as it takes a good amount of work just to figure out what mode the current instruction is in and then break it up into bits that can actually be processed. :) The process node decreases DO make a difference, but if they’re dragging around a bag of garbage to every process node, then that garbage is going to end up causing them the same problems over and over again. It’s no mistake that one of the most performant and efficient chips available today is working on a fairly clean and modernized code base.
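To make the decode point a bit more concrete, here's a toy C sketch of my own (it covers only a handful of opcodes and ignores prefixes, ModRM/SIB, and most of the real encoding) showing why x86 instruction boundaries have to be found serially, whereas a fixed-width ISA like AArch64 can point several decoders at the byte stream at once:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Toy length decoder for a tiny subset of x86 opcodes (no prefixes, no
 * ModRM/SIB). A real decoder has to resolve all of that before it even
 * knows where the next instruction starts. Returns 0 for anything else. */
static size_t toy_x86_length(const uint8_t *p)
{
    uint8_t op = p[0];
    if (op == 0x90 || op == 0xC3) return 1;   /* NOP, RET           */
    if (op >= 0x50 && op <= 0x57) return 1;   /* PUSH reg           */
    if (op >= 0xB8 && op <= 0xBF) return 5;   /* MOV reg, imm32     */
    return 0;                                 /* not modelled here  */
}

int main(void)
{
    /* mov eax,1 ; push rax ; nop ; ret */
    const uint8_t code[] = { 0xB8, 0x01, 0x00, 0x00, 0x00, 0x50, 0x90, 0xC3 };

    /* x86: the start of instruction i+1 is unknown until instruction i
     * has been (at least partially) decoded -- a serial dependency. */
    for (size_t off = 0; off < sizeof code; ) {
        size_t len = toy_x86_length(code + off);
        if (len == 0) break;
        printf("x86 insn at +%zu, %zu bytes\n", off, len);
        off += len;
    }

    /* AArch64, by contrast: every instruction is 4 bytes, so decoders can
     * be aimed at offsets 0, 4, 8, ... with no serial dependency at all. */
    return 0;
}
```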
 

dmccloud

macrumors 68040
Sep 7, 2009
3,141
1,899
Anchorage, AK
I think Intel’s processors, due to the requirement to support a LOT of legacy code, can’t really be efficient, as it takes a good amount of work just to figure out what mode the current instruction is in and then break it up into bits that can actually be processed. :) The process node decreases DO make a difference, but if they’re dragging around a bag of garbage to every process node, then that garbage is going to end up causing them the same problems over and over again. It’s no mistake that one of the most performant and efficient chips available today is working on a fairly clean and modernized code base.

The point is that the power requirements for both Intel and AMD processors would be lower if they were as close to Apple Silicon in terms of process size as some are trying to claim. That is an argument independent of the bloat inherent in the x86 and x86-64 ISAs.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
The point is that the power requirements for both Intel and AMD processors would be lower if they were as close to Apple Silicon in terms of process size as some are trying to claim. That is an argument independent of the bloat inherent in the x86 and x86-64 ISAs.
Yep, they're struggling on both the architecture and the process so they can't play their old games of using one to hide deficiencies in the other. At the moment those deficiencies compound.

The arbitrary renaming of their processes suggests they're already sensitive to the fact that the media and investors only understand nanometers. It's not beyond the realm of possibility that they are doing what they need to in order to claim parity on process size while being so badly tuned on other details that they're unable to reap the benefits of the finer line sizes.

In fairness, we can't expect them to turn the ship in a single generation. The process struggles have been apparent for a long time though so it's fair to ask if they can turn the ship at all.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,609
8,624
The point is that the power requirements for both Intel and AMD processors would be lower if they were as close to Apple Silicon in terms of process size as some are trying to claim. That is an argument independent of the bloat inherent in the x86 and x86-64 ISAs.
I don’t think so, though, because Apple Silicon doesn’t have a power-hungry decoder that takes up an inordinate amount of real estate on the chip. Get rid of that and Intel could be closer to Apple’s numbers, but get rid of that and you end up with a chip no one will buy.
 
Last edited:

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
I don’t think so, though, because Apple Silicon doesn’t have a power-hungry decoder that takes up an inordinate amount of real estate on the chip. Get rid of that and Intel could be closer to Apple’s numbers, but get rid of that and you end up with a chip no one will buy.

We're looking at a few independent contributors: TSMC's process, I believe, is still better than Intel's; TSMC's process allows Apple to trade performance for power, and it seems Apple has been leaning more towards power efficiency than all-out performance; and the x86 architecture has an enormous amount of technical debt that makes everything Intel does less efficient.

The technical debt is real. Even if their process were competitive they'd still be behind. I don't know how anyone at Intel could read that x86S document and not be screaming into their pillow about how they've somehow convinced themselves that what they've been doing is ok. As you say, the legacy stuff makes everything they do more complicated to design and then more complicated to execute. It means more logic needs to switch (burning power), more real estate (leaking power), and it also means things take more time, which means running faster to do the same work (burning power).

There are people on the forum who can probably describe exactly what those inefficiencies translate to as far as where the extraneous logic sits and how it limits prediction and cache efficiency and what other bottlenecks are introduced. I'm not familiar to that level of detail, but I'm familiar enough to develop a gut feel and my gut says architecture can't explain it all.

Even if Intel cleared their technical debt, I think they'd be at a disadvantage because of their process. In other words, if they opened as a foundry to Apple, I don't think Apple would choose to produce their Apple Silicon parts on Intel processes. The inefficiency of the architecture is a problem but I can't convince myself it fully explains how much better the M-series appears on a performance per watt basis.

As far as whether anyone would buy a simplified x86, I honestly can't see why they wouldn't. Intel makes a pretty compelling case themselves:

"Since its introduction over 20 years ago, the Intel® 64 architecture became the dominant operating mode. As an example of this evolution, Microsoft stopped shipping the 32-bit version of their Windows 11 operating system. Intel firmware no longer supports non UEFI64 operating systems natively. 64-bit operating systems are the de facto standard today. They retain the ability to run 32-bit applications but have stopped supporting 16-bit applications natively. "​

I think they've fallen victim to their own marketing message that if you don't have "Intel Inside", you can't be sure it's going to be "compatible". They've become so fundamentalist about it that they've convinced themselves that to be truly compatible means being able to trace back all the way to the dawn of the microprocessor.

We don't need that. We live in a 64-bit world with very capable translators, emulators, and virtualizers. Itanium learned that it was better to translate x86 through software than to implement it in hardware. Apple's Rosetta runs close to native x86 speeds on Arm. Windows NT provided x86 compatibility to PowerPC, MIPS, and Alpha, and while trying to fact-check myself on Alpha earlier, I'm seeing that the Alpha systems of the day were the fastest way to run x86 code because the processor was fast enough to hide the translation inefficiencies.
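To illustrate what I mean by translating in software, here's a minimal C sketch of the translate-once, run-many idea behind dynamic binary translators. This is only an illustration of the general technique, not how Rosetta 2 or any real translator is implemented, and the "guest" and "host" instruction sets here are made up:

```c
#include <stdio.h>

/* Toy dynamic-binary-translation cache: each guest block is translated once,
 * cached by its guest address, and the cached host version is reused on
 * every later execution, so the translation cost is paid a single time. */

enum guest_op { G_ADD, G_SUB, G_END };
enum host_op  { H_ADD, H_SUB, H_END };

#define MAX_BLOCKS 16
#define BLOCK_LEN  8    /* toy assumption: guest blocks fit in 8 ops */

struct cache_entry {
    const enum guest_op *guest_pc;   /* key: guest block address   */
    enum host_op host[BLOCK_LEN];    /* value: translated host ops */
    int valid;
};

static struct cache_entry cache[MAX_BLOCKS];

/* Look up the block in the cache; translate it (here a trivial 1:1 mapping)
 * only on a miss. A real translator would also optimize across the block. */
static const enum host_op *translate(const enum guest_op *pc)
{
    for (int i = 0; i < MAX_BLOCKS; i++)
        if (cache[i].valid && cache[i].guest_pc == pc)
            return cache[i].host;                 /* hit: reuse old work */

    for (int i = 0; i < MAX_BLOCKS; i++) {
        if (!cache[i].valid) {
            for (int j = 0; j < BLOCK_LEN; j++) {
                cache[i].host[j] = (pc[j] == G_ADD) ? H_ADD :
                                   (pc[j] == G_SUB) ? H_SUB : H_END;
                if (pc[j] == G_END) break;
            }
            cache[i].guest_pc = pc;
            cache[i].valid = 1;
            return cache[i].host;
        }
    }
    return NULL;   /* cache full; a real translator would evict */
}

/* "Execute" the translated host block. */
static long run(const enum host_op *host, long acc)
{
    for (int i = 0; host[i] != H_END; i++)
        acc += (host[i] == H_ADD) ? 1 : -1;
    return acc;
}

int main(void)
{
    const enum guest_op block[] = { G_ADD, G_ADD, G_SUB, G_END };
    long acc = 0;
    for (int pass = 0; pass < 1000; pass++)   /* translated once, run 1000x */
        acc = run(translate(block), acc);
    printf("acc = %ld\n", acc);               /* prints 1000 */
    return 0;
}
```

The point of the cache is that hot code pays the translation cost once and then runs as host code, which is part of why translated workloads can land surprisingly close to native.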

If Intel can make a faster, more efficient processor, then I think they can stay relevant even if it means moving away from gate-level compatibility with x86 instructions. It means giving up the ruse that only Intel can be compatible, though. The competition came up quickly, but Apple has a culture of keeping alternative ideas alive in the lab so they can be quick to pivot-- maybe Intel's been doing the same...
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,609
8,624
I think they've fallen victim to their own marketing message that if you don't have "Intel Inside", you can't be sure it's going to be "compatible". They've become so fundamentalist about it that they've convinced themselves that to be truly compatible means being able to trace back all the way to the dawn of the microprocessor.
Agreed. That, plus, if they ever try to do “something new” that’s more performant and efficient but marginally not backwards compatible, AMD will be sitting there ready to eat their lunch. Well, they’d be READY to, but they can’t really supply solutions in the numbers that the world would want, so there’s that.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
Agreed. That, plus, if they ever try to do “something new” that’s more performant and efficient but marginally not backwards compatible, AMD will be sitting there ready to eat their lunch. Well, they’d be READY to, but they can’t really supply solutions in the numbers that the world would want, so there’s that.
This is all upside for AMD, I think. The markets clearly think so...
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
@bobcomer, the reaction button is a bit of a blunt instrument on a long post and it's rather safe to laugh but not venture an opinion of your own for review. I'd be interested in hearing where your opinion differs and why.
 

mi7chy

macrumors G4
Oct 24, 2014
10,619
11,293
Actually, it's more of an upside for Intel if it's intended for the data center, which dropped 32-bit OS options with Windows Server 2012 in 2012 and Red Hat 7 in 2014. It's just wasted silicon space for most modern data centers.
 
Last edited:

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
@bobcomer, the reaction button is a bit of a blunt instrument on a long post and it's rather safe to laugh but not venture an opinion of your own for review. I'd be interested in hearing where your opinion differs and why.
It was very intentional; your ideas on what's behind and what's not just don't fit, especially the suggested "remedy" for Intel. It's like you don't understand the major laptop/desktop market and want to go full theory-driven and ignore what the market needs. The real world doesn't work like that. I've posted what I thought many times in this forum, no need to go any further.

The continuous Intel/AMD bashing bothers me, mainly because I know that I, and a majority of the market, need them. The same can't be said for Apple. (I just want them.)
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
It was very intentional; your ideas on what's behind and what's not just don't fit, especially the suggested "remedy" for Intel. It's like you don't understand the major laptop/desktop market and want to go full theory-driven and ignore what the market needs. The real world doesn't work like that. I've posted what I thought many times in this forum, no need to go any further.

The continuous Intel/AMD bashing bothers me, mainly because I know that I, and a majority of the market, need them. The same can't be said for Apple. (I just want them.)

Do you really need 16-bit modes supported in hardware?

The question is what do you and the majority of the market need? Do you need Intel flavored transistors? No. Do you need continuity for running legacy x86 code? Yes. Does it really matter in the end what hardware that code is running on? x86 code is being hardware-translated to micro-ops anyway; would it bother you if that translation happened in software and the micro-ops mapped to the Arm ISA? Would you be more comfortable if that software and the core it executed on were made by Intel?

In the end, is there more to consider than just having your programs run as quickly as possible and without errors?

Apple and Intel aren't competitors. Apple was an Intel customer; now they aren't. This conversation has little to do with Apple beyond the fact that Apple had the R&D budget to show the weaknesses in x86. The competition will come from elsewhere: Qualcomm, Marvell, Samsung, quite likely AMD.

It's not bashing to say Intel is struggling-- that doesn't even seem to be in dispute by Intel themselves. There's more than a little schadenfreude mixed in because Intel's arrogant bullying hasn't made them many friends, but while some humbling is welcome, it would be a shame to see a titan of the industry fade away.
 
  • Like
Reactions: psychicist

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
The technical debt is real. Even if their process were competitive they'd still be behind. I don't know how anyone at Intel could read that x86S document and not be screaming into their pillow about how they've somehow convinced themselves that what they've been doing is ok. As you say, the legacy stuff makes everything they do more complicated to design and then more complicated to execute. It means more logic needs to switch (burning power), more real estate (leaking power), and it also means things take more time, which means running faster to do the same work (burning power).

The reality seems to be that the x86 ISA penalty is on the order of 3~5% of the power envelope. Once the instruction stream is parsed and diced into μops, running them costs about as much as it would on a clean ISA. The back end can add some complexity, as the μops have to be collected and properly ordered for retirement – but a clean architecture has to do this step as well, especially if it is ridiculously out-of-order.

The x86 designs use a μop cache that lets them capture a small loop inside the pipe, bypassing repeated instruction decoding. And code does tend to spend a lot of its time in small loops, so this is a worthwhile optimisation (and probably one thing that keeps the ISA penalty as low as it is).

Perhaps the biggest legacy penalty is the register architecture. The 8086 was designed to do much of its work in memory, and to that end there are a lot of dedicated-function registers that are used to do things that cannot be done with just any random register. ARM AArch64 is designed to do as much work as possible inside the register file, which is faster than doing a lot of work with memory-based operands. A simple subroutine, for instance, may be able to do its work without ever touching the stack, which is a very good thing.
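As a contrived C illustration of that last point (whether a given compiler actually avoids the stack depends on the optimization level, but the shape is typical): under the AArch64 procedure call standard the first few integer arguments arrive in registers and the result is returned in a register, so a small leaf routine like this normally compiles to a handful of register-only instructions with no loads or stores at all.

```c
#include <stdio.h>

/* Toy leaf function: on AArch64 the three arguments arrive in w0..w2 and the
 * result leaves in w0, so with no spills an optimizing compiler typically
 * emits no stack traffic for it whatsoever. */
static int clamp_add(int a, int b, int lo)
{
    int s = a + b;
    return (s < lo) ? lo : s;   /* pure register arithmetic */
}

int main(void)
{
    printf("%d\n", clamp_add(3, 4, 10));   /* prints 10 */
    return 0;
}
```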

Compilers do tend to optimise x86 object code to flow as smoothly as possible, and Intel executives have asserted that this in and of itself is enough to erase the RISC advantage. And it does work pretty well, but if you are optimising out a lot of your neato features, that functionality is just sitting there idling, and the ISA still has to implement it, for logical symmetry. And the best compilers cannot make up for the weaknesses of a half-century-old design ethos. x86 is just going to cost more to run, there is no way around that.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
The reality seems to be that the x86 ISA penalty is on the order of 3~5% of the power envelope.

Do you happen to have a reference? That number seems low to me, but your phrasing makes it sound like you have more than just intuition behind it...
 
  • Like
Reactions: psychicist

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Do you really need 16-bit modes supported in hardware?
Not on my user PCs, but we have embedded PCs that run/monitor industrial machines, so the answer would be yes.

The question is what do you and the majority of the market need?
Windows compatibility, and that means x86 compatibility. That's not changing in 10 years, or even 20.
Do you need Intel flavored transistors?
Whatever that is, but I suspect no, just x86 compatibility.

Does it really matter in the end what hardware that code is running on? x86 code is being hardware-translated to micro-ops anyway; would it bother you if that translation happened in software and the micro-ops mapped to the Arm ISA? Would you be more comfortable if that software and the core it executed on were made by Intel?
Of course it matters to a degree: AMD runs x86 code and that's fine, and useful for me, but ARM, RISC-V, whatever else, they don't run x86 code. As long as it ran as well as a real Intel processor, I'd be fine with whatever hardware is under it, but nobody but AMD and Intel can do that now. Emulation is tough and slow in the software layer and always will be. I wouldn't buy an ARM processor from Intel to replace an x86 processor any more than I'd replace a PC with a Mac for running x86 software. I have no love of Intel; they're just an appliance maker these days, and things that get to that level are darned hard to replace except by a similar appliance.
In the end, is there more to consider than just having your programs run as quickly as possible and without errors?
No, there's not. PCs (for businesses like ours) don't use enough power to worry about it, and software on new platforms costs huge dollars; the balance isn't even close. We use so much more electricity on industrial equipment that PCs come out to almost a rounding error in comparison.

Now, businesses that are all PCs, yes, they might want more efficiency, but they also have the cost of rewriting their software to balance against it. It's not an easy choice to say electrical efficiency is the most important thing.

Apple and Intel aren't competitors. Apple was an Intel customer; now they aren't. This conversation has little to do with Apple beyond the fact that Apple had the R&D budget to show the weaknesses in x86. The competition will come from elsewhere: Qualcomm, Marvell, Samsung, quite likely AMD.
The original question asked if Intel would be gone in 10 years, and the only way that happens is if something else replaces it. That isn't Apple, not with their total lack of backwards compatibility. That is how Intel/Windows got to be 90% of the market after all. Nobody yet has really stated a real weakness for Intel, and Qualcomm, Marvell, and Samsung are even less of a competitor than Apple. AMD is more of a co-host than anything, and as I said above, if they carry on x86, and not Intel, well, it makes no difference to me. I'm a software guy and all I'm concerned about is the software, with a few driver exceptions...
It's not bashing to say Intel is struggling
It is when they aren't currently struggling.
 
  • Like
Reactions: falainber

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
It is when they aren't currently struggling.

Gelsinger, Intel CEO:

“We didn’t get into this mud hole because everything was going great. We had some serious issues in terms of leadership, people, methodology, et cetera that we needed to attack.”​

we have embedded PCs that run/monitor industrial machines, so the answer would be yes.
They're running Intel x86 processors in 16-bit mode?! Certainly not a modern Core processor, I'd hope... I'd be curious what fraction of the x86 market this accounts for, to make it worth carrying this mode forward.

Windows compatibility, and that means x86 compatibility. That's not changing in 10 years, or even 20.
Windows already runs on Arm. It used to run on PowerPC, MIPS, Alpha, and Itanium. And that was the old Microsoft. There's no reason the new, collaborative Microsoft can't run on another architecture in the future. If Microsoft thinks they're losing an edge by being tethered to x86, they'll support alternatives. It'll be a while before they drop x86, but they're not going to let Intel drag them down.

Emulation is tough and slow in the software layer and always will be.

https://www.tomsguide.com/news/macbook-pro-m1-benchmarks-are-in-and-they-destroy-intel

"on the PugetBench Photoshop test — which performs 21 different Photoshop tasks, three times per run — the M1 Air (653) and Pro (649) beat the XPS 13 (588). Again, though, this test isn't optimized for Apple Silicon — it's an Intel-based test running through Rosetta 2"​

I'm not trying to start a dueling benchmark thread, or argue individual benchmarks and hardware details forever, but with two contemporary machines running an x86 benchmark, emulation won handily, all while generating significantly less heat-- even if it's not better in all cases, it's certainly competitive. If the target platform weren't Arm but a faster, slimmed-down subset of x86? I have to think it would do even better.

The original question asked if Intel would be gone in 10 years, and the only way that happens is if something else replaces it. That isn't Apple, [...] and Qualcomm, Marvell, and Samsung are even less of a competitor than Apple. AMD is more of a co-host than anything
No, it isn't Apple. I don't know what grounds you have to dismiss the others though. Marvell has been heavily invested in Arm since they bought the StrongARM IP, Qualcomm has the balance sheet to outspend Intel, and Samsung has a bit of both.

AMD knows the x86 market inside and out, they have the OEM contacts, and they're willing to take risks to gain market share. I could see them breaking the mold.

There's already a lot of Arm Linux happening, but if Microsoft signaled an intent to broaden their Windows on Arm side hustle? Investment would pour into Arm-based PCs.
 
Last edited:
  • Like
Reactions: psychicist

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
Do you happen to have a reference? That number seems low to me, but your phrasing makes it sound like you have more than just intuition behind it...

Ok, here is a confusing graphic that links to a rather badly-written page, which I believe is the origin of the claim that I was referencing. The number I heard seems to come from farther down the page, where they say that when the decoder kicks in (when the μop cache cannot be used), it adds something like 4% to the power draw, which does not line up with this graphic.

My interpretation of the graphic is:
  • "uncore" refers to the power draw of the processor support logic (this seems perhaps a bit low)
  • "cores" is the total draw of each of the cores (probably P-cores)
  • "execution units" is part of core draw
  • "instruction decoders" is likewise part of core draw
  • caches (L1,L2,L3) are not part of either core or uncore draw

The graphic would suggest that the decoders add around 18% of the power draw (averaging the ~8% for FP with the ~20% for integer, with the consideration that integer will tend to get much more use than FP, most of the time).
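Spelling out that rough average (the 80/20 integer/FP weighting below is my own assumption, not something stated on the page):

```latex
% assumed 80/20 integer/FP weighting (an assumption, not from the page)
0.8 \times 20\% + 0.2 \times 8\% \approx 17.6\% \approx 18\%
```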

Which, having looked at this mess, leaves me curious as to what the reality is. Clearly the small number I heard tossed about is not very accurate, and this source is a very long way from reliable. I apologize for repeating casual hearsay.

It does seem unlikely that an x86 core is actually using a sixth of its power to interpret code, but maybe it really is. Decoding ARM instructions cannot cost more than a fraction of 1% of the power a core uses, though the elaborate trip to one of the execution queues is kind of expensive.
 
Last edited: