
jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
It's an M1+ basically. Nothing new, just turned up a notch. Calling it a stop-gap before the genuinely 'new' chip arrives doesn't seem entirely unfair. Somewhat like how Intel's tick-tock worked, before they got stuck on the same generation for aaaaaages.
In all fairness, Intel got stuck on the process node, not the architecture. However, their architectures did not provide the expected performance gains.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
It's an M1+ basically. Nothing new, just turned up a notch. Calling it a stop-gap before the genuinely 'new' chip arrives doesn't seem entirely unfair. Somewhat like how Intel's tick-tock worked, before they got stuck on the same generation for aaaaaages.

I wouldn't say it's the same as Intel's tick-tock. The E-cores got a massive upgrade, the cache system was improved, the GPU got new features, the NPU was redesigned, etc. Sure, the P-cores didn't see that much change, but they're just one component among many.
 
  • Like
Reactions: Tagbert

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I have heard from many people that Microsoft's insistence on keeping up support for legacy code bases is holding back CISC/x86. I'm speculating that, in order to be competitive, AMD has to support those legacy items as well.

I honestly don't know how much this is the case. It's not that legacy code holds back x86; x86 itself is based on a legacy design. If the legacy stuff were removed, it wouldn't be x86 anymore.

At the end of the day, once an x86 CPU converts x86 instructions to its internal micro-ops, it's probably not that different in spirit from the representation Apple uses internally. Sure, x86 has some inherent disadvantages like more expensive decode, but so far it only seems to affect power consumption, and AMD, for example, shows that it can still be done fairly efficiently.
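
A toy sketch of that decode-cost point (the encodings below are invented, nothing to do with real x86 or ARM64 bit layouts): with a fixed instruction width every instruction boundary is known up front, so several decoders can work in parallel, while with variable-length encodings each boundary is only known after the previous instruction's length has been decoded.

# Toy decoders with made-up encodings, just to illustrate the boundary problem.

def decode_fixed_width(code: bytes, width: int = 4):
    # Every start offset is 0, width, 2*width, ... known without decoding anything.
    return [code[i:i + width] for i in range(0, len(code), width)]

def decode_variable_length(code: bytes):
    # Each start offset depends on decoding the previous instruction's length first.
    insts, i = [], 0
    while i < len(code):
        length = 1 + (code[i] & 0x03)   # pretend the length lives in the low bits
        insts.append(code[i:i + length])
        i += length
    return insts

print(decode_fixed_width(bytes(range(8))))                      # [b'\x00\x01\x02\x03', b'\x04\x05\x06\x07']
print(decode_variable_length(bytes([0x02, 0xAA, 0xBB, 0x00])))  # [b'\x02\xaa\xbb', b'\x00']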

I think most of it indeed boils down to money and effort. Apple's decisive advantage is that their chips don't need to compete on the free market. If Apple sold them directly, I have little doubt they would be a commercial failure: it's a very expensive product with relatively low volume. Whatever savings Apple gets from not buying Intel anymore are likely not enough to cover the costs of R&D.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
It would be cool to see 3 nm chips in Apple's late 2022 Macs. However, in TSMC's July 14, 2022 Earnings Call, they said revenue contribution (which means shipping to customers) for N3 is expected to start in the first half of 2023:

[Attached image: slide from TSMC's July 14, 2022 earnings call on N3 revenue timing]


 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
I honestly don't know how much this is the case. It's not that legacy code holds back x86; x86 itself is based on a legacy design. If the legacy stuff were removed, it wouldn't be x86 anymore.

At the end of the day, once an x86 CPU converts x86 instructions to its internal micro-ops, it's probably not that different in spirit from the representation Apple uses internally. Sure, x86 has some inherent disadvantages like more expensive decode, but so far it only seems to affect power consumption, and AMD, for example, shows that it can still be done fairly efficiently.

I think most of it indeed boils down to money and effort. Apple's decisive advantage is that their chips don't need to compete on the free market. If Apple sold them directly, I have little doubt they would be a commercial failure: it's a very expensive product with relatively low volume. Whatever savings Apple gets from not buying Intel anymore are likely not enough to cover the costs of R&D.
It's been probably 5 years since I heard that reasoning, and in the abstract I agree with you. Micro-ops are micro-ops, in theory. I wish I could speak to it more specifically, but my understanding was that the chip would have to essentially switch encode and decode "modes" in order to support functions that just aren't useful or efficient today.

To me it seems obvious that Apple is using the same methodology today in converting instructions to smaller micro-ops, but if it were as simple as letting the encoder and decoder support additional instruction sets, then why did Apple push for the death of 32-bit apps? I would personally think that it isn't just the encode and decode blocks that make the micro-ops from RISC instructions; reducing what's supported may also make the execution pipeline more efficient, and the different execution engines simpler and thus faster and more efficient. This is a space well outside of my wheelhouse.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
It's been probably 5 years since I heard that reasoning, and in the abstract I agree with you. Micro-ops are micro-ops, in theory. I wish I could speak to it more specifically, but my understanding was that the chip would have to essentially switch encode and decode "modes" in order to support functions that just aren't useful or efficient today.

To me it seems obvious that Apple is using the same methodology today in converting instructions to smaller micro-ops, but if it were as simple as letting the encoder and decoder support additional instruction sets, then why did Apple push for the death of 32-bit apps? I would personally think that it isn't just the encode and decode blocks that make the micro-ops from RISC instructions; reducing what's supported may also make the execution pipeline more efficient, and the different execution engines simpler and thus faster and more efficient. This is a space well outside of my wheelhouse.

I don't think that the legacy operations are a big problem in practice. Modern CPUs have to support these operations, but they don't put much effort into it. If a CPU hits one of those old instructions, it will simply enter a "slow decode" mode and you will suffer a performance penalty. This shouldn't affect the high-performance parts much. Maybe there is some subtle interaction that I am not aware of, but that's the basic idea.
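
A toy sketch of that fast-path/slow-path idea (the opcode names and micro-op sequences are invented for illustration, not real x86 microcode): common instructions hit a cheap decode table, while rare legacy ones fall back to a longer microcode expansion and simply run slower.

# Invented opcodes and micro-ops; only the fast-path/slow-path split matters here.

FAST_DECODE = {                # common instructions: one cheap lookup each
    "add":  ["uop_add"],
    "load": ["uop_load"],
}

MICROCODE_ROM = {              # legacy instructions: expanded into longer sequences
    "enter": ["uop_push_bp", "uop_mov_bp_sp", "uop_sub_sp"],
    "loop":  ["uop_dec_cx", "uop_cmp_cx_zero", "uop_branch_nz"],
}

def decode(op):
    if op in FAST_DECODE:      # hot path stays fast
        return FAST_DECODE[op]
    return MICROCODE_ROM[op]   # slow path is correct, just not quick

for op in ["load", "add", "loop"]:
    print(op, "->", decode(op))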

As to why Apple moved fast to abandon 32-bit software... well, that's not much of a mystery. Where x86-64 is mostly a straightforward extension of x86-32, ARM64 is a very different beast from ARM32. The instruction encoding is different, the design philosophy is different. ARM32 was developed with embedded hardware in mind, where implementations had to be simple and execution ultra-efficient. ARM64 is a fully redesigned, modern approach targeted at high-performance superscalar CPUs. And of course, there is also the software consideration: it's much easier (and less error-prone) to support only one execution model instead of dealing with multiple ones.
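
A toy sketch of the "one execution model" point (names are illustrative only, not real AArch32/AArch64 decoding): a front-end that still honours a 32-bit mode needs a mode check and a second decoder path that must be designed, verified, and maintained forever; drop the legacy mode and both disappear.

# Illustrative only: the shape of the logic, not a real decoder.

def decode_with_legacy_mode(inst: bytes, is_64bit: bool):
    if is_64bit:
        return ("a64", inst[:4])   # fixed-width 64-bit decode
    return ("a32", inst[:4])       # a whole separate legacy path to keep correct

def decode_single_model(inst: bytes):
    return ("a64", inst[:4])       # one model: no mode bit, no second path

print(decode_with_legacy_mode(b"\x01\x02\x03\x04", is_64bit=True))
print(decode_single_model(b"\x01\x02\x03\x04"))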
 
  • Like
Reactions: writerinserepeat