
Homy

macrumors 68030
Jan 14, 2006
2,510
2,461
Sweden
Debugger's article is one of the best I've read on the subject. Intel and AMD will soon fall behind because they simply cannot match the pace of technological development that Apple has set. Both the x86 architecture and their entire business model have limitations that can, to some extent, only be solved by switching to ARM. Even then they would not have the same control and the same opportunity for innovation.
 
Last edited:

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,900
Anchorage, AK
The underlying architecture itself is a major differentiator between Apple's chips and the x86 scene. That Engadget page lightly touches on the width of the M1, but doesn't go quite far enough in detailing how Apple did it and why it matters. The easiest way I can explain it is entering a toll road such as the Pennsylvania or Florida Turnpike: if you have only four booths open, you process traffic at a slower pace than if you had six or even eight booths open. But the other difference is logistical in nature. Since x86 uses variable-length instructions, each decoder (booth) has to find the start and end of every instruction at every point in the data stream. (This would be like having to charge each individual in the vehicle the toll instead of charging on a per-vehicle basis.) Consequently, the practical limit for the x86 architecture is four decoder units, something AMD has publicly gone on record about. Since ARM uses fixed-length instructions, it is trivial to add more decoder units, as they do not have to play hide-and-seek with the data stream to find the instructions.
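To make the decoder point concrete, here is a toy sketch in C. The encoding rule and function names are invented for illustration (this is not a real ISA): with fixed-length instructions, decoder i can jump straight to byte i*4 on its own, while with variable-length instructions the start of instruction i can only be found by walking every instruction before it.

```c
#include <stdio.h>
#include <stddef.h>

/* Fixed-length (ARM-style): every instruction is 4 bytes, so decoder i can
   compute its starting offset independently of every other instruction. */
size_t fixed_offset(size_t i) { return i * 4; }

/* Variable-length (x86-style): the length of each instruction depends on its
   own bytes, so locating instruction i is an inherently serial scan.
   Toy rule: the low nibble of the first byte says how many extra bytes follow. */
size_t variable_offset(const unsigned char *code, size_t i) {
    size_t off = 0;
    for (size_t k = 0; k < i; k++)
        off += 1 + (code[off] & 0x0F);
    return off;
}

int main(void) {
    unsigned char code[] = {0x02, 0xAA, 0xBB, 0x00, 0x03, 0x01, 0x02, 0x03};
    for (size_t i = 0; i < 3; i++)
        printf("insn %zu: fixed-length start %zu, variable-length start %zu\n",
               i, fixed_offset(i), variable_offset(code, i));
    return 0;
}
```

Real x86 front ends mitigate this with pre-decode hardware, but the serial dependency in the second function is exactly what makes the decode stage hard to widen.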
 

jjjoseph

macrumors 6502a
Sep 16, 2013
504
643
It is almost like Apple is having its final revenge in the CISC-versus-RISC argument. Apple had to abandon PowerPC because Intel was able to strong-arm a less efficient architecture into speed supremacy. Apple has always trusted science and innovation; they knew x86 CISC would hit a wall sooner than RISC, and here we are.

Every architecture has a limit, but Apple is finally able to leap over Intel and AMD, at least for the time being.

Intel has the potential to innovate, but they were riding high for so long that it seemed silly for them to fix what wasn't broken.

Apple's commitment to science, innovation, and more efficient architecture is paying off. If they can capitalize on how far they have successfully pushed their RISC ARM architecture, Apple could be dominant in silicon for everything from phones to AI supercomputers.
 
  • Like
Reactions: StumpJumper

Gerdi

macrumors 6502
Apr 25, 2020
449
301
While the terms RISC and CISC explicitly refer to the complexity of the instruction set, the fundamental differences are not really a question of complexity. The more important differences are:
1) RISC typically has fixed-length instructions, which enables decoding many instructions in parallel without dependencies between the decoders
2) RISC typically is a load-store architecture, which draws a clear distinction between instructions with and without memory references
3) RISC typically has many general-purpose registers, which gives the compiler more freedom and reduces the number of memory references (i.e. loads and stores); memory references are bad in the sense that the microarchitecture has to assume additional side effects, which limits the potential for instruction-level parallelism
4) RISC architectures typically have a weakly ordered memory model, which gives the microarchitecture much more freedom in scheduling, buffering, and reordering loads and stores (a small litmus-test sketch follows below)

x86-64 falls short on all of the above-mentioned properties compared to, say, AArch64, but also compared to other modern architectures. It is not really the complexity of the instruction set that is the issue.
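A minimal sketch of point 4, assuming a POSIX system with a C11 compiler (the variable names and the value 42 are mine, purely for illustration): this is the classic "message passing" litmus test. With relaxed atomics, a weakly ordered core such as an AArch64 one is allowed to make the data store visible after the flag store, so the consumer can legitimately print 0; x86-64's stronger ordering rules out that particular hardware reordering.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_int data_ = 0;
atomic_int flag_ = 0;

void *producer(void *arg) {
    (void)arg;
    atomic_store_explicit(&data_, 42, memory_order_relaxed); /* write the payload   */
    atomic_store_explicit(&flag_, 1,  memory_order_relaxed); /* then raise the flag */
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    while (atomic_load_explicit(&flag_, memory_order_relaxed) == 0)
        ;                                                     /* spin until flagged  */
    /* On a weakly ordered CPU this may print 0 instead of 42. */
    printf("data = %d\n", atomic_load_explicit(&data_, memory_order_relaxed));
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

Switching the stores and loads to memory_order_release/memory_order_acquire restores the guarantee on both architectures; the point is only how much reordering the hardware may do when you ask for none.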
 
Last edited:
  • Like
Reactions: Significant1

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
Long and detailed.

Yes, but it starts with "On YouTube, I watched a Mac user who had bought an iMac last year. It was maxed out with 40 GB of RAM, costing him about $4,000". That doesn't fill me with confidence in the author's technical abilities, or in the YouTuber's.

A 2020 iMac maxes out at 128GB; a 40GB memory configuration suggests that the YouTuber bought another 32GB of third-party RAM and kept the original 8GB stick the machine shipped with. That is not a smart thing to do if you care about performance, because you need matched RAM sticks in the slots. I installed two 32GB sticks in my iMac and removed the original 8GB. Adding another two 32GB sticks would improve performance further. This highlights one major advantage of the M1 Macs over the 2020 Intel iMacs: you can't screw up the RAM configuration. :)

The M1 Macs do have higher single-core performance than the 2020 iMacs, and a big part of that is the memory architecture. As dmccloud suggests, memory bandwidth is key to its performance. If the YouTuber was running tests that took advantage of that, it might account for his disbelief. A lot of workloads are single-threaded.

However, a properly configured $4,000 iMac has performance advantages over that $700 Mac mini. It would probably have at least 64GB of RAM and 16GB of video RAM, plus higher multicore and GPU performance. It would also have a better selection of I/O ports and support 10Gb Ethernet.
 