Back in the late '80s and early '90s, research went nuts around reduced instruction sets as a way to keep the chip itself simple and export complexity to the compiler and OS. Load-store instruction sets brought higher clock speeds but also memory bandwidth issues, which led to an explosion in caches, DMA, and a lot of other cool stuff. I'm an RTOS guy, so I defer to the chip folks on the forum for details, but from a compiler and OS perspective things got more complicated. For example, MIPS chips had two addresses for every memory location (the high address bits determined whether the access went through the cache). Weird stuff in today's landscape, but it was the wild west in terms of behaviors back then.
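A minimal C sketch of that MIPS quirk, assuming the classic MIPS32 kseg0/kseg1 memory map (the segment bases below are the standard MIPS32 ones, not something taken from the post above): the same physical byte is reachable through a cached window and an uncached window.

#include <stdint.h>
#include <stdio.h>

/* Classic MIPS32 kernel segments: both map the low 512 MB of physical memory. */
#define KSEG0_BASE 0x80000000u   /* accesses go through the cache */
#define KSEG1_BASE 0xA0000000u   /* accesses bypass the cache     */

static uint32_t cached_alias(uint32_t phys)   { return KSEG0_BASE | phys; }
static uint32_t uncached_alias(uint32_t phys) { return KSEG1_BASE | phys; }

int main(void) {
    uint32_t phys = 0x00100000u;  /* some physical address below 512 MB */
    printf("cached virtual address:   0x%08x\n", (unsigned)cached_alias(phys));
    printf("uncached virtual address: 0x%08x\n", (unsigned)uncached_alias(phys));
    return 0;
}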

At the time the 386 was king, but there was HP PA-RISC, DEC Alpha, Sun SPARC, SGI MIPS, and on and on. Everyone was getting in the game, and this started a 'CISC vs RISC' sort of marketing talking point.

I forget the name of the guy, but an Intel chip designer wrote a nice paper on the feature sets of RISC that were supposedly not available to CISC, and the conclusion was /dev/null. For example, Intel introduced on-chip caches (with the 486?).

But the one thing Intel cannot shake is backwards compatibility. That ISA is going to be on their gravestone.

So RISC vs CISC is not really a thing so much as Intel has to do x86 and it's affecting their designs. ARM had a blank slate in the modern era.
Thanks for taking the time to formulate an elaborate answer, though it does not take into consideration the context in which I had asked my question.

As far as I understood it, the offloading of complexity to the compiler was more of a side effect of the original RISC motivation; the real goal was to simplify the implementation to increase clock frequency, and because initially only a simple processor could be pipelined within the available transistor budget.

I don't quite understand your point about caches. The 68020 and 68030 had on-chip caches before the 80486, albeit very small ones (one and two 256-byte caches, respectively), and I'm reasonably sure some of the RISCs had caches before the 80486, too.

What I find interesting is that 32-bit ARM is 10 years younger than the 8086, and 64-bit ARM is 12 years younger than 64-bit x86. Why do you think the ISA will be such a problem?
 
The old meanings of RISC and CISC were kind of lost when processors went superscalar (multiple instructions running in parallel) and out of order. The simplicity of a RISC ISA over a CISC one remains, but that is not nearly as important as it used to be with modern transistor budgets.

The problem with the x86 ISA is that the decoder is more complex because of the variable-length instructions, and the extensive use of micro-ops calls for deeper pipelines. If those transistors can instead be used for wider decoding, larger out-of-order processing, and large caches, then your CPU is going to be faster. It also gives more flexibility in how to spend the power budget.
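A toy model of why variable-length decode is harder, as a hedged C sketch. This illustrates only the boundary-finding problem, not real x86 or ARM decode logic; the encoding and the length function below are made up.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical length function for some variable-width encoding. */
typedef size_t (*insn_len_fn)(const unsigned char *bytes);

/* Fixed 4-byte instructions: every decode slot knows its start offset up front,
   so a wide decoder can attack many instructions in parallel. */
static size_t fixed_start(size_t i) { return 4 * i; }

/* Variable-width instructions: the start of instruction i is only known after the
   lengths of instructions 0..i-1 have been worked out, one after another.  Real
   x86 front ends spend hardware (predecoders, boundary markers in the I-cache) to
   hide exactly this serial dependency. */
static size_t variable_start(const unsigned char *code, size_t i, insn_len_fn len) {
    size_t off = 0;
    for (size_t k = 0; k < i; k++)
        off += len(code + off);   /* each step depends on the previous one */
    return off;
}

/* Toy encoding: the first byte of each instruction is its length in bytes. */
static size_t toy_len(const unsigned char *bytes) { return bytes[0]; }

int main(void) {
    const unsigned char code[] = {2, 0, 3, 0, 0, 1, 4, 0, 0, 0};
    printf("fixed-width insn 3 starts at byte %zu\n", fixed_start(3));
    printf("variable-width insn 3 starts at byte %zu\n",
           variable_start(code, 3, toy_len));
    return 0;
}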

Modern RISC isn’t very similar to what the original proponents wanted, but it was a good stepping stone to get where CPUs are now.
 


Sorry, there really is not much of a difference other than historically. This is from the eyes of the software side (compiler/OS), so again take my viewpoint with a grain of salt. Cmaier is probably rolling his eyes at my comments ;)

CISC architectures could typically operate directly on memory and had smaller register files. RISC architectures eliminated a lot of those operate-on-memory instructions for the sake of simplicity (hence "reduced instruction set"). Say you wanted to multiply two numbers on a traditional RISC chip vs Intel. You would have to load the two numbers from RAM into registers, perform the multiply register*register into a register to hold the result, then store the result back to memory. What was one instruction on x86 becomes four smaller, simpler instructions.

made up example
CISC:
MUL memaddr1, memaddr2, memaddr3
(multiply what is in addr1 by what is in addr2 and put the result in addr3; imagine the timing and mess of doing this in one shot)

RISC:
LOAD R1, memaddr1
LOAD R2, memaddr2
MUL R1, R2, R3
STOR R3, memaddr3

One complicated instruction under CISC gets broken down into four smaller, simpler ones. This is called a Load-Store architecture and is one of the staples of all RISC architectures. It takes more instructions to do the same work, but those instructions can run through the chip faster (instructions per second), and in chasing this simplification more ideas came along for the ride:
Instruction pipelines (and the debate over how deep you should make them)
Bigger caches to feed the higher clock speeds
Superscalar designs and FP coprocessors
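A rough C version of the made-up MUL example above, with a hedged note on how it typically lowers on a load-store machine like AArch64. The function name is made up, and the exact registers and instruction choices depend on the compiler and flags.

/* On a load-store ISA this cannot touch memory except through explicit loads and
   stores, so a typical AArch64 compilation looks roughly like (not guaranteed):
       ldr w8, [x1]      // LOAD R1, memaddr1
       ldr w9, [x2]      // LOAD R2, memaddr2
       mul w8, w8, w9    // MUL  R1, R2, R3
       str w8, [x0]      // STOR R3, memaddr3
*/
#include <stdio.h>

void mul_store(int *memaddr3, const int *memaddr1, const int *memaddr2) {
    *memaddr3 = *memaddr1 * *memaddr2;
}

int main(void) {
    int a = 6, b = 7, c = 0;
    mul_store(&c, &a, &b);
    printf("%d\n", c);   /* prints 42 */
    return 0;
}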

As compilers got better (btw, this is where GCC started to take off in importance), the RISC chips started to realize these benefits over their CISC rivals, but they did not have the software catalog. CISC chips could eventually adopt a lot of the good ideas from RISC, but with limited effectiveness because they could not afford to break compatibility. For example, Intel has adopted all of the above-mentioned features in various forms over the years since.

So net result: CISC has sucked in a lot of cool stuff from RISC, but they are limited in how far they can go with it because their value to the world is in their instruction set compatibility.

-d
 
I don’t see Apple upgrading the processor every year. Most people expect their laptop to last longer than their phone. I don’t know anyone that replaces their laptop on a yearly basis. I know quite a few people that replace their phones on a yearly basis.

I understand some people will say that just because a new version is released doesn't mean people have to buy it. That's true, but if someone spends $3,500 for a laptop and then, in less than a year, one comes out that's 20% faster, it's going to cause some people to have bad feelings. That type of situation is going on in the smartphone market, but I don't think it would work in the laptop market.
Do you buy a new car every year because manufacturers bring out new versions each year?
 
There’s a big difference though. New car models are usually just very minor updates. If the 2020 car got 35 miles per gallon and had 300 hp, but the 2021 car got 50 miles per gallon and had 500 hp, then yes, some people would. It just depends on the upgrade. Perhaps if they make it a small upgrade like the iPhone it won’t be an issue for most.
 
Apple’s historical cadence for releasing massively new Macs is not that different from the typical car manufacturer’s cadence for releasing new cars. Typically they just upgrade specs (screen a little brighter, CPU a little faster) and only do redesigns every 5 years or so. Look at the MBP: from 2016-2020 not much changed.

So, no, there is not “a big difference though.”
 
There was when they went to Apple Silicon. Perhaps it will be more gradual from that point though
 
Apple’s historical cadence for releasing massively new Macs is not that different from the typical car manufacturer’s cadence for releasing new cars. Typically they just upgrade specs (screen a little brighter, CPU a little faster) and only do redesigns every 5 years or so. Look at the MBP: from 2016-2020 not much changed.

So, no, there is not “a big difference though.”
Car manufacturers do telegraph the changes though. They aren’t secret.
 
There was when they went to Apple Silicon. Perhaps it will be more gradual from that point though
They updated everything for Apple Silicon, after years of no major changes. They aren’t going to do full redesigns any time soon (other than for the machines they haven’t yet updated).
 
So how does that affect whether people are supposedly replacing their macs every year?
It wasn’t meant to? One-year leases are what folks who replace their cars often use; Apple doesn’t have an equivalent. And cars historically have been horrible with depreciation, this past year notwithstanding. You probably lose less money flipping a Mac for a new one than you would flipping a car.
 
Normally yes, a Mac will hold its value better than a car. However, I just found out the hard way that Intel Mac resale values have dropped a lot with the introduction of the current Apple Silicon range. Conversely, quite a lot of used cars are selling for more than their new price from 1-2 years ago due to supply shortages.
 
You didn’t think that a whole new design and processor would sink the used market price?
 
Of course, I expected it to affect prices...but it was a bit painful nonetheless. I was actively using the Intel MBP16 for work so I couldn't sell it before getting a new MBP.

Maybe the smarter option would have been to have sold the Intel MBP16 as soon as the M1 MBP came out, and then flipped that again when the M1 Pro/Max machines were available...but I don't know whether the losses from two sales would have been any better than selling now. I got about 43% of my purchase price back after 2 years.
 
made up example
CISC:
MUL memaddr1, memaddr2, memaddr3
(multiply what is in addr1 by what is in addr2 and put the result in addr3; imagine the timing and mess of doing this in one shot)
(...)
As compilers got better (btw, this is where GCC started to take off in importance), the RISC chips started to realize these benefits over their CISC rivals, but they did not have the software catalog. CISC chips could eventually adopt a lot of the good ideas from RISC, but with limited effectiveness because they could not afford to break compatibility. For example, Intel has adopted all of the above-mentioned features in various forms over the years since.
Hard disagree; none of the commercially important CISCs were able to import substantial amounts of RISC. If they did, the result wouldn't be compatible with old software anymore. It's not easy to transform an ISA into something it's not if there's software you need to stay compatible with.

The key to understanding the modern world - and the decades leading up to it - is that whatever RISCiness you can say there is in x86 was there all along. This was by accident, not design, but it was an essential factor in Intel's 1990s push to catch up to and surpass competing RISCs - if x86 had been significantly CISCier, they probably could not have pulled that off.

You gave the example of a MUL instruction where all three operands reside in memory. That is indeed a very CISCy thing, but it's not something x86 can do! Two-input x86 ALU instructions have only two operands. One is the source, the other is both source and destination. Only one of the two is allowed to be memory. The other has to be a register.
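A hedged sketch of that constraint from the compiler's point of view, mirroring the C example further up the thread. The function name is made up, the System V x86-64 calling convention is assumed, and the assembly in the comment is typical output rather than anything guaranteed.

/* Even a memory-to-memory multiply written in C cannot become a single x86
   instruction, because an ALU op takes at most one memory operand.  Typical
   x86-64 output (compiler- and flag-dependent) is roughly:
       mov   eax, DWORD PTR [rsi]    ; load one source into a register
       imul  eax, DWORD PTR [rdx]    ; multiply, using the one allowed memory operand
       mov   DWORD PTR [rdi], eax    ; store the result
   i.e. x86 already behaves half "load-store" for this case. */
void mul_mem(int *dst, const int *a, const int *b) {
    *dst = *a * *b;
}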

Keeping memory operands down to one per instruction (and also keeping the address generation for that one memory operand relatively simple) is very significant. Contrast with the fate of the 68K, the major 1980s CISC competitor to x86. Everyone who coded assembly loved the 68K; it was much nicer. But it also supported things like multiple memory operands in one instruction, and after the 68020 it could sometimes generate multiple memory references within a single operand thanks to how complex the addressing modes got. It is not an accident that the 68K ran out of steam and had all its major customers (including Apple) migrate to RISC CPUs by the early 1990s.
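For anyone who hasn't touched 68K, here is a small, hedged C illustration of what "multiple memory references within a single operand" means. The variable names are made up; the real 68020 memory-indirect syntax looks something like MOVE.L ([bd,An],od),Dn.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t data  = 42;
    int32_t *cell = &data;    /* a memory cell that holds a pointer             */
    int32_t **a0  = &cell;    /* "address register" pointing at that cell       */

    /* A single 68020 source operand like ([a0],0) implies TWO memory references: */
    int32_t *p  = *a0;        /* reference 1: fetch the pointer stored at [a0]    */
    int32_t val = *p;         /* reference 2: fetch the data it points to         */

    printf("%d\n", val);      /* x86 or a classic RISC needs separate loads here  */
    return 0;
}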
 
What are the real-world advantages of RISC?
It seems that the ARM ISA is getting as complex as x64, so the ARM ISA is losing some of its advantages over x64.

[Image: x64vsARM.png, a summary table comparing the x64 and ARM ISAs]

Source: https://chipsandcheese.com/2021/07/13/arm-or-x86-isa-doesnt-matter/
 
You are right. That summary table comes from a 2013 paper, and the authors used a BeagleBoard (Cortex-A8), a PandaBoard (Cortex-A9), and an Atom board to make the comparison. Those ARM boards are 32-bit.

Link to paper: https://research.cs.wisc.edu/vertical/papers/2013/hpca13-isa-power-struggles.pdf

I wonder if someone has made a similar comparison using modern ARM.

And in general there are a few issues I have with that article. For example, they nitpick the implementation of individual chips (like the A64FX, which is a narrow-purpose, throughput-oriented CPU) and portray that as a general weakness of the ARM architecture. Or take the random note about LDADD violating the principles of a load-store architecture... sorry, but how do you implement atomic operations on a "purely" load-store architecture without stalling everything? What is even more bizarre is that they single this out as a weakness of ARM while singing praises to RISC-V, which contains the exact same kind of load-modify-store atomic instructions (amoadd and friends)! And who am I to argue with Jim Keller, but I also find it weird that the article praises RISC-V as the "new clean slate design" but at the same time completely forgets to mention that RISC-V suffers from verbose instruction encoding and has to rely on instruction compression to make its code density competitive with modern ARM and x86...
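On the atomics point, a minimal C11 sketch of the kind of read-modify-write operation in question. The instruction mappings noted in the comment are typical, not guaranteed, and depend on the target and compiler flags.

#include <stdatomic.h>
#include <stdio.h>

int main(void) {
    atomic_int counter = 0;

    /* One atomic fetch-and-add: the whole point is that it is NOT a plain
       load + add + store sequence another core could interleave with.
       Typically this becomes LDADD on ARMv8.1+ (LSE), an LDXR/STXR
       load-linked/store-conditional retry loop on ARMv8.0, or AMOADD on RISC-V. */
    atomic_fetch_add(&counter, 5);

    printf("%d\n", atomic_load(&counter));
    return 0;
}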

I would love to see an in-depth, comprehensive study of the topic. We now have a lot of different designs, so it should be possible to collect enough data for fair comparisons. From my amateur perspective, a few things seem particularly striking. First of all, all modern high-performance implementations offer more or less the same performance. To me at least it seems fairly clear that the performance of a CPU is not limited by its ISA, but by the amount of money, time and talent spent on microarchitectural optimisations. But there does seem to be a power tradeoff, as x86 designs generally use at least 2-3 times more power. And power-optimised designs (Atom and friends) perform significantly worse. What I would be most curious to know is whether this is because of the historical path, and thus the specific implementation choices these companies made (x86 focusing on desktop applications and scaling down to mobile, ARM/Apple focusing on mobile applications and scaling up to desktop), or whether there is something inherent to implementing x86 that requires more power (e.g. x86 decode can be done in constant time via data-parallel techniques, but maybe it can never be as power efficient as ARM decode). These are interesting questions IMO, not bickering about abbreviations.
 