
cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Off topic, but I read somewhere that finding circuit designers is tough because of significant overlap with software engineering, which pays more.

It makes me wonder how many brilliant engineers we’ve missed out on.

Also, speaking of software, knowing the trend towards adding specific hardware acceleration to processors makes me wonder if we’ll see ******** software that’s unoptimized but compatible across architectures.

I don’t see any circuit designer/software engineer overlap. Circuit design is all about physics. The great circuit designers I know were generally pretty terrible at code. :) I was, for a while, on the circuit design team at AMD. I enjoyed it, but I didn’t design any circuits in that role.

There is overlap between architecture and software, and between logic design (as done by non-CPU design teams) and software, though. Logic designers on CPU design teams tend not to be “coding” their designs.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I read somewhere that finding circuit designers is tough because of significant overlap with software engineering, which pays more.
I have always thought that hardware engineers earn more money than software engineers because programming in HDL is more complex.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I don’t see any circuit designer/software engineer overlap. Circuit design is all about physics. The great circuit designers I know were generally pretty terrible at code. :) I was, for a while, on the circuit design team at AMD. I enjoyed it, but I didn’t design any circuits in that role.

There is overlap between architecture and software, and between logic design (as done by non-CPU design teams) and software, though. Logic designers on CPU design teams tend not to be “coding” their designs.
I meant logic design yeah, I’m not good with terminology
 

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
These things are all true. From a purely technical perspective, however, I think you’d see little performance difference between Arm and RISC-V. Back in the golden days of RISC ISA competition, the performance differences between chips based on ISAs as different as PA-RISC, MIPS, SPARC, Alpha, etc. came down to differences in microarchitecture choices, really. You’d see SPARC do great one generation, then the designers would leave and the second-tier designers would mess up the next, then they’d get a design team who did a good PowerPC and they’d be competitive again. You could actually predict which chip would win in a given year based on the microarchitectural block diagram and who the designers were this time.

The problems with RISC-V are largely due to its immaturity and the fact that it smells like a design-by-committee academic project, but in the end the difference in quality of an actual chip wouldn’t have much to do with that. The things you mention about modularity, etc. are a more serious problem, at least for some uses.
Hey Cliff,

I was wondering if you would mind having a read of this and giving your thoughts….


and his thoughts on ARM and RISC-V vector extensions:


Erik made the following statements that were of particular interest to me, especially for putting the number of required instructions for MAFD into context - although, as you or perhaps leman mentioned previously, modern desktop chips by necessity (and from a generated-code-complexity perspective) end up needing many, many more instructions than these:
The base RISC-V instruction set RV32I is just 47 instructions. That is what allows you to build RISC-V CPUs with half the die area of an ARM CPU for the embedded space. That translates into 4x lower cost when producing similar volumes. Why? Because cost is the square of the die area.
It can be tempting to conclude from this that RISC-V has no advantages in the server and desktop space. If you are going to have desktop class computing you are going to need more instructions right? Sure but not nearly as many as you think. ARMv8 (64-bit ARM) has about 1000 instructions and x86–64 has about 1500 instructions. 32-bit ARM is around 500.
A desktop class RISC-V chip does not get anywhere near that. If you add all the standard extensions, M, A, F and D you get the RV32IMAFD instruction set, abbreviated as RV32G. This only has 122 instructions. What about the 64-bit version, RV64G? That only adds 9 instructions. We could even add the vector extension V. That is around 50 instructions. Regardless, you still don’t get anywhere near the instruction count of x86 and ARM despite offering desktop and server style functionality.
However it could be argued that once you make desktop class chips the size of your ISA doesn’t matter much. That is true. The number of transistors you need to allocate to making an efficient branch predictor, out of order execution, cache and many other things will completely dwarf the transistors required for your decoders.



The viewpoint that RISC-V provides a much smaller instruction-set starting point from which to develop highly specialized, optimized and targeted co-processors / accelerator units (like the ones that Apple uses for AMX, ISP, video encode, video decode etc…) is fascinating to me.

From somebody with real world chip design experience with AMD, does any of this hold weight from your viewpoint or has he (and indeed am I) overly simplifying things?

Specifically, is there a valid technical or business use case for leveraging RISC-V as a unifying ISA to bootstrap coprocessors and accelerators on Apple’s silicon or a competitor’s silicon such as Qualcomm, AMD, Intel, etc.? Or do you feel that companies such as Apple are more likely to continue rolling their own ’non-public’ ISAs for things such as AMX, ISP, Neural Engine etc.?
I know from my days with a large blue IT company that culture and a ‘we built this, so we dog-food our own‘ attitude can play a predominant part in the technical decision-making process.

Very keen to hear your opinion and be ed-u-ma-cated … take me to school!


Thanks,

Tom.
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
These things are all true. From a purely technical perspective, however, I think you’d see little performance difference between Arm and RISC-V. Back in the golden days of RISC ISA competition, the performance differences between chips based on ISAs as different as PA-RISC, MIPS, SPARC, Alpha, etc. came down to differences in microarchitecture choices, really. You’d see SPARC do great one generation, then the designers would leave and the second-tier designers would mess up the next, then they’d get a design team who did a good PowerPC and they’d be competitive again. You could actually predict which chip would win in a given year based on the microarchitectural block diagram and who the designers were this time.

The problems with RISC-V are largely due to its immaturity and the fact that it smells like a design-by-committee academic project, but in the end the difference in quality of an actual chip wouldn’t have much to do with that. The things you mention about modularity, etc. are a more serious problem, at least for some uses.

The immaturity problem is a significant one. The developers of the big 3 C/C++ toolchains (LLVM/GCC/MSVC) have been focused on optimizing for the Intel/AMD and ARM ISAs for decades now.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
The immaturity problem is a significant one. The developers of the big 3 C/C++ toolchains (LLVM/GCC/MSVC) have been focused on optimizing for the Intel/AMD and ARM ISAs for decades now.
The nice thing is that these are all open-source projects. Many Chinese companies are adopting RISC-V for high-performance cores because they can't rely on ARM being available to them due to politics. They, along with other RISC-V adopters, will undoubtedly optimize open-source compilers.
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
A desktop class RISC-V chip does not get anywhere near that. If you add all the standard extensions, M, A, F and D you get the RV32IMAFD instruction set, abbreviated as RV32G. This only has 122 instructions. What about the 64-bit version, RV64G? That only adds 9 instructions. We could even add the vector extension V. That is around 50 instructions. Regardless, you still don’t get anywhere near the instruction count of x86 and ARM despite offering desktop and server style functionality.

This showcases one of the bigger problems of RISC-V: there is just no baseline. Even if you take the M, A, F, D extensions, there is no bit-manipulation extension, because those are proposed but not ratified. And this is just one example of missing instructions. But the absence of a baseline goes much further than just the ISA. Is it an implementation which supports machine mode only, or does it support supervisor and hypervisor modes? Does it have an MMU or MPU?

RISC-V is totally fine if your intention is to use it for your embedded system - outside of this there are mountains of issues. And these issues continue into the SW ecosystem - no mainlined JDK, .NET/Mono, Qt, etc.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,869
Hey Cliff,

I was wondering if you would mind having a read of this and giving your thoughts….


and his thoughts on ARM and RISC-V vector extensions:

Erik made the following statements that were of particular interest to me, especially for putting the number of required instructions for MAFD into context - although, as you or perhaps leman mentioned previously, modern desktop chips by necessity (and from a generated-code-complexity perspective) end up needing many, many more instructions than these:
I'm not Cliff but I think it may help you to know that this guy you've linked to is clearly a software dude who doesn't really know what he's talking about, has a fixation on RISC-V taking over the world (I've seen that before in a different software dude, some of them are really convinced that 'open source' matters in ISA design), and is reaching hard to find anything which reassures him that it will.

For example, that stuff you quoted implying it's bad that Arm has more instructions? That's not something anyone who designs CPUs would say. The quantity of instructions you have to implement isn't nearly as important as their qualities. Are there any which make it difficult to track dependencies? which are unfriendly to power consumption? or are likely to limit clock speed? These questions are much more important than whether RISC-V has 238 instructions and Arm A64 1000 (or whatever the actual numbers are).

And as far as those numbers go, how do you define instruction count? Because it turns out the reason Arm A64 has lots of different opcodes is not that there's a thousand totally unique things an A64 instruction can do, but rather because for each basic operation, A64's designers thought through whether it would be useful to have any variants. So it's really a lot fewer types of instruction, and for each type of instruction you can implement all the variants with the same hardware at a fairly low cost.
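To illustrate that point with a sketch (AArch64 assembly; the operands are just illustrative), a naive tally counts each of these as a separate instruction, yet they all drive essentially the same adder in hardware:

```asm
add  x0, x1, x2              // register + register
add  x0, x1, x2, lsl #4      // register + shifted register
add  x0, x1, #42             // register + immediate
add  w0, w1, w2, uxtb        // register + extended (zero-extended byte) register
adds x0, x1, x2              // same addition, but also sets the condition flags
```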

The only thing RISC-V is arguably better at is a single ISA encoding which scales from ultra-minimalist low performance embedded CPU cores to high performance application processor (AP) cores. In the Arm world, due to legacy reasons, there's a different ISA encoding for 32- and 64-bit code, and the 32-bit ISA is quite different from the 64-bit. (in fact, for other legacy reasons there's two completely different encodings for 32-bit code!)

The question you should be asking yourself, though, is: does this matter? It doesn't, because there's no need for deep embedded cores to run the same ISA as the main AP cores. Nobody cares if the dozens of little helper cores in an Apple M1 SoC run the same ISA as the Firestorm and Icestorm cores (iirc, they don't), because user code doesn't (and shouldn't) run on them - everything they execute is firmware supplied by Apple.

The flip side of that: real world results show that every core an application can run code on should implement exactly the same ISA features as all the rest. Anything else tends to devolve into insanity quick, so much so that Intel disabled some instructions in Alder Lake's Golden Cove cores to minimize their ISA differences with the Gracemont efficiency cores.

So, Engheim's thesis has already been proven wrong by reality. When there's a chance any given bit of code could run on different types of core, you want all those cores to provide exactly the same ISA, but otherwise you just don't care.

For a really wild example, Intel has long used embedded microcontrollers in their x86 chips to govern dynamic voltage and frequency scaling (DVFS), known as Turbo Boost Technology in Intel marketing speak. Sometimes they've used a 486 core for this microcontroller. Other times they've used a Synopsys ARC core - a deep embedded RISC descended from the SuperFX accelerator chip used in the Super Nintendo game "StarFox". (Yes, really!) Users never knew the difference, nor should they have.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,231
This showcases one of the bigger problems of RISC-V: there is just no baseline. Even if you take the M, A, F, D extensions, there is no bit-manipulation extension, because those are proposed but not ratified. And this is just one example of missing instructions. But the absence of a baseline goes much further than just the ISA. Is it an implementation which supports machine mode only, or does it support supervisor and hypervisor modes? Does it have an MMU or MPU?

RISC-V is totally fine if your intention is to use it for your embedded system - outside of this there are mountains of issues. And these issues continue into the SW ecosystem - no mainlined JDK, .NET/Mono, Qt, etc.
I've seen some argue that the problem with RISC-V is that they never learned from MIPS' mistakes - too open, too many wildly varied implementations, which just leads to too many issues when practically using the cores for anything. You can design really good cores just fine, but then getting things to actually run on them becomes a nightmare. ARM suffers from this somewhat too, of course, but RISC-V is a whole other level of wild west.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,523
19,679
Specifically, is there a valid technical or business use case for leveraging RISC-V as a unifying ISA to bootstrap coprocessors and accelerators on Apples silicon or a competitors silicon such as Qualcomm, AMD, Intel, etc…? Or do you feel that companies such as Apple are more likely to continue rolling their own ’non-public’ ISA for things such as AMX, ISP, Neural Engine etc…

I wouldn't be surprised to learn that Apple is already using small RISC-V processors somewhere in their chips. It's as you say — the big advantage of RISC-V is the fact that you can build a basic processor with very little area footprint.

I'm not Cliff but I think it may help you to know that this guy you've linked to is clearly a software dude who doesn't really know what he's talking about, has a fixation on RISC-V taking over the world (I've seen that before in a different software dude, some of them are really convinced that 'open source' matters in ISA design), and is reaching hard to find anything which reassures him that it will.

Right? I mean, I love FOSS as much as the next dev, but some people really take their religious fervour too far. It's like mentioning that something is open completely shuts off the rational part of the brain.

And as far as those numbers go, how do you define instruction count?

Given how ridiculous these numbers are, they probably took every unique instruction opcode on RISC-V but counted different instruction variants (for the same opcode) on ARM. I mean, there are already what, 10 variants of conditional branch instructions on RISC-V? Which makes the claim of "only 47 instructions" very dubious. Of course, one can always claim that it's a single instruction since all of them share the same opcode. On the other hand, ARMv8 has somewhere around 16 conditional branch instructions... which are all also encoded with the same opcode.

I mean, just have a look here: https://www.usna.edu/Users/cs/lmcdowel/courses/ic220/S20/resources/ARM-v8-Quick-Reference-Guide.pdf — I see only around 60 unique instructions in the core set. System ops and SIMD will of course push it up, but the claim of over 1000 instructions is just insane.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I mean, there are already what, 10 variants of conditional branch instructions on RISC-V? Which makes the claim of "only 47 instructions" very dubious.
RV32I has 47 instructions; 6 of them are conditional branch instructions, plus JAL.
• BEQ - Branch Equality
• BNE - Branch Not Equal
• BLT - Branch Less Than
• BGE - Branch Greater Than
• BLTU - Branch Less Than Unsigned
• BGEU – Branch Greater Than Unsigned

Source: https://maxvytech.com/images/RV32I-11-2018.pdf
Page 7 lists the branch instructions.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,523
19,679
RV32I has 47 instructions, 6 of them are conditional branch instructions plus JAL.
• BEQ - Branch Equality
• BNE - Branch Not Equal
• BLT - Branch Less Than
• BGE - Branch Greater Than
• BLTU - Branch Less Than Unsigned
• BGEU – Branch Greater Than Unsigned

Source: https://maxvytech.com/images/RV32I-11-2018.pdf
Page 7 lists the branch instructions.

Why does BGE (it's branch greater or equal, btw, not greater than) count as an instruction while BLE (branch less or equal) does not?
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
According to Digital Design and Computer Architecture: RISC-V Edition (p. 311), "There is no need for bgt or ble because these can be obtained by switching the order of the source registers of blt and bge. However, these are available as pseudoinstructions."
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,523
19,679
Code:
ble rs1, rs2, offset
expands to
Code:
bge rs2, rs1, offset

Well, sure, I was just pointing out that it all depends on how things are counted. It is perfectly valid to say that Aarch64 has only one conditional branch instruction for example, it's just that this instruction takes a condition mask as an operand. Or, you can claim that it has over a dozen, if you count mnemonics.
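For instance (a sketch in AArch64 assembly; the label is illustrative), all of these mnemonics assemble to the same B.cond opcode, differing only in the 4-bit condition field:

```asm
b.eq done    // branch if equal
b.ne done    // branch if not equal
b.lt done    // branch if signed less than
b.ge done    // branch if signed greater or equal
```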

Anyway, the bottom line is that the claim of 1000 instructions for ARM is way overblown and in reality the number of basic instructions in RISC-V and Aarch64 is comparable.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
bottom line is that the claim of 1000 instructions for ARM is way overblown
I am not so sure. Armv8-A supports three instruction sets: A32, T32 and A64. It seems that A64 has at least 400 instructions and A32/T32, at least 250.

This link downloads a pdf that lists all A64 instructions: https://documentation-service.arm.com/static/61c04c7a2183326f21771ec6?token=

This link downloads a pdf that lists all A32/T32 instructions: https://documentation-service.arm.com/static/61c04ba12183326f217711e0?token=
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
I am not so sure. Armv8-A supports three instruction sets: A32, T32 and A64. It seems that A64 has at least 400 instructions and A32/T32, at least 250.

This link downloads a pdf that lists all A64 instructions: https://documentation-service.arm.com/static/61c04c7a2183326f21771ec6?token=

This link downloads a pdf that lists all A32/T32 instructions: https://documentation-service.arm.com/static/61c04ba12183326f217711e0?token=
I think @leman was really referring to just the Aarch64 ISA and not the legacy instructions, i.e. what Apple silicon uses.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Right? I mean, I love FOSS as much as the next dev, but some people really take their religious fervour too far. It's like mentioning that something is open completely shuts off the rational part of the brain.
4chan calls them “freetards”. I’ve met with plenty of them and it’s really no use talking to them.

I adore the idea of FOSS myself, but there are pitfalls that you can’t ignore.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,523
19,679
I am not so sure. Armv8-A supports three instruction sets: A32, T32 and A64. It seems that A64 has at least 400 instructions and A32/T32, at least 250.

This link downloads a pdf that lists all A64 instructions: https://documentation-service.arm.com/static/61c04c7a2183326f21771ec6?token=

This link downloads a pdf that lists all A32/T32 instructions: https://documentation-service.arm.com/static/61c04ba12183326f217711e0?token=

And that's exactly what I mean by the double standards used by RISC-V promoters (not referring to you specifically, just how these things are often portrayed). First, one is comparing the core RISC-V (utterly insufficient for a general purpose CPU) to the entirety of ARMv8 and all of its optional features (and the documents you link additionally contain all the optional extensions). I mean, if that's what one wants to do, then at least compare it to a similarly reduced ARMv8 instruction core (which I have linked above). Then, one is comparing 32-bit RISC-V to 64-bit ARM (which will obviously have more instructions to deal with different data sizes). And finally, one only counts unique instruction encodings for RISC-V but does not count instruction aliases for ARM (again, the document you link contains aliased instructions such as MOV, which are aliases of other instructions).

Again, there is no doubt that RISC-V has fewer instructions; that is what it was designed for, after all. But it's far from the claimed 20:1 ratio, not to mention that a reduced instruction set also comes with its own drawbacks, as many RISC-V instructions generally offer less functionality.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
In which RISC ISA is it easier to have an out-of-order/wider pipeline?

It probably makes little difference. Given that RISC decoders are generally all of similar complexity, the limiting factor probably ends up being the number of registers. The more registers you have, the more complex the remapping hardware is. But unless you go crazy with the register count, it probably won’t make much of a difference.

I’m pretty confident I could take any risc design I’ve done and convert it to use any other (mainstream) RISC ISA of identical instruction width without too much difficulty. (Other than the first one I did - F-RISC, which only supported a couple dozen instructions because we had an extreme limit on the number of transistors).
 

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
I'm not Cliff but I think it may help you to know that this guy you've linked to is clearly a software dude who doesn't really know what he's talking about, has a fixation on RISC-V taking over the world (I've seen that before in a different software dude, some of them are really convinced that 'open source' matters in ISA design), and is reaching hard to find anything which reassures him that it will.

For example, that stuff you quoted implying it's bad that Arm has more instructions? That's not something anyone who designs CPUs would say. The quantity of instructions you have to implement isn't nearly as important as their qualities. Are there any which make it difficult to track dependencies? which are unfriendly to power consumption? or are likely to limit clock speed? These questions are much more important than whether RISC-V has 238 instructions and Arm A64 1000 (or whatever the actual numbers are).

And as far as those numbers go, how do you define instruction count? Because it turns out the reason Arm A64 has lots of different opcodes is not that there's a thousand totally unique things an A64 instruction can do, but rather because for each basic operation, A64's designers thought through whether it would be useful to have any variants. So it's really a lot fewer types of instruction, and for each type of instruction you can implement all the variants with the same hardware at a fairly low cost.

The only thing RISC-V is arguably better at is a single ISA encoding which scales from ultra-minimalist low performance embedded CPU cores to high performance application processor (AP) cores. In the Arm world, due to legacy reasons, there's a different ISA encoding for 32- and 64-bit code, and the 32-bit ISA is quite different from the 64-bit. (in fact, for other legacy reasons there's two completely different encodings for 32-bit code!)

The question you should be asking yourself, though, is: does this matter? It doesn't, because there's no need for deep embedded cores to run the same ISA as the main AP cores. Nobody cares if the dozens of little helper cores in an Apple M1 SoC run the same ISA as the Firestorm and Icestorm cores (iirc, they don't), because user code doesn't (and shouldn't) run on them - everything they execute is firmware supplied by Apple.

The flip side of that: real world results show that every core an application can run code on should implement exactly the same ISA features as all the rest. Anything else tends to devolve into insanity quick, so much so that Intel disabled some instructions in Alder Lake's Golden Cove cores to minimize their ISA differences with the Gracemont efficiency cores.

So, Engheim's thesis has already been proven wrong by reality. When there's a chance any given bit of code could run on different types of core, you want all those cores to provide exactly the same ISA, but otherwise you just don't care.

For a really wild example, Intel has long used embedded microcontrollers in their x86 chips to govern dynamic voltage and frequency scaling (DVFS), known as Turbo Boost Technology in Intel marketing speak. Sometimes they've used a 486 core for this microcontroller. Other times they've used a Synopsys ARC core - a deep embedded RISC descended from the SuperFX accelerator chip used in the Super Nintendo game "StarFox". (Yes, really!) Users never knew the difference, nor should they have.
Just wanted to write and say thank you for the brilliant response. Everything that you’ve said makes a lot of sense.
Having heard your viewpoint and others here, I’m now leaning more towards the perspective that while RISC-V would be an option for building a co-processor in a multi-unit heterogeneous architecture, it may not necessarily be the first or best option to consider.
One other thing I wanted to say, in a more general sense: I appreciate that you commented on and scrutinized Erik’s article in a professional way without taking digs at him or his work. :)
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,523
19,679
In which RISC ISA is it easier to have an out-of-order/wider pipeline?

I (again, an entirely amateur opinion) think that the biggest practical difference between RISC-V and Aarch64 is how they package useful information. I am not a hardware guy by any means, but I do understand something about algorithms and complexity, and it appears to me that these things can make a difference in how easy or difficult it is to extract performance from the implementation.

For example, a very common pattern in real world code involves computing addresses of data elements. Tasks such as: given a starting address X, find the address of the i-th element given that every element is 4 bytes long. What I admire about Aarch64 is that the designers have considered these things in the basic ISA. The core arithmetic routines (such as addition) have a shift parameter which essentially multiplies one of the arguments by a constant. That is, ARM addition instruction is not a + b, but actually a + b * shift where shift is a constant that can be 1, 2, 4, 8 etc. This means that the same operation can be used for both the regular addition and the more complex (but very common) address computation. In contrast, RISC-V needs two instructions (shift + addition) to do address computation of this style, which introduces dependencies, increases the code size and puts additional pressure on the decoder and the scheduler. Sure, there are ways of dealing with this: you could fuse sequences of shift + addition at the decoder stage and issue them as a single instruction to the scheduler (this takes care of the backend overhead but you still have the frontend overhead). I just think that Aarch64's approach is much smarter: reduce complexity and compact things before they even reach the CPU. Makes everyone's job easier.
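To make that concrete (a sketch; register choices are illustrative), here is the address computation for the i-th 4-byte element:

```asm
// Aarch64: one instruction, the scale folded into the add
add  x0, x1, x2, lsl #2      // x0 = base + (index << 2)

# RISC-V RV64I: two dependent instructions for the same result
slli t0, a1, 2               # t0 = index << 2
add  a0, a0, t0              # a0 = base + t0
```

(The later-ratified Zba extension adds sh1add/sh2add/sh3add precisely to close this gap, at the cost of yet another extension on top of the baseline.)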

Now I hear you say, wait a moment, packing multiple functions in the same instruction, doesn't that sound like x86 CISC that everyone has been criticising? Well, not really. It's about how things can be executed. It is a fairly trivial exercise to make an ALU that will do addition + shift in one go, and you don't pay much for it in terms of hardware implementation, and there is a lot to win. In fact, a high-performance RISC-V core will probably want to design the ALU this way anyway and do instruction fusing (i.e. recoding the RISC-V instructions to something much more similar to ARM internally). The x86 CISC is different because it has many instructions that cannot be executed in one go and have to be split up into components with complex dependencies (e.g. memory load + operation + memory store).

Now, to be fair, there are a lot of interesting aspects about RISC-V design. For example, their design of jump instructions is kind of cool. Similarly, I like their compare and branch instructions — this can implement many common operations in one opcode where other ISAs like Aarch64 needs two instructions.
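For example (a sketch; the label and registers are illustrative):

```asm
# RISC-V: compare and branch in a single instruction
blt a0, a1, loop             # if (a0 < a1) goto loop

// Aarch64: two instructions: the compare sets the flags, the branch reads them
cmp  x0, x1
b.lt loop
```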
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Tasks such as: given a starting address X, find the address of the i-th element given that every element is 4 bytes long.
I have always thought that RISC-V and ARM have the same addressing modes. What is the name of this addressing mode?
 