Incidentally, RISC engineers didn’t get it perfectly right the first time: the first thing they jettisoned was floating point.
Errrr. The original Berkeley and Stanford RISC back in the early '80s? See the opcode genealogy chart here, and also note the strong correlation between empty boxes and 'older stuff'.
If you set the 'way back' machine to the late '70s and early '80s, almost nobody on small-scale chips was doing float. Especially not IEEE-standard float (which was not standardized until 1985).
Berkeley and Stanford CS/EE departments/programs weren't as big as they are now. Only so many grad students and only so much funding for this stuff at the very early start. Early on, some stuff isn't there simply because there was no budget or resources for it.
Even IBM's effort was, relatively speaking, a 'side project'.
Much of RISC-V was taken from an intersection of selected 'good' instructions from RISC history, plus adjusting to the bigger transistor budgets afforded by modern fab techniques (no good reason to restrict yourself to the maximum transistor counts of the 1983-89 era) and better automated design tools.
[ The T1/Niagara ejecting float in the 2000s, two decades later, was a radically different context: evolution in fab processes, tools, and actually having a standard to follow. ]
P.S. An addition on the "done by a grad student team" front ...
" ... The Berkeley team asked the question ‘Are we crazy not to use a standard ISA?’ before noting that existing standard ISAs (x86, ARM and GPUs) would probably be too complex for a university project anyway. ...
...
The team iterated on the new ISA. In order to make implementation as simple as possible, they gradually reduced the number of instructions down to the absolute minimum needed.
..."
"
RISC-V - Part 1: Origins and Architecture
The revolutionary instruction set with roots in the first Berkeley RISC design
thechipletter.substack.com
As RISC-V gets commercialized by bigger and bigger players (with relatively large, expensive team budgets), the budget for doing RISC-V will go up.