
falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Gaming, for one thing. Games vary in their ability to take advantage of multiple cores but for the most part, higher single core benchmark translates to better gaming performance

From an end user point of view, single core performance matters a lot because even though the OS is always doing multiple things at once, the user is mainly interested in the foreground task. When they open an app, how fast does it start? When they export a large file, how long does it take? These tasks can be parallelized to some extent, but not perfectly. Some are more or less linear. So single core matters to the end user's perception of how fast their system is.
You are missing a major point: a foreground task may and should be multithreaded if it requires significant compute resources. You also have a few other misconceptions. "When they open an app, how fast does it start?" It starts as fast as your SSD works (and if you want the fastest SSD you should not use a Mac). "When they export a large file, how long does it take?" Same story. The performance may also depend on the speed of encoding, which depends on MC (multi-core) performance, not SC (single-core) performance. Single core performance is only important if the number of cores is identical (because then it also implies higher MC performance).
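For reference, the limit being argued about here (how much the serial part of a task caps the benefit of extra cores) is just Amdahl's law. A minimal sketch in C, with a made-up 30% serial fraction purely for illustration:

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / (serial + parallel / ncores).
   The 30% serial fraction below is illustrative only. */
int main(void) {
    const double serial = 0.30;            /* part that stays single-threaded */
    const double parallel = 1.0 - serial;  /* part that scales with core count */
    for (int cores = 1; cores <= 16; cores *= 2) {
        double speedup = 1.0 / (serial + parallel / cores);
        printf("%2d cores: %.2fx speedup\n", cores, speedup);
    }
    return 0;
}
```

With those numbers the curve flattens out below 1/0.30 ≈ 3.3x no matter how many cores are added, which is why the speed of the serial portion still shows up in how fast a task feels.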
 

R2DHue

macrumors 6502
Sep 9, 2019
292
270
The 68000 wasn't a VERY different ISA. The 68000 was relatively clean for the era it was initially designed in.
" ...
The design implements a 32-bit instruction set, with 32-bit registers and a 16-bit internal data bus.[4] The address bus is 24 bits and does not use memory segmentation, which made it easier to program for. Internally, it uses a 16-bit data arithmetic logic unit (ALU) and two more 16-bit ALUs used mostly for addresses,[4] and has a 16-bit external data bus.[5] For this reason, Motorola termed it a 16/32-bit processor.

As one of the first widely available processors with a 32-bit instruction set, large unsegmented address space, and relatively high speed for the era, the 68k was a popular design through the 1980s.
..."


The 68000 was 32-bit from the start. It didn't have any hocus-pocus memory segmentation at all. The narrow 16-bit stuff was more on the 'inside' than the outside. It always had a decent number of registers (8 data and 8 address registers, each 32 bits wide). There were only 56 instructions.

It was big endian just like PPC (the default of PPC; PPC allows flipping). Sun and Apollo (later HP) workstations used them from the start to run Unix.

The design of the 68000 was done in the late 70's, around the same time IBM was doing ROMP. It was basically invented before RISC was invented, but it was mainly on a similar track. Kind of hard to be exactly 'RISC' before RISC is even invented.

The 68000 was going to run into issues when the workstation market diverged from the more price-constrained systems. Same Wikipedia page.

"... By the start of 1981, the 68k was making multiple design wins on the high end, and Gunter began to approach Apple to win their business. At that time, the 68k sold for about $125 in quantity. In meetings with Steve Jobs, Jobs talked about using the 68k in the Apple Lisa, but stated "the real future is in this product that I'm personally doing. If you want this business, you got to commit that you'll sell it for $15."[27] Motorola countered by offering to sell it at $55 at first, then step down to $35, and so on. Jobs agreed, and the Macintosh moved from the 6809 to the 68k. The average price eventually reached $14.76.[
..."

[ Always an eye-roll whenever the 'Tim Cook is the bean counter ruining Apple' line is contrasted with the 'Steve Jobs would spend whatever it takes' fantasy on these forums. ]




" ... Into this came the early 1980's introduction of the RISC concept. At first, there was an intense debate within the industry whether the concept would actually improve performance, or if its longer machine language programs would actually slow the execution through additional memory accesses. All such debate was ended by the mid-1980s when the first RISC-based workstations emerged; the latest Sun-3/80 running on a 20 MHz Motorola 68030 delivered about 3 MIPS, whereas the first SPARC-based Sun-4/260 with a 16 MHz SPARC delivered 10 MIPS. Hewlett-Packard, DEC and other large vendors all began moving to RISC platforms ..."

And not shooting for $15/processor prices.

PowerPC stripped some instructions out of Power ( a reduced 'RISC' ? ) . PowerPC didn't bring any higher number of general purpose registers ( also 32). PowerPC has more instructions than 68000 ( >100 versus ~50 ... so which one is the 'Reduced' one ? )

It is 'different', but the 68000 never was a hyper-'CISC' instruction set. A 68000 programmer who wasn't trying to maximize code footprint compression could write "RISCy" code that leaned on register load/store to do most of the work.

I can’t agree with the uncited, published observation that there was, “intense debate within the industry whether the [RISC] concept would actually improve performance, or if its longer machine language programs would actually slow the execution through additional memory accesses.”

Maybe there was debate among business executives and industry pundits and tech journalists, but not computer scientists (unless you were Intel and had ulterior motives).

RISC will forever seem paradoxical at first glance.

But, before it had a name, the design concept was borne of statistics showing that compiled software ignored a large number of instructions available on CPUs of the time.

It was observed that code executing on a central processor made far more extensive use of processor registers and of a small subset of instructions than of the full array a processor offered. A majority of the instructions in ISAs of the time went unused. And transistor count was (and still is, I guess) a precious commodity.

Compilers could yield binaries that did more processing in software to break down tasks before they passed from RAM to the CPU, and the resulting application ended up running faster on the fewer instructions of a reduced instruction set processor that focused on more registers and higher clock rates.

The proof was empirical.

So I don’t think the clear superiority of what would ultimately be termed “RISC” was ever in doubt among people in the know.

And, however many or few instructions the Motorola 68k had, it was still a CISC design. If not, why would Motorola bother designing an 88000?

The AIM alliance/project that produced the PowerPC relied almost entirely on existing, mature RISC designs at IBM (designs whose original concepts dated as far back as 1975) rather than on the Motorola 680x0 architecture, or the 88000 for that matter.

The real job was to take processors that could occupy unlimited space, draw unlimited power and produce lots of heat inside Mainframe or Minicomputers and scale them down into processors suitable for use in consumer personal computers.

Back onto RISC’s history, you could make a plausible argument that the design concept that would much later be given the name RISC dated back even further than 1975 to Seymour Cray in 1964.

But that’s — in a bag of nutshells — why I concluded that the 68000 architecture was “very different” than the PowerPC’s anyway.

Also of note, Apple did their best at the time on the SDK and tool side, but in hindsight of course, Apple could have done more to make ISVs happier when porting from 68k to PowerPC Macs.

It was the least painful of Apple's three major Mac hardware migrations.
 
Last edited:

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
PowerPC has more instructions than 68000 ( >100 versus ~50 ... so which one is the 'Reduced' one ? )
My understanding is that the RISC concept is not about reducing the number of instruction codes, but about reducing the complexity of the instruction codes. RISC can in fact increase the number of required instructions; e.g. a separate load and a separate store instruction in RISC vs. one instruction that does both in CISC.
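A toy illustration of that split (the instruction sequences in the comments are only roughly what a compiler might emit, not exact listings):

```c
/* Incrementing a value in memory.
   A classic CISC (e.g. x86 or 68k) can do it as one read-modify-write
   instruction, roughly:            add dword ptr [rdi], 1
   A load/store RISC splits the same work into simpler instructions,
   roughly (AArch64 flavour):       ldr w1, [x0]
                                    add w1, w1, #1
                                    str w1, [x0]
   Same source line, more (but simpler) instructions on the RISC side. */
void bump(int *counter) {
    *counter += 1;
}
```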
 
  • Like
Reactions: bcortens and R2DHue

R2DHue

macrumors 6502
Sep 9, 2019
292
270
My understanding is that the RISC concept is not about reducing the number of instruction codes, but about reducing the complexity of the instruction codes. RISC can in fact increase the number of required instructions; e.g. a separate load and a separate store instruction in RISC vs. one instruction that does both in CISC.

Exactly right.

“Reduced” in RISC does not always refer to the number of instructions in an ISA, but to a reduction in the complexity of the operations a processor is required to perform and an improvement in the proximity of the data to be processed.

EDIT: And a substantial increase in the number of registers compared to conventional designs of the time.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,673
“Reduced” in RISC does not always refer to the number of instructions in an ISA, but to a reduction in the complexity of the operations a processor is required to perform and an improvement in the proximity of the data to be processed.

I still don't find this definition very satisfactory. For example, ARM has a single instruction that will increment a register value and load two consecutive 64-bit values at the resulting address. I wouldn't call this a "simple" instruction. The second part of your definition is more interesting. Maybe a better way to define RISC would be as only using instructions that are "fast" and have no variable costs? But then again ARM has dedicated memcpy instructions...
 
  • Like
Reactions: altaic

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Niagara failed big time.

Niagara (UltraSPARC T1) took 'small' cores to an almost fanatical extreme. They really should have budgeted a substantially bigger die. One problem is that they went 'too cheap'. They threw decent L1/L2 cache sizes out the window. There was just one floating point unit for all eight cores to share. Intel dropping AVX-512 from their E-cores looks tame in comparison to ejecting the whole float unit from your core. This was back to the 80's and 90's with practically external float units. Out-of-order execution was completely dropped. Almost everything was done just to inflate the core count by leaving behind parts of the 'core'. They were not so much 'smaller' as they were 'butchered'.

The other problem is that the die constraint also practically limits RAM bandwidth (you're not going to be adding more memory channels).

The T1 (Niagara) was a 378mm^2 die. The UltraSPARC IV was a 356mm^2 die (about a generation behind on fab process) with two cores. (The SPARC64 VI was 421mm^2 on the same process 'nm' as the T1, also with two cores.)

Benchmarked on Apache/web services that used essentially no float and a high percentage of network I/O, they looked reasonable. But a more generic workload mix was not really the focus. (In part on purpose, because there were other SPARC options that they didn't want to fratricide.)


If you're going to jump from two to eight cores, maybe a 50% bigger die would at least have been a halfway reasonable starting point rather than almost exactly the same size. (Or drop the count multiplier: 3x instead of 4x. 3x would have been less difficult to solve within an incremental area budget.) The T2, T3, etc. all got process shrinks that made the 350-400mm^2 budget more sane (and some of the 'crazy' ejections were undone), but that was years away.

Sun effectively bought the baseline design of the T1 from a start-up. Many of the 'shoestring budget' constraints of the start-up got baked into the design. The T1 was a 'get it out the door' product, and Sun probably over-hyped it for what it was.


[ AMD ran into a similar buzz-saw when Bulldozer (?) went the route of 'save space, share the FPU between two chopped-down cores'. Remains to be seen if Intel's 'rentable units' will tap-dance close to that 'shared units' line better than several previous attempts. ]


And for user-facing computers the answer is clear, you always want a couple of really fast cores to keep the single-threaded portions, like single web pages and large parts of the user interface, as snappy as possible.

The number of folks with just one browser tab open is shrinking. For security, each tab is being tossed into separate threads and sometimes into another process. Almost every modern GUI is inherently multithreaded. (A mouse-down and/or cursor/menu tracking isn't going to stop the other GUI elements from working.)

Single-threaded work is shrinking over time.


There's a reason iPhones have 2 large cores. There's a reason a Mac with an M1 feels just as fast as one with an M1 Max for many "everyday" applications. Sure, we don't want to go back to single processor machines, but we also don't want a Mac with 16 or even 24 E-cores. That would suck for too many things, including everything interactive.

Those are older E-cores. But E-cores 2-3 years from now that are just as good as P-cores from 3-4 years ago will serve just as well as those P-cores did 3-4 years ago.

Intel's Meteor Lake will start all new processes on the LP E-cores. When it appears they need more resources it will escalate them up, but the initial triage is to the LP E-cores.
 
  • Like
Reactions: wegster

R2DHue

macrumors 6502
Sep 9, 2019
292
270
I still don't find this definition very satisfactory. For example, ARM has a single instruction that will increment a register value and load two consecutive 64-bit values at the resulting address. I wouldn't call this a "simple" instruction. The second part of your definition is more interesting. Maybe a better way to define RISC would be as only using instructions that are "fast" and have no variable costs? But then again ARM has dedicated memcpy instructions...

It sounds like you’re describing SIMD. (Which would be pretty impressive if it's part of the standard instruction set and not from an extension set like NEON.)

Either that or an equivalent to pure SIMD like “SIMD within a register.”

Reading from memory is “expensive.” (That’s why fast SRAM in caches is so computationally valuable compared to DRAM — and as pricey, too!)

But one fetch with two executions sounds more efficient, not less.

Two separate increments performed on two sets of 64 “bits” sounds costlier; the difference may seem minute until it’s done a billion times.

I say “bits” because, one or even two sets of 64 “bits” does not necessarily equal one or two “values” or operands.

64 bits can mean two 32 bit “values,” four 16 bit values, or less/more depending on how narrow a shift the architecture allows. (It should allow 4 to 64 in a string, but maybe not.)

In this case, two sets of 64 bits, but one increment instruction applied to however many “values” those two sets mean to the programmer, sounds less costly (at least to me).

It’s all up to the coder how many “values” any given set of bits represents; the processor has no idea what they mean to the programmer.

Bear in mind, the whole paradoxical-seeming concept of RISC is that it was discovered through statistics that showed that if compiled code broke down tasks into smaller ops before handing them to the CPU to process/execute, the overall performance was appreciably higher, not lower as you’d think (because software is always slower than hardware).

This had the consequence of requiring fewer instructions in the ISA — but this always has and always will be relative. The number of instructions can increase as the architecture evolves while remaining a RISC architecture.

Simple Vector extensions, for example, usually add new instructions, but still comport with RISC design philosophy.

And a Matrix coprocessor is incredibly simple yet incredibly fast and powerful for what it specializes in doing: processing numbers ordered in a way that CPUs and GPUs and ALUs just aren’t architected to handle.

Incidentally, RISC engineers didn’t get it perfectly right the first time: the first thing they jettisoned was floating point.

Then software evolved from simple “mass calculator” and massive & fast telephone switching operations and financial accounting, etc. to Scientific applications, graphics, 3D and simulation software — and, later, VisualFX and games, like 3D FPSs as one salient example.

For today’s needs, floating point is a must, and even way back when you could buy a PC with an empty socket on the motherboard for an optional dedicated floating point math coprocessor, it still had to communicate over a bus. Floating point is now an on-die feature of RISC chips (save for a few embedded designs) yet it maintains congruence with RISC design philosophy.

It’s natural to think that software doing more processing itself equals slower (because it usually is), but think of the overhead of high-level programming languages versus Assembly language.

Assembly is a lot harder for people because it “speaks” closer to the level of the hardware that its instructions will be performed on. It requires a lot more work by the programmer AND the program — BUT the software working harder doesn’t translate to slower — just the opposite.

In contrast, high-level programming languages (depending on the efficiency of the compiler) do almost no “prechewing” or breaking down of tasks into smaller instructions in software; they just throw all the work at the processor to handle for them.

If you rewrote a simple python program in Assembly, you’d be doing a lot more work on behalf of the processor, and your uncompiled code might even be longer, but your extra work on behalf of the processor would pay off in greatly improved speed of execution, smoothness and overall better UX.

All is relative, and a programmer can write “slow” python code, while a more skilled programmer can write fast(er) python code — but never as fast as low-level.

To your last point, modern ARM designs now include the instructions and perform the functions that were traditionally handled by a dedicated Memory Management Unit. (Another example of additional instructions while still being RISC.)

Tightly coupled memory is but one of the many other features that makes the ARM design so fast and power efficient (so fast and efficient that it now powers desktop Macs as well as the world’s fastest supercomputer).

Its many design advantages probably account for why ARM is giving RISC-V such a “run for its money.” (Despite ARM’s proprietary IP and relatively tight control over it compared to RISC-V’s inherent Open nature.)

YET! “ARM” probably “wouldn’t be a thing” today if it weren’t for Apple.

Apple chose an ARM CPU iteration for its Newton PDA, then formed a joint partnership with the chip’s inventor, Acorn, that was spun off as an independent company called Advanced RISC Machines (ARM).

Then Apple came along again to bolster the company just ahead of its IPO by inking a long-term agreement with ARM that extends beyond the year 2040. (Did Apple “make” then “save” ARM? 🤔.)

Personally, I suspect Microsoft/Microsoft Windows will ultimately go all-ARM, while keeping Intel happy by having them fab and supply custom ARM designs (though ditching Intel’s longtime proprietary IP), custom like Apple. Microsoft always wants to be Apple; always has, always will.

Historical chip designer Intel will have to swallow its pride as it fabs ARM-based designs for Microsoft, but, hey, x86-maker AMD is already doing it.

x86 will go the way of DOS, IMHO. (We’ll see.)

It’s fascinating that today’s most cutting-edge ARM technology evolved from a design that began a long time ago. But the best, most modern operating systems in the world today are Unixes, and Unix’s development also began a long time ago — in the late 1960’s.

It all has to do with the design philosophy at the start, and the imagination required to build things, originally, with headroom for a limitless future in mind. The fruits of this philosophy can be seen in Unix, ARM and NeXT, and all three are now at Apple.

So Acorn’s design outlook from the start of its efforts in 1984 to design a RISC processor was versatility and extensibility.

The result today is an ARM ecosystem and a broad family of scalable ARM designs used in embedded systems, controllers, inexpensive IoT products, phones & “devices,” Macs — and even Supercomputers.

The simplicity of instructions in RISC designs means less power and fewer transistors, so, given that, think how powerful it must be to have 19 billion transistors on Apple’s new A17 Pro!

And I can’t wait to learn all the new things Apple will introduce in the upcoming M3.
 
Last edited:
  • Like
Reactions: wegster

leman

macrumors Core
Oct 14, 2008
19,521
19,673
It sounds like you’re describing SIMD.

No, I am referring to the LDP (load pair) instruction. And in general ARM has fairly complex addressing modes, more complex than x86, for example. And it has other interesting quirks, like encoding operand shifts within arithmetic instructions.

I tried to read your post but I don't really understand what you are saying.

(Which would be pretty impressive if it's part of the standard instruction set and not from an extension set like NEON.)

NEON used to be an extension back in the day. It's part of core ARM nowadays.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
It’s a basic fact of computing. Since you mentioned that you develop software I’m a bit puzzled that this is something you require proof for.
I think bobcomer's claimed to be a manager. If so he's the kind of manager I would absolutely hate to work for - supremely confident that he understands it all even though he doesn't. And, if you try to inform him, he refuses to accept it.

I still don't find this definition very satisfactory. For example, ARM has a single instruction that will increment a register value and load two consecutive 64-bit values at the resulting address. I wouldn't call this a "simple" instruction. The second part of your definition is more interesting. Maybe a better way to define RISC would be as only using instructions that are "fast" and have no variable costs? But then again ARM has dedicated memcpy instructions...
IMO, there's a missing letter in the "RISC" acronym. It should really be something like RISIC - Reduced Instruction Set Implementation Complexity. But even that doesn't fully capture it.

The generalized RISC philosophy is: Instructions should be as featureful as possible without violating two key principles.

1. The features don't cause implementation problems. "Problems" has many components, but the two big ones are area and cycle time.

2. The features should be useful. That might sound obvious, but in the pre-RISC era, people designed lots of ISAs with incredibly powerful features which looked nice on paper but, in practice, were barely used. There was an entire school of thought in computer design which held that as transistor budgets increased, that should be used to close the "semantic gap" between high-level languages and low-level assembly. This design philosophy proved to be a giant mistake (see for example Intel's iAPX 432 project).

So let's consider the instruction you mention: increment a register and use its new value for loading two consecutive 64-bit values.

Is it useful? This one, I don't have a great feel for, but as the parts of AArch64 I understand the need for seem very thoughtfully designed, I'm giving Arm the benefit of the doubt on it.

Does it cause implementation problems? Probably not. Incrementing a register is dead simple. So is loading 128 consecutive bits from memory (an Arm load/store unit already has to be able to do that, thanks to NEON). The only weird thing is that it writes the loaded data to two different 64-bit integer registers. One instruction producing more than one register write is unusual, but it's also not likely to be extraordinarily difficult to implement either.
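For what it's worth, here's a hedged C sketch of what that pre-indexed load-pair form does semantically (register behaviour is modelled with plain pointers; the names are purely illustrative):

```c
#include <stdint.h>

/* Roughly the effect of a pre-indexed load pair such as
       ldp x0, x1, [x2, #16]!
   The base register is incremented first (writeback), then two
   consecutive 64-bit values are loaded from the updated address. */
void ldp_preindex_sketch(uint64_t **base, uint64_t *d0, uint64_t *d1) {
    *base += 2;        /* writeback: advance the base by 16 bytes (two 64-bit slots) */
    *d0 = (*base)[0];  /* first 64-bit value at the new address */
    *d1 = (*base)[1];  /* second, consecutive 64-bit value */
}
```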
 
  • Like
Reactions: Basic75 and altaic

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I think bobcomer's claimed to be a manager. If so he's the kind of manager I would absolutely hate to work for - supremely confident that he understands it all even though he doesn't. And, if you try to inform him, he refuses to accept it.
Don't worry about it, I wouldn't hire you. Supremely confident you know more than I and that attitude comes across quite readily in an interview. :)

You know, you could have gone quite easily without making a personal attack. I guess I won't be reading any of your stuff either.
 

APCX

Suspended
Sep 19, 2023
262
337
Don't worry about it, I wouldn't hire you. Supremely confident you know more than I and that attitude comes across quite readily in an interview. :)

You know, you could have gone quite easily without making a personal attack. I guess I won't be reading any of your stuff either.
Nothing but facts there. No personal attacks. As said previously, you just don’t like disagreement.
 
  • Like
Reactions: Romain_H

R2DHue

macrumors 6502
Sep 9, 2019
292
270
No, I am referring to the LDP (load pair) instruction. And in general ARM has fairly complex addressing modes, more complex than x86, for example. And it has other interesting quirks, like encoding operand shifts within arithmetic instructions.

I tried to read your post but I don't really understand what you are saying.



NEON used to be an extension back in the day. It's part of core ARM nowadays.

So you’re saying that NEON being core makes ARM a non-RISC design.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,673
IMO, there's a missing letter in the "RISC" acronym. It should really be something like RISIC - Reduced Instruction Set Implementation Complexity. But even that doesn't fully capture it.

Yeah, this also mirrors my understanding. But that's why I don't find acronyms such as RISC/CISC very useful. They represent certain historical design principles (RISC was originally about simplifying hardware implementation/improving performance of individual instructions; CISC was about saving memory and making working with assembly easier), which haven't been relevant in their original form for decades.

Another problem for me is that people usually make this mean whatever they want it to. So it stops being a technical term and becomes an emotional one. Proponents of RISC-V, for example, prefer ISA simplicity above all; RISC for them means "one instruction does exactly one thing". ARM's ISA is about pragmatism and packing as much functionality as possible into a single instruction, provided it can be implemented in a predictable way in hardware. And so on.

For example, RISC-V rejects complex addressing modes because... I don't really know; their designers felt that doing integer arithmetic does not thematically belong in a memory instruction, I suppose? But any modern high-performance CPU contains an address calculation unit in its load/store pipelines (because it's cheaper than using the actual int pipeline for common address calculations). So to achieve good performance, RISC-V CPUs have to detect the relevant int instructions preceding a load and fuse them into one thing that can be executed efficiently on the actual hardware. On the other hand, ARM fully embraces complex addressing modes (including shifts, register + register, register pre/post increment, etc.). A good example where a reduced (RISC-V) instruction set ends up being a worse abstraction for the hardware than a more complex one.
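To make that concrete, a hedged sketch of the kind of sequence being compared (the assembly in the comments is approximately what compilers produce, not an exact listing):

```c
#include <stdint.h>

/* Indexing into an array of 64-bit values.
   ARMv8 has a scaled register-offset addressing mode, so this is one load:
       ldr x0, [x0, x1, lsl #3]
   Base RV64I has no such mode, so the address math is separate instructions:
       slli a1, a1, 3
       add  a0, a0, a1
       ld   a0, 0(a0)
   which is the pattern high-performance RISC-V cores try to fuse. */
int64_t index_array(const int64_t *arr, uint64_t i) {
    return arr[i];
}
```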

Is it useful? This one, I don't have a great feel for, but as the parts of AArch64 I understand the need for seem very thoughtfully designed, I'm giving Arm the benefit of the doubt on it.

It's very useful because of the stack. LDP/STP allows you to push/pop two registers to/from the stack and increment the stack pointer simultaneously. Or doing things like working with structs. To me it's one of the great examples of how the ARM ISA is pragmatic in its design. One can in principle argue that this is not really a reduced/simple instruction, but it's something that can be implemented efficiently in hardware, and it improves both code density and performance. On Apple CPUs, for example, load pair and load single register have the same throughput/latency!
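As a concrete example of the stack use (the assembly in the comment is roughly what a compiler emits for a non-leaf function; exact operands vary):

```c
long callee(long x) { return x * 2; }

/* caller must save the link register (x30) and usually the frame pointer
   (x29), so its prologue/epilogue is typically just:
       stp x29, x30, [sp, #-16]!   // push both registers, pre-decrementing sp
       ...
       ldp x29, x30, [sp], #16     // pop both, post-incrementing sp
   One instruction each for the register pair plus the stack-pointer update. */
long caller(long x) {
    return callee(x) + 1;
}
```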


So you’re saying that NEON being core makes ARM a non-RISC design.

I am not talking about NEON at all. I have no idea why you have brought up NEON in the first place.
 
  • Like
Reactions: Basic75

leman

macrumors Core
Oct 14, 2008
19,521
19,673
Don't worry about it, I wouldn't hire you. Supremely confident you know more than I and that attitude comes across quite readily in an interview. :)

You know, you could have gone quite easily without making a personal attack. I guess I won't be reading any of your stuff either.

@mr_roboto is not the only one who feels this way. I have been programming for over 25 years now (with a healthy amount of low-level C and sometimes assembly in the mix), and many things you are saying are.. odd to me, to say the least. I think you might have deep misconceptions/gaps in understanding how the hardware actually works.

I also don't see it as a personal attack, rather as a constructive criticism. Of course, these things are complicated. Maybe it's us as a group who lack understanding and you are the frustrated one trying to explain obvious things to us. Who knows :)
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
For example, RISC-V rejects complex addressing modes because... I don't really know; their designers felt that doing integer arithmetic does not thematically belong in a memory instruction, I suppose? But any modern high-performance CPU contains an address calculation unit in its load/store pipelines (because it's cheaper than using the actual int pipeline for common address calculations). So to achieve good performance, RISC-V CPUs have to detect the relevant int instructions preceding a load and fuse them into one thing that can be executed efficiently on the actual hardware. On the other hand, ARM fully embraces complex addressing modes (including shifts, register + register, register pre/post increment, etc.). A good example where a reduced (RISC-V) instruction set ends up being a worse abstraction for the hardware than a more complex one.
RISC-V's rejection of rolling these address calculations into addressing modes is especially strange because they aren't even all that complex to implement, and are so useful that they're generally regarded as table stakes.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,673
RISC-V's rejection of rolling these address calculations into addressing modes is especially strange because they aren't even all that complex to implement, and are so useful that they're generally regarded as table stakes.

My suspicion is that RISC-V has been designed with focus on simple implementations. It seems to me that all they wanted was a basic, straightforward ISA without too many details and complications. Makes sense if one considers the original background of RISC-V. And it also makes sense if one is targeting microcontrollers or other very small CPU cores, maybe you don't want to include a more complex ALU with your load/store unit. For high-performance hardware though, it's definitely problematic.

What's funny is that Qualcomm just recently published a proposal to add complex addressing modes to RISC-V, which essentially copies the relevant bits from ARMv8. I would be surprised if this were ratified into the RISC-V standard, as it's a huge change from the usual RISC-V philosophy, but nothing is stopping Qualcomm from developing their own ARM-like equivalent.
 
  • Like
Reactions: Basic75

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
@mr_roboto is not the only one who feels this way.
I know and don't particularly care. If they want to make a personal attack about it, then I do care. The likelihood of me ever being their boss is nil, and being a good boss around here is not my goal -- this is entertainment, not work.

I have been programming for over 25 years now (with a healthy amount of low-level C and sometimes assembly in the mix)
And I've been doing it for 50 years. I've programmed a lot more different types of hardware, and yes, I've used C and assembly among many other languages, both very high and very low level. That doesn't make me an expert; I just have a lot of experience and a good memory.

and many things you are saying are.. odd to me, to say the least. I think you might have deep misconceptions/gaps in understanding how the hardware actually works.
That's your own perception and that's fine. It could well be that I'm mixing things too much in what I'm saying; I've been working with computers for too long. As I've said before, I'm a software guy, so I look at hardware just as a means to an end and yes, my knowledge is lacking in that area. I don't mind that people correct me in that area but if it goes against what software (OS mainly) and a computer do, then I have a problem with it. And I don't like personal attacks on anyone for anything. If you don't like what I say, say your piece without getting personal. I may not agree, like everyone here may not agree. It seems strong personalities are common. <g>

If you (the collective you) don't like what I say, then ignore me. Discussions go nowhere without more than one side.

Note I'm not complaining about what you are saying; you really try not to make it personal. Others are extremely different and I have no use for talking with them anyway, as I won't stand that crud, period. It puts a person on the defensive and that's not helpful in *any* discussion.

I also don't see it as a personal attack, rather as a constructive criticism. Of course, these things are complicated. Maybe it's us as a group who lack understanding and you are the frustrated one trying to explain obvious things to us. Who knows
It's personal, always, when you're talking about someone else rather than the argument itself. You do this, you seem to think this, you wouldn't notice, I wouldn't work for you, whatever; that's personal like I said, and totally useless in a tech discussion.

Extreme frustration is definitely there for me in this discussion, and it has probably made me say some stupid things.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
Don't worry about it, I wouldn't hire you. Supremely confident you know more than I and that attitude comes across quite readily in an interview.
I would, and always do, hire people that are smarter than me; it's good for my business, especially if they ask for almost the same money as the "not so smart" ones.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I would, and always do, hire people that are smarter than me; it's good for my business, especially if they ask for almost the same money as the "not so smart" ones.
I didn't say I wouldn't hire someone smarter than me, far from it, but I wouldn't hire them if they browbeat me with it.

If they had that kind of attitude with me, they couldn't handle users, among other problems. And handling users' problems and requests is a really important part of the job around here. (Small shop; we don't do commercial software to sell, everything is for internal usage.)
 

leman

macrumors Core
Oct 14, 2008
19,521
19,673
As I've said before, I'm a software guy, so I look at hardware just as a means to an end and yes, my knowledge is lacking in that area. I don't mind that people correct me in that area but if it goes against what software (OS mainly) and a computer do, then I have a problem with it.

You see, this is exactly the attitude I have a problem with. There is no such thing as "software", programs run on hardware. And if you don't have a decent command of how the hardware works, you won't be able to design and build good software.

I mean, would you trust a racetrack driver who says "I just drive cars, I don't know how the clutch or the gearbox works" or an electrician who says "I just lay cables, I have no idea about basic electrical science"?
 
  • Like
Reactions: Basic75

FlyingTexan

macrumors 6502a
Jul 13, 2015
941
783
I mean, would you trust a racetrack driver who says "I just drive cars, I don't know how the clutch or the gearbox works"
Absolutely. A driver has no reason to know these things. To get that involved and that in-depth means he's not focusing on the things that matter. I have almost 12,000hrs flying jets and the amount of things I don't know keeps growing. If I can't fix it from the cockpit I have no reason to know it, nor should I want to. My first course of action is to call maintenance and let the people that specialize in that handle it. I'm focused on flying and where I'm going (terrain, distances, fuel planning, route planning, airspace legalities, etc.) just like that racecar driver is focused on the track, driving, etc. The best thing that driver can do is know when something's broken and then get the right people to fix it. All men can't be all things.
 
  • Like
Reactions: bobcomer

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
You see, this is exactly the attitude I have a problem with. There is no such thing as "software", programs run on hardware. And if you don't have a decent command of how the hardware works, you won't be able to design and build good software.

I mean, would you trust a racetrack driver who says "I just drive cars, I don't know how the clutch or the gearbox works" or an electrician who says "I just lay cables, I have no idea about basic electrical science"?
We'll just have to agree to disagree on this point. It's not like I know nothing about hardware.
 

APCX

Suspended
Sep 19, 2023
262
337
Absolutely. A driver has no reason to know these things. To get that involved and that in-depth means he's not focusing on the things that matter. I have almost 12,000hrs flying jets and the amount of things I don't know keeps growing. If I can't fix it from the cockpit I have no reason to know it, nor should I want to. My first course of action is to call maintenance and let the people that specialize in that handle it. I'm focused on flying and where I'm going (terrain, distances, fuel planning, route planning, airspace legalities, etc.) just like that racecar driver is focused on the track, driving, etc. The best thing that driver can do is know when something's broken and then get the right people to fix it. All men can't be all things.
In that case, why would a supposed software person spend so long disagreeing about hardware with those who are more well-rounded?

Seems like a bad decision to spend so long denying the reality of well known truths, insisting that your feelings trump facts and calling out others for getting personal when you don’t like those facts.

In any case those situations aren’t really comparable. Certainly software development is aided greatly by a knowledge of the hardware.
 