
leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
There is some ridiculous folklore about power supplies among people who build gaming PCs. GPU manufacturers already recommend using bigger power supplies than necessary, because some people may have a low-quality low-efficiency PSU. Then gamers buy expensive high-end PSUs and often choose a bit bigger unit than recommended just in case.

For example, the iMac 27" uses ~300 W under full load. A GPU manufacturer might recommend a 500-600 W PSU for a similar system, and a gamer might buy a 650 W or 750 W PSU with an 80 Plus Platinum / Titanium certification. Meanwhile, Apple ships the iMac with a 300 W PSU.

The good thing about this overengineering is that if you upgrade the GPU by two generations and one tier, the PSU will still probably be good enough. I recently replaced a GTX 1060 (120 W) with an RX 6700 XT (230 W), and the 600 W PSU is still sufficient and the fans remain almost silent.
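
To make that headroom arithmetic concrete, here is a minimal sketch of how PSU sizing is typically reasoned about; the component wattages and the 70% load target are illustrative assumptions, not measurements from my system:

```python
# Rough PSU sizing sketch. The wattages and the 70% load target are
# illustrative assumptions, not measured values.

components_w = {
    "GPU (e.g. RX 6700 XT board power)": 230,
    "CPU under sustained load": 125,
    "Motherboard, RAM, SSD, fans": 75,
}

system_load_w = sum(components_w.values())

# Common rule of thumb: keep sustained load around 60-80% of the PSU's
# rating, both for efficiency and to absorb transient spikes.
load_target = 0.7
suggested_psu_w = system_load_w / load_target

print(f"Estimated sustained load: {system_load_w} W")
print(f"Suggested PSU rating (~70% load target): {suggested_psu_w:.0f} W")
```

With those assumed numbers the suggestion lands around 615 W, roughly in line with the 600 W unit above staying quiet after the GPU upgrade.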

Depends on the quality of the PSU too. There are so many crappy cheap PSUs in machines... I remember, a while ago, reading that a poor PSU is the number one reason for premature component death.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Depends on the quality of the PSU too. There are so many crappy cheap PSUs in machines... I remember, a while ago, reading that a poor PSU is the number one reason for premature component death.

I bought two Dell systems back in 2008 and they both started failing in 2010. Just random crashing. I searched around and found that Dell puts crappy PSUs in. I replaced the PSUs and the systems run fine today. I'd guess that a lot of people would just replace the systems. They both had strong CPUs - shame to throw in crap PSUs. But it seems that they did it back then. I don't know if they still do it but I don't see why they wouldn't.
 
  • Like
Reactions: JMacHack

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I bought two Dell systems back in 2008 and they both started failing in 2010. Just random crashing. I searched around and found that Dell puts crappy PSUs in. I replaced the PSUs and the systems run fine today. I'd guess that a lot of people would just replace the systems. They both had strong CPUs - shame to throw in crap PSUs. But it seems that they did it back then. I don't know if they still do it but I don't see why they wouldn't.
I've replaced more Dell PSUs than any other brand by far, but I think they have finally gotten better.
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
I personally don't care about fan noise or being hot under load. That is to be expected. What I do find irritating is that any random Windows 10 update, or even launching Visual Studio 2019 or 2022, or sometimes launching Chrome and looking at Azure DevOps, causes my Windows laptop fan to get very loud. These things should be quiet, as proven by my M1 Max MacBook Pro, which does more than that and stays silent, sometimes a LITTLE warm to the touch, but never as bad as my Windows laptop.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I personally don't care about fan noise or being hot under load. That is to be expected. What I do find irritating is that any random Windows 10 update, or even launching Visual Studio 2019 or 2022, or sometimes launching Chrome and looking at Azure DevOps, causes my Windows laptop fan to get very loud. These things should be quiet, as proven by my M1 Max MacBook Pro, which does more than that and stays silent, sometimes a LITTLE warm to the touch, but never as bad as my Windows laptop.
Not all Windows laptops are loud. I never hear my Lenovo X1 Carbon, and it gets warm to the touch, especially when I run VMs. (i7)

It never gets as hot as my M1 MBA doing the same type of things. (Yes, I know it's passively cooled, and buying it for my workload was my mistake.)
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
Not all Windows laptops are loud. I never hear my Lenovo X1 Carbon, and it gets warm to the touch, especially when I run VMs. (i7)

It never gets as hot as my M1 MBA doing the same type of things. (Yes, I know it's passively cooled, and buying it for my workload was my mistake.)
I have tried several Dell, HP and Microsoft systems. They all get loud.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I have tried several Dell, HP and Microsoft systems. They all get loud.
Try a Lenovo X series... (not the X1 Extreme, which isn't really an X1 design, it's just for performance). We've been buying Lenovos for a very long time. The X1 10th generation is just about to come out with Alder Lake CPUs, and I'll probably be buying one. My 4-year-old X1 Carbon is getting a bit long in the tooth. (6th gen)
 
  • Like
Reactions: Ethosik

carlob

macrumors regular
Nov 23, 2014
148
78
Wait, I knew Apple prevented sellers from selling under MSRP, but this is the first I've heard of them preventing sellers from raising the price.

Apple has pretty good anti-scalping policies in place, and they monitor the channel and are often able to trace activations/IMEIs/serial numbers back to the dealer and take appropriate action (i.e. fining them and cancelling the agent licence). In the past they had problems in China with iPhones and changed the preorder policy (two phones max). Not perfect, but much better than Amazon, where bots buy every single PS5 one second after a restock.
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,298
I’m concerned with current trends that you may have to upgrade your PSU every two gens or so. It wasn’t very long ago that a 1000 W PSU was “overkill”, and now it’s almost necessary for a midrange PC.

You don't need 1000 W. People commonly run a 3090 GPU with a TGP of up to 395 W along with a high-end CPU on the tiny Apple-sized Corsair SF750 SFX 750 W power supply. I run a 300 W GPU + 105 W CPU with the 600 W SF600 version.

Size comparison with the SFX unit on the left, for an SFF ITX shoebox-sized case.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,869
Synthetic benchmarks are *******. If you need powerful stuff like an M1 Max or a 12900, the best way to see what comes out on top is to run your workflow on both.
Who's using synthetic benchmarks? They've been discredited for ages and you hardly ever see anyone cite them any more. You'll sometimes still see embedded microcontrollers marketed with Dhrystone MIPS, which I always shake my head at, but that's about it.

If what you were going on about was Geekbench, FYI, that's not a synthetic benchmark. "Synthetic" describes benchmark programs written exclusively to benchmark CPUs, with no direct relation to real world problems.

For example, the famous one I mentioned above, Dhrystone, was invented in the 1980s. The engineer who designed it thought that the best way to benchmark a machine was just to survey a broad range of real programs, assemble some statistics on the proportion of various low level constructs (branches, function calls, math, etc), then write some really insane do-nothing code whose sole purpose is to cause the CPU to perform the surveyed low level operations in about the same proportion.

This was an extremely naive approach, but it kinda sorta worked OK in the absence of significant compiler optimization, caches, and a billion other things. Which meant it was already bad the day it was first released. Didn't stop it from enjoying some popularity, but like I said, it's mostly dead now.
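
To illustrate what "synthetic" means here, below is a toy sketch in the Dhrystone spirit: a do-nothing loop that just mixes integer math, branches, and function calls in fixed proportions. It is my own illustration, not Dhrystone itself, and it deliberately measures nothing about real workloads.

```python
import time

def leaf(x: int) -> int:
    # A tiny function call, included only so calls appear in the "mix".
    return (x * 3 + 1) & 0xFFFF

def synthetic_kernel(iterations: int) -> int:
    # Do-nothing work mixing arithmetic, branches, and calls in fixed
    # proportions, in the spirit of a synthetic CPU benchmark.
    acc = 0
    for i in range(iterations):
        acc += i ^ (i >> 2)      # integer math
        if acc & 1:              # data-dependent branch
            acc -= 7
        acc = leaf(acc)          # function call
    return acc

start = time.perf_counter()
synthetic_kernel(1_000_000)
print(f"Synthetic loop took {time.perf_counter() - start:.3f} s")
```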

Geekbench isn't synthetic. It consists of a bunch of real programs which are invoked to do the real work they're designed to do on a reference data set. Do things like "PDF Rendering", "Image Compression", "Gaussian Blur", and so forth sound synthetic to you? They don't to me, these are real tasks people use computers for all the time.

While this approach isn't as good as "your workflow", in the absence of numbers for your workflow, a benchmark suite like Geekbench (or the suite it was somewhat clearly inspired by, SPEC) can be a useful tool.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Who's using synthetic benchmarks? They've been discredited for ages and you hardly ever see anyone cite them any more. You'll sometimes still see embedded microcontrollers marketed with Dhrystone MIPS, which I always shake my head at, but that's about it.

If what you were going on about was Geekbench, FYI, that's not a synthetic benchmark. "Synthetic" describes benchmark programs written exclusively to benchmark CPUs, with no direct relation to real world problems.

For example, the famous one I mentioned above, Dhrystone, was invented in the 1980s. The engineer who designed it thought that the best way to benchmark a machine was just to survey a broad range of real programs, assemble some statistics on the proportion of various low level constructs (branches, function calls, math, etc), then write some really insane do-nothing code whose sole purpose is to cause the CPU to perform the surveyed low level operations in about the same proportion.

This was an extremely naive approach, but it kinda sorta worked OK in the absence of significant compiler optimization, caches, and a billion other things. Which meant it was already bad the day it was first released. Didn't stop it from enjoying some popularity, but like I said, it's mostly dead now.

Geekbench isn't synthetic. It consists of a bunch of real programs which are invoked to do the real work they're designed to do on a reference data set. Do things like "PDF Rendering", "Image Compression", "Gaussian Blur", and so forth sound synthetic to you? They don't to me, these are real tasks people use computers for all the time.

While this approach isn't as good as "your workflow", in the absence of numbers for your workflow, a benchmark suite like Geekbench (or the suite it was somewhat clearly inspired by, SPEC) can be a useful tool.

I actually find Geekbench 5 is a good measure of performance for my office tasks. I don't use it for my workflow, which runs on separate systems. I like to use older equipment for office stuff if possible.

It is nice to get sub-second responses to everything, but it doesn't matter for getting work done.
 

januarydrive7

macrumors 6502a
Oct 23, 2020
537
578
Synthetic benchmarks are *******. If you need powerful stuff like an M1 Max or a 12900, the best way to see what comes out on top is to run your workflow on both. But honestly, if you are that power hungry, I have to ask why you are using a laptop in the first place. Sure, with Apple you have no choice if you are constrained to a few thousand dollars, but for Windows or Linux you definitely don't need that power-hungry laptop.
There are at least a few really great benchmarks to end all benchmarks out there. Stockfish and hashcat, for example, are widely regarded as de facto harbingers of true power, performance, and all-around godliness.
 

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
Many of the cross-platform benchmark tests that we still use today have a legacy heritage of targeting traditional monolithic, homogeneous chip architectures like x86, PowerPC, MIPS, etc…
It's going to become a lot more difficult, costly (and questionably relevant??) to develop benchmarks that are indicative of real-world performance on newer, more heterogeneous designs, where traditional CPU compute tasks are intended by the silicon designer to be farmed out to more targeted, dedicated coprocessors and accelerators for things like decryption, encryption, video encode, video decode, ISP work, machine learning, etc., as is the case with the M1.
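
As a small, concrete illustration of why that matters for benchmarking, the sketch below times the same SHA-256 workload through Python's standard hashlib. On a chip with dedicated crypto extensions the identical call can run several times faster than on one without, so a cross-platform "CPU" score is partly measuring fixed-function hardware. This is just a rough sketch of the idea, not a rigorous benchmark.

```python
import hashlib
import time

# The exact same API call may be served by dedicated crypto instructions on
# one chip and by plain integer code on another, so identical source code
# can produce very different "CPU" numbers. Rough illustration only.

data = b"\x00" * (64 * 1024 * 1024)  # 64 MiB reference buffer

start = time.perf_counter()
hashlib.sha256(data).hexdigest()
elapsed = time.perf_counter() - start

print(f"SHA-256 over 64 MiB: {elapsed:.3f} s "
      f"(~{len(data) / elapsed / 1e6:.0f} MB/s)")
```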

There is a nice essay on this topic from Erik Engheim, where he describes it much better than I ever could, albeit from the angle that RISC-V could be the unifying ISA for said coprocessors and accelerators.

For this very reason I struggle when I see broad-brush definitive statements such as ‘ADL is 5-10% faster than M1’ on Nerdbench, stockcrabs chess, a random corner-case test, or even a slightly more mainstream (yet still unoptimized) Cinebench or Blender test, where the app has been built around traditional compute on a homogeneous CPU and a traditional IMR GPU.
Well yeah… of course, if that is what you want to run today, at today's level of software optimization, then go buy a PC / x86 system for what is no doubt better out-and-out performance (at the cost of TDP) today.

Just check in again periodically in 6 months, a year, or even two years' time, when software teams have had a chance to catch up to the hardware and tune their software for a more heterogeneous computing world!!!

FWIW Qualcomm was singing the same tune back in 2017: https://developer.qualcomm.com/blog/heterogeneous-computing-architecture-and-technique

 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Could the PC world move to RISC-V instead of ARM? What does RISC-V need to be used in commercial products? Better LLVM support?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Could the PC world move to RISC-V instead of ARM? What does RISC-V need to be used in commercial products? Better LLVM support?

In the grand scheme of things, there's not much difference between Arm and RISC-V from a technical perspective. If Apple or someone else with design talent decided to design a high-end RISC-V core, its performance/watt would be similar to what you get from Arm.
 

januarydrive7

macrumors 6502a
Oct 23, 2020
537
578
In the grand scheme of things, there's not much difference between Arm and RISC-V from a technical perspective. If Apple or someone else with design talent decided to design a high-end RISC-V core, its performance/watt would be similar to what you get from Arm.
Perhaps a better question: why Arm now instead of RISC-V, especially as there are licensing requirements for Arm? We have a decent idea of the answer, but it might give better insight into the heart of the question posed.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
In the grand scheme of things, there's not much difference between Arm and RISC-V from a technical perspective.
Is the RISC-V ISA more compact and modular than ARM because of the RISC-V extensions?

RISC-V [...] performance/watt would be similar to what you get from Arm.
Has anyone published a benchmark between these two ISAs?

why Arm now instead of RISC-V, especially as there are licensing requirements for Arm?
RISC-V is still maturing and some extensions are not finished.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Is the RISC-V ISA more compact and modular than ARM because of the RISC-V extensions?


Has anyone published a benchmark between these two ISAs?


RISC-V is still maturing and some extensions are not finished.

RISC-V is arguably more compact and modular, but in a practical sense that doesn’t make much difference. By the time you add extensions necessary for a particular purpose (like desktop computing) you probably don’t save much complexity.

As for benchmarking, it doesn't really make sense to benchmark an ISA. It would be a completely artificial endeavour. What matters is the implementation (microarchitecture, design, process node, etc.). When you are comparing two very different ISAs (like a CISC and a RISC) you can draw general conclusions, but RISC instruction sets are mostly pretty similar.
 
  • Like
Reactions: jdb8167 and Xiao_Xi

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Is RISC-V mature enough to replace ARM in embedded systems? Are SiFive designs as good as ARM ones?
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,869
RISC-V is arguably more compact and modular, but in a practical sense that doesn’t make much difference. By the time you add extensions necessary for a particular purpose (like desktop computing) you probably don’t save much complexity.

As for benchmarking, it doesn't really make sense to benchmark an ISA. It would be a completely artificial endeavour. What matters is the implementation (microarchitecture, design, process node, etc.). When you are comparing two very different ISAs (like a CISC and a RISC) you can draw general conclusions, but RISC instruction sets are mostly pretty similar.
Have you ever seen this critique of RISC-V? It and other things have convinced me that RISC-V ain't all that great. A decent enough option if you don't want to pay licensing fees, but otherwise meh.


The linked Twitter discussion includes the point that it was originally an academic ISA where, presumably, simplification was given the highest priority. That (and the modularity allowing it to be pared down to a very minimalistic base ISA) is what you want when asking individual students to come up with a working implementation during a semester course. But that level of simplification isn't necessarily the best idea for a commercial ISA.

There's nothing like x86-sized tumors in RISC-V, of course. Given absolutely equal resources, I'm sure it's possible to design a RISC-V core almost as good as an Armv8 core on the same process node. But I don't think it would be quite as good.
 
  • Like
Reactions: Xiao_Xi

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Perhaps a better question: why Arm now instead of RISC-V, especially as there are licensing requirements for Arm? We have a decent idea of the answer, but it might give better insight into the heart of the question posed.

A few things that come to my non-expert mind:

- ARM is a much more mature ISA, with a mature feature set and compiler/library support
- ARM is a more pragmatic ISA (targeting real-world usability, unlike RISC-V)
- ARM is ready to use today — some fundamental RISC-V features are not yet ratified/stabilised
- RISC-V offers no tangible benefits over ARM

In addition, designing a modern high-performance CPU is a non-trivial endeavour that requires the resources of a large corporation. ARM licensing fees don't matter much in this model. The main benefit of RISC-V — its open-source nature — makes it great for designing specialised microcontrollers or other types of devices. You can get started as a small, passionate team on a tight budget, for example. And that's where RISC-V excels.

As a general-purpose high-performance computing ISA? I just don't see the point. ARM's AArch64 is arguably as close to an ideal ISA as the industry has currently managed to get. What concrete benefits does RISC-V bring to the table? The main argument seems to be "it's open", which I personally find a bit silly.

RISC-V is arguably more compact and modular, but in a practical sense that doesn’t make much difference.

RISC-V seems to have a massive problem with code density due to its design prioritising a certain "theoretical purity". In general, RISC-V needs more instructions to encode common operations. The problem is bad enough that RISC-V has introduced a compressed instruction format to bring code size down to an acceptable level (talk about ironic). High-performance RISC-V CPUs (which still don't exist) will probably need to rely heavily on operation fusion to work around these limitations.

Modularity is also a double-edged sword. It's great for prototyping and fast iteration, but a huge issue for software stability. If every vendor has their own version of the ISA, often with subtle differences, you can't ship binary software and you can't optimise. This is not a problem in a typical HPC environment, where you will be using custom code targeting a specific CPU and an adjustment is only a recompile away, but the reality is very different in the consumer market, where you want binary stability.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Could the PC world move to RISC-V instead of ARM? What does RISC-V need to be used in commercial products? Better LLVM support?
Purely theoretically the PC world could move to whatever it wants. Unfortunately choices like that aren’t made in a vacuum.

RISC-V, like ARM, really just needs some big player to make the first move. Apple built up to the M1 from the A series, which of course has a lot of custom features that reference ARM designs don't have, which is usually why Apple has a consistent lead over other ARM CPU designs.

RISC-V doesn't have any players like that at the moment, and it has little to no OS support. Even POWER has better support right now.

After the M1 dropped, AMD and NVIDIA announced their intentions to make ARM CPUs in the same class. I don't know how that will pan out, but it's a possibility. Intel is a wild card: they have a vested interest in keeping x86 the big dog in computing, but the recent poaching from Apple could be a sign that they're willing to hop on ARM. "Slay your sacred cattle" is a business mantra.

And all of that is just hardware; software is another beast. For 15 years x86 has been the only viable choice in desktop computing, and refactoring code for ARM, let alone for specific implementations of ARM, would be difficult to say the least. Compilers would have to be rewritten, with features added for vendor-specific accelerators, etc. And the most ubiquitous OS, Windows, is still barely functional on ARM.

Apple had the advantage of being the sole provider of macOS, Macs, and Apple Silicon, and the creator of the development tools for them, so they were in a unique position to shove their platform in this direction.

If the industry shifts away from x86, I'd see it going to ARM as things stand. But there are other options I can think of to keep x86 around, such as axing 32-bit support (which seems increasingly popular) or making sure any new instructions are a fixed length.

Everything’s up in the air really.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
A few things that come to my non-expert mind:

- ARM is a much more mature ISA, with a mature feature set and compiler/library support
- ARM is a more pragmatic ISA (targeting real-world usability, unlike RISC-V)
- ARM is ready to use today — some fundamental RISC-V features are not yet ratified/stabilised
- RISC-V offers no tangible benefits over ARM

In addition, designing a modern high-performance CPU is a non-trivial endeavour that requires the resources of a large corporation. ARM licensing fees don't matter much in this model. The main benefit of RISC-V — its open-source nature — makes it great for designing specialised microcontrollers or other types of devices. You can get started as a small, passionate team on a tight budget, for example. And that's where RISC-V excels.

As a general-purpose high-performance computing ISA? I just don't see the point. ARM's AArch64 is arguably as close to an ideal ISA as the industry has currently managed to get. What concrete benefits does RISC-V bring to the table? The main argument seems to be "it's open", which I personally find a bit silly.



RISC-V seems to have a massive problem with code density due to its design prioritising a certain "theoretical purity". In general, RISC-V needs more instructions to encode common operations. The problem is bad enough that RISC-V has introduced a compressed instruction format to bring code size down to an acceptable level (talk about ironic). High-performance RISC-V CPUs (which still don't exist) will probably need to rely heavily on operation fusion to work around these limitations.

Modularity is also a double-edged sword. It's great for prototyping and fast iteration, but a huge issue for software stability. If every vendor has their own version of the ISA, often with subtle differences, you can't ship binary software and you can't optimise. This is not a problem in a typical HPC environment, where you will be using custom code targeting a specific CPU and an adjustment is only a recompile away, but the reality is very different in the consumer market, where you want binary stability.

These things are all true. From a purely technical perspective, however, I think you'd see little performance difference between Arm and RISC-V. Back in the golden days of RISC ISA competition, the performance differences between chips based on ISAs as different as PA-RISC, MIPS, SPARC, Alpha, etc. came down to differences in microarchitecture choices, really. You'd see SPARC do great one generation, then the designers would leave and second-tier designers would mess up the next, then they'd get a design team that had done a good PowerPC and they'd be competitive again. You could actually predict which chip would win in a given year based on the microarchitectural block diagram and who the designers were this time.

The problems with RISC-V are largely due to its immaturity and the fact that it smells like a design-by-committee academic project, but in the end the difference in quality of an actual chip wouldn’t have much to do with that. The things you mention about modularity, etc. are a more serious problem, at least for some uses.
 
  • Like
Reactions: jdb8167

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
You could actually predict which chip would win in a given year based on the microarchitectural block diagram and who the designers were this time.
Off topic, but I read somewhere that finding circuit designers is tough because of significant overlap with software engineering, which pays more.

It makes me wonder how many brilliant engineers we've missed out on.

Also, speaking of software, the trend towards adding specific hardware acceleration to processors makes me wonder if we'll see ******** software that's unoptimized but compatible across architectures.
 

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
Also, speaking of software, the trend towards adding specific hardware acceleration to processors makes me wonder if we'll see ******** software that's unoptimized but compatible across architectures.
We already do, unfortunately.
 