
im_jerry87

macrumors newbie
Original poster
Mar 17, 2022
tl;dr - These benchmark comparisons are not fair.

I'm not going to go very deep here, just deep enough for you to understand what's going on.

1. Cinebench R23

CR23's render engine uses Intel Embree, Intel's library for accelerating ray-tracing computation on the CPU. It supports various SIMD instruction sets for the x86 architecture, among them SSE and AVX2. AVX2 is a newer, wider Intel SIMD instruction set that outperforms SSE, and CR23 is AVX-heavy, so you know where this is going. Now, ARM's SIMD instruction set is NEON, and Intel Embree obviously has no native NEON implementation. So, for CR23 to even run on Apple silicon, Intel Embree had to be ported to ARM64, which, thanks to Syoyo Fujita, became possible. The catch is that the SSE or AVX2 intrinsics need to be translated to NEON intrinsics for every application, which is a huge pain in the ass. There is a library (actually a header) available to do that, but it only covers SSE2NEON, not AVX2NEON. Going by the GitHub comments on Apple's pull request against Intel Embree, Apple is working on bringing AVX2NEON support to Apple silicon. Even after that, I'm not sure CR23 will be a fair comparison. Intel might introduce a superior SIMD instruction set, and then Apple again has to submit a pull request on Intel Embree for a NEON translation? Man, that's PAIN.
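To give a rough idea of what that intrinsic translation looks like, below is a minimal sketch of the kind of 1:1 mapping a header like SSE2NEON provides for the easy cases. The vec4_* wrapper names are made up purely for illustration; this is not the actual SSE2NEON code.

```c
/* Minimal sketch of an SSE <-> NEON mapping for the simple cases.
 * The vec4_* names are hypothetical; real headers like sse2neon map the
 * _mm_* intrinsics themselves so existing SSE code compiles unchanged. */
#include <stdio.h>

#if defined(__aarch64__)
  #include <arm_neon.h>
  typedef float32x4_t vec4f;                          /* 4 packed floats (NEON) */
  static inline vec4f vec4_load(const float *p)      { return vld1q_f32(p); }
  static inline vec4f vec4_add(vec4f a, vec4f b)     { return vaddq_f32(a, b); }
  static inline void  vec4_store(float *p, vec4f v)  { vst1q_f32(p, v); }
#else
  #include <xmmintrin.h>
  typedef __m128 vec4f;                               /* 4 packed floats (SSE) */
  static inline vec4f vec4_load(const float *p)      { return _mm_loadu_ps(p); }
  static inline vec4f vec4_add(vec4f a, vec4f b)     { return _mm_add_ps(a, b); }
  static inline void  vec4_store(float *p, vec4f v)  { _mm_storeu_ps(p, v); }
#endif

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
    vec4_store(out, vec4_add(vec4_load(a), vec4_load(b)));   /* one SIMD op, four adds */
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}
```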

2. Geekbench GPU Compute

First of all, I've seen a few comments here saying that you can't compare Metal vs. CUDA. Not true. Geekbench is a cross-platform benchmark, and it's perfectly fine to compare Metal against CUDA. What isn't a fair comparison is OpenCL, since OpenCL is deprecated on macOS. But the real issue is that, for some reason, the GPU compute benchmark on Apple silicon doesn't ramp up GPU frequencies or even come close to the maximum power the GPU draws under full load. How can this be a fair comparison when the Apple silicon GPU isn't even fully utilized? This was first noted in the M1 Pro/Max review comments by Andrei Frumusanu, formerly of AnandTech and now at Nuvia.

3. Question you might have

A. If Geekbench GPU compute doesn't work as expected for Apple silicon, how can we compare GPU performance against Nvidia or AMD?

I would highly recommend GFXBench 5.0 Aztec Ruins High 1440p Offscreen and 3DMark Wild Life Extreme Unlimited. Both are native to Apple silicon with Metal support and, more importantly, really stress the GPU; since they are offscreen tests, they give you a clear picture of the performance. Keep in mind, though, that 3DMark is still an iOS app, and I'm not sure whether that carries any penalty versus the native Windows implementation. And no, SPECviewperf v2.0 doesn't support Metal, if you are wondering.

Below are the screencaps from Dave2D's and Ars Technica's Mac Studio reviews:

[Attached screenshots: Screenshot 2022-03-18 at 12.51.56.png, Mac-Studio-review.006.png]


B. If Apple Silicon GPUs are so powerful, then why are Blender benchmarks underwhelming compared to Nvidia's?

Two Reasons:

-> Blender 3.1 is just the first stable release supporting Metal in Cycles, and even Blender themselves, in a video going over all the updates, said that more performance optimizations for Metal are yet to come. I would definitely expect Apple silicon GPUs to match the CUDA scores of the latest Nvidia GPUs in Blender benchmarks in the future.

-> But that's only against CUDA. Nvidia would still smoke Apple Silicon in OptiX, because Apple has nothing close to OptiX: there are no ray-tracing cores in Apple GPUs for Metal to take advantage of. I'd love to see Apple package RT cores into their GPU designs and optimize Metal to take advantage of those cores, or even write a separate API for accelerated ray tracing like OptiX.

C. How can we compare the CPU performance of Apple Silicon against an x86 chip if CR23 is not fair?

As a consumer, I really don't know. Maybe Blender benchmarks using the CPU? If you're a professional, you already know about industry-standard benchmarks like SPEC, SPECint, SPECfp, etc. But I don't think anyone except AnandTech uses these benchmarks, and the real problem is these YouTubers, man. It's painful to watch, and even more painful to read the comments of viewers who take these benchmark results as if they were all that matters when buying a machine.

D. Are there any games out there that would be a fair comparison for measuring GPU performance?

World of Warcraft. It's one of the very few games that's native to Apple Silicon and also supports Metal.

4. Final Note

I have reached out to The Verge (Becca, Monica, Nilay, and Chaim) and Ars Technica (Andrew Cunningham) to correct them on their recent Mac Studio video/article. I didn't get any reply. I even reached out to the Linus and MKBHD guys (Andrew, Adam, and Vinh) with these points for their upcoming reviews. But again, no reply. I don't blame them, though; maybe they just haven't seen my messages yet, I reached out via Twitter DM after all. Hence this post, to bring a little awareness to people who might not know about these details. Finally, it is very important to understand that Apple doesn't sell you SoCs. They sell you computers, so choose wisely without falling for these YouTubers or tech publications like The Verge, who run these benchmarks without doing any research on the tools they use or on the inaccurate conclusions that can come out of the results.

Cheers!
 
Great analysis and great job on letting the reviewers know.

The review on The Verge was particularly dismissive of the M1 Ultra GPU, for no good reason other than to bash on Apple in my opinion.

Of course Apple's charts are not to be taken as indisputable truth, but the gap between what The Verge reported and what Apple claims is too large.

So large, in fact, that if the reviewer knew anything about computers at all, they would have run more tests and more benchmarks to confirm their findings.

Instead, they just ran Geekbench Compute (which by now we know is flawed) and Tomb Raider (which is not even native) to make their point.

I'm not gonna visit The Verge anymore. Their reviews are a rushed job at best and utter garbage at worst.
 
Great analysis and great job on letting the reviewers know.
As someone who works in the gaming industry: my team is asking about the workload-to-ramp-up ratio and whether high power mode was used, and acknowledges that most software tests don't take into account that Apple Silicon will not utilize all the resources of each core until it is absolutely necessary. The M1 Max and Ultra will prove themselves in the long haul, with practical performance outshining their PC brethren. It's the flexibility and utility of the architecture that makes these chips unique, not just their raw computing capabilities.
 
1. Cinebench R23


The big underlying issue here is that Embree is developed with SSE/AVX instruction semantics in mind. They compile it to ARM-native NEON by bridging the Intel SIMD intrinsics to ARM-native instructions, but it’s a bit like translating a poem word by word from French to German: it just won’t be optimal. For optimal support, Embree would need to be rewritten by hand using ARM intrinsics, but that is very unlikely to happen.
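To make the word-by-word translation point concrete, here is one well-known example: _mm_movemask_ps (collect the sign bits of four floats into a 4-bit integer) is a single instruction on x86, but NEON has no direct equivalent, so it has to be emulated with several instructions. The emulation below is just one possible way to do it, not necessarily what SSE2NEON or Embree’s ARM64 port actually emit.

```c
/* _mm_movemask_ps has no one-instruction NEON counterpart; this is one
 * possible emulation (AArch64), shown purely for illustration. */
#include <stdio.h>

#if defined(__aarch64__)
#include <arm_neon.h>

static inline int movemask_ps(float32x4_t v) {
    uint32x4_t sign = vshrq_n_u32(vreinterpretq_u32_f32(v), 31); /* 0 or 1 per lane */
    const int32_t lane_shift[4] = {0, 1, 2, 3};
    uint32x4_t bits = vshlq_u32(sign, vld1q_s32(lane_shift));    /* move lane i's bit to position i */
    return (int)vaddvq_u32(bits);                                /* horizontal add -> 4-bit mask */
}

int main(void) {
    float data[4] = {-1.0f, 2.0f, -3.0f, 4.0f};
    printf("mask = 0x%x\n", movemask_ps(vld1q_f32(data)));        /* lanes 0 and 2 negative -> 0x5 */
    return 0;
}
#else
#include <xmmintrin.h>

int main(void) {
    float data[4] = {-1.0f, 2.0f, -3.0f, 4.0f};
    printf("mask = 0x%x\n", _mm_movemask_ps(_mm_loadu_ps(data))); /* single MOVMSKPS instruction */
    return 0;
}
#endif
```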

I hope Apple has learned its lesson from this initial reaction and starts disclosing how it does benchmarks, so that third parties can check them.

Why? A bunch of tech nerds on forums complaining about opaque benchmarks doesn’t have any influence on Apple marketing.
 
1. Cinebench R23
Another issue with Cinebench is that it heavily favors CPUs with many slow cores over CPUs with fewer fast cores. Most users will benefit more from a few fast cores than from many slow cores.

I explained here why Cinebench sucks as a CPU benchmark:
Again, the same Andrei Frumusanu agreed with me that Geekbench 5 is better for CPU benchmarking than Cinebench.

The reason Cinebench is so popular is that AMD pushed it in its marketing when it had many slower cores (Zen 1 & Zen 2) versus Intel's fewer, faster cores.
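As a back-of-the-envelope illustration of the many-slow-cores vs. few-fast-cores trade-off, here is a tiny Amdahl's-law sketch. The core counts and per-core speeds are completely made up for illustration, not measurements of any real chip: an embarrassingly parallel tile render rewards core count, while a mostly serial everyday workload rewards per-core speed.

```c
/* Amdahl's-law toy comparison of two hypothetical CPUs.
 * All numbers are invented for illustration only. */
#include <stdio.h>

/* Relative throughput of n cores with per-core speed s on a workload
 * whose parallel fraction is p: s / ((1 - p) + p / n). */
static double throughput(int n, double s, double p) {
    return s / ((1.0 - p) + p / n);
}

int main(void) {
    struct { const char *name; int cores; double speed; } cpu[] = {
        { "A: 16 slow cores", 16, 1.0 },
        { "B:  8 fast cores",  8, 1.4 },
    };
    double workloads[] = { 0.99, 0.50 };  /* tile renderer vs. typical interactive app */

    for (int w = 0; w < 2; w++) {
        printf("parallel fraction p = %.2f\n", workloads[w]);
        for (int c = 0; c < 2; c++)
            printf("  %s -> relative throughput %.2f\n",
                   cpu[c].name, throughput(cpu[c].cores, cpu[c].speed, workloads[w]));
    }
    /* With p = 0.99 the 16 slow cores win; with p = 0.50 the 8 fast cores win. */
    return 0;
}
```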
 
I firmly believe that we deserve reproducible benchmarks that accurately reflect the performance of daily tasks.

I fully agree. Unfortunately, that is not the most effective marketing. What we really need is enforceable legislation that would require all the relevant information to be published.
 
They compile it to ARM-native NEON by bridging the Intel SIMD intrinsics to ARM-native instructions, but it’s a bit like translating a poem word by word from French to German.
Your translation vs. transliteration analogy is on point. Good stuff.

Why? A bunch of tech nerds on forums complaining about opaque benchmarks doesn’t have any influence on Apple marketing.
You’re making a fallacious argument here. Apple’s MO has been cognitive ease since its inception. Perhaps they wanted controversy to attract attention and trolls; that’s a normal marketing tactic these days. It sucks. Dubious marketing puzzles are beneath them and their customers. They’re good at interesting puzzles, and they should stick to that.
 
You’re making a fallacious argument here. Apple’s MO has been cognitive ease since its inception. Perhaps they wanted controversy to attract attention and trolls; that’s a normal marketing tactic these days. It sucks. Dubious marketing puzzles are beneath them and their customers. They’re good at interesting puzzles, and they should stick to that.

Past experience tells us that it is uncommon for Apple to engage in benchmark manipulation. But yeah, comparing the Ultra to a 3090 is a bit… too much. It’s definitely on par with a desktop 3080, though.
 
What we really need is enforceable legislation that would require all the relevant information to be published.
Enforceable legislation would be the perfect solution, as it could establish clear guidelines that would prevent hand-picked benchmarks and non-reproducible charts. But legislation is slow, and we should aim for stopgap solutions.
 
Nice write-up.

One comment I would make is that it's hard to compare, as a lot of software is still not really M1-optimized; even Adobe, who now have Photoshop for Apple Silicon, told me to run it under Rosetta 2 to fix some bugs I was having.

One good result I have seen that gives a better idea of the M1 Ultra's GPU power is Maxon Redshift, where it is pretty close to a 6800 XT.
 
Apple really shot themselves in the foot by comparing the GPU to a 3090 and not specifying what it was actually testing.

Despite the prevailing narrative on tech sites, as Leman noted, Apple is usually in the correct ballpark when it comes to figures, and this appears to be the second time they've significantly over-promised on the GPU side of things with the M1 chips. Not detailing their testing methods is really hurting them, particularly for pro machines. If nothing else, this is a pretty big PR blunder, and they should disclose how they're testing the GPUs; otherwise they're severely denting their reputation.

Enforceable legislation would be the perfect solution, as it could establish clear guidelines that would prevent hand-picked benchmarks and non-reproducible charts. But legislation is slow, and we should aim for stopgap solutions.

Both of the major graphics card manufacturers have been caught cheating in benchmarks; ATI in Quake and Nvidia in 3DMark spring to mind. It would be satisfying if they could get slapped with fines for such deliberate chicanery.

And then there's Samsung:

 
Another issue with Cinebench is that it heavily favors CPUs with many slow cores over CPUs with fewer fast cores. Most users will benefit more from a few fast cores than from many slow cores.
Which game did you use for the 1080p correlation graph? I imagine it's not available for Apple Silicon, but adding another vendor to the graph would've probably strengthened the point about Cinebench scores being a bad predictor of gaming performance, despite same-vendor results being correlated.

Despite the prevailing narrative on tech sites, as Leman noted, Apple is usually in the correct ballpark when it comes to figures, and this appears to be the second time they've significantly over-promised on the GPU side of things with the M1 chips. Not detailing their testing methods is really hurting them, particularly for pro machines. If nothing else, this is a pretty big PR blunder, and they should disclose how they're testing the GPUs; otherwise they're severely denting their reputation.
My guess is that their GPU results were picked to reflect what the hardware is capable of doing, not necessarily what the available software does with that hardware right now.

Or maybe they just used Aztec Ruins High 1440p.
 
My guess is that their GPU results were picked to reflect what the hardware is capable of doing, not necessarily what the available software does with that hardware right now.

Or maybe they just used Aztec Ruins High 1440p.

It's funny, though, because when you look at the example benchmarks on the product page, they look mostly accurate (despite being a nebulous "x times faster"), with the scaling being about 1.5x on most of them and the Redshift benchmark looking pretty disappointing.

And then they have these magical graphs in the product launch which seem to have no relationship to the rest of the benchmarks... As you say, perhaps geared towards the capability of the hardware, but that feels a bit misleading. Or perhaps Aztec Ruins High 1440p :)

Hopefully when they get around to launching the Mac Pro they'll actually detail wtf they are benchmarking...
 
Both of the major graphics card manufacturers have been caught cheating in benchmarks; ATI in Quake and Nvidia in 3DMark spring to mind. It would be satisfying if they could get slapped with fines for such deliberate chicanery.

And just last year AMD was caught tweaking internal CPU performance settings on an app-by-app basis. It wouldn’t have been a big deal, but those CPU settings could result in occasional crashes (but games crash, so it’s fine, right?). Best part: the utility doing it was disguised as a power management driver or something like that.
 
From my perspective, it’s worse than that: Apple has never used GPU performance as a gaming metric (perhaps with the Pippin?), and yet testers are using gaming FPS as a GPU metric.

There’s no way whatsoever that Apple meant the M1 Ultra outperformed the RTX 3090 in gaming.

While it might be entertaining to see how high quality one can game on a Mac (I do so, on my laptop when travelling, but generally default to a dedicated WinPC when I’m home), if you’re buying a Mac for its gaming performance you’ll be on the wrong side of a price/performance chart. And that’s fine. Just don’t argue that this is not the case or that HOLY COW THE M1ULTRA CANNOT HIT EVEN 50% GAMING PERFORMANCE VS A 3090?!

Take this next comment with a grain of salt… the size of a salt lick: I would guess that the reason Apple used the Nvidia chip as a point of comparison is not that it’s the top consumer GPU (arguable, I understand) but rather that it’s from Nvidia and Apple is still ticked at them. Apple still sells AMD GPUs in its Intel-based Macs, and it would hardly be great to directly compare the Studio GPUs with the ones in the Mac Pro (workstation-class GPUs, not gaming)…
 
From my perspective, it’s worse than that: Apple has never used GPU performance as a gaming metric (perhaps with the Pippin?), and yet testers are using gaming FPS as a GPU metric.

There’s no way whatsoever that Apple meant the M1 Ultra outperformed the RTX 3090 in gaming.

The funny thing is that gaming capability is probably where the Ultra most closely matches the 3090… although the RTX 3080 would be a more apt comparison. Anyway, in a sophisticated and well-optimized title, the Ultra would have a chance to outperform the 3090.
 
The funny thing is that gaming capability is probably where the Ultra most closely matches the 3090… although the RTX 3080 would be a more apt comparison. Anyway, in a sophisticated and well-optimized title, the Ultra would have a chance to outperform the 3090.
I had written a huge piece but deleted it, because we need to compare apples to apples, and that subject is substantially more complex than even the OP hinted at.

Just be aware that a 2022 AAA video game supporting DLSS and ray tracing on a desktop RTX 3080 with a 4K monitor will know no equal from Apple (nor AMD). But if you're playing 2015's Rise of the Tomb Raider at 1080p against a WinPC laptop with DLSS turned off (and no ray tracing available), you will indeed be content with Apple's gaming specs...
 
I had written a huge piece but deleted it, because we need to compare apples to apples, and that subject is substantially more complex than even the OP hinted at.

Just be aware that a 2022 AAA video game supporting DLSS and ray tracing on a desktop RTX 3080 with a 4K monitor will know no equal from Apple (nor AMD). But if you're playing 2015's Rise of the Tomb Raider at 1080p against a WinPC laptop with DLSS turned off (and no ray tracing available), you will indeed be content with Apple's gaming specs...

Oh well, sure, if you take ray tracing into account, Apple has no chance. But rasterization and shading performance, especially taking Apple’s persistent shader pipelines into consideration, is absolutely comparable.
 
Just be aware that a 2022 AAA video game supporting DLSS and ray tracing on a desktop RTX 3080 with a 4K monitor will know no equal from Apple (nor AMD). But if you're playing 2015's Rise of the Tomb Raider at 1080p against a WinPC laptop with DLSS turned off (and no ray tracing available), you will indeed be content with Apple's gaming specs...
Yep, I agree. There is no competing in that space you quoted!

Chasing the top-end gaming market is not something Apple wants to or should focus on. There is a very, very, VERY small percentage of users who max everything out, use DLSS and ray tracing, and run at 4K. I prefer 1080p or 1440p for the 200+/144+ Hz high refresh rates over visuals, and many people I know want the same thing: higher frame rates over higher graphics settings. The top 5 GPUs are the following (totaling about 30% of the gaming market on Steam), and in fact the 3080 (not the 3090) only reports 1.14% usage while the 3090 is at 0.42%.

NVIDIA GeForce GTX 1060
NVIDIA GeForce GTX 1650
NVIDIA GeForce GTX 1050 Ti
NVIDIA GeForce RTX 2060
NVIDIA GeForce GTX 1050

If you want to max everything out and play at the highest settings, go right ahead. That is why there are hardware options and in-game settings to change. However, Macs will NEVER.....EVER be for you. Period. The only possible system that could compete is the Mac Pro, with its large PSU, and that was and will continue to be way outside anyone's budget except extreme professionals, and a COMPLETE 100% waste of money for gaming; no one can deny that. If my $2,500 custom-built gaming PC can play games better than the $7,000 Mac Pro could, why spend that money?

But Apple CAN, and their hardware certainly DOES, compete with the 1060, 1650, 1050 Ti, and others in that range. Those are what's most popular.

I really can't understand the constant hate/frustration Apple gets about "gaming on Macs". This has been a recurring thing since I became interested in Macs back in 2006, and I even had several heated discussions about "why the heck I would get a 2010 Mac Pro", so it's been over a decade and a half of hearing pretty much the same things. Have you looked at Surface products and seen their prices? Those can't do DLSS/ray tracing, some even use integrated graphics, and they still cost almost $2,000!

A ~$4,000 Dell Precision 7820 (a work system, which is what Macs are too) has an NVIDIA RTX A4000, which falls a bit shy of matching a 3060 Ti. Not that great from a "gaming" perspective at $4,000 either.

I guess it depends on what people are familiar with. I have spent $7,000+ on computers for work before, and they have been HORRIBLE for gaming compared to a $2,500 gaming system. Work and gaming don't always make for a valid comparison. In fact, one system a while back was a $5,000 Dell workstation, and it was beaten, gaming-wise, by a $400 PlayStation 4!
 
But Apple CAN, and their hardware certainly DOES, compete with the 1060, 1650, 1050 Ti, and others in that range. Those are what's most popular.

There are two relevant tiers here, IMO. The first is “everyday gaming”, which Apple has firmly covered with the base consumer M-series. The prosumer Pro/Max/Ultra chips are priced out of the normal consumer range and are positioned as professional GPUs, so they are less relevant to the gaming discussion, IMO. But so far my M1 Max smoothly runs everything I've tried at 4K.
 
Baldur's Gate 3 is a newer example of an ARM-native game. There is also something strange going on with GFXBench: while the Ultra is faster than the Max in Aztec High Tier 1440p, in several other tests it's slower than the Max.
 
A ~$4,000 Dell Precision 7820 (a work system, which is what Macs are too) has an NVIDIA RTX A4000, which falls a bit shy of matching a 3060 Ti. Not that great from a "gaming" perspective at $4,000 either.

I guess it depends on what people are familiar with. I have spent $7,000+ on computers for work before, and they have been HORRIBLE for gaming compared to a $2,500 gaming system. Work and gaming don't always make for a valid comparison. In fact, one system a while back was a $5,000 Dell workstation, and it was beaten, gaming-wise, by a $400 PlayStation 4!

There is no such thing as a “workstation” anymore; it is a thing of the past. Consumer chips are as powerful as server chips now.

Now you can build a gaming PC with a 64-core AMD CPU and an RTX 3090 for a fraction of the price.

Unless there is a specific use case where you cannot use a gaming/consumer component (SolidWorks, for example), you are better off with a gaming PC.
 