
applesaucePro

macrumors member
Original poster
Jun 19, 2018
Since the upcoming mac pro may turn out to be a huge disappointment, here are some alternatives, plus some help figuring out what system specs you might need if you build or buy something else.

#1 do you need an expensive xeon cpu? probably not. the main reason to get a xeon is to run multiple cpus or use ecc memory.

#2 do you need ecc memory? if an error in your work means someone could die, then you need ecc, otherwise it's optional. memory errors are extremely rare, and are usually harmless. non-ecc is cheaper and runs faster, so it's better in several respects.

#3 do you need a "professional" graphics card? a top end gamer card runs faster and performs better in most autodesk apps than a low end quadro/firepro these days; most people don't know that.

#4 brand name or diy? a diy system is cheaper and lets you customize everything, and all parts in a diy system still carry a manufacturer's warranty. custom shops can build a diy system to your specifications, and some offer support contracts. the benefits of buying a brand name vary by brand; some may be dubious or worthless. the best benefit of apple, to me, is being able to run macOS apps and windows apps. HP, dell, etc. offer no benefits that i care about.

confusing xeon cpu options:
gold and platinum are the real xeons. xeon gold allows up to 4 cpus, and xeon platinum allows up to 8. if you want a dual cpu xeon workstation, you want xeon golds. spreading work across two physical cpu chips can make the computer faster at many tasks, sometimes much faster; e.g. two 8 core chips can beat a single 16 core chip, since you also get twice the memory channels and cache.

xeon-w is crippled: it only permits a single cpu, so it's almost pointless. the only real benefit of this cpu is that it supports ecc memory. apple uses a xeon-w in the imac pro. i expect the upcoming modular mac pro will use two xeon golds.

avoid all xeon bronze and silver, and avoid xeon gold 51xx models. these are all low spec chips, designed for low-power datacenters and other special use cases.


the best brand-name xeon systems right now are by HP:
HP Z8 ...supports dual cpu.
HP Z6 ...supports dual cpu on a janky as hell adapter!
HP Z4 ...supports a single xeon-w cpu only.

be careful if you need to use pcie cards: using pcie lanes can disable features on your motherboard, such as usb ports and sata ports for your disk drives. study the motherboard's manual very carefully to see what happens when certain pcie slots are in use. the Z8 is the least restrictive when it comes to pcie lanes, but it still has restrictions.

the best diy motherboards i know of are by supermicro and asus.

if you want to know about xeons, here is a good site:
https://www.servethehome.com/intel-...atinum-gold-silver-bronze-naming-conventions/
 
#2 do you need ecc memory? if an error in your work means someone could die, then you need ecc, otherwise it's optional. memory errors are extremely rare, and are usually harmless. non-ecc is cheaper and runs faster, so it's better in several respects.
And an error in non-ECC RAM might silently corrupt data on disk, or cause an OS or application crash without any troubleshooting info.

An uncorrectable error in ECC RAM will blue-screen the system with a memory error (single-bit errors are corrected on the fly). No corruption, and no question that the problem is anything but a hardware error.

And the "non-ECC runs faster" is such a minor issue that it should be ignored. Do you want your system to be 0.02% faster, and crash or corrupt data at random?
 
During the last year or so, I've been more and more tempted to just build my own. AMD's 32-core Threadripper is supposed to come out, and having a few GPUs for rendering is so tempting. The last straw was when I checked InfiniBand cards (available in 40Gb/s and 100Gb/s) for fast storage that sits far from the workstation. Those are way, way cheaper and faster than 10GbE, and allow longer (if more expensive) cables than Thunderbolt. Add a nice monitor and it's so very, very tempting.

What holds me back is macOS. I just like it too much when I have a choice. And for personal, non-critical work, I still do. For work, I already have a workstation and don't need to worry about that. That train left a while ago.
 
What holds me back is macOS. I just like it too much when I have a choice. And for personal, non-critical work, I still do.
maybe there is hope for hackintosh configurations. everyone seems to think this capability will get patched out soon, but i'm not sure why; something to do with dark mode.

Do you want your system to be 0.02% faster, and crash or corrupt data at random?
you are right that errors can potentially cause problems, but it can be an acceptable risk; everyone should decide if it's worth the very significant added cost. engineers will want ecc, but those doing visualization may not need or want it. for those playing games on the workstation, ecc is going to cost 10FPS at least. bad news for game developers, or those who do non-engineering design work and want to game.

even using ecc isn't completely immune to problems. with ecc you still suffer every single memory error that non-ecc suffers from; errors are detected and, where possible, corrected, but some errors are impossible to fix.
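
to make the corrected-vs-uncorrectable distinction concrete, here's a minimal python sketch of the SECDED (single-error-correct, double-error-detect) idea that ecc modules implement in hardware. real dimms protect 64 data bits with 8 check bits; this toy version protects just 4 data bits with a Hamming(7,4) code plus one overall parity bit:

```python
def encode(d):
    """pack 4 data bits into an 8-bit SECDED codeword."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = d                   # data bits
    c[1] = c[3] ^ c[5] ^ c[7]                    # hamming check bits
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]   # overall parity bit
    return c

def decode(c):
    """fix a single flipped bit; flag a double flip as uncorrectable."""
    s = ((c[1] ^ c[3] ^ c[5] ^ c[7])
         | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1
         | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)     # syndrome: position of a single flip
    p = c[0] ^ c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]
    if s == 0 and p == 0:
        return "clean", c
    if p == 1:                                   # odd number of flips: correctable
        c[s] ^= 1                                # s == 0 means the parity bit itself flipped
        return "corrected", c
    return "uncorrectable", c                    # even number of flips: detect only

word = encode([1, 0, 1, 1])
word[6] ^= 1                                     # one bit flip: gets corrected
print(decode(list(word)))                        # ('corrected', ...)
word[3] ^= 1                                     # a second flip: detected, not fixable
print(decode(list(word)))                        # ('uncorrectable', ...)
```

this is why an ecc machine can keep running through a single flipped bit but has to halt on a double flip, while a non-ecc machine notices neither.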

the OS and programs themselves can have bugs, and those corrupt files worse than anything. disk failures, power supply failures, and crypto-viruses or ransomware can lock or destroy your files. memory errors are the least of my concerns.

so i think the risk of memory errors shouldn't be blown out of proportion. ecc is not a voodoo amulet that takes away all risk. studies on ecc are based on servers in large data farms; i doubt any study was ever done on ecc in workstations. this means error rates for our uses can't be accurately predicted, but we know they are low.

google runs its servers hard and found that its errors are mostly caused by the memory itself going bad, not solar radiation bit flips, so they suggest replacing memory every 2 years. i seriously doubt a workstation would have memory fail in 2 years' time.
Right now I am looking at a BOXX system, was originally looking at the HP Z Series. Still deciding on specific model/specs, but hell there are some serious contenders.
boxx looks good. some other builders to take a look at are:
https://www.pugetsystems.com
https://www.falcon-nw.com
https://www.maingear.com

if you don't need a xeon, you can also consider the HP omenX; it's in a giant cube shaped chassis.
maingear also sells an omenX; it costs slightly more but is also more customizable. a review of it is here:
https://arstechnica.com/gadgets/2016/08/hp-omen-x-cube-gaming-pc-price-details/
 
you are right that errors can potentially cause problems, but it can be an acceptable risk; everyone should decide if it's worth the very significant added cost. engineers will want ecc, but those doing visualization may not need or want it. for those playing games on the workstation, ecc is going to cost 10FPS at least. bad news for game developers, or those who do non-engineering design work and want to game.

Where is this 10FPS coming from? It doesn't fit my observation.

I bet you know that we can use non-ECC RAM in the cMP. And I can't see a 10FPS improvement from using non-ECC RAM.
 
Another benefit, albeit artificial, of a Xeon CPU is the availability of more PCIe lanes. ECC memory is slightly slower than non-ECC memory, but very few applications would benefit from non-ECC memory. As for workstation graphics cards, the primary difference is in the drivers, not the hardware. The drivers are optimized for workstation use, where stability and accuracy are preferred.

One needs to understand their workload and test suitable configurations to determine the best solution that fits their needs. This is a lot simpler with Apple as they offer a limited number of options. Unfortunately some of those options are slowly fading away as the technology has aged and Apple hasn't updated them.
 
And an error in non-ECC RAM might silently corrupt data on disk, or cause an OS or application crash without any troubleshooting info.

An uncorrectable error in ECC RAM will blue-screen the system with a memory error (single-bit errors are corrected on the fly). No corruption, and no question that the problem is anything but a hardware error.

And the "non-ECC runs faster" is such a minor issue that it should be ignored. Do you want your system to be 0.02% faster, and crash or corrupt data at random?

Look at the graphics card problem thread and the users affected. What are the chances of having a non-ECC memory crash vs the problems faced by those users?
 
Look at the graphics card problem thread and the users affected. What are the chances of having a non-ECC memory crash vs the problems faced by those users?
The vast majority of graphics card problems I've read revolve around people trying to use unsupported cards. Are there other issues you have in mind?
 
you are right that errors can potentially cause problems, but it can be an acceptable risk; everyone should decide if it's worth the very significant added cost. engineers will want ecc, but those doing visualization may not need or want it. for those playing games on the workstation, ecc is going to cost 10FPS at least. bad news for game developers, or those who do non-engineering design work and want to game.

  1. I'm very skeptical of "cost 10fps at least".
  2. The majority of people here prefer to have ECC, even those who explicitly state it's not a fundamental requirement:
[attached screenshot: forum poll results]
 
Where is this 10FPS coming from? It doesn't fit my observation.
I'm very skeptical of "cost 10fps at least".

it's a known and well established fact that ecc ram can cause computer tasks to run up to 2% slower. and obviously slowing the computer will reduce your fps in games. i rattled off 10fps as a ballpark estimate based on benchmarks i saw a while ago.

the truth is you'll never know for sure how many fps you lost in each game unless you spend all day benchmarking them on your own hardware. then after that, what about the input lag?

my point was really that any drop in performance is unacceptable for someone who needs realtime video and input response. this might also go for people who want to monitor live video or render it or do compositing.

the poor film student who needs a ton of ram can buy 2 or maybe even 3 times as much non-ecc ram for the same price. right? is he better off getting more student loans to buy ram, or is he better off buying non-ecc?
 
it's a known and well established fact that ecc ram can cause computer tasks to run up to 2% slower. and obviously slowing the computer will reduce your fps in games. i rattled off 10fps as a ballpark estimate based on benchmarks i saw a while ago.
Can you provide references as I am unable to find anything regarding this well established fact?

the truth is you'll never know for sure how many fps you lost in each game unless you spend all day benchmarking them on your own hardware. then after that, what about the input lag?
IMO ECC is not a requirement for gaming. However computers are used for tasks other than gaming.

my point was really that any drop in performance is unacceptable for someone who needs realtime video and input response. this might also go for people who want to monitor live video or render it or do compositing.
One can easily capture real time video using ECC memory.

the poor film student who needs a ton of ram can buy 2 or maybe even 3 times as much non-ecc ram for the same price. right? is he better off getting more student loans to buy ram, or is he better off buying non-ecc?
If you're a poor film student then I would say the work you're performing is not absolutely critical and therefore passing on ECC memory is acceptable.
 
it's a known and well established fact that ecc ram can cause computer tasks to run up to 2% slower. and obviously slowing the computer will reduce your fps in games. i rattled off 10fps as a ballpark estimate based on benchmarks i saw a while ago.

the truth is you'll never know for sure how many fps you lost in each game unless you spend all day benchmarking them on your own hardware. then after that, what about the input lag?

my point was really that any drop in performance is unacceptable for someone who needs realtime video and input response. this might also go for people who want to monitor live video or render it or do compositing.

the poor film student who needs a ton of ram can buy 2 or maybe even 3 times as much non-ecc ram for the same price. right? is he better off getting more student loans to buy ram, or is he better off buying non-ecc?

First of all, supporting ECC does not mean ECC is required. In your scenario of wanting to buy a large amount of RAM and save money feel free to replace it with non-ECC RAM. The fact that ECC RAM is supported shouldn't stop you from choosing non-ECC RAM.

Secondly, gamers who are serious enough to care about a 2% performance difference shouldn't be using a Mac Pro. If they were forced to because that's their one computer and they need it for work too, then there are many other things working against them that will dwarf that 2%.

Third, for a 2% loss to equal 10FPS loss, you'd have to be getting 500FPS. That doesn't sound like a problem to me.

Fourth, I don't accept that 2% number anyway. I know that's the common claim floating about, but here are a slew of benchmarks that show otherwise:
[attached screenshot: ECC vs non-ECC benchmark results]

These numbers are so close that they might as well be run-to-run variation. And note that in some cases the ECC actually benches faster.

So given that there is no real performance hit and anyone who wants non-ECC can just go ahead and buy it, there's just not a problem here.
 
yep, this might lead us to believe that the fps drop is between 1 and 3 frames on average, if this flawed benchmark can be trusted. one big problem with those bench numbers, though, is that they didn't do any actual game benchmarks; unigine is a synthetic benchmark, and they didn't report the actual fps numbers. a real game may throw out entirely different numbers. but once again, i didn't intend to cause a ruckus about specific fps numbers.
 
it's a known and well established fact that ecc ram can cause computer tasks to run up to 2% slower.
Perhaps the ECC penalty would be noticeable if your "computer task" is to run fine-grained synthetic memory tests that measure latency with the cache essentially disabled.

For the sake of argument I'll accept your 2% guess. If you get 90% cache hits, it drops to 0.2%. 95% hit is 0.1%.
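
As a back-of-envelope sketch (the hit rates below are illustrative assumptions, not measurements):

```python
# the ECC penalty only applies to accesses that miss the cache and
# actually go out to RAM, so it gets diluted by the cache hit rate
raw_penalty = 0.02                       # the claimed worst-case 2%
for hit_rate in (0.90, 0.95, 0.99):
    effective = (1 - hit_rate) * raw_penalty
    print(f"{hit_rate:.0%} cache hits -> {effective:.2%} effective penalty")
# 90% cache hits -> 0.20% effective penalty
# 95% cache hits -> 0.10% effective penalty
# 99% cache hits -> 0.02% effective penalty
```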

Look at this thread around https://forums.macrumors.com/thread...th-various-mem-configs.1704700/#post-18745317 - it discusses standard benchmark results which show little difference between an MP6,1 with 1 DIMM, 2 DIMMs, 3 DIMMs, and 4 DIMMs. The "uncached" synthetic benchmarks show a big difference, but real applications that use the cache see little difference.

This is probably part of the explanation for why the quote from ActionableMango shows so little difference. Real applications are usually tuned to make as good use of the cache as possible.
 
as far as i know the 2% statistic is for overall performance. and if you were to disable caching, latency would get even worse, since everything must come from ram or the swapfile on disk.
 
it's a known and well established fact that ecc ram can cause computer tasks to run up to 2% slower. and obviously slowing the computer will reduce your fps in games. i rattled off 10fps as a ballpark estimate based on benchmarks i saw a while ago.

the truth is you'll never know for sure how many fps you lost in each game unless you spend all day benchmarking them on your own hardware. then after that, what about the input lag?

my point was really that any drop in performance is unacceptable for someone who needs realtime video and input response. this might also go for people who want to monitor live video or render it or do compositing.

the poor film student who needs a ton of ram can buy 2 or maybe even 3 times as much non-ecc ram for the same price. right? is he better off getting more student loans to buy ram, or is he better off buying non-ecc?

I talked from my own experience: ECC RAM won't cause any noticeable difference in gaming. Also, as ActionableMango pointed out, if that 2% equals 10FPS, then 100% is 500FPS. This number doesn't make any sense to me. On the cMP, it's almost impossible to achieve that FPS at any settings due to the limited CPU single-core performance.

Besides, that 2% is a theoretical figure, and should only have an effect when memory bandwidth is the real bottleneck (e.g. extremely large data movements across the memory channels with the cache intentionally disabled). And that is far from the real world anyway.

If we can never know how many FPS are lost, then how can you confirm there is an FPS drop at all? We've shown you all the evidence that there is no real-world performance penalty. It's very clear.

And why suddenly talk about input lag, or video editing? All the high-end studios use ECC RAM for video editing. They even use graphics cards with ECC VRAM for editing. So you mean that microsecond of input lag (if it really exists) is very important for a student YouTuber but not for a Hollywood film maker???

And please go to eBay and check DDR3 prices again. ECC RAM is actually cheaper, or much cheaper, than non-ECC RAM.

It sounds strange, but there is a reason. ECC RAM is usually only used in high-end computers and servers. And today, anyone who requires high-end gear has already moved to DDR4. The server companies just want to sell their DDR3 ECC RAM at e-waste prices to clear the stock. So: high supply, but extremely low demand.

On the other hand, normal DDR3 is still the major memory type for mid-to-low-end home computers, so demand is still very high. For poor students, it's much more sensible to buy server-pulled DDR3 ECC RAM.

as far as i know the 2% statistic is for overall performance. and if you were to disable caching, latency would get even worse, since everything must come from ram or the swapfile on disk.

Please re-read post #15; that's the real-world result. And your 2% is not a statistic but pure theory.
 
the 2% statistic is backed by the manufacturers of ecc ram; it's not a theory. add to this the slowdown you get from microcode patches (up to 30%, thanks meltdown and spectre!), the latency of disk access, and on and on it goes. ever hear of tolerance stacking? same idea. why give up even a little bit of performance, if performance is your main need in a workstation? there are people buying keyboards with cherry mx speed switches to shave milliseconds off their response times, and you want to tell them some non-zero x% is an acceptable performance loss... nope, not to them.

also, who says that a 2% ecc penalty with a 10fps hit scales linearly up to 500fps? you can't make assumptions of that kind; that's why real world benchmarks are needed.
 
everything must come from ram or the swapfile on disk
Leave the pagefile out of the discussion. Even an SSD pagefile is many orders of magnitude slower than RAM (hundreds of microseconds vs. nanoseconds).
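
Rough numbers (typical order-of-magnitude latencies, not measurements of any particular system):

```python
ram_access_ns = 100          # ~100 ns for a DRAM access
ssd_pagein_ns = 100_000      # ~100 us for an SSD page-in
print(f"an SSD pagefile hit is roughly {ssd_pagein_ns // ram_access_ns:,}x "
      f"slower than a RAM access")   # roughly 1,000x
```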
also, who says that a 2% ecc penalty with a 10fps hit scales linearly up to 500fps? you can't make assumptions of that kind; that's why real world benchmarks are needed.
ActionableMango showed you real world benchmarks demonstrating essentially no difference in real world applications across different memory types.

And, by the way, can you send a link to who said that there was a "2% ecc penalty with 10fps"? That seems to be your invention.
 
the 2% statistic is backed by the manufacturers of ecc ram; it's not a theory. add to this the slowdown you get from microcode patches (up to 30%, thanks meltdown and spectre!), the latency of disk access, and on and on it goes. ever hear of tolerance stacking? same idea. why give up even a little bit of performance, if performance is your main need in a workstation? there are people buying keyboards with cherry mx speed switches to shave milliseconds off their response times, and you want to tell them some non-zero x% is an acceptable performance loss... nope, not to them.

also, who says that a 2% ecc penalty with a 10fps hit scales linearly up to 500fps? you can't make assumptions of that kind; that's why real world benchmarks are needed.

What?

Please solve "X" in the following

if 2% = 10FPS

then

100% = X FPS?
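
Spelled out, taking both of your numbers at face value:

```python
penalty_fraction = 0.02      # the claimed 2% ECC slowdown
fps_lost = 10                # the claimed FPS cost
baseline_fps = fps_lost / penalty_fraction
print(baseline_fps)          # 500.0 -- far beyond anything a cMP can drive
```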

And please show us any real-world result which proves that 2% exists in almost any normal usage.

I can understand why memory manufacturers claim that ECC RAM may be up to 2% slower. They have to do that to protect themselves; it's just legal / commercial caution, nothing to do with real-world performance. And you've decided to stick with those theoretical / spec / worst-case claims... and ignore all the facts from real-world benchmarks and usage?
 
you can't make assumptions of that kind; that's why real world benchmarks are needed

Fair enough. I can't find any gaming benchmarks comparing non-ECC and ECC RAM, which in itself is telling. Gaming on a workstation is just an afterthought.

So the closest I can find is this AnandTech article. It's not comparing ECC vs non-ECC, but it is comparing the effects of RAM speed (both throughput and latency) on gaming.

For those who don't want to click through and read, the results are basically that there is little to no difference in gaming performance, even with some pretty dramatic differences in RAM speed. Here is one example:

[benchmark chart from the AnandTech article]


I'm not much of a gamer, but from what little I've seen, it generally seems like the GPU is by far the biggest determining factor, with CPU as a distant second place. RAM speed differences don't seem to amount to anything.

Basically, I think if you want cheap RAM you should buy ECC server pulls. If you want faster RAM, it doesn't really matter whether it's ECC or not.

P.S. I feel kind of bad that it seems like this is turning into a big debate. I know you're new and you were just trying to help people and you put some effort into it. I don't really want to discourage that. But on the other hand, I just disagree with the assertions being made.
 
The vast majority of graphics card problems I've read revolve around people trying to use unsupported cards. Are there other issues you have in mind?
Not the thread where users buy or modify or use "supported" cards. That would obviously be user fault. I meant the thread about the 2013 Mac Pros running on faulty D300s and D500s, causing crashes and freezes. The 2013 Mac Pros that Apple has acknowledged have problems.
https://forums.macrumors.com/thread...u-driver-issues.1860297/page-63#post-26186654

In a tech forum, there are many complaints, but people complaining about non-ECC memory problems are quite rare.
 
P.S. I feel kind of bad that it seems like this is turning into a big debate. I know you're new and you were just trying to help people and you put some effort into it. I don't really want to discourage that. But on the other hand, I just disagree with the assertions being made.

i don't mind debating it; i think everyone finds it interesting and is having fun trying to prove me wrong! the debate is complicated to follow as it gets longer, so i am going to underline and bold the things i think are the most important points or could be confusing.

comparing the effects of RAM speed (both throughput and latency) on gaming.

the performance of non-ecc memory in games is well understood because many benchmarks have been done on it. but i don't think you will find many gaming benchmarks of ecc memory, and that's what we are interested in. you can't assume the trends in benchmarks of non-ecc ram will match those of ecc; they are two separate things.

it's also important to know that the benchmarks you posted earlier of ecc ram were done on ddr3 modules, not ddr4. nobody is using ddr3 anymore; we really need tests of ddr4. do we assume dual-channel ddr3 behaves the same way as 6-channel ddr4 on some test? we can't assume that; we would be guessing.

there are some benefits to faster ram speeds that, while small, aren't completely trivial. i could try to dig up that info if anyone wants to know about it. i think those benchmarks are all for non-ecc ram though.

I'm not much of a gamer, but from what little I've seen, it generally seems like the GPU is by far the biggest determining factor, with CPU as a distant second place. RAM speed differences don't seem to amount to anything.

performance of games is tricky. some games are more CPU than GPU dependent, and vice versa. some games will run better on ati vs nvidia hardware, and vice versa. there are so many factors; this is why you need to benchmark a number of games, to eliminate those biases in the testing.

Leave the pagefile out of the discussion. Even an SSD pagefile is many orders of magnitude slower than RAM (hundreds of microseconds vs. nanoseconds).

true, but in real world use you do have a disk system; otherwise how can you fill the ram or cache with data? benchmarks are going to be influenced by the speed of those components: ssd vs platter disk, cas latency of the ram, whether there's a raid 0 array. it's why i said the benchmark should really be done on your own system to make it most relevant. it may be impossible to know exactly how many fps you lost, with all these variables in play that differ with each system.

ActionableMango showed you real world benchmarks demonstrating essentially no difference in real world applications across different memory types.

real benchmarks of games are definitely needed; that's what everyone is questioning the most: how big is the fps hit? if these app benchmarks are demonstrating heavily optimized cache hits, that may not be the case for a game which needs to load tons of assets in and out of memory. anything that doesn't fit into the texture memory of the graphics card loads from system memory or disk, for example.

which brings us to another point: if ecc is so great and doesn't affect the speed in any appreciable way, why isn't the GPU using it? hmm!

also, why would it be considered OK to render frames using the non-ecc ram on my GPU, but you guys consider it dangerous to render frames with non-ecc system memory? explain to me how it's not the exact same thing.


can you send a link to who said that there was a "2% ecc penalty with 10fps"? That seems to be your invention.

the benchmarks ActionableMango showed come from puget systems here:
https://www.pugetsystems.com/labs/articles/ECC-and-REG-ECC-Memory-Performance-560/

on that page, you will also find:
Crucial has a statement on their knowledge base that ECC memory will decrease your computer's performance by about 2% compared to standard memory.

so that's not my invention; it's stated as a fact by at least one manufacturer, and it's referenced in many places on the net. puget systems didn't see a 2% penalty in that specific battery of tests, but it's possible they did the tests wrong, because they have suspicious results with registered ecc being faster than non-ecc, and registered ecc has an extra buffering step that standard ecc does not have. furthermore, this was a test of ddr3, not ddr4 ram; it's not even the right type of memory module. tests from anandtech or toms hardware would be more reliable, if any exist.

a 10 fps hit in games was my own ballpark estimate of what you may see with ecc ram. this is what you guys seem to disagree with the most.

ActionableMango combined two separate data points and came up with a formula to link them together:
2% of 500fps = 10fps

his suggestion is that a 10fps hit means we are getting 500fps. but that formula is not correct at all. you cannot just take two numbers, create a linear equation out of them, and expect it to make sense.
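
here's a toy model of the point i'm making: if only a fraction of each frame's time is actually memory-bound, a 2% ram slowdown translates into far less than a 2% fps drop, so the quoted 2% figure and an fps number can't be converted back and forth with a straight line. the memory-bound fractions below are made-up numbers for illustration:

```python
def fps_with_ecc(base_fps, mem_fraction, ram_penalty=0.02):
    """slow down only the memory-bound share of each frame's time."""
    frame_time = 1.0 / base_fps
    slowed = (frame_time * (1 - mem_fraction)
              + frame_time * mem_fraction * (1 + ram_penalty))
    return 1.0 / slowed

for mem_fraction in (0.1, 0.5, 1.0):
    print(f"{mem_fraction:.0%} memory-bound: {fps_with_ecc(100, mem_fraction):.2f} fps")
# 10% memory-bound: 99.80 fps
# 50% memory-bound: 99.01 fps
# 100% memory-bound: 98.04 fps   (out of a 100 fps baseline)
```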
 