We do know that RAM errors happen, and that they cause random crashes and data corruption. Another benefit of ECC RAM is that it can warn you before a module fails outright: as it starts to degrade, it produces a rising rate of corrected errors that you can actually watch.
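For the curious, on Linux that early warning is directly visible: the EDAC driver exposes per-memory-controller counters of corrected and uncorrected errors under sysfs. A minimal sketch that reads them, assuming the EDAC module is loaded and the usual /sys/devices/system/edac/mc/ layout:

```python
#!/usr/bin/env python3
"""Sketch: read Linux EDAC counters to spot a DIMM that is starting to fail.
Assumes the EDAC driver is loaded and exposes memory controllers under
/sys/devices/system/edac/mc/ (mc0, mc1, ...)."""
from pathlib import Path

EDAC_ROOT = Path("/sys/devices/system/edac/mc")

def read_ecc_counts():
    """Return {controller_name: (corrected, uncorrected)} for each controller."""
    counts = {}
    for mc in sorted(EDAC_ROOT.glob("mc*")):
        ce = int((mc / "ce_count").read_text())  # corrected errors (ECC fixed them)
        ue = int((mc / "ue_count").read_text())  # uncorrected errors (data was lost)
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    for name, (ce, ue) in read_ecc_counts().items():
        print(f"{name}: corrected={ce} uncorrected={ue}")
        if ce > 0:
            print(f"  {name} is correcting errors -- worth keeping an eye on this DIMM")
```

Run it periodically (cron, monitoring agent, whatever) and a DIMM that starts throwing corrected errors stands out long before it takes the machine down.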
And what if it isn't a crash but silent data corruption? Why is that only bad if it's on a server? A businessman's Excel sheet, a software developer's source code, your Amazon order and online banking transaction, are they not important?
The same persistent data that passes through RAM at least once on its way to and from persistent storage? ECC is not rocket science. We are investing so much into nicer screens, faster drives, faster everything actually, so why not invest just a little bit to make your system more reliable, too?
I totally understand what you mean. However, at the end of the day this is a cost-benefit analysis. The main question for me is how real the chance of "proper" data corruption is, as opposed to a crash. I mean, if it costs 5–10% of performance at every given moment to avoid something that will probably not happen in my lifetime… is it really a must-have feature? But if instead one can demonstrate that data corruption is real and happens with a high enough frequency (even if it's one file every few months), then the ECC tradeoff is definitely worth it.
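For what it's worth, anyone can gather that evidence themselves: checksum a set of files you never modify (photos, archives) and re-verify them over time; a mismatch on a file you didn't touch means something in the RAM/storage path corrupted it. A rough sketch, with the manifest path and command names made up for illustration:

```python
#!/usr/bin/env python3
"""Sketch: record SHA-256 checksums of files that should never change,
then re-verify later to detect silent corruption.
Usage: script.py record <directory>   or   script.py verify"""
import hashlib
import json
import sys
from pathlib import Path

MANIFEST = Path("checksums.json")  # illustrative manifest location

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def record(root: Path) -> None:
    manifest = {str(p): sha256(p) for p in root.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify() -> None:
    manifest = json.loads(MANIFEST.read_text())
    for name, digest in manifest.items():
        if sha256(Path(name)) != digest:
            print(f"MISMATCH: {name}")

if __name__ == "__main__":
    if len(sys.argv) >= 3 and sys.argv[1] == "record":
        record(Path(sys.argv[2]))
    else:
        verify()
```

It won't tell you whether RAM, the bus, or the disk was at fault, but it does put a number on how often "one corrupted file every few months" actually happens on your own machine.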