All wrong yet again. Enterprise drives only exist in the spinning-disk domain. The latest SSD technology is more expensive and less available early on, so it targets the enterprise market first... Cloud providers like us actually use plain old what-you-call consumer-grade drives (which is the technology they called "enterprise" two years ago). Due to cost there's no real advantage in "enterprise"; we need lots of drives, and cheap.
And completely wrong on the stop/start of VMs. Today that's 100% SSD, as is the caching of memory for the host: again, SSD.
Here's a blog that also completely kills all your SSD arguments, including a study by Google and a university (hyperscalers and a researcher):
Read about the common lifespan of SSD and what you can do to prepare for when your solid-state drive fails.
www.solarwindsmsp.com
So keep on with your version of reality, and please go start an anti-SSD "Apple sucks" campaign on Facebook... or why not start a class action against Apple... you seem to know all the facts...
Last question for you: does 5G mean that the government is trying to control us?
I think you've confused "reliability" (as in mean time between failures) with "endurance" of the NAND cells. Endurance is affected only by the NAND cells wearing out, whereas reliability also involves the controller and its DRAM cache.
Since VMs will have limited amounts of RAM available to them (the M1 has an abysmal 16 GB maximum to allocate from in the first place), a VM is likely to use swap, which means writing to the SSD. And going by "your" paper, once the cells wear out, the drive starts producing unrecoverable read/write errors, and you will see more and more of them.
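To put rough numbers on the swap-wear point, here's a back-of-envelope sketch. The TBW rating and the daily-write figures below are illustrative assumptions for the sake of argument, not Apple's published specs (Apple doesn't publish a TBW rating for the soldered drive):

```python
# Back-of-envelope SSD endurance estimate.
# All figures are illustrative assumptions, NOT official Apple specs.

TBW_RATING_TB = 300        # assumed rated endurance for a ~256 GB class drive
DAILY_WRITES_GB = 50       # assumed baseline host writes (OS + apps)
VM_SWAP_GB_PER_DAY = 100   # assumed extra swap traffic from a RAM-starved VM

def years_to_wear_out(tbw_tb: float, gb_per_day: float) -> float:
    """Years until cumulative writes reach the rated TBW."""
    total_gb = tbw_tb * 1000
    return total_gb / gb_per_day / 365

print(round(years_to_wear_out(TBW_RATING_TB, DAILY_WRITES_GB), 1))
print(round(years_to_wear_out(TBW_RATING_TB, DAILY_WRITES_GB + VM_SWAP_GB_PER_DAY), 1))
```

Under these assumed numbers, heavy VM swap drags the drive's wear-out horizon from well over a decade down to around five years, i.e. right at the edge of a typical support window. Plug in your own write rates; the shape of the argument is what matters.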
I've never said that SSD is more reliable than HDD; that would be hilarious. My concern was that once the drive dies (or, worse, starts silently corrupting users' files through bit rot), you have no option to replace it and would have to buy a new machine, as it is likely to die outside the five years Apple is going to support it.
I'm afraid my version of reality is backed by numbers. Please check my prior posts in this thread, with links to the original testing and the specs.
Also, it seems you have confused me with someone else. And stop with the name-calling; I think it's immature. (And I'm actually pro-SSD, in fact.)
Excerpts from the summary of the paper you mentioned:
“we see that this perception is correct when it comes to SLC drives and their RBER, as they are orders of magnitude lower than for MLC and eMLC drives. However, Tables 2 and 5 show that SLC drives do not perform better for those measures of reliability that matter most in practice: SLC drives don’t have lower repair or replacement rates, and don’t typically have lower rates of non-transparent errors.”
“• While flash drives offer lower field replacement rates than hard disk drives, they have a significantly higher rate of problems that can impact the user, such as uncorrectable errors.”
“• Previous errors of various types are predictive of later uncorrectable errors”
and
“• Bad blocks and bad chips occur at a significant rate: depending on the model, 30-80% of drives develop at least one bad block and 2-7% develop at least one bad chip during the first four years in the field. The latter emphasizes the importance of mechanisms for mapping out bad chips, as otherwise drives with a bad chip will require repairs or be returned to the vendor.
• Drives tend to either have less than a handful of bad blocks, or a large number of them, suggesting that impending chip failure could be predicted based on the prior number of bad blocks (and maybe other factors). Also, a drive with a large number of factory bad blocks has a higher chance of developing more bad blocks in the field, as well as certain types of errors.”