I addressed this way back in post #1,496, which I will repost the guts of for everyone's convenience:
I think this is a variation of what Ryan Hileman quoted on Twitter: "I have someone with a failing M1 disk. 4mo old, 2% spare, 10% thresh, 98% used, 600TB write 500TB read, 200h "on", 10,000 "Media and Data Integrity Errors". Machine had an inconsistent glitch in my app. 16/512GB machine, typical RAM use 9GB. Working on RCA, ama."
Well, it was claimed this genius was using this as the Postgres server for a bank. This is like using a crowbar in place of a hammer; yes, you can use it that way, but it is really, really dumb.
Something I didn't notice then and do now is that 2% spare; it starts out at 100%. So they have blown through 98% of the base capacity and 98% of the spare. Based on other results, something is way wrong with those numbers.
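For anyone who wants to see how those figures hang together, here is a minimal sketch of how the quoted SMART values would be read. The field names follow the NVMe SMART/Health Information log (Available Spare, Available Spare Threshold, Percentage Used, Media and Data Integrity Errors); the helper function and warning strings are my own illustration, not anything from the tweet or from Apple's tooling.

```python
# Sketch: interpreting the NVMe SMART health fields quoted in the tweet.
# The function name and messages are hypothetical; the numbers at the
# bottom are the tweet's, not measurements of mine.

def interpret_smart(available_spare_pct, spare_threshold_pct,
                    percentage_used, media_errors):
    """Return a list of warning strings for the given SMART values."""
    warnings = []
    # Available Spare starts at 100% and counts down as reserve blocks
    # are consumed; dropping to the threshold is a drive-failure state.
    if available_spare_pct <= spare_threshold_pct:
        warnings.append("available spare at or below threshold")
    # Percentage Used is the vendor's estimate of consumed endurance;
    # 100% means the rated write endurance is exhausted.
    if percentage_used >= 90:
        warnings.append("rated endurance nearly exhausted")
    if media_errors > 0:
        warnings.append("media/data integrity errors logged")
    return warnings

# The tweet's numbers: 2% spare, 10% threshold, 98% used, 10,000 errors.
print(interpret_smart(2, 10, 98, 10_000))
```

Running it with the tweet's numbers trips all three warnings, which is the point: by the drive's own bookkeeping this SSD is effectively dead at four months old.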
New material
Let's be clear here: these Macs were never designed for that.
The reply right after this, #1,497, summed it up perfectly: "Clearly that case is completely invalid then, and can be disregarded, as that is not something the system was designed to do at all."
Some additional research turned up a thread, "SSDs for SQL Server? Good or Bad Idea?", which stated that SQL is insanely write-happy, with one person saying: "unless you have a ton of money to spend - it's a bad idea. SQL does a lot of writing, which SSD is slow at. if you do it, ONLY go Enterprise class SSD (which will double to triple the price,) but extend the life of the drives by 5-6 times because they have smarter controllers and embed roughly double the amount of drive space as printed on the label to account for sector death."
I seriously doubt the first M1 Macs have Enterprise-class SSDs. So our only example of an M1 SSD failure is flawed to the point of uselessness and involves apparent misuse of the M1 Macs available.