
Maximara

macrumors 68000
Jun 16, 2008
1,707
909
To be honest, I don't think the size is so much an issue as the types of writes. Large sequential writes are ideal for SSDs and have pretty much 1x write amplification.

It's rewriting small pieces of data - indexing, access timestamps, etc. - that can kill P/E cycles more quickly.
The flaw in that reasoning is that, IIUC, TBW only cares about how much is written in total, not how the data is written. So a 200 TBW drive would still count 2TB as 1% of the TBW regardless of whether that 2TB was written as 200 10 GB files or one huge 2TB file.
 

rui no onna

Contributor
Oct 25, 2013
14,920
13,264
The flaw in that reasoning is that, IIRC, TBW only cares about how much is written in total, not how it is written, i.e. 200 10 GB files is the same as one huge 2TB file; either way you have 2TB removed from the life of the drive.

We don't know if drive writes are measured in terms of host writes or NAND writes. For most SSDs, I believe it's typically host writes, with a separate S.M.A.R.T. attribute used to track NAND writes or P/E cycles.

As I recall, NAND flash can be written in smaller pages but can only be erased by block. Rewriting 4 KB (host write) could mean you're causing 8 MB worth of P/E cycle wear, depending on block size. There's a reason SSDs employ wear-leveling techniques to reduce write amplification. I expect write amplification is also partly why TBW warranty figures are usually lower than what NAND writes should allow based solely on capacity x P/E cycles.

For what it's worth, all the endurance tests websites run on SSDs showing petabytes' worth of writes before dying are based on large-block sequential writes (1x write amplification) rather than random small-block writes (which tend to be the real killer).


TL;DR

500,000,000 random writes of 4 KiB each is very likely much harder on the SSD than 200 sequential writes of 10 GiB or a single sequential write of 2 TiB.
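To make that concrete, here is a toy sketch of why small random rewrites can inflate NAND wear so badly. The 8 MiB erase-block size and the worst-case assumption that every 4 KiB rewrite forces a whole-block relocation are illustrative assumptions, not measured values, and this is not how any real controller accounts for wear.

```python
# Toy model only: assumes an 8 MiB erase block and the pessimistic case where
# every small rewrite forces the controller to relocate a whole block.
BLOCK = 8 * 1024 * 1024   # assumed erase-block size (8 MiB)

def nand_bytes_sequential(host_bytes):
    # Large sequential writes fill whole blocks, so NAND writes ~ host writes (WA ~ 1x).
    return host_bytes

def nand_bytes_random_worst_case(host_bytes, io_size=4 * 1024):
    # Worst case: each small rewrite lands in a full block and triggers a
    # read-modify-write of the entire block.
    return (host_bytes / io_size) * BLOCK

host = 2 * 1024**4  # 2 TiB of host writes (~500 million 4 KiB writes)

for label, nand in (("sequential", nand_bytes_sequential(host)),
                    ("random 4 KiB", nand_bytes_random_worst_case(host))):
    print(f"{label:>12}: {nand / 1024**4:7.0f} TiB NAND writes, WA ~ {nand / host:.0f}x")
```

A real drive sits somewhere between those two extremes, but the gap is the whole point: the same host TBW can cost wildly different amounts of NAND wear.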
 
Last edited:

DeanL

macrumors 65816
May 29, 2014
1,357
1,290
London
If RAM memory chips can sustain intensive writes for decades, what makes everyone here think that Apple's SSDs cannot?
 
  • Like
Reactions: osplo

osplo

macrumors 6502
Nov 1, 2008
351
196
If RAM memory chips can sustain intensive writes for decades, what makes everyone here think that Apple's SSDs cannot?
Good point.

I am not much concerned about SSDs failing. It might happen, of course, but I've had many, many drives (and many, many computers) and only spinning hard drives have died on me. Never an internal SSD, never a RAM chip.
 

Maximara

macrumors 68000
Jun 16, 2008
1,707
909
If RAM memory chips can sustain intensive writes for decades, what makes everyone here think that Apple's SSDs cannot?
RAM and SSDs are two very different things. Unlike RAM, every write (reading doesn't matter) damages the storage components of an SSD "cell". After enough damage the cell simply won't reliably hold data (comparable to a bad sector on a platter drive).

We have a ballpark idea of the warranted range from Samsung - from 3 years or 70 TB TBW for the lowest-end 250 GB drive up to 5 years or 4,800 TB TBW for the top-of-the-line SSD. As The SSD Endurance Experiment: Casualties on the way to a petabyte showed, even 2014-era SSDs lasted insane amounts of writes, reaching 700 TB before the first one died, with many going well past that.

The Samsung 840 Pro 256GB is warranted for 73 TB TBW and yet lasted to 900 TB before going bye-bye - over 12 times what it was warranted for. Its wear leveling count (percentage used? not sure) hit 0 at 300 TB, or 4 times the warranted amount.
 

Maximara

macrumors 68000
Jun 16, 2008
1,707
909
TL;DR

500,000,000 random writes of 4 KiB each is very likely much harder on the SSD than 200 sequential writes of 10 GiB or a single sequential write of 2 TiB.
That would help explain why trying to back figure TBW from percentage used can produce totally gonzo numbers.
 
  • Like
Reactions: Fomalhaut

rui no onna

Contributor
Oct 25, 2013
14,920
13,264
That would help explain why trying to back figure TBW from percentage used can produce totally gonzo numbers.

While it might be gonzo for general use, it's actually perfectly valid for determining SSD longevity for the person using it based on their actual workload.
 

leons

macrumors 6502a
Apr 22, 2009
662
344
Thanks to everyone for bringing this important subject back on topic. Some good discussion here!
We have a ballpark idea of the warranted range from Samsung - from 3 years or 70 TB TBW for the lowest-end 250 GB drive up to 5 years or 4,800 TB TBW for the top-of-the-line SSD. As The SSD Endurance Experiment: Casualties on the way to a petabyte showed, even 2014-era SSDs lasted insane amounts of writes, reaching 700 TB before the first one died, with many going well past that.

The Samsung 840 Pro 256GB is warranted for 73 TB TBW and yet lasted to 900 TB before going bye-bye - over 12 times what it was warranted for. Its wear leveling count (percentage used? not sure) hit 0 at 300 TB, or 4 times the warranted amount.
These numbers are the very reason I feel that our current best estimate of 1600TBW (for the M1 256GB drive) is reasonable, and based on the totality of the empirical evidence I choose to believe it to be true. I have verified in numerous ways, on numerous M1 Macs (not relying only on typical tools), that the percentage "used" as reported by SMART on these drives increases by 1% for every 16TBW. So far (although it remains to be further verified as these machines see more use) it appears to be relatively linear (e.g. similar use over a similar period increases the TBW/wear-percentage indicator consistently).

Looking at the above industry numbers, I am assuming that SSD reliability has increased at least somewhat over the last seven years and that Apple chose at least reasonable SSDs for these M1s. Apple is also most certainly aware of the SMART data loaded onto these drives (including the wear computation), and most likely even specified it themselves. Some may doubt these assumptions; for the doubters, take one-half of that predicted number (800TBW). Even without considering that study after study has historically shown SSDs typically lasting many multiples past their predicted SMART-reported wear limits, all the evidence points to the conclusion that these SSDs will last well past the useful life of the M1 computer itself.
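For anyone who wants to reproduce that arithmetic, here is a minimal sketch of the extrapolation being described. The 16TB-per-1% figure comes from the post above, not from any Apple specification.

```python
# Extrapolate a drive's implied write budget (TBW) from SMART wear data.
# The inputs below are the figures reported in this thread, not Apple specs.

def implied_tbw(tb_written, percent_used):
    """Full write budget implied by current wear, assuming linear wear-out."""
    if percent_used <= 0:
        raise ValueError("percentage used must be > 0 to extrapolate")
    return tb_written * 100 / percent_used

print(implied_tbw(tb_written=16, percent_used=1))      # 1600.0 -> the M1 256GB estimate
print(implied_tbw(tb_written=16, percent_used=1) / 2)  # 800.0  -> the halved "doubter" figure
```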
 
Last edited:

Thistle41

macrumors member
Mar 25, 2021
74
39
UK
Thanks to everyone for bringing this important subject back on topic. Some good discussion here!

These numbers are the very reason I feel that our current best estimate of 1600TBW (for the M1 256GB drive) is reasonable, and based on the totality of the evidence I choose to believe it to be true. I have verified in numerous ways, on numerous M1 Macs (not relying on any one specific tool), that the percentage "used" as reported by SMART on these drives increases by 1% for every 16TBW. So far (although it remains to be further verified as these machines see more use) it appears to be relatively linear (e.g. similar use over a similar period increases the TBW/wear percentage indicator consistently).

Looking at the above industry numbers, I am assuming that SSD reliability has increased at least somewhat over the last seven years and that Apple chose at least reasonable SSDs for these M1s. Apple is also most certainly aware of the SMART data loaded onto these drives, and most likely even specified it themselves. Some may doubt these assumptions; for the doubters, take one-half of that predicted number (800TBW). Even without considering that study after study has historically shown SSDs typically lasting many multiples past their predicted SMART wear limits, all the evidence points to the conclusion that these SSDs will last well past the useful life of the M1 computer itself.
Thanks for the summary. I forget whether further back in the thread there was some rule of thumb about daily use. However, I can confirm that my TBW of 20.2 corresponds to 1% of life used in the CL tool. Based on the assumed 1600TBW figure above, let's say we expect 10 years of SSD life.

That works out to 160TB/yr -> 438GB/day -> 18GB/hr. But all I am seeing is usage of 3.3 GB/hr over a 13-day run. So even if, as you say, we halve the figures above, it still looks (in my case anyway) good for a long time yet.

NB: I should add that I've turned off the disk cache in Firefox and applied some other minor tweaks as detailed earlier, which seem to have had no effect on performance or general use. I also have Auto Tab Discard installed.
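As a rough cross-check of those figures, here is the same back-of-envelope budget in code. The 1600TBW and 10-year inputs are the assumptions from the posts above, and the 3.3 GB/hr is the observed rate reported here; none of it is a manufacturer figure.

```python
# Convert an assumed TBW budget and lifespan into a daily/hourly write
# allowance, then compare against an observed write rate.

def write_budget(tbw, years):
    tb_per_year = tbw / years
    gb_per_day = tb_per_year * 1000 / 365   # decimal TB -> GB
    gb_per_hour = gb_per_day / 24
    return tb_per_year, gb_per_day, gb_per_hour

tb_yr, gb_day, gb_hr = write_budget(tbw=1600, years=10)
print(f"{tb_yr:.0f} TB/yr -> {gb_day:.0f} GB/day -> {gb_hr:.1f} GB/hr")
# 160 TB/yr -> 438 GB/day -> 18.3 GB/hr

observed_gb_hr = 3.3  # measured over a 13-day run
print(f"headroom vs observed rate: ~{gb_hr / observed_gb_hr:.1f}x")
```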
 

rob984

macrumors newbie
Apr 11, 2021
18
10
Hi, new here, and new to Macs. A week ago I bought a new MacBook Air M1 (256GB/8GB). I was constantly watching the disk-write numbers in Activity Monitor, and I know I did around 130GB of writes. But DriveDx shows 440GB of disk writes. I know I did not use it much, so how is that possible? Which number is correct?
 

deaglecat

macrumors 6502a
Mar 9, 2012
638
773
Hi, new here, and new to Macs. A week ago I bought a new MacBook Air M1 (256GB/8GB). I was constantly watching the disk-write numbers in Activity Monitor, and I know I did around 130GB of writes. But DriveDx shows 440GB of disk writes. I know I did not use it much, so how is that possible? Which number is correct?
I think the answer is that you have knowingly written 180GB (e.g. installations, file copies...) but behind the scenes some applications and the OS have been swapping data to the SSD in large volumes. That is the crux of the issue here... excessive writes that do not relate to explicit user actions.
 

leons

macrumors 6502a
Apr 22, 2009
662
344
You can't base endurance on previous data since endurance has decreased with higher density NAND.

SLC > MLC > TLC > QLC

It's also known that endurance decreases with smaller size.

So, if 512GB died after 600TB and four months of usage then 256GB could be worse.

That particular disk was reported as being defective for reasons unrelated to excess usage. In addition, previous SSD studies done at storage farms surprisingly found a higher incidence of failures in SSDs of greater size.
 

Ningj

macrumors member
Nov 21, 2020
59
36
I think the answer is that you have knowingly written 180GB (e.g. installations, file copies...) but behind the scenes some applications and the OS have been swapping data to the SSD in large volumes. That is the crux of the issue here... excessive writes that do not relate to explicit user actions.
Activity Monitor I/O counters get reset on every reboot. SMART data is persistent: it usually gets written out to flash by the drive controller every 1-2 minutes and reflects the total I/O the drive has processed over its lifetime.
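For reference, this is roughly how tools like DriveDx arrive at a lifetime "TB written" figure from that persistent SMART data. A minimal sketch, assuming smartmontools is installed and that its JSON output exposes the NVMe health log under the key names used by recent versions (on the internal Apple SSD this may require a current smartctl build):

```python
import json
import subprocess

def nvme_lifetime_writes(device="/dev/disk0"):
    # smartctl -j emits JSON; check=False because smartctl uses nonzero exit
    # bits for warnings even when the output is valid.
    out = subprocess.run(["smartctl", "-j", "-a", device],
                         capture_output=True, text=True, check=False)
    log = json.loads(out.stdout)["nvme_smart_health_information_log"]
    # Per the NVMe spec, one "data unit" is 1000 * 512 bytes = 512,000 bytes.
    tb_written = log["data_units_written"] * 512_000 / 1e12
    return tb_written, log.get("percentage_used")

if __name__ == "__main__":
    tb, used = nvme_lifetime_writes()
    print(f"~{tb:.1f} TB written over the drive's lifetime, {used}% of rated life used")
```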
 
  • Like
Reactions: leons

DeanL

macrumors 65816
May 29, 2014
1,357
1,290
London
We have a ballpark idea of the warranted range from Samsung - from 3 years or 70 TB TBW for the lowest-end 250 GB drive up to 5 years or 4,800 TB TBW for the top-of-the-line SSD. As The SSD Endurance Experiment: Casualties on the way to a petabyte showed, even 2014-era SSDs lasted insane amounts of writes, reaching 700 TB before the first one died, with many going well past that.
And that was in 2014 - we're seven years later, and years after Apple bought a lot of semiconductor/chip companies like Anobit... I don't know, but I feel like most of the longevity references being thrown around here are outdated.
 

Maximara

macrumors 68000
Jun 16, 2008
1,707
909
You can't base endurance on previous data since endurance has decreased with higher density NAND.

SLC > MLC > TLC > QLC

It's also known that endurance decreases with smaller size.

So, if 512GB died after 600TB and four months of usage then 256GB could be worse.

I addressed this way back in post #1,496, which I will repost the guts of for everyone's convenience:

I think this is a variation of what Ryan Hileman quoted on twitter:
"I have someone with a failing M1 disk. 4mo old, 2% spare, 10% thresh, 98% used, 600TB write 500TB read, 200h "on", 10,000 "Media and Data Integrity Errors". Machine had an inconsistent glitch in my app. 16/512GB machine, typical RAM use 9GB. Working on RCA, ama."

Well, it was claimed this genius was using the machine as the Postgres server for a bank. This is like using a crowbar in place of a hammer; yes, you can use it that way, but it is really, really dumb.

Something I didn't notice before but do now is that 2% spare; spare starts out at 100%. So they have blown through 98% of the base capacity and 98% of the spare. Based on other results, there is something way wrong with those numbers.

New material

Let's be clear here: these Macs were never designed for that.

The reply right after this #1,497 summed it up perfectly: "Clearly that case is completely invalid then, and can be disregarded, as that is not something the system was designed to do at all."

Some additional research turned up SSDs for SQL Server? Good or Bad Idea?, which stated that SQL is insanely write-happy, with one person saying "unless you have a ton of money to spend - it's a bad idea. SQL does a lot of writing, which SSD is slow at. if you do it, ONLY go Enterprise class SSD (which will double to triple the price,) but extend the life of the drives by 5-6 times because they have smarter controllers and embed roughly double the amount of drive space as printed on the label to account for sector death."

I seriously doubt the first M1 Macs have enterprise-class SSDs. So our only example of an M1 SSD failure is flawed to the point of uselessness and involves apparent misuse of the M1 Macs available.
 
Last edited:
  • Like
Reactions: osplo

rob984

macrumors newbie
Apr 11, 2021
18
10
I think your answer is that you have knowingly written 180Gb e.g. installations, file copies... but behind the scenes some applications and the O/S have been swapping data to the SSD in large volume. That is the crux of the issue here... excessive writes which do not relate to explicit user actions.
And I guess one part of that number is from factory testing and installing OS?
 
  • Like
Reactions: deaglecat

rui no onna

Contributor
Oct 25, 2013
14,920
13,264
You can't base endurance on previous data since endurance has decreased with higher density NAND.

SLC > MLC > TLC > QLC

It's also known that endurance decreases with smaller size.

So, if 512GB died after 600TB and four months of usage then 256GB could be worse.


It should be noted that we saw an increase in longevity with the 3D/Vertical NAND.

P/E cycles (typical, consumer level)
SLC NAND: 100K
50 nm MLC NAND: 10K
3x nm MLC NAND: 5K
2x nm MLC NAND: 3K
TLC NAND: 1K (e.g. Samsung 840 & 840 EVO, https://www.overclock.net/threads/r...arks-needed-to-confirm-affected-ssds.1512915/)
3D TLC VNAND: 3K
3D QLC VNAND: 1K

I addressed this way back in post #1,496, which I will repost the guts of for everyone's convenience:

I think this is a variation of what Ryan Hileman quoted on twitter:
"I have someone with a failing M1 disk. 4mo old, 2% spare, 10% thresh, 98% used, 600TB write 500TB read, 200h "on", 10,000 "Media and Data Integrity Errors". Machine had an inconsistent glitch in my app. 16/512GB machine, typical RAM use 9GB. Working on RCA, ama."

Well, it was claimed this genius was using the machine as the Postgres server for a bank. This is like using a crowbar in place of a hammer; yes, you can use it that way, but it is really, really dumb.

Something I didn't notice before but do now is that 2% spare; spare starts out at 100%. So they have blown through 98% of the base capacity and 98% of the spare. Based on other results, there is something way wrong with those numbers.

New material

Let's be clear here: these Macs were never designed for that.

The reply right after this #1,497 summed it up perfectly: "Clearly that case is completely invalid then, and can be disregarded, as that is not something the system was designed to do at all."

Some additional research turned up SSDs for SQL Server? Good or Bad Idea?, which stated that SQL is insanely write-happy, with one person saying "unless you have a ton of money to spend - it's a bad idea. SQL does a lot of writing, which SSD is slow at. if you do it, ONLY go Enterprise class SSD (which will double to triple the price,) but extend the life of the drives by 5-6 times because they have smarter controllers and embed roughly double the amount of drive space as printed on the label to account for sector death."

I seriously doubt the first M1 Macs have enterprise-class SSDs. So our only example of an M1 SSD failure is flawed to the point of uselessness and involves apparent misuse of the M1 Macs available.

Actually, SSDs deliver much higher IOPS than HDDs for database workloads. The only way you'd get faster is with RAM. Based on pure performance, SSDs are much better than HDDs for this task. Writes only slow down when the SSD is full/dirty, and you can mitigate that by adding extra overprovisioning. IIRC, one of my older SSDs, a Micron 5100, has a utility for adjusting internal OP. It's perfectly fine to use SSDs for the task, but appropriate sizing/OP and regular SSD replacement are required. Well, they do that for spinning platters too, anyway (usually recommended at the 3-year mark).

Mind, this is one of those instances where it's doing a ton of random small writes and write amplification must be much higher than normal.

I estimate the NAND flash Apple uses has either 6K P/E cycles or 3K with 100% OP, based on writes vs. % used in most reports I've seen. 512GB should be capable of ~3PB worth of NAND writes, so this particular workload looks like roughly 5x WA.
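Here is the same estimate spelled out as a quick sketch. The 6K P/E-cycle figure is the assumption stated above, and the 600TB / ~98% numbers come from the failure report quoted earlier; nothing here is a published Apple spec.

```python
# Back out the implied write amplification: how much WA would make the
# reported host writes consume the drive's assumed NAND endurance?

def implied_write_amplification(capacity_gb, pe_cycles, host_tb_written,
                                percent_used=100.0):
    nand_budget_tb = capacity_gb * pe_cycles / 1000      # capacity x P/E cycles
    nand_tb_consumed = nand_budget_tb * percent_used / 100
    return nand_tb_consumed / host_tb_written

# The failing 512GB machine from the tweet: 600TB host writes at ~98% used,
# assuming ~6K P/E cycles (3K with 100% OP gives the same total NAND budget).
wa = implied_write_amplification(capacity_gb=512, pe_cycles=6000,
                                 host_tb_written=600, percent_used=98)
print(f"implied write amplification: ~{wa:.1f}x")  # ~5x
```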
 
  • Like
Reactions: leons

southerndoc

Contributor
May 15, 2006
1,851
522
USA
This is why I get AppleCare and have backups. If it fails, I'll just have Apple fix it. If they fix enough of these, then they'll have to do a recall or face a class action lawsuit.

I'm sure they are aware of what is occurring as it likely showed up during their testing. I doubt Apple only tested the M1 Macs for a year. They've probably been testing for years.
 

rui no onna

Contributor
Oct 25, 2013
14,920
13,264
This is why I get AppleCare and have backups. If it fails, I'll just have Apple fix it. If they fix enough of these, then they'll have to do a recall or face a class action lawsuit.

I'm sure they are aware of what is occurring as it likely showed up during their testing. I doubt Apple only tested the M1 Macs for a year. They've probably been testing for years.

The years of Apple chipset testing would be on iPads and iPhones, and I expect they control the writes more heavily on those. iOS doesn't even have swap files in the traditional sense.

The sad reality is manufacturers can't really predict nor test all the myriad ways devices are used.
 

osplo

macrumors 6502
Nov 1, 2008
351
196
Yeah, a write is a write... the drive doesn't know/care that it is pre-delivery.

Which is also a very good point. The first weeks' or even the first two months' figures could be a bit biased by the initial load of macOS, patches, apps, and user data (on top of factory testing). Maybe some people here got scared when they checked the writes on their brand-new machine without considering that.
 
  • Like
Reactions: Thistle41 and leons

rob984

macrumors newbie
Apr 11, 2021
18
10
I just installed CrystalDisk on my Windows machine; my C: drive has 77TB of data written and it says the disk health is at 80%. And after searching the web for total host writes, I came across some very big numbers on Windows machines, like 1.2PB on SanDisk SSDs that are still going. Maybe we are panicking without good reason (me included, sometimes). I know the main disadvantage is that the SSD is soldered on, but most of us are buying mobile phones worth 1,000 USD or more, and their parts are soldered as well, right?
 
  • Like
Reactions: Maximara

Maximara

macrumors 68000
Jun 16, 2008
1,707
909
I just installed CrystalDisk on my Windows machine; my C: drive has 77TB of data written and it says the disk health is at 80%. And after searching the web for total host writes, I came across some very big numbers on Windows machines, like 1.2PB on SanDisk SSDs that are still going. Maybe we are panicking without good reason (me included, sometimes). I know the main disadvantage is that the SSD is soldered on, but most of us are buying mobile phones worth 1,000 USD or more, and their parts are soldered as well, right?
Interesting, as by the numbers the TBW for that drive works out to 385 TB (77 x 100 / (100 - 80)), which depending on the size of the drive could be well within reason.
 