How do we do these tests for ourselves? I'd like to compare my 2018 Mac Mini SSD to the one in my M1 Mac mini.
 
I don't know, but I suspect the previous poster means something like "how did you achieve 224 TBW in 612 hours of power on time?".
Well, that's the question I came to ask here.

The truth is I'm on the macOS beta, and I can tell I have a lot, and I mean a lot, of stutters since using this M1 Mac. Every time I check Activity Monitor, drive usage is in the red, and I have to wait until everything chills out and "moves everything to its place" before I can use it again without being disturbed by lags.


How do we do these tests for ourselves? I'd like to compare my 2018 Mac Mini SSD to the one in my M1 Mac mini.
brew install smartmontools && sudo smartctl --all /dev/disk0
 
There is a lot of noise about this. My Macs don't show much wear, and I've had them for three months now. But there are a few tasks that write a lot of data to the disks, and Apple should address those.

The first is Time Machine caching and the second is the pasteboard. The latter is what enables copy and paste on Macs.

More importantly, it enables this functionality across devices, and this is where I have had issues. If you copy extremely large files on one machine and paste on another, it obviously has to copy them over, and it also enables the undo function. That cannot be stored in memory, so the OS writes it to disk. That would be expected behavior, but consider what happens when you try to paste because you think you copied something recently, but actually copied something else on another Apple device.

It will try to copy that, and the system will bring up a dialog. But there must be some bug, because although you cancel, it sometimes keeps copying to the pasteboard cache. And if you copied, say, a movie, those can be extremely large and fill up space unnecessarily.
 
It's been a week since I created this thread.

A week ago I had 0.9 TBW (after 8 days).
Today, a week later (15 days of use):


Code:
Maximum Data Transfer Size:         256 Pages
Data Units Read:                    6,572,917 [3.36 TB]
Data Units Written:                 5,238,520 [2.68 TB]
Power On Hours:                     41
Media and Data Integrity Errors:    0
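For anyone wondering where the bracketed TB figure comes from: per the NVMe spec, one "data unit" is 1,000 sectors of 512 bytes, i.e. 512,000 bytes. A minimal sketch of the conversion, using the unit count from the output above:

```shell
# Convert NVMe "Data Units Written" to terabytes.
# One data unit = 1000 x 512 bytes = 512,000 bytes (per the NVMe spec).
units=5238520
awk -v u="$units" 'BEGIN { printf "%.2f TB\n", u * 512000 / 1e12 }'
# prints "2.68 TB", matching the smartctl output
```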

Let me be clear here: I have no idea what I did to write the extra 1.6 TB in just one week.
1. no new projects, only old stuff (node.js, Xcode)
2. no reinstalls, no updates as far as I know
3. I even stopped using Spotify a week ago :) and have been playing music on my Bluetooth headset from the phone
4. I was using an old version of node.js (v10), with the terminal opened via Rosetta 2. A couple of days ago I switched to the new v15 and a native terminal (without Rosetta). Watching TBW day by day, there is no difference at all.

I was trying to figure out where it could have come from, but for now I have no idea.

Starting tomorrow I will stop using a few apps that run in the background all the time:
- Gifox
- Desk remote control
- Tunnelblick (although there is no active connection 99.9% of the time anyway)
- Numi (I use it a lot, but I will try to stop for a couple of days and see)
- Spark email client (I will move that to my phone for a couple of days)
- Slack (again, I will move it to my phone for a couple of days)

And to be honest that's all I have on my computer.

Tools for my job that I can't turn off:
- Visual Studio Code
- Node.js (it's running almost 75% of the time I'm using the computer)
- Xcode
- Android Studio (I'm almost not using it at the moment anyway)

Let's see the results in the next 5-7 days.


----- edit
Yes, I know, 1.6 TBW in one week is nothing "to worry about". It should last for a couple of years at least. But still, it would be nice to be able to use this Mac mini OR sell it in the next 3-4 years, right? Especially since we can't replace the SSD.
 
Here are some benchmarks on non-M1 MacBook Pros:

1. MacBook Pro (Retina, 13-inch, Early 2015)
Processor: 3.1 GHz Dual-Core Intel Core i7
Memory: 16 GB 1867 MHz DDR3
Storage: 500 GB SSD
Age: ~4 years (3.5 years of professional use)

Code:
/usr/local/sbin/smartctl -x /dev/disk0 | grep Host_Writes_MiB | awk '{printf("Data written: %.4f MiB / %.4f GiB / %.4f TiB\n", $NF, $NF/1000, $NF/1000000)}'
Data written: 79180865.0000 MiB / 79180.8650 GiB / 79.1809 TiB

/usr/local/sbin/smartctl -x /dev/disk0 | grep Power_On_Hours | awk '{printf("Power On: %d hours / %.4f days / %.4f months / %.4f years\n", $NF, $NF/24, $NF/24/365.25*12, $NF/24/365.25)}'
Power On: 14292 hours / 595.5000 days / 19.5647 months / 1.6304 years

That machine averaged 79,180,865 MiB / 14,292 h ≈ 5,540 MiB/h ≈ 5.4 GiB/h (5.8 GB/h).
Most of the time, that machine has been running macOS 10.x (up to Catalina). Only recently was it upgraded to macOS 11 (Big Sur).
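As a sanity check on the averaging, the rate can be recomputed directly from the two SMART totals. Careful with the MiB-to-GiB step, which divides by 1024, not 1000 (the values below are the Host_Writes_MiB and Power_On_Hours figures from the output above):

```shell
# Average write rate from the SMART totals quoted above.
mib_written=79180865
hours=14292
awk -v m="$mib_written" -v h="$hours" \
    'BEGIN { printf "%.2f GiB/h (%.2f GB/h)\n", m / h / 1024, m / h * 1048576 / 1e9 }'
# prints "5.41 GiB/h (5.81 GB/h)"
```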

2. MacBook Pro (16-inch, 2019)
Processor: 2.6 GHz 6-Core Intel Core i7
Memory: 32 GB 2667 MHz DDR4
Storage: 1 TB SSD
Age: ~5 months

Code:
SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        29 Celsius
Available Spare:                    100%
Available Spare Threshold:          99%
Percentage Used:                    0%
Data Units Read:                    15,066,845 [7.71 TB]
Data Units Written:                 11,599,382 [5.93 TB]
Host Read Commands:                 371,113,221
Host Write Commands:                202,204,141
Controller Busy Time:               0
Power Cycles:                       134
Power On Hours:                     258
Unsafe Shutdowns:                   19
Media and Data Integrity Errors:    0
Error Information Log Entries:      0

Likewise, that machine read 7.71 TB and wrote 5.93 TB in 258 reported hours, or about 23 GB/h written. However... that machine has seen way more powered-on time than only 258 hours in those 5 months of professional use 😂.
So there must be a glitch in the reported power-on hours here. Impossible for now to draw conclusions.

In contrast to the first machine, this one has been operating on macOS 11 (Big Sur) most of the time.
 
An old tool might help narrow this issue down. It is called iofileb.d and uses dtrace. But you need to disable SIP.

- reboot
- option-R into Recovery, open Terminal: csrutil disable
- reboot
- open two terminal windows
- in one window, run sudo iofileb.d and leave it uninterrupted for, say, 1 hour or 8 hours
- in the other window, run sudo iofileb.d, wait 5 minutes, and hit Ctrl-C

It will dump bytes written in order from least to most, by process. The benefit of this over Activity Monitor is that short-lived processes should be caught too.

Another tool which may be useful is iosnoop, which shows live I/O.

When finished, don't forget to enable SIP again:
- reboot, option-R into Recovery, open Terminal: csrutil enable

Funny thing: when testing this, I logged in and there was quite a bit of Xcode activity. I didn't launch it, and I don't have it set to open at login.
 
Sooo, to view this data, a Homebrew package is needed. Has anyone considered that maybe the package reports incorrect data on ARM Macs?
 
I am beginning to suspect this is a reporting error and not real wear. This is because I ran DriveX on my MBA, which came online on 12-20. I have AVERAGED at least 40 hours powered on a week, and DriveX is giving a total powered-on time of less than 40 hours when it should be in the hundreds. It also shows my total writes as 0.9 TB since 12-20.
 
I am beginning to suspect this is a reporting error and not real wear. This is because I ran DriveX on my MBA, which came online on 12-20. I have AVERAGED at least 40 hours powered on a week, and DriveX is giving a total powered-on time of less than 40 hours when it should be in the hundreds. It also shows my total writes as 0.9 TB since 12-20.

The SMART power-on hours reported are for the SSD, not the system.
 
Well, that's the question I came to ask here.

The truth is I'm on the macOS beta, and I can tell I have a lot, and I mean a lot, of stutters since using this M1 Mac. Every time I check Activity Monitor, drive usage is in the red, and I have to wait until everything chills out and "moves everything to its place" before I can use it again without being disturbed by lags.



brew install smartmontools && sudo smartctl --all /dev/disk0
I think the people complaining are doing something that the rest of us "normal" users are not. The only system I have with 2% spare used is my 2018 Intel Mac mini with a 1 TB SSD and 32 GB of RAM. My 2019 16-inch MacBook Pro is fine, and so is my M1 Mac mini (16 GB, 1 TB).
 
2020 MBP 13" used for work, purchased May 2020:
Code:
SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        34 Celsius
Available Spare:                    100%
Available Spare Threshold:          99%
Percentage Used:                    0%
Data Units Read:                    51,504,350 [26.3 TB]
Data Units Written:                 17,677,809 [9.05 TB]
Host Read Commands:                 588,854,809
Host Write Commands:                341,702,338
Controller Busy Time:               0
Power Cycles:                       199
Power On Hours:                     389
Unsafe Shutdowns:                   79
Media and Data Integrity Errors:    0
Error Information Log Entries:      0

13" M1 Air purchased in December (smartctl gives the same output whether I run the intel or arm64 binary):
Code:
SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        34 Celsius
Available Spare:                    100%
Available Spare Threshold:          99%
Percentage Used:                    0%
Data Units Read:                    68,599,283 [35.1 TB]
Data Units Written:                 19,319,271 [9.89 TB]
Host Read Commands:                 1,880,612,779
Host Write Commands:                118,045,262
Controller Busy Time:               0
Power Cycles:                       128
Power On Hours:                     149
Unsafe Shutdowns:                   15
Media and Data Integrity Errors:    0
Error Information Log Entries:      0

So, higher than I expected, but not terrible. I did push the M1 quite a bit when it first arrived, but it's totally casual usage now.

The SMART power-on hours reported are for the SSD, not the system.
This is the kicker for me: the people calculating wild gigabytes-written-per-second numbers appear to be treating SSD power-on time as system uptime.
 
Has anyone even concluded that these writes are a problem for the SSD?

I still don't trust the SMART data, but obviously it shows different numbers for different users.

I mean, I'm not worried for my system at all; on my system the writes don't seem excessive.

But for the ones reporting a high number, is there really a problem?

Has anyone here ever written an SSD to death, or personally know anyone who has, and how many writes did it take?
 
Guys... it has been proven that the S.M.A.R.T. readings are correct. Those numbers are correct.

Power-on hours are not computer uptime ;)

And once again, guys: what are you doing on your computers that you have a few terabytes written in a couple of weeks?

1 TB = a 256 GB SSD written full and erased 4 times.

It doesn't matter if it will last for 2-3 years. The SSD is integrated into the SoC, so you can't replace it. In the next 2-3 years we won't be able to sell these computers, because some of the SSDs could be at the end of their life cycle.


—-edit:

10-30 TBW in 1 year may be OK. But not in 3-10 weeks ;)
 
Has anyone even concluded that these writes are a problem for the SSD?

I still don't trust the SMART data, but obviously it shows different numbers for different users.

I mean, I'm not worried for my system at all; on my system the writes don't seem excessive.

But for the ones reporting a high number, is there really a problem?

Has anyone here ever written an SSD to death, or personally know anyone who has, and how many writes did it take?

I think this is one of the earliest, if not the earliest, efforts to test NAND endurance on consumer-level SSDs.


IIRC, back then there was a die shrink almost yearly, and there were concerns that 25 nm would be inadequate for consumer use. Some SSDs just reached capacity × P/E cycles in terms of TBW, while others well exceeded their P/E cycle ratings. Intel and Samsung SSDs were pretty notable, which is understandable considering they manufactured their own NAND and had the opportunity to cherry-pick the best for their own use. Of course, there were also problematic controllers, such as Indilinx, which didn't do proper wear leveling. You'd have a dead SSD after a few months because some cells had thousands of P/E cycles used while others had less than 10.

Mind, Apple reports the Percentage Used of the drive's rated life. Based on some posts I've seen for the 256 GB and 512 GB models, it looks like these have around 6,000 P/E cycles, or 1.5 and 3 petabytes of writes respectively. I guess we'll see if Apple's "SSDs" can continue working even when they exceed their rated P/E cycles. From what I can tell, Apple considers the SSD a goner only when NAND cells actually start dying and need to be replaced by spare blocks.
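The petabyte figures above follow directly from capacity × rated P/E cycles. A quick sketch of the arithmetic (the 6,000-cycle rating is the estimate from the post, not a published Apple spec):

```shell
# Endurance estimate: capacity (GB) x P/E cycles = nominal total writes
# before NAND wear-out. The 6,000-cycle rating is an assumption.
for cap in 256 512; do
    awk -v c="$cap" 'BEGIN { printf "%d GB x 6000 cycles = %.1f PB\n", c, c * 6000 / 1e6 }'
done
# prints "256 GB x 6000 cycles = 1.5 PB" and "512 GB x 6000 cycles = 3.1 PB"
```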
 
Guys... it has been proven that the S.M.A.R.T. readings are correct. Those numbers are correct.

Power-on hours are not computer uptime ;)

And once again, guys: what are you doing on your computers that you have a few terabytes written in a couple of weeks?

1 TB = a 256 GB SSD written full and erased 4 times.

It doesn't matter if it will last for 2-3 years. The SSD is integrated into the SoC, so you can't replace it. In the next 2-3 years we won't be able to sell these computers, because some of the SSDs could be at the end of their life cycle.


—-edit:

10-30 TBW in 1 year may be OK. But not in 3-10 weeks ;)
How has it been proven? I haven't seen anything definitive. I've looked at the smartmontools code, and even their code documentation says they are guessing; they only have two specific drives in their database, neither of which matches the NVMe SSD in the M1 Macs.

I'm not saying that the data is incorrect, but I also haven't seen anything definitive that it is correct either. I suspect that it is, but that's not the same as proven.
 
How has it been proven? I haven't seen anything definitive. I've looked at the smartmontools code, and even their code documentation says they are guessing; they only have two specific drives in their database, neither of which matches the NVMe SSD in the M1 Macs.

I'm not saying that the data is incorrect, but I also haven't seen anything definitive that it is correct either. I suspect that it is, but that's not the same as proven.
Just download 50 GB of files on your computer and see that smartmontools shows you ~50 GBW more. In my opinion this is pretty accurate. Correct me if I'm wrong 🤔
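One way to run that experiment a bit more rigorously: record "Data Units Written" before and after the download and convert the delta. A sketch with hypothetical readings (the 512,000-byte unit size is per the NVMe spec; the two counts below are made-up examples):

```shell
# Delta between two "Data Units Written" readings, in GB.
# One NVMe data unit = 512,000 bytes. The readings below are hypothetical.
before=5238520   # reading before the ~50 GB download
after=5336520    # reading afterwards
awk -v b="$before" -v a="$after" 'BEGIN { printf "%.1f GB written\n", (a - b) * 512000 / 1e9 }'
# prints "50.2 GB written"
```

If the reported delta lands near the known size of the download, the counter is at least plausibly tracking host writes.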
 
What's the terminal code to run to check?
You need to install it; it isn't part of macOS. There is a .dmg with a package installer, but it installs the x86 version. If you want the M1 version, you need to get it through brew, or download the source and configure, make, and make install it yourself.

The x86 installer is here:
https://sourceforge.net/projects/smartmontools/files/

It's best if you use brew though: https://brew.sh

After brew is installed:

brew install smartmontools
 
Screen Shot 2021-02-24 at 3.23.01 PM.png


Yikes! This is after about 20 days of using my M1 MBA. I always have 2 applications running in the background to watch my CPU/memory usage (iStats Menu) and my temperature (TG Pro).

I have now disabled these 2 apps and am not letting them run anymore. (I read above that someone mentioned those apps writing log files could be the reason for the crazy read/write numbers.)

So I was wondering: if you have high read/write numbers, do you use iStats Menu and TG Pro?

P.S.: I have disabled those 2 apps from starting alongside my Mac. I will keep watch of my read and written totals and report back later.
 
View attachment 1734473

Yikes! This is after about 20 days of using my M1 MBA. I always have 2 applications running in the background to watch my CPU/memory usage (iStats Menu) and my temperature (TG Pro).

I have now disabled these 2 apps and am not letting them run anymore. (I read above that someone mentioned those apps writing log files could be the reason for the crazy read/write numbers.)

So I was wondering: if you have high read/write numbers, do you use iStats Menu and TG Pro?

P.S.: I have disabled those 2 apps from starting alongside my Mac. I will keep watch of my read and written totals and report back later.
That doesn't seem bad. About 112 GB/day. A 1.5 PBW SSD would last for years at that rate. Much longer than the notebook will last.
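For the curious, "years" is an understatement at that rate. A back-of-the-envelope sketch (the 1.5 PBW endurance figure is the estimate discussed earlier in the thread, not an Apple spec):

```shell
# Rough SSD lifetime: rated endurance / daily write volume.
# 1.5 PB endurance is an assumed figure; 112 GB/day is from the post above.
awk 'BEGIN { printf "%.0f days (about %.0f years)\n", 1.5e15 / 112e9, 1.5e15 / 112e9 / 365 }'
# prints "13393 days (about 37 years)"
```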
 
That doesn't seem bad. About 112 GB/day. A 1.5 PBW SSD would last for years at that rate. Much longer than the notebook will last.



112 GB/day is fine in your opinion? :) It's like writing and deleting half of the SSD every single day. This is sick :D

I know, it should last for a couple of years, but still, this is sick in my opinion :D
 