I did a web search, albeit a cursory one, and could not find any utilities listed to manipulate Core Storage. I saw links to lots of how-to articles, and lots of links to complaints about Yosemite installing like this on many systems without giving the user any options, but no tools. Oh well.
 
I have already TechTool pro v6 and just today upgraded to v8.

I am about to purchase one of the two: Scannerz or Disk Tools Pro. What sounds tempting is the ability of Disk Tools Pro to fix bad sectors.
Does anyone have personal experience with this tool? Is it true that it can fix bad sectors?

Now, writing these lines, I can't remember whether I found any references anywhere to another tool I came across on the internet:

1. DriveDx at http://binaryfruit.com

Anyone know this tool?
 

Fixing Bad Sectors: NO SUCH THING! Read my summary below for details!

I have TechTool Pro, DriveDX, and Scannerz. If you have TechTool Pro you don't need DriveDX. DriveDX is an interface for the command-line tool smartctl, which is part of the smartmontools package and is a SMART monitor. You can download smartctl for free because it's open source.

DriveDX puts on a nice show, meaning it looks good, but it's dependent on a third-party, open source project to do all its work. You can download the DriveDX demo for free, and if you do you can right-click on the DriveDX binary, choose Show Package Contents, open Contents, then Resources, and there you will find smartctl. Basically what they do is run smartctl and then grab its output. If you don't mind paying money for that, I guess it's OK, but being open source, how long will updates keep coming, and what will happen if the developers just decide to stop supporting it?

TechTool Pro has its own SMART monitoring, and the first time it appeared was, I think, before smartmontools ever existed, so it tells me TechTool Pro has its own original code. DriveDX has a nice interface, probably better than any of the others (I'm not familiar with DiskTools Pro). Unless you find the looks of DriveDX so compelling, I think TechTool Pro and its SMART monitoring features are good enough.

It seems to me SMART is kind of sketchy as far as test technologies go. It sometimes reports errors that aren't critical or necessarily serious as signs of imminent death, and then turns around and completely ignores actual imminent failures. It's just not that reliable, in my opinion.

Scannerz comes with Scannerz, Phoenix, FSE, and Performance Probe. It would make a good complement to TechTool Pro or something like DiskWarrior because it does a lot of stuff TechTool Pro doesn't do, and TechTool Pro does a lot of stuff Scannerz doesn't do. The only thing they have in common is that they can both do drive scans. Scannerz has better drive scanning capabilities than anything I've seen. I had a bad drive and neither TechTool Pro nor Drive Genius was able to start a scan on it, but Scannerz plowed right through it, showing me bad sectors right at the start of the drive.

Scannerz itself is more for hard-core hardware testing. It can tell you if you have drive problems, cable problems, or logic board problems, but for the latter two you almost have to have a Phoenix Boot Volume (PBV), which is the Scannerz version of an eDrive. You have to learn how to use Scannerz to take advantage of it. You don't just click a button and have an answer magically pop up telling you what's wrong. Scannerz can tell you if you have weak sectors, which are marginal sectors that take a long time to read. Although technically they're not bad, they can cause as many problems as bad sectors, yet every test tool except Scannerz seems to ignore them.

Phoenix, which comes with Scannerz, creates a PBV by extracting the core OS from your existing installation and can then use its own image to re-install the OS if needed. It also does very basic cloning, but it is not a hard-core cloner like Carbon Copy Cloner. Scannerz monitors SMART but does it the same way Apple does, meaning it doesn't really show the user the parameters, it just tells you if SMART registered a failure. The primary purpose of Phoenix is to create the PBV, which can be used to host Scannerz and run tests on the drive, interface, and logic board without even needing the internal drive active. If you put it on a USB stick, make sure it's at least 32GB in size. Scannerz could be improved if they added some monitors like system temperature, drive temperature, etc. - again, just my opinion.

Scannerz doesn't do file system checks or recovery, whereas TechTool Pro can. Phoenix can do some types of recovery on damaged drives that TechTool Pro can't. Like I said, the two almost complement each other. For file system operations like index correction, TechTool Pro seems decent, and the newest versions have fixed some of the quirks that were in earlier versions. Scannerz can find and isolate some hardware problems that TechTool Pro doesn't touch. Like I said before, they complement one another.

Scannerz also comes with a tool called FSE which can track file writes to a drive. I haven't had much need for it, but a friend of mine used it to track some adware that kept causing popups. I think it was originally included to help people find MDS problems. I'm sort of guessing here.

The tool that comes with Scannerz that I just love is called Performance Probe. It's sort of like a shorthand version of Activity Monitor that doesn't take up the whole screen. I ended up using it all the time, and I mean literally all the time; I start it at boot. It shows memory use the old-fashioned way, with pie charts, telling you at a glance what the overall CPU, memory, and I/O loading are. If you click on the Advanced Monitoring button and then use the load averaging, within seconds it will make clear which applications are loading the system. Nine times out of ten it's either MDS, the Spotlight indexer, or something related to Safari. If you've ever used Activity Monitor to watch the CPU, the biggest CPU consumers often jump around instantly, making it hard to tell what's really bottlenecking the system. Load averaging, once the averaging kicks in, stabilizes this and makes it clear who the CPU hog is. I really like this thing.

…and now for sector repair: there is no such thing. The term "sector repair" comes from the old days of MFM, when programs could actually gain direct access to a hard drive. An application could target a bad sector and try to read and re-read it until the error codes came clean. If it could recover the data, it could relocate it to a new sector and map the old one out as a bad sector. If it couldn't, it would just mark the sector as bad and map it out of the drive's indexes. Nowadays no one has direct access to the disk surface except the drive controller, which is part of the drive. It's the drive controller's job to detect bad sectors, attempt recovery, and do it automatically. The drive controller has to detect them during a write operation, though; if they exist in an area that hasn't been scanned, the controller won't be aware of them. Scannerz, and I assume TechTool Pro, can find them during drive scans, which is one of the reasons those tools get used. In any case, a bad sector gets remapped, not repaired. You can force sectors to re-map by zeroing the drive and then restoring from backup, assuming there are enough spare sectors left. Some of the fsck command-line variants, I believe, can try to detect and correct or remap bad sectors, but using fsck is well beyond the skill level of most people.

I suspect using the term "sector repair" is probably just a leftover from a few decades ago. They probably mean re-map, which if done incorrectly can be a disaster.
 
DriveDX puts on a nice show, meaning it looks good, but it's dependent on a third-party, open source project to do all its work. You can download the DriveDX demo for free, and if you do you can right-click on the DriveDX binary, choose Show Package Contents, open Contents, then Resources, and there you will find smartctl. Basically what they do is run smartctl and then grab its output. If you don't mind paying money for that, I guess it's OK, but being open source, how long will updates keep coming, and what will happen if the developers just decide to stop supporting it?

I've written this before in these forums, but here goes again. SMART data is SMART data. It is generated by the storage device and not the application used to access the data.

DriveDX analyzes this data using their own heuristics to predict pending failures. I've had very good results using it. It isn't perfect but then neither is SMART itself. It's just another tool to have in the toolbox.

TechTool Pro has its own SMART monitoring, and the first time it appeared was, I think, before smartmontools ever existed, so it tells me TechTool Pro has its own original code.

So what? As mentioned above SMART data is SMART data. How, exactly, is using their own code to access this data such an advantage?

I have TechTool Pro as well, and it simply shows a list of some key SMART values with a "pass" or "fail" bar. Unlike DriveDX, which runs constantly and instantly alerts me to changes from my menu bar, I need to run TechTool and drill down to the SMART check any time I want to see this data.

Unless you find the looks of DriveDX so compelling, I think TechTool Pro and its SMART monitoring features are good enough.

Good for you. However, let's not forget DriveDX's advantages of realtime operation and heuristic analysis in addition to its visual appeal. :)
 
I've written this before in these forums, but here goes again. SMART data is SMART data. It is generated by the storage device and not the application used to access the data.

I'm well aware of that. Smartctl interrogates the device to obtain SMART information.


DriveDX analyzes this data using their own heuristics to predict pending failures. I've had very good results using it. It isn't perfect but then neither is SMART itself. It's just another tool to have in the toolbox.

It can apply all the heuristics it wants to. The problem with SMART isn't smartctl or DriveDX, it's SMART itself. If the data being recorded by the device isn't that good, what will heuristics do for you?


So what? As mentioned above SMART data is SMART data. How, exactly, is using their own code to access this data such an advantage?

It's a personal opinion. First, it's relying on open source software. Developers on open source projects have a tendency to just disappear at times, leaving the product hanging. Second, smartctl itself, if you download the smartmontools package, is loaded with warnings about using the product, and they seem to be hidden from DriveDX users. Third, if the smartctl developers find a bug and do an update, how will a DriveDX user ever know about it? As far as most are concerned it's an original product. Fourth, why would someone take someone else's work and use it as the core of a product rather than writing their own? It raises the question, "Do these people not know how to write a SMART monitor, and if they can't do that themselves, how do I know anything they're getting is being properly interpreted?"

I have TechTool Pro as well, and it simply shows a list of some key SMART values with a "pass" or "fail" bar. Unlike DriveDX, which runs constantly and instantly alerts me to changes from my menu bar, I need to run TechTool and drill down to the SMART check any time I want to see this data.

The guy already owns TTP.

For me it really doesn't boil down to DriveDX or smartctl, it boils down to the reliability of SMART itself. If SMART can tell someone a drive is failing when it isn't, or tell someone a drive's in great shape when it isn't, what good is it? The makers of DriveDX could write the best interrogator of SMART technologies on Earth and put it into their product all by themselves, and I'd be impressed, but that still wouldn't stop the data they're reading, as implemented by the drive manufacturers, from being crap.
 
Relying on an open source project for a product, and I guess a few of the so-called SMART monitors do it, doesn't sound like a very wise idea to me. I've been stung by open source projects a few times.

The first was during a Linux upgrade and transfer to a new system using a different logic board with built in video. It was working on the old system but once we put it on the new system about the only thing it would support was text and no graphics. We found out the developer doing the video driver just gave up on it and left it hanging. We ended up having to disable the internal video card and put in third party cards to get it up and running.

In the second case, some novice decided to improve the open source code of a Java virtual machine. Well, if making it stop working is an improvement, then kudos and job well done! What a PIA that was. We had to uninstall and re-install an earlier version, and it took weeks for someone else to step in and clear the mess up.

You get what you pay for.
 
Scannerz totally dominates drive scanning and hardware tests, IMHO. SCSC just released their latest update of Scannerz, and they said they intend to add more SMART support than they currently have. SMART monitoring, regardless of how good or bad it is, is still an indicator. Scannerz has the benefit of allowing you to do tests and observe them in real time. If there's a cable problem, it shows up in interface testing as you're watching. If there's a logic board problem, you can see it occur as the tests are going on. If there are bad or weak sectors, you can once again see them in real time during diagnostics-mode tests. Scannerz is the only tool out of any on this list that can identify these problems as they occur. SMART monitoring only reports them after the fact.
 
"smartmontools" is in my original list. I added DriveDX and another at one point and then pulled them from the list because, in my opinion, they were minimizing or hiding the fact that they used smartctl, which is part of the smartmontools distribution, for their actual SMART analysis. I'm not entirely sure what they're doing is legal.

I'm not an attorney and the licensing of software is, in my opinion, quite confusing. I'm under the impression that to use open source software like that it needs to be very clear to users that that's what it is, and in some cases you may be required to make your own products using it open source as well.

Like I said, I'm not an attorney and I'm not interested in getting involved in a lawsuit.
 
"smartmontools" is in my original list. I added DriveDX and another at one point and then pulled them from the list because, in my opinion, they were minimizing or hiding the fact that they used smartctl, which is part of the smartmontools distribution, for their actual SMART analysis. I'm not entirely sure what they're doing is legal.

I'm not an attorney either, but to suggest that DriveDX's devs are minimizing or hiding the fact that they use Smartmontools is wrong.

Once the app is installed, click on Help > Acknowledgements and you very clearly see:

Acknowledgements

Portions of this software may utilize the following copyrighted material, the use of which is hereby acknowledged:

smartmontools, Copyright (c) 2002-14 Bruce Allen, Christian Franke,

www.smartmontools.org.
smartctl executable binary is included with this software.

### AUTHORS ### This code was originally developed as a Senior Thesis by Michael Cornwell at the Concurrent Systems Laboratory (now part of the Storage Systems Research Center), Jack Baskin School of Engineering, University of California, Santa Cruz. http://ssrc.soe.ucsc.edu/

This package is meant to be an up-to-date replacement for the ucsc-smartsuite and smartsuite packages, and is derived from that code.

Maintainers / Developers:

Bruce Allen
Erik Inge Bolsø
Stanislav Brabec
Peter Cassidy
Casper Dik
Christian Franke
Guilhem Frézou
Douglas Gilbert
Guido Guenther
Geoff Keating
Dr. David Kirkby
Kai Mäkisara
Eduard Martinescu
Frédéric L. W. Meunier
Keiji Sawada
Manfred Schwarb
David Snyder
Sergey Svishchev
Phil Williams
Richard Zybert
Yuri Dario
Shengfeng Zhou
Praveen Chidambaram
Joerg Hering
Tomas Smetana
Jordan Hargrave
Alex Samorukov

For more details please see file 'smartctl_copyrights.txt' (in app bundle resources directory)
 
I'm not an attorney either, but to suggest that DriveDX's devs are minimizing or hiding the fact that they use Smartmontools is wrong.

Once the app is installed, click on Help > Acknowledgements and you very clearly see:

This is very clearly legal and many commercial apps use open source as their underpinnings. The very popular cloning app Carbon Copy Cloner includes the open source rsync binary in their app for example.
 
Nevertheless, SMART is still only about as reliable as flipping a coin.

Relying on an open source project for a product, and I guess a few of the so-called SMART monitors do it, doesn't sound like a very wise idea to me. I've been stung by open source projects a few times.

The first was during a Linux upgrade and transfer to a new system using a different logic board with built in video. It was working on the old system but once we put it on the new system about the only thing it would support was text and no graphics. We found out the developer doing the video driver just gave up on it and left it hanging. We ended up having to disable the internal video card and put in third party cards to get it up and running.

In the second case, some novice decided to improve the open source code of a Java virtual machine. Well, if making it stop working is an improvement, then kudos and job well done! What a PIA that was. We had to uninstall and re-install an earlier version, and it took weeks for someone else to step in and clear the mess up.

Been there, done that. I remember the Java JVM problem too. About 10 years ago if I recall.
 
The makers of DriveDX could write the best interrogator of SMART technologies on Earth and put it into their product all by themselves, and I'd be impressed, but that still wouldn't stop the data they're reading, as implemented by the drive manufacturers, from being crap.

OK, well, again... this is your definition of crap. Is SMART data a 100% accurate indicator of drive issues or pending failure? No. No one (not even the makers of DriveDX) is claiming it is.

It is not 0% effective either, which is what my definition of crap would be. I have had it accurately predict drive failures multiple times for me. Then, there were some that crashed without warnings. I was very glad to have the heads up for those that did start sending alerts.

As for your nitpicking at DriveDX for not writing their own version of smartctl, an alternative way to look at it would be: why waste time and energy writing your own package when a perfectly functional and free open source one already exists?

Thank you, Weaselboy, for the excellent example of CCC. I don't see anyone railing at them for not writing their own version of rsync. They're far from the only ones using open source software in their commercial products. Let's be fair now, shall we?
 
I'll just tell you my own personal experience. Our shop has 2 drives that both suffered head crashes. Not enough to wipe out all the spare sectors, but enough for every SMART testing product out there to issue an "about to fail any minute now" warning. We stuffed them in field units where we half expect the hardware to be damaged and short-lived. One has since gone 6 years without failing and the other 3 years without failing. The SMART reports about "imminent failures" haven't changed. The drives are just aging.

Then I have a 250GB WD 3.5" drive. It runs when it wants to. I can turn it on and it might register with the system. If it doesn't, I can rotate it on its side and it might kick on and start to work…for about 3 minutes. Guess what? SMART reporting says this drive is A-OK, in top operating condition, nope, nothing wrong with this drive.

I could have flipped a coin and gotten better results.
 
rsync: credits, capabilities and a caution

… many commercial apps use open source as their underpinnings. The very popular cloning app Carbon Copy Cloner includes the open source rsync binary in their app for example.

… I don't see anyone railing at them for not writing their own version of rsync. …

Credit where credit's due. As far as I can tell, the version 3.0.6 distribution at rsync.samba.org lacks some of the capabilities of the version 3.0.6 that is included with Carbon Copy Cloner.

So the latter is a different version. Loosely speaking, a Bombich version, or words to that effect (open source, so I'm reluctant to use the word 'own').

A glance at http://www.bombich.com/software/opensource/rsync_3.0.6-bombich_20121219.diff shows HFS compression, and more.

Capabilities of 3.1.1 (below) include HFS-compression but none of the release news pages for 3.0.7 or later mention that capability, or Bombich.

Code:
sh-3.2$ /opt/local/bin/rsync --version
rsync  version 3.1.1  protocol version 31
Copyright (C) 1996-2014 by Andrew Tridgell, Wayne Davison, and others.
Web site: [url]http://rsync.samba.org/[/url]
Capabilities:
    64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
    socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
    append, ACLs, xattrs, iconv, symtimes, no prealloc, file-flags,
    HFS-compression

rsync comes with ABSOLUTELY NO WARRANTY.  This is free software, and you
are welcome to redistribute it under certain conditions.  See the GNU
General Public Licence for details.
sh-3.2$ clear ; port info rsync





rsync @3.1.1 (net)
Variants:             universal

Description:          rsync is an open source utility that provides fast incremental file transfer. It works both locally and remote with either the custom rsyncd protocol or a remote shell like ssh.
Homepage:             [url]http://samba.org/rsync/[/url]

Library Dependencies: popt, libiconv
Platforms:            darwin, freebsd, sunos
License:              GPL-3+
Maintainers:          [email]jimjag@gmail.com[/email]
sh-3.2$ clear ; sw_vers ; /usr/bin/rsync --version





ProductName:	Mac OS X
ProductVersion:	10.9.5
BuildVersion:	13F1066
rsync  version 2.6.9  protocol version 29
Copyright (C) 1996-2006 by Andrew Tridgell, Wayne Davison, and others.
<http://rsync.samba.org/>
Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
              inplace, IPv6, 64-bit system inums, 64-bit internal inums

rsync comes with ABSOLUTELY NO WARRANTY.  This is free software, and you
are welcome to redistribute it under certain conditions.  See the GNU
General Public Licence for details.
sh-3.2$

– note, no mention of HFS compression in version 2.6.9, which Apple includes with Mavericks and with OS X 10.10.

Relevance to testing (the opening post)

If you use the checksumming capabilities of rsync (with or without Carbon Copy Cloner), with either of the HFS compression options:
  • please know that in rare situations, an undisclosed bug involving HFS compression may cause data loss
– as far as I can tell, a file system corruption that utilities such as Disk Utility can not detect.

I can't predict how rsync will respond in the presence of that corruption, if HFS compression is preserved. If you treat the checksumming capabilities of rsync as a test of integrity of data, then you should err on the side of caution:
  • do not allow rsync to preserve HFS compression.
If there's that type of corruption, previously undetected, then an attempt to decompress/read will almost certainly draw attention to the affected file (maybe with error code -22).

Keyword

AppleFSCompression

Understanding

Some of what's above is necessarily technical. If it doesn't make sense to you, please request clarification.
 
Credit where credit's due.

Here, here!!!!


As far as I can tell, the version 3.0.6 distribution at rsync.samba.org lacks some of the capabilities of the version 3.0.6 that is included with Carbon Copy Cloner. …

… If you use the checksumming capabilities of rsync (with or without Carbon Copy Cloner), with either of the HFS compression options … do not allow rsync to preserve HFS compression. …

The entire purpose of using rsync 3 is to preserve, or at least so I thought, the compression and forks. What you're saying is that may not be safe. Possibly this is why Apple doesn't include version 3 in their releases.

Too often open source goes off on a tangent, with different groups developing and following different paths. You end up with a proverbial food fight. The left hand has no idea what the right hand is doing. This is why, IMHO, Linux never really gained ground as a core OS. How many distros of it are there now, about a thousand?
 
AppleFSCompression and rsync: additional information

… The entire purpose of using rsync 3 is to preserve, or at least so I thought, the compression and forks.

For clarification:
https://rsync.samba.org/resources.html (undated) is probably outdated; it mentions HFS but not HFS compression.

What you're saying is that may not be safe. …

No.

If AppleFSCompression-related data loss occurs, then a subsequent attempt to copy the affected file (using rsync or whatever) is neither safe nor unsafe; it's simply impossible to copy what's lost.

When I last had an HFS Plus file system with that type of inconsistency, I did not test rsync with either option --hfs-compression or --protect-decmpfs …
 
fsck_hfs – HFS consistency checks, scans for I/O errors

Technical



Disk Utility, diskutil - Disk Utility is the tool provided with OS X to configure drives and volumes, and do limited repair work on drives. It has a command line version named "diskutil" that many people are not aware of. …

… Neither tool performs surface scans on a drive, and when one uses the "Repair Disk" option, it's working on the index files, not actual drive problems. I would strongly recommend that anyone interested open up a Terminal.app session and type "man diskutil" to see its full functionality. …

For HFS consistency checks, both Disk Utility and diskutil(8) use fsck_hfs(8).

Open source fsck_hfs in Mavericks

For posterity: the Internet Archive Wayback Machine includes the manual page for fsck_hfs(8) as it appeared on 2014-06-03. Option -S is described:

Cause fsck_hfs to scan the entire device looking for I/O errors. It will attempt to map the blocks with errors to names, similar to the -B option.​

(No such option in the page for fsck_hfs(8) in Mac OS X 10.8.)

fsck_hfs appeared in Apple open source in hfs-226.1.1 for OS X 10.9.

In a copy of that source code, https://github.com/st3fan/osx-10.9/blob/master/hfs-226.1.1/fsck_hfs/fsck_hfs.c#L533 draws attention to Apple's phrase 'Scanning entire disk for bad blocks' but that phrase is somewhat misleading:
  • the scan is of the entire device (not the entire disk).

Pros and cons of using fsck_hfs to scan for I/O errors

The utility:
  • is integral to Recovery OS for Mavericks.
The utility:
  • can not scan an entire disk
  • is not suitable for people who lack confidence with Terminal
  • probably lacks a sanity check to prevent scanning the startup volume.
Its ability to detect bad (or marginal) blocks is inferior to the routines offered by some alternative utilities, but this is not necessarily a bad thing. Realistically, most end users will have neither the time nor the patience to allow a more thorough routine.

Also, critically, end users may not realise that in some cases, a scan for bad blocks might cause data loss.

Appeal

Still, fsck_hfs option -S is appealing. Not least because it's readily available.

A routine scan for I/O errors, before full installation of an operating system, could significantly reduce the (traditional) risk of inexplicable troubles from a disk that's truly not OK … after Disk Utility described the disk as apparently OK.

Someone with time, and relevant technical knowledge, might like to write something GUI-based, relatively simple, that can be added to the main window that appears when Recovery OS starts.

Side note

I wondered whether diskutil in Mavericks has a hidden option that can cause fsck_hfs to run with option -S … as far as I can tell, no such option (none of the printable strings in the diskutil binary have that appearance).
 
Credit where credit's due. As far as I can tell, the version 3.0.6 distribution at rsync.samba.org lacks some of the capabilities of the version 3.0.6 that is included with Carbon Copy Cloner.

So the latter is a different version. Loosely speaking, a Bombich version, or words to that effect (open source, so I'm reluctant to use the word 'own').

Fair enough. Since Bombich acknowledges using the open source version, I can only assume that in spite of their own revisions and additions there is code in there not written by them, and that is the point. Again, not to get nitpicky about it. :p

Carbon Copy Cloner is just one excellent example of many commercial products using open source code.

Here, here!!!!

Hear, hear. :)
 
Using fsck for Mac users???

My observations on that are as follows:

  1. Most Mac users don't even know what Terminal.app is.
  2. I've seen experienced Unix admins totally screw up file systems using it. It's not an easy tool to use, and in some cases requires extraordinary patience.
  3. It triggers on an I/O error, which it assumes is a bad block. What if it's hardware related, like a bad cable or faulty logic board?

Cable problems on some MacBook Pros are fairly well known. Picture fsck remapping and marking out blocks when the real fault is a cable problem that strikes at random, because that's the nature of that type of failure.
 
The opening poster would like a definitive list; fsck_hfs belongs in any such list.

That's exactly right. When I started this thread years ago I could update it, but now I can't. People should just add descriptions of applications and/or utilities, with their pluses and minuses as they see them, and steer clear of "the application I use is better than yours" type arguments.

I should also add that I was aware of the various fsck variants when I wrote the list but considered their use beyond the scope of most users; plus, they seem to be in a constant state of change.

What I don't want to see is some of the nonsense you see on the Apple Support Communities board. For example, one poster made a post about Apple officially supporting MacKeeper (or something like that) and it turned into a 40 (or more) page collection of diatribes about the evils and goods of MacKeeper.

To anyone interested, MacKeeper isn't a drive tool so don't even start. If you do I'll ask the moderators to delete the posts.

Applications using open source software in their products should clearly advertise it on their websites, not bury it in some info file that a user has to dig through the application's file hierarchy to find.
 
DiskWarrior 5

I read in this thread that DiskWarrior was in slow development and still at version 4.

I just found that there is a 64-bit version 5 out there.

More info at the alsoft.com website.
 
I read in this thread that DiskWarrior was in slow development and still at version 4.

I just found that there is a 64-bit version 5 out there.

More info at the alsoft.com website.

Thanks for the update. The original post is over two years old and I can no longer edit it. I guess MacRumors now puts a time limit on how long a post remains editable. I'm sure virtually all items in the list have since been updated, version numbers have changed, and I suppose additional features have been added as well, but the product links should still be OK.
 
Towards a definitive list



The opening poster would like a definitive list; fsck_hfs belongs in any such list.

I'm not trying to pick a fight with you, but fsck_hfs can mask developing problems. Why? If a drive has a problem, such as misalignment of the heads due to excessive wear, it can end up dragging the heads over regions of the drive that may not be in use, leaving the user unaware that this is happening. SMART isn't aware of it because no data has been written, and write operations are when SMART detects errors; fsck won't detect it because it's only looking at the file system. Additionally, errors that fsck detects are mapped into the bad-sector region of the drive, not necessarily registered with the drive's SMART firmware, because its errors are detected during read, not write, operations. What happens? The drive progressively gets worse, with more and more files being corrupted, often without the user being aware of it.

I actually saw a report on this in a Linux system. I wish I had bookmarked it but I didn't. The poor guy kept checking SMART status, which would incrementally show errors growing. Then he'd run fsck on it to correct the errors. Then more errors would occur, and this cycle would repeat. Eventually the drive failed.

fsck is sort of a dated tool in my opinion, based on the old days when mapping bad sectors was part of the file system management. Now that's supposed to be done by the firmware, but once again another shortcoming there is the error detection only occurs during write operations.

Tools that do complete surface scans like Scannerz, Tech Tool Pro, and Drive Genius are the only way to properly detect these problems, and Scannerz is the only one of them that can pick up weak sectors.
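
The underlying idea behind a read surface scan is simple, even if the production tools named above are far more sophisticated. A toy Python sketch — the function name, chunk size, and the slow-read threshold are all my own invention, and real scanners read the raw device rather than a file:

```python
import os
import tempfile
import time

CHUNK = 1024 * 1024  # 1 MiB per read; real scanners read the raw device

def surface_scan(path, slow_factor=10.0):
    """Read `path` start to finish. Returns (bad, weak): offsets that
    raised I/O errors, and offsets whose read took more than slow_factor
    times the running average -- a crude stand-in for 'weak sector'
    detection. Real scanners are far more careful than this."""
    bad, weak, times = [], [], []
    offset = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            t0 = time.perf_counter()
            try:
                data = f.read(CHUNK)
            except OSError:              # unreadable region
                bad.append(offset)
                offset += CHUNK
                f.seek(offset)           # skip past it and carry on
                continue
            elapsed = time.perf_counter() - t0
            if not data:
                break
            if times and elapsed > slow_factor * (sum(times) / len(times)):
                weak.append(offset)      # readable, but suspiciously slow
            times.append(elapsed)
            offset += len(data)
    return bad, weak

# Demo against a scratch file; a real scan would target /dev/rdiskN.
demo = os.path.join(tempfile.gettempdir(), "scan_demo.bin")
with open(demo, "wb") as f:
    f.write(os.urandom(4 * CHUNK))
print(surface_scan(demo))
os.remove(demo)
```

Timing reads is exactly why a thorough scan takes so long, and why a slow-but-successful read is something fsck, which only cares whether the read succeeded, will never notice.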
 
Apple's Disk Utility just caught a dying USB stick for me.
Not with the usual "Verify Disk" method, it passed that test, but with an "Erase" with secure erase set to 1.
The stick failed to create a new file to write into three times in a row.
After that, the stick became unformattable.

I wouldn't use this method on a big HDD or SSD, but it works nicely for those little memory sticks that I carry around and format Mac Journaled, DOS-FAT etc. as needed.
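
The secure-erase trick above amounts to a write-then-verify pass. A rough Python equivalent, writing to a scratch file on the suspect device — the path is hypothetical, and note that the OS page cache may satisfy the read-back, so a clean result here is weaker evidence than a failure:

```python
import os
import tempfile

def write_verify(path, size=4 * 1024 * 1024, passes=3):
    """Write `size` bytes of random data to a scratch file, fsync,
    read it back, and compare -- `passes` times over. A mismatch or
    an OSError suggests failing media. `path` should sit on the
    device under test (e.g. a file on the USB stick); it is removed
    afterwards. Caveat: the read-back may come from the OS page
    cache rather than the media itself."""
    try:
        for _ in range(passes):
            data = os.urandom(size)
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())    # push the write past the OS cache
            with open(path, "rb") as f:
                if f.read() != data:
                    return False        # silent corruption
        return True
    except OSError:
        return False                    # outright read/write failure
    finally:
        if os.path.exists(path):
            os.remove(path)

# Demo against the temp directory; point `path` at the stick instead.
print(write_verify(os.path.join(tempfile.gettempdir(), "media_check.bin")))
# prints True on healthy media
```

Like the secure-erase approach, this is destructive only to the scratch file, which is why it suits a throwaway memory stick better than a drive full of data.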
 