
IceMacMac

macrumors 6502
Original poster
Jun 6, 2010
394
18
A decade or more ago, defragmenting your hard drives was standard advice. Or at least some vendors convinced us it was.

I can't recall exactly where, but I seem to have read since then that it's no longer necessary or even a recommended practice.

When I upgraded to TechTool Pro 7 I looked at their directory optimization which includes file defragging.

What do you all think? Is it worth the trouble? Does it make any real difference?
 
It depends on the hard drive technology (platter-based or solid state) and the file system in use. If it's FAT/FAT32 or NTFS, then yes, it will make a difference. If it's Ext2/3/4, HFS+, or another Unix-style file system, then not so much.

If we're talking about a solid state drive, then there's nothing to worry about regardless of file system. Unlike traditional platter hard drives, where information is stored on spinning disks (you need to seek to the cylinder, track, and sector where the data is located), solid state drives store data in memory cells. A read or write is much faster from memory cells (seek times/latencies are almost imperceptible) than from a magnetically imprinted disk.
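To put very rough numbers on it (purely illustrative assumptions here, not measurements of any particular drive), the per-fragment positioning cost is what hurts the platter drive:

# Rough model of reading a 100 MB file, contiguous vs. fragmented.
# All figures below are ballpark assumptions for illustration only.
FILE_MB = 100
HDD_SEEK_MS = 10.0      # assumed seek + rotational latency paid per fragment
HDD_MB_PER_S = 120.0    # assumed sustained sequential throughput
SSD_ACCESS_MS = 0.1     # assumed per-request latency on flash
SSD_MB_PER_S = 500.0    # assumed SATA SSD throughput

def read_time_ms(fragments, penalty_ms, mb_per_s):
    """Transfer time plus one positioning penalty per fragment."""
    return FILE_MB / mb_per_s * 1000.0 + fragments * penalty_ms

for frags in (1, 50, 500):
    hdd = read_time_ms(frags, HDD_SEEK_MS, HDD_MB_PER_S)
    ssd = read_time_ms(frags, SSD_ACCESS_MS, SSD_MB_PER_S)
    print(f"{frags:>4} fragments: HDD ~{hdd:6.0f} ms, SSD ~{ssd:6.0f} ms")

At 500 fragments the HDD spends more time moving the head than transferring data, while the SSD's total barely changes.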
 
If you are very low on space, fragmentation will start to happen regardless of filesystem.

OS X, and Unix in general, does a reasonable job of avoiding fragmentation, but if you have drives that are nearly full it won't be able to keep fragmentation from happening, and a defrag may help.

But the proper solution to that problem, when it happens, is to buy more disk (or archive inactive data to another location to free up space) so the operating system's fragmentation-avoidance algorithms can work properly.
 
Yeah, and actually you should never let that happen... at least not with rotational media. A good rule of thumb is to treat your drive(s) as if they're 60% of their actual size.

Once you go much past that 60%, that's when to consider getting bigger or more drives.
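If you want a quick way to keep an eye on that rule of thumb, a few lines of Python will do it (the 60% threshold and the volume path are just the figures from above; adjust to taste):

# Warn when a volume is fuller than the ~60% rule of thumb discussed above.
import shutil, sys

THRESHOLD = 0.60   # treat anything past this fraction as "time for a bigger drive"

path = sys.argv[1] if len(sys.argv) > 1 else "/"
usage = shutil.disk_usage(path)
used_fraction = usage.used / usage.total
print(f"{path}: {used_fraction:.0%} used, {usage.free / 2**30:.1f} GiB free")
sys.exit(0 if used_fraction <= THRESHOLD else 1)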
 
Some people will claim you don't need to defrag an HDD because OS X does it automatically, blah blah. Had they actually read those outdated Apple support pages, they would also know they are only partially right. OS X only defrags files under 20 MB, and mostly in the metadata zone. The OS X defrag does nothing for large video and audio files or disk images. Nor does the OS put files in any particularly efficient order.
And people will claim you can get the same effect by backing up, erasing, and restoring a drive. That does indeed get rid of file fragmentation, but it does not prioritize the files, and anybody who uses a block-copy option will end up with exactly the same fragmentation they had before.
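For what it's worth, the commonly quoted description of that automatic mechanism boils down to a small eligibility check on every file open, roughly like the sketch below. The exact numbers (20 MB, 8 extents, a few minutes of uptime) are the ones usually cited for it, so treat them as assumptions rather than gospel.

# Sketch of the commonly described HFS+ "on-the-fly defrag" check (assumed thresholds).
TWENTY_MB = 20 * 1024 * 1024

def eligible_for_auto_defrag(size_bytes, extent_count, journaled, busy, uptime_s):
    """Would this file plausibly be relocated when it is opened?"""
    return (size_bytes < TWENTY_MB       # only "small" files qualify
            and extent_count >= 8        # must actually be fragmented
            and journaled                # only on journaled HFS+ volumes
            and not busy                 # not open/in use elsewhere
            and uptime_s > 180)          # system has been up a few minutes

print(eligible_for_auto_defrag(5 * 1024**2, 12, True, False, 3600))  # True
print(eligible_for_auto_defrag(2 * 1024**3, 40, True, False, 3600))  # False: a 2 GB video never qualifies

So a fragmented 2 GB video or disk image is never touched, which is exactly the point.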
 
Some people will claim you don't need to defrag an HDD because OS X does it automatically, blah blah. Had they actually read those outdated Apple support pages, they would also know they are only partially right. OS X only defrags files under 20 MB, and mostly in the metadata zone. The OS X defrag does nothing for large video and audio files or disk images. Nor does the OS put files in any particularly efficient order.
And people will claim you can get the same effect by backing up, erasing, and restoring a drive. That does indeed get rid of file fragmentation, but it does not prioritize the files, and anybody who uses a block-copy option will end up with exactly the same fragmentation they had before.

God, this is so sad. Who cares if their video files are fragmented or not? Your video will play just fine; a bit of fragmentation won't make the slightest bit of difference.
 
God, this is so sad. Who cares if their video files are fragmented or not? Your video will play just fine; a bit of fragmentation won't make the slightest bit of difference.

True, if you play highly compressed video one file at a time. Try to stream multiple uncompressed files from a single disk and it will matter. Increase the number of high-resolution, lightly compressed ones and it's still a factor, even spread over multiple HDDs.

If you have mundane data bandwidth needs, then the mundane bandwidth won't matter.
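The arithmetic is simple enough to do on a napkin. Assuming uncompressed 8-bit 4:2:2 1080p at 30 fps, and assuming ballpark figures for what one platter drive delivers sequentially versus when the head is seeking constantly:

# Back-of-the-envelope bandwidth check; all drive figures are rough assumptions.
BYTES_PER_FRAME = 1920 * 1080 * 2           # uncompressed 8-bit 4:2:2
MB_PER_STREAM = BYTES_PER_FRAME * 30 / 1e6  # ~124 MB/s per stream at 30 fps

HDD_SEQUENTIAL_MB_S = 150.0   # assumed best case, contiguous files
HDD_FRAGMENTED_MB_S = 60.0    # assumed effective rate with heavy seeking

for streams in (1, 2, 3):
    need = streams * MB_PER_STREAM
    print(f"{streams} stream(s): ~{need:.0f} MB/s needed; "
          f"sequential {'OK' if need <= HDD_SEQUENTIAL_MB_S else 'too slow'}, "
          f"fragmented {'OK' if need <= HDD_FRAGMENTED_MB_S else 'too slow'}")

Under those assumptions one contiguous stream squeaks by; the same stream chopped into fragments, or two streams, does not.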
 
Though I've also read articles saying OS X doesn't need routine defragmentation, I still defrag my HDs if the degree of fragmentation is over 30%. I'm using Drive Genius, though I also have TechTool Pro. But I'm not making a general rule that everyone should defrag; this is just me. A tech guy told me defragmentation is to lower the risk of disk corruption and not really to speed up processing.

My friend also told me another way to defrag is to do a clean reinstall of OS X and the apps, though I'm not sure if this is true. I did try this route and noticed the boot-up time was faster and apps opened faster.
 
When I upgraded to TechTool Pro 7 I looked at their directory optimization which includes file defragging.

What do you all think? Is it worth the trouble? Does it make any real difference?


For SSDs it is actually probably harmful. The defrag software can't see where the data is actually stored, because the SSD's flash controller only exposes logical, not physical, block addresses. Latency between disk blocks isn't a huge issue, and the marginally faster logical-block metadata access isn't worth shortening the life of the physical flash blocks.


However, contrary to some other comments, it is still not a great idea to run an SSD up to 90+ percent full, just as with an HDD. Run out of room and the HFS+ defrag routines get less effective. The SSD's flash-management routines have similar effectiveness problems if you push them into a corner: they won't work as well at maintaining wear leveling and effectively clustering data. (And no, TRIM isn't some magical panacea that reverses that; that's a different issue.)
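To put rough numbers on that (purely illustrative, assuming a 256 GB-class drive with roughly 7% factory spare area):

# How much room the controller has left to shuffle data as the drive fills up.
ADVERTISED_GB = 256
RAW_FLASH_GB = 275    # assumed raw NAND behind it, i.e. ~7% factory over-provisioning

for pct_full in (50, 80, 90, 95, 99):
    live_data_gb = ADVERTISED_GB * pct_full / 100
    scratch_gb = RAW_FLASH_GB - live_data_gb   # spare area plus unused user space
    print(f"{pct_full:>3}% full: ~{scratch_gb:5.1f} GB of room for garbage collection and wear leveling")

At 50% full the controller has well over a hundred gigabytes to juggle blocks in; parked at 99% full it is down to the factory spare area and not much else.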


HFS+ does track a specific subset of files that are important to the boot process, and those do have a higher priority (a similar clean-up operation is done after an OS upgrade install). But the large majority of files on the disk? No.


Large files would be moved by a defrag program. HFS+ is probably overdue for a redefinition of "not large" to something bigger than just 20 MB. However, the move to flash is probably going to let Apple kick that can down the road longer (since they don't want the defrag active there anyway).

Apple somewhat believes it isn't necessary any more.

This article is vintage at this stage (it does cover hot files and hot bands):

http://support.apple.com/kb/HT1375

Searching support for an active article turns up this one as the #1 hit:

http://support.apple.com/kb/PH5862

Apparently the modern Apple doesn't think fragmentation exists anymore, except as an HFS+ metadata problem. :) Larger and larger default HDDs that users don't fill to the brim help eliminate the issue. More users on SSDs also eliminates the symptoms.

Non-destructive editing means large video files don't have to fragment over time if they don't start out fragmented. If you have lots of large files, just treat the "full" level as a slightly lower percentage than 85-90% (some reasonably high multiple of the size of the files you're working with).


If you do something like create lots of 21-550 MB files, mutate them, then later delete them and add more while keeping the HDD over 80% full all the time, you'll probably see some benefit in running an occasional defrag.
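A toy model of exactly that churn (a naive first-fit allocator with 1 MB blocks, using the 21-550 MB sizes and ~85% fill level from above; nothing like the real HFS+ allocator) shows how new files start landing in multiple pieces:

# Toy allocator churn: keep a "disk" ~85% full of 21-550 MB files, delete and
# recreate them, and track the worst fragmentation a new file suffers.
import random
random.seed(1)

DISK_MB = 100_000                  # 100 GB disk modelled in 1 MB blocks
TARGET_USED = int(0.85 * DISK_MB)

free = [(0, DISK_MB)]              # (start, length) holes, kept sorted and merged
files, used, worst = [], 0, 0

def allocate(size):
    """First-fit: a file may have to be split across several holes."""
    global free
    extents, keep, need = [], [], size
    for start, length in free:
        if need == 0:
            keep.append((start, length))
        elif length <= need:
            extents.append((start, length)); need -= length
        else:
            extents.append((start, need)); keep.append((start + need, length - need)); need = 0
    if need:                       # genuinely out of space
        return None
    free = keep
    return extents

def release(extents):
    """Return extents to the free list, merging adjacent holes."""
    global free
    merged = []
    for start, length in sorted(free + extents):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((start, length))
    free = merged

for _ in range(5000):
    if used < TARGET_USED or not files:
        size = random.randint(21, 550)
        extents = allocate(size)
        if extents:
            files.append(extents); used += size
            worst = max(worst, len(extents))
    else:
        victim = files.pop(random.randrange(len(files)))
        release(victim); used -= sum(length for _, length in victim)

print(f"{len(files)} files on disk, worst case seen: a file split into {worst} extents")

Even this crude model ends up splitting some files across several extents once it has to live inside the leftover holes, and a real, busier volume has a lot more going on.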
 
True, if you play highly compressed video one file at a time. Try to stream multiple uncompressed files from a single disk and it will matter. Increase the number of high-resolution, lightly compressed ones and it's still a factor, even spread over multiple HDDs.

The multiple video files are on different parts of the platter regardless. The drive has to constantly switch between the files, and the rotational media's seek times hold things up. Fragmentation is irrelevant, or the least of the issue.
 
The multiple video files are on different parts of the platter regardless. The drive has to constantly switch between the files, and the rotational media's seek times hold things up.

Drives can switch between two locations without completely collapsing. Frankly, the file metadata is going to be somewhere else anyway, so with HFS+ you are always getting some data from two locations. It works OK.

Fragmentation has a multiplier effect on top of that baseline of different locations. It gives you the effect of multiple user streams even when there's only a single user with a single application running.

Highly compressed files help fill the gaps that the latency opens up (since when decompressed there is actually more data to stream). A regular pattern of reading two sequential files can be made to work with a modest read-ahead disk cache to minimize the latency; it's just a matter of giving the disk a set of commands so it can pick up on the extremely simple pattern.
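The same idea in code form: read the two clips in big alternating chunks so each switch between them is amortised over a long sequential transfer. The file names and the chunk size are placeholders.

# Interleave two streams in large chunks; each switch costs roughly one seek,
# amortised over an 8 MB sequential read (chunk size is an arbitrary assumption).
CHUNK = 8 * 1024 * 1024

def interleave(path_a, path_b):
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        while True:
            chunk_a = a.read(CHUNK)
            chunk_b = b.read(CHUNK)
            if not chunk_a and not chunk_b:
                break
            yield chunk_a, chunk_b

# for buf_a, buf_b in interleave("clipA.mov", "clipB.mov"):   # hypothetical files
#     hand_to_player(buf_a, buf_b)                            # hypothetical consumer

Fragment either file heavily and each of those "one seek per chunk" switches turns into many, which is the multiplier effect mentioned above.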
 
Some people will claim you don't need to defrag an HDD because OS X does it automatically, blah blah. Had they actually read those outdated Apple support pages, they would also know they are only partially right. OS X only defrags files under 20 MB, and mostly in the metadata zone. The OS X defrag does nothing for large video and audio files or disk images. Nor does the OS put files in any particularly efficient order.

Again, if you have sufficient free space, fragmentation doesn't become an issue as there will be enough big free chunks.

And that isn't OS X specific; it's valid for pretty much anything from DOS through Windows NTFS, Linux, OS X, a NetApp filer, or an EMC SAN (edit: err, maybe not FAT or FAT32, they're pretty hopeless at writing efficiently).

Fill your disk too much and space gets tight; files end up written into tiny little free chunks scattered all over the disk. No matter what a filesystem programmer does, he has to deal with files being deleted and creating "holes" in the free space. Keeping plenty of free space means there are much more likely to be big free chunks available to avoid fragmentation.

----------

Yeah, and actually you should never let that happen... at least not with rotational media. A good rule of thumb is to treat your drive(s) as if they're 60% of their actual size.

Once you go much past that 60%, that's when to consider getting bigger or more drives.

SSD can potentially run into fragmentation problems too.

Sure, there's no head-movement latency to deal with. However, I have run into an issue before with a very old server that was low on free space and actually could not write to disk at all, even though there was "enough" space free.

Why?

The free space was SO BADLY FRAGMENTED that the file I was trying to write to disk would have been spread over too many fragments for the system to handle.

So although fragmentation is less of a performance problem, an SSD isn't totally immune to it either. Try to keep some free space :)

Given the wear a defrag on an SSD will incur, it's probably best to avoid the issue by maintaining that free space ;)


edit:
And yes, HFS/HFS+ is well overdue to be thrown away. There are plenty of far more advanced filesystems available (ZFS and Hammer spring to mind) which can do inline compression, optimise themselves properly for RAID-style arrays, etc. Plus, HFS won't even detect silent data corruption caused by bit errors in your data.
 
My SSD boot drive was probably not sufficiently sized for my needs. And I've been running it at 90-99% full.

So I'm going to defrag it and try to be even more clever about culling data off of it.

Hopefully one defrag won't shorten its life.
 
My SSD boot drive was probably not sufficiently sized for my needs. And I've been running it at 90-99% full.

So I'm going to defrag it and try to be even more clever about culling data off of it.

Hopefully one defrag won't shorten its life.

What? Don't defrag an SSD. It shortens the life and does not help.

The whole point of defragging a rotational hard drive is that a badly fragmented file forces the drive's head to physically move around, seeking each piece of the file. This physical movement of seeking really slows things down.

There is no such physical mechanism in an SSD, as everything is retrieved electronically by address. Hence there is no benefit to defragging.

Secondly, as has already been pointed out in this thread, where the OS thinks the pieces are is not where they actually are. So your defragging program isn't even really defragging in the first place. The SSD controller puts the pieces wherever it damn well feels like, thanks to features like reserved space, wear leveling, and data compression.
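A toy picture of that last point, with an invented mapping table standing in for the flash translation layer (FTL):

# The OS sees only logical block addresses; the controller's FTL maps them to
# physical pages however it likes. The mapping below is made up for illustration.
ftl = {0: 731, 1: 12, 2: 4088, 3: 955}   # logical block -> physical flash page

def read_logical(lba):
    return f"LBA {lba} -> physical page {ftl[lba]}"

for lba in range(4):
    print(read_logical(lba))

# A defragger can rewrite a file so its *logical* blocks are consecutive, but the
# controller still scatters the new copies physically; the only guaranteed result
# is extra program/erase cycles on the flash.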
 
A decade or more ago, defragmenting your hard drives was standard advice. Or at least some vendors convinced us it was.

I can't recall exactly where, but I seem to have read since then that it's no longer necessary or even a recommended practice.

When I upgraded to TechTool Pro 7 I looked at their directory optimization which includes file defragging.

What do you all think? Is it worth the trouble? Does it make any real difference?


Good to see I'm not the only Old Fart on the forum.

Going back even further...

My first HD was 10 MB (yes megabyte) and cost me $1200.00. It was mandatory to manually "Park Your Heads" via command line before turning off the power to the drive. If you forgot you could cause a head crash and destroy your drive.

:eek:
 
Good to see I'm not the only Old Fart on the forum.

Going back even further...

My first HD was 10 MB (yes megabyte) and cost me $1200.00. It was mandatory to manually "Park Your Heads" via command line before turning off the power to the drive. If you forgot you could cause a head crash and destroy your drive.

:eek:

I still have some of those old 10 MB MFM/RLL drives.
 
What? Don't defrag an SSD. It shortens the life and does not help.

The whole point of defragging a rotational hard drive is that a badly fragmented file forces the drive's head to physically move around, seeking each piece of the file. This physical movement of seeking really slows things down.

Debatable - fragmentation can get so bad that you can't actually allocate space if the free space is too badly fragmented; irrespective of where the blocks are on the physical media, space allocation in the filesystem at the OS level essentially breaks.

I've seen it happen, and that was a drive in a virtual machine stored on a SAN, so it was several layers of abstraction away from any physical block allocation knowledge (VM, NFS, LBA on the drive, RAID striping, etc.).

And whether or not the physical location differs from the logical location, the filesystem still needs to keep track of, allocate, and reassemble the fragments.

Sure, in theory SSD life is affected, and yes doing a defrag on a regular basis is pretty stupid on one.

But if it is badly fragmented due to lack of space, freeing space and then either reinstalling or defragmenting is not such a bad thing to get the file system in order.

Modern SSDs are rated for very, very high write volumes, every day, for several years' worth of life.

A defrag might shorten your SSD's projected life by a couple of days, tops. I.e., even if you do a defrag once per year, you'd still end up replacing the SSD in essentially the same week/month it would have failed anyway.
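Rough numbers, with a hypothetical but era-typical endurance rating, back that up:

# Back-of-the-envelope: endurance rating vs. the writes one defrag pass costs.
TBW_TB = 72.0              # assumed rated endurance: 72 TB written over the drive's life
DAILY_WRITES_TB = 0.02     # assumed ordinary load: 20 GB of writes per day
DEFRAG_REWRITE_TB = 0.05   # assume one defrag pass rewrites ~50 GB of fragmented files

life_days = TBW_TB / DAILY_WRITES_TB
lost_days = DEFRAG_REWRITE_TB / DAILY_WRITES_TB
print(f"~{life_days:,.0f} days (~{life_days / 365:.0f} years) of rated writes; "
      f"one defrag pass spends ~{lost_days:.1f} days' worth of them")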

Hard drives ALSO have an MTBF and doing a defrag on a hard drive can theoretically kill it too.


Modern SSDs from a reputable vendor are not terrible any more. I know of people running consumer-grade SSDs as a write cache in big storage arrays (essentially to bundle heaps of small writes into efficient full-stripe writes to disk), where every single write to multi-terabyte arrays is funnelled through the SSD. They're doing hundreds of gigs of writes every day.

Reliability still isn't that bad.
 
Good to see I'm not the only Old Fart on the forum.

Going back even further...

My first HD was 10 MB (yes megabyte) and cost me $1200.00. It was mandatory to manually "Park Your Heads" via command line before turning off the power to the drive. If you forgot you could cause a head crash and destroy your drive.

:eek:

Wow, you must be older than me, and I'm 74. My first hard drive was an external Conner: 100 MB. I thought I had the world. :eek: That drive cost me $800.00. No head parking.

Lou
 
SSD can potentially run into fragmentation problems too.

Sure, there's no head-movement latency to deal with. However, I have run into an issue before with a very old server that was low on free space and actually could not write to disk at all, even though there was "enough" space free.

I build and test SSDs for a living, and what folks seem to gloss over is that fragmentation on rotational media impacts system performance mostly because the seek times slow things down. The OS also slows down, but the drive's slowdown was orders of magnitude more significant than the corresponding OS slowdowns. Enter SSDs, which have eliminated seek times. That peels back a layer of the onion and reveals that the OS slowdowns can be significant.

What you saw probably had nothing to do with the drive, but with how the OS prevents and handles fragmentation. Many OSs run into trouble when you try to squeeze the last byte out of the drive, as there is no wiggle room.

It could also be bad/buggy drive firmware combined with bad gates and perhaps not enough time for garbage collection that just pushed things too far.

There are a couple of types of gates used in SSDs: MLC, where you want to squeeze as much capacity into a small space and are willing to give up endurance, and SLC, which is durable (and expensive) but whose gates are so large it's tough squeezing 60 GB into a 2.5-inch drive. There are also NAND and NOR architectures, and combinations of them, to throw into the stew.

You can easily wear out an MLC consumer-grade SSD; I've done it often. Servers that use MLC-grade SSDs over-provision the gates to gain running time, i.e. if 50 GB is needed, they buy 500 GB drives and wear out 50 GB at a time, so the drive lasts 10 times longer in the system.
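The over-provisioning arithmetic, with made-up but plausible numbers:

# Same 50 GB working set, rewritten (say) twice a day, spread across ever larger
# pools of flash. Cycle count and workload are assumptions for illustration.
PE_CYCLES = 3000        # assumed program/erase cycles per MLC cell
WORKING_SET_GB = 50
REWRITES_PER_DAY = 2

for drive_gb in (50, 128, 500):
    # Wear leveling spreads the same daily writes over all of the drive's cells.
    cycles_per_cell_per_day = REWRITES_PER_DAY * WORKING_SET_GB / drive_gb
    years = PE_CYCLES / cycles_per_cell_per_day / 365
    print(f"{drive_gb:>3} GB of flash for a 50 GB working set: ~{years:.0f} years of wear life")

Going from 50 GB of flash to 500 GB for the same workload is the "lasts 10 times longer" effect.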

Anyway, it's a bit more complex a relationship than some here have implied...
 
What you saw probably had nothing to do with the drive, but with how the OS prevents and handles fragmentation...

Correct, and that's my point.

The OS filesystem layer, even if the drive below doesn't care, can be a problem under severe fragmentation, because the filesystem needs to keep track of where all those fragments are, and some filesystems can only handle a file with so many fragments. I'm not a filesystem programmer, but I have personally experienced an inability to write a file of a few megs to disk on a machine with several hundred megs of free space (an 8 GB partition). A defrag fixed it.

That particular filesystem was NTFS (backed by a VMDK on top of NFS, on a NetApp), but the fact remains that under severe fragmentation with enough free-space pressure, there's not a lot that ANY filesystem can do, even if the underlying storage is quite happy. HFS is better than NTFS at avoiding fragmentation, but it's not a silver bullet. There is no silver bullet. Even enterprise filesystems like ZFS and WAFL suffer under similar circumstances.

Throwing your hands up and saying "it's an SSD! It doesn't matter!" is a bit naive. There are certain scenarios where fragmentation can be a problem, no matter what the underlying storage hardware is.
 
Correct, and that's my point.

The OS filesystem layer, even if the drive below doesn't care, can be a problem under severe fragmentation, because the filesystem needs to keep track of where all those fragments are, and some filesystems can only handle a file with so many fragments.

I think what both of you are missing is how the file system keeps track of all this: it keeps track with more 'files'. The data the file system uses is itself stored on disk (metadata). The more complicated the free-space problem becomes, the more complicated the free-space tracking model has to get if you want to maintain very fast block allocation times. Likewise, any metadata compression tricks that represent contiguous blocks more efficiently break down as files fragment. The metadata can start to fragment too if it grows too large (it goes into the same free space, unless the file system defrags the metadata).

The metadata tends to be relatively small, and that is exactly what flash doesn't do well: very small updates to relatively small files make flash controllers rewrite far more data than was requested.

It is not fragmentation on the SSD per se, but the fragmentation is coupled to the problems it can create for the SSD.

The garbage collectors (along with the file system) need some room to work well. Modest over-provisioning means that room won't fall below a minimum, but protracted stays at that minimum can lead to problems. That's why more than just the minimum is needed for long-term health. Park unmodified data on 95% of the drive and you're heading toward that minimum.

Pragmatically, file systems also tend to have a de facto over-provisioning marker set just shy of 100% full. The closer you get to it, the more they'll balk at file allocation requests. Some folks think operating just under that balk zone is a good idea. IMHO it isn't: typically there are processes creating files, logs, etc. that folks either forget about or just don't know about. Operationally it is far safer to stay away from the line and let the garbage collector and/or file system do their thing with sufficient resources.

There are still such things as delete and the trash can. Archiving and deleting stuff you're not using can open up free space. That is a "defragmenter" you don't even have to pay for.
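Since the extent bookkeeping came up: HFS+ keeps the first eight extents of a file in its catalog record, and every further run has to go into overflow extent records on disk. The record size below is a rough assumption, but the shape of the growth is the point.

# Very rough estimate of the extra on-disk metadata a fragmented file drags along.
INLINE_EXTENTS = 8            # extents stored directly in the HFS+ catalog record
EXTENTS_PER_OVERFLOW = 8      # extents per overflow record
OVERFLOW_RECORD_BYTES = 72    # assumed approximate size of one overflow record

def extra_metadata_bytes(extent_count):
    overflow = max(0, extent_count - INLINE_EXTENTS)
    records = -(-overflow // EXTENTS_PER_OVERFLOW)   # ceiling division
    return records * OVERFLOW_RECORD_BYTES

for extents in (1, 8, 64, 4096):
    print(f"{extents:>5} extents -> ~{extra_metadata_bytes(extents):,} extra bytes of metadata")

None of these numbers are big on their own; the problem is lots of tiny, scattered updates, which is exactly the write pattern flash controllers handle worst.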
 