
MacFin

macrumors member
Original poster
Oct 17, 2015
92
325
Hey, I’m having a bit of a dilemma here regarding my external hard drives getting very slow after a while. I’m a photographer and I use G-Drives as well as LaCie d2 Professional hard drives for storing images and editing from them (Lightroom + Photoshop combo). What I have noticed is that after my external hard drives reach a certain “fullness level” they tend to get very sluggish, and that is pretty much a permanent “feature” from that point onwards.

For example, if I have a 10TB G-Drive, it will work just fine until it gets filled to, let’s say, around the 7-8TB mark. Then the hard drive becomes very slow at importing images, loading images, editing images, etc. The difference is noticeable - especially in Lightroom. I’m using an M1 Mac mini for editing, and while it’s not the fastest Mac anymore, it used to run Lightroom just fine on these hard drives before the drives became sluggish.

I thought I could potentially fix this by simply backing up the files and reformatting the drive, but the sluggishness still remains - no matter how many times I reformat the drives. Neither Disk Utility nor the manufacturer's own software can find anything wrong with the drives, so I’m clueless as to why my external hard drives become so slow. The only solution that works is to buy a brand new drive, but that’s just wasteful when there shouldn't be anything wrong with the drive itself.

I have been using a mix of APFS and HFS+ formatting across my hard drives (both with encryption, though), and weirdly enough some drives don’t seem to mind if I fill them close to maximum capacity - they can still transfer files as fast as they always did. But for some drives it can take more than 10 minutes to transfer even a small 200MB file, which isn't normal when it used to take about 5 seconds.

So the bottom line is that some drives just decide, after they reach 70-80% of their full capacity, that from now on all speeds will be reduced by 40-80%, and then there's no way to get the drive back up to its original speeds. Formatting or deleting files doesn't help.

Has anyone else experienced something like this with their external hard drives? Does anyone know what might be causing this, and whether there are any solutions or fixes? I have been struggling with this issue ever since Apple started to push APFS, and I don't really know anymore what the problem is. Any suggestions?
 

MacFin

macrumors member
Original poster
Oct 17, 2015
92
325
I'm not sure APFS is such a good file system for classical hard drives. Is your problem rarer when you use HFS+ (or NTFS, or Exfat, etc.)?
Well, in the past I never really had any issues with HFS+ formatted external hard drives; things started to go downhill when Apple introduced APFS. I know APFS was made with SSDs in mind and that it was not at first recommended for spinning drives. But over the years Apple has made it very difficult to go with any other option, and generally speaking APFS has worked fine on spinning drives - up to the point when the drive decides it's "too full" and just slows down for good.

I do have a few APFS-formatted external hard drives, such as Western Digital Passports, and they still work super fast even with less than 50GB of free space left. So there are some anomalies, but I do think that kind of situation is quite rare. Generally speaking, the slowness happens more with APFS drives than with HFS+ ones. Now I actually kind of regret going with APFS on so many of my drives if that's the reason for all this.

So I think I might have to start thinking about reformatting everything back to HFS+. It's just a big hurdle, since I don't think you can revert from APFS back to HFS+ without deleting all the data in the process. Going the other way takes just a few clicks, which is why so many of my drives ended up in APFS in the first place.

If anyone else has any other suggestions I'm all ears.
 

Fishrrman

macrumors Penryn
Feb 20, 2009
29,242
13,315
The problem:
"External Hard Drives running very slow if they get too full - Any fixes?"

The solution:
Move stuff OFF OF the drive to increase free space -- then it will run better.

One other thing (as mentioned above):
If these are:
- platter-based hard drives
and
- IF they are used for "data-only" storage (i.e., NOT as bootable drives or for time machine)
then
- format them to HFS+ instead of APFS.

APFS will create over-fragmentation on platter-based drives, and it can make them "thrash" as well.
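If you do go that route, here's a minimal sketch of doing the reformat from the Terminal (via Python/subprocess) rather than hunting through Disk Utility. The disk identifier and volume name below are placeholders, and eraseDisk wipes the entire drive, so check `diskutil list` and your backups first.

```python
#!/usr/bin/env python3
"""Minimal sketch: reformat an external drive as HFS+ (Mac OS Extended, Journaled).
DESTRUCTIVE -- this erases the whole disk. The identifier "disk4" and the volume
name are placeholders; confirm yours with `diskutil list` before running."""
import subprocess

DISK = "disk4"                # placeholder -- confirm with `diskutil list`
VOLUME_NAME = "PhotoArchive"  # placeholder volume name

# Show what is currently on the disk before touching it.
subprocess.run(["diskutil", "list", DISK], check=True)

# Erase the whole device as journaled HFS+ ("JHFS+") with a GUID partition map.
subprocess.run(
    ["diskutil", "eraseDisk", "JHFS+", VOLUME_NAME, "GPT", DISK],
    check=True,
)
```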
 

MacFin

macrumors member
Original poster
Oct 17, 2015
92
325
The solution:
Move stuff OFF OF the drive to increase free space -- then it will run better. [...]

I have tried making space on the external hard drives, but once a drive considers itself "too full" (like 7TB used on a 10TB drive), it just doesn't matter whether there's 1TB or 9TB of stuff on it - it still runs slow. So removing stuff doesn't fix the issue. I have formatted those drives and still they remain slow. But I did use APFS, so that could be the reason.

I do think I need to do a massive operation to simply reformat all of the drives back to HFS+ since I think the issue must lie with APFS and how it treats these drives.
 

MacCheetah3

macrumors 68020
Nov 14, 2003
2,285
1,226
Central MN
Unfortunately, it’s a limitation of traditional HDDs.


One thing I did not see/hear explained/noted in the above videos is how defragmenters also improve speed by moving files to the outer rings of the platters (i.e., a longer ring allows for more data and less need for the read/write head/actuator to move between tracks).

Solutions to the lower performance:

As you’ve noticed, drives behave differently. One of the features of a drive that can help compensate for large amounts of data is its cache/buffer (explained in the first video at 3:33). In other words, if filling the drives is inevitable, fill the drive with the larger cache/buffer first. Larger/newer drive models typically have a larger cache/buffer.

To remove or reduce fragmentation, two methods come to mind:

• Copy the data to another drive, erase/format, and copy the data back — This is not a sure-fire way. However, writing files to a drive without moving or deleting should result in the files staying intact (i.e., not fragmented), and free/available space should be on the innermost rings/tracks.

• Get a program such as TechTool Pro and run the File/Volume Optimization tool. Depending on the amount of files you manage per day, it would be best to defrag the HDDs once a week or so. If you wait until the drive is full, the process will probably require many hours (possibly days). Furthermore, this is not a free option — I am not aware of a free tool like this for macOS, certainly not an acclaimed no-cost option. The recommendation is because I have used (and currently use) TTP, and have not experienced any critical, noteworthy bugs. NOTE: While macOS is better at reducing fragmentation than Windows, drives can still become significantly fragmented.

Lastly, add more external storage so you do not need to exceed ~75% of a drive’s capacity.

P.S. SSDs can also suffer performance loss when filled beyond ~75% of capacity.
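If you want to put a number on the slowdown rather than eyeball it in Lightroom, even a crude sequential-write test is enough to compare a drive when it's fresh versus when it's 70-80% full. Here's a rough sketch; the mount point is a placeholder, and it writes about 2 GB of throwaway data, so make sure there's room:

```python
#!/usr/bin/env python3
"""Crude sequential-write test for an external drive. The mount point below is
a placeholder -- point it at your own drive. Writes ~2 GB of throwaway data."""
import os
import time

MOUNT = "/Volumes/PhotoArchive"    # placeholder mount point
TEST_FILE = os.path.join(MOUNT, "_speedtest.bin")
CHUNK = b"\0" * (8 * 1024 * 1024)  # 8 MB chunks
TOTAL_CHUNKS = 256                 # ~2 GB total

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_CHUNKS):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())           # make sure the data actually hit the disk
elapsed = time.time() - start

mb = len(CHUNK) * TOTAL_CHUNKS / (1024 * 1024)
print(f"Wrote {mb:.0f} MB in {elapsed:.1f} s -> {mb / elapsed:.0f} MB/s")
os.remove(TEST_FILE)
```

It only measures sequential writes of fresh data, so it won't capture how fragmented your existing files are, but it gives a comparable before/after figure.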
 
  • Like
Reactions: Basic75 and MacFin

MacFin

macrumors member
Original poster
Oct 17, 2015
92
325
Unfortunately, it’s a limitation of traditional HDDs. [...]

First of all, thank you so much for this detailed reply, it really helped me to better understand the process that goes into writing and reading hard drives.

After watching both of those YouTube videos and reading your message, I feel fragmentation might actually play a bigger part here, since I do delete from and add to these hard drives a lot. And I don't think having these drives formatted as APFS is helping, since it was originally made for SSDs rather than old-school hard drives.

I might also look into TechTool Pro as one solution. If it could fix this issue that might be a good investment.
 
  • Like
Reactions: BigMcGuire

HDFan

Contributor
Jun 30, 2007
7,290
3,341
Move stuff OFF OF the drive to increase free space -- then it will run better.

Keep 20-30% of the disk free for best performance.
Use HFS for HDs.
Defragmentation probably not necessary as handled automatically by MacOS.
Techtool will not defragment APFS.


Techtool will show/fix fragmentation on external disks. Not sure in light of above if it means anything. The disk shown below was initialized and then populated via a copy so shows low fragmentation.

[Screenshot: TechTool Pro volume report showing low fragmentation on the freshly copied disk]
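If it helps, here's a quick sketch for keeping an eye on the 20-30% free-space guideline across whatever is mounted under /Volumes (the 25% threshold flag is just an arbitrary marker, not a hard rule):

```python
#!/usr/bin/env python3
"""Quick check of how full each mounted volume is, to watch the ~20-30%
free-space guideline mentioned above."""
import os
import shutil

for name in sorted(os.listdir("/Volumes")):
    path = os.path.join("/Volumes", name)
    try:
        usage = shutil.disk_usage(path)
    except OSError:
        continue  # skip volumes we can't stat
    pct_free = usage.free / usage.total * 100
    flag = "  <-- below ~25% free" if pct_free < 25 else ""
    print(f"{name:20s} {usage.free / 1e12:6.2f} TB free of "
          f"{usage.total / 1e12:6.2f} TB ({pct_free:4.1f}% free){flag}")
```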
 
  • Like
Reactions: MacFin

kschendel

macrumors 65816
Dec 9, 2014
1,308
587
I am puzzled by your statement that reformatting the drive doesn't help. That really ought to rebuild all new filesystem structures on the drive, leaving no way for it to "remember" that it was ever full. Something strange is going on here; maybe an APFS reformat doesn't actually do that?
 
  • Like
Reactions: Basic75

MacCheetah3

macrumors 68020
Nov 14, 2003
2,285
1,226
Central MN
Techtool will not defragment APFS.
Helpful note. I was not aware until now.

I did verify in-app as well as found a footnote on the TTP product page:

Micromat said:
Optimization for APFS rotational drives is not yet possible with the current amount of APFS documentation provided by Apple, which currently provides insufficient documentation for defragmenting a disk.

Additionally, while doing some research on the topic, I stumbled upon:

In HFS+, the solution has been to defragment the used and free space on the disk, something which macOS does to a degree, and several third-party tools have long supported.

In APFS, one of its most important features, copy on write, is almost designed to cause fragmentation. It’s essential for the file system metadata, as there’s no journalling, which has proved a saving grace in HFS+. The end result is that file system metadata in APFS get hopelessly fragmented, which in turn causes performance problems on hard drives.


The author goes on to say:

It has been quite difficult to demonstrate relative performance between HFS+ and APFS on hard drives, as conventional disk benchmarking doesn’t produce a standard degree of fragmentation for the purposes of comparison. Take a freshly-formatted hard disk and run normal benchmarks on it, and you’ll probably find little difference between the two file systems.

A little while ago, though, Mike Bombich, developer of Carbon Copy Cloner, came up with a much better way of comparing performance. He enumerated a million files in the file system on a hard disk over a period of simulated modifications to the file system, under both HFS+ and APFS. In case a million files seems excessive, with files nested three folders deep, this is only 111,000 folders, and a maximum file size of 20 GB – hardly excessive by real world standards.

The time taken to enumerate the million files was consistently shortest (best performance) on HFS+, and changed little there over twenty repetitions of the test. APFS, though, was much slower to start with, and got progressively worse, regardless of whether defragmentation was enabled, and largely irrespective of snapshots.

Therefore, it would seem to me, @MacFin should stick to HFS+ on the HDDs and add a defrag routine.
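For anyone curious, a very rough way to get a comparable before/after number on your own drives is simply to time a full enumeration of a volume. To be clear, this is not Bombich's actual methodology, only a crude stand-in, and the path below is a placeholder:

```python
#!/usr/bin/env python3
"""Very rough stand-in for the enumeration idea described above: time how long
it takes to walk and stat everything on a volume. The path is a placeholder."""
import os
import time

ROOT = "/Volumes/PhotoArchive"   # placeholder -- point at the drive to test

start = time.time()
files = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            os.stat(os.path.join(dirpath, name))  # touch the file's metadata
            files += 1
        except OSError:
            pass
elapsed = time.time() - start
print(f"Enumerated {files} files in {elapsed:.1f} s")
```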

@HDFan Do you know of an alternate suggestion to TTP (e.g. a product that may be available in a time-based trial)? Not that I have a problem recommending TTP, obviously. For me, the primary use is for S.M.A.R.T. status alerts/warnings.
 

MacFin

macrumors member
Original poster
Oct 17, 2015
92
325
Yeah, I'm definitely going back to using HFS+ across all of my external drives after this, and I might even add TechTool Pro to the mix too. Before APFS, all my hard drives ran pretty much perfectly, and it never mattered whether a drive had 2TB of free space or 20GB. They all just worked.

It's frustrating that Apple keeps automatically pushing people to APFS, and HFS+ barely even shows up in Disk Utility anymore. I did manage to find a few instructions online on how to still do it, but formatting drives to HFS+ is well hidden in Disk Utility's menus these days. Apple really doesn't want anyone to go with it anymore.

None of the articles I have read about HFS+ vs APFS ever really said that going with APFS would do any harm, and they all kind of implied that APFS is the "better" option since it's "the future format". But all of my external hard drives, no matter the brand (G-Drive, LaCie, Seagate, Western Digital, etc.), have been suffering from this slowness, and the common denominator seems to be APFS.

I'm not sure if the fact that I keep all of my external drives password-encrypted makes any difference in speed (in the past it didn't), but many of the drives I encrypted directly from Finder, which automatically formats the drive to APFS anyway.

I had a photo shoot the other day and had about 85GB of photos to copy into Lightroom. I think the process of copying and creating 1:1 previews took over 5 hours in total, which is ridiculous. And this was with a 10TB LaCie d2 Professional drive that still had about 7TB of space left - but guess what format I chose for it... APFS, of course.

I will wait for the Amazon Prime Days, since they always have external hard drives on sale then. Once I get a new drive or two, I will back up the drives and start the slow process of going back to HFS+ across all of my hard drives. I could technically start now since everything is already backed up, but considering how much stuff I need to copy over, I want double back-ups of everything just in case.
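For what it's worth, some back-of-the-envelope arithmetic on that 85GB import (keeping in mind the 5 hours also includes Lightroom building 1:1 previews, so the copy alone wasn't quite this bad):

```python
# Rough check on that 85 GB import. The 5 hours includes Lightroom building
# 1:1 previews, so this overstates how slow the copy alone was, but it shows
# how far below a healthy external HDD (~150-250 MB/s sequential) the whole
# job landed.
gb_copied = 85
hours = 5
mb_per_s = gb_copied * 1024 / (hours * 3600)
print(f"Effective throughput: {mb_per_s:.1f} MB/s")   # roughly 4.8 MB/s
```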
 

Fishrrman

macrumors Penryn
Feb 20, 2009
29,242
13,315
OP wrote:
"I might also look into TechTool Pro as one solution. If it could fix this issue that might be a good investment."

I'm thinking that TechTool (and all the other 3rd-party utilities that can defragment a platter-based hard drive) will only work with HFS+ -- and NOT with APFS.

But you don't have to pay for a good defrag app.
Go here instead:

There used to be two very useful utility apps for OS X:
- iDefrag
- iPartition

Both were published by a company called Coriolis Systems, which has now discontinued them, because changes in the Mac OS and the move from platter-based drives to SSDs have diminished the need for the above apps.

However, some folks may still have a need for them.
Coriolis has graciously made available FOR FREE all previous versions of their software.

Highly recommended !
 

macmesser

macrumors 6502a
Aug 13, 2012
921
198
Long Island, NY USA
Hey, I’m having a bit of a dilemma here regarding my external hard drives getting very slow after a while. [...]
Maybe not what you're looking for but I would keep all unedited images on dedicated media used only for storage and move selected files to a work drive for any editing. I remember reading about some larger but cheaper HDs which use file allocation schemes that resulted in this behavior. The article (don't remember where) was of interest because I had just purchased some and had determined they were not suitable for boot drives.
 
  • Like
Reactions: arkitect and MacFin

HDFan

Contributor
Jun 30, 2007
7,290
3,341
@HDFan Do you know of an alternate suggestion to TTP (e.g. a product that may be available in a time-based trial)?

If you are asking for an alternate that handles APFS there aren't any. As stated above in post 10:

Optimization for APFS rotational drives is not yet possible with the current amount of APFS documentation provided by Apple, which currently provides insufficient documentation for defragmenting a disk.

Developers don't have enough information to be able to support it.

Another highly recommended HFS+ repair utility was DiskWarrior. Development of an M1-native version has stalled, so I'm not quite sure where they are.

 
  • Like
Reactions: MacFin

Fishrrman

macrumors Penryn
Feb 20, 2009
29,242
13,315
HD Fan makes a good point in the post above this one.

If you're using a platter-based drive, going with HFS+ will enable you to use one of several "drive repair utilities" that are still "out there".

It will also enable you to defragment the drive when needed (see my reply 12 above).

Go with APFS, and I believe THE ONLY "repair option" is Disk Utility, which doesn't offer much at all. No way to defrag, at all.

So again, this is why I still recommend HFS+ as the SUPERIOR file system for platter-based drives.
 
Last edited:

ThrowerGB

macrumors 6502
Jun 11, 2014
253
92
Hey, I’m having a bit of a dilemma here regarding my external hard drives getting very slow after a while. [...]
Ok, now reaching back in history. The problem you describe was common on original hard drives. In fact, when designing a system, it was normal to provide enough disk space to prevent the drive from getting filled more than 60-70%. Why? It's been explained above, but to rehash: original hard drives had a moveable read/write head. Data was stored in concentric circles, or tracks, on the disk. The tracks are divided up into sectors along each track. The hardware has to move the head to the correct track and then wait for the right part of the spinning platter to come underneath the read/write head before it can do its thing. It takes time to move the head and for that part of the spinning platter to be in the right position. The file system keeps maps of where (track/sector(s)) on the disk the files are.
When a file is written on an empty drive, the data can be laid down contiguously on one of the tracks. If there's not enough space in the track, the head is moved to the adjacent track; in this case, the head only needs to move a small distance to the side. When a file is deleted, the track/sector(s) the file used become available to be reused. So the next file to be written might be laid down contiguously in that empty space. If it's smaller, a few sectors might go unused. If larger, the drive has to find other space to put the additional sectors. After a while, as the disk is filled and files are deleted or changed, the disk becomes "fragmented", i.e., there will be bits of each file in sectors spread all over the disk, and the head actuator has to move back and forth many times to read or write. That's what slows things down, particularly once the available space is less than 30-40%.
Disk de-fragmenting software rearranges where the sectors of each file are so that they're contiguous. We had to do this regularly on older PCs. These days, some operating systems can defrag hard drives on the fly.
Over time, there were refinements to reduce the need for the head(s) to move, like putting multiple heads on the actuator, making double-sided platters, stacking platters on the same axis, etc. Also, solid-state memory was added to the drive so that there could be a "buffer" where some data might reside temporarily. Apple used the same general principle when it introduced "Fusion" drives. Fusion drives were a combination of a hard drive and an SSD, where the SSD acted as a large buffer. The OS used software to make the two drives appear as one. This was an interim solution before moving completely to SSDs, because of the high cost of SSDs at the time.
A solid state drive (SSD) is a bunch of flash memory. With solid state memory, there are no moving parts. Any location on the SSD can be addressed with essentially equal speed (but MUCH faster than moving the drive heads and waiting for a sector on a spinning platter to become underneath the heads) by using memory addressing on the chip(s) making up the SSD. This scheme also eliminates the slowdown you're experiencing at ~70% full when your hard disk is likely quite fragmented.
Apple then realized (I don't know who invented this) that the disk formatting schemes designed for hard drives were inefficient for SSDs. So they introduced a new format (APFS) that doesn't rely on moving heads and spinning platters. It's inherently much faster on an SSD than schemes designed for hard disks, because there are no moving parts. There is a whole bunch of other things they did with APFS to take advantage of solid-state memory addressing. But APFS isn't great for hard drives, because the memory addressing of APFS has to be translated into the track/sector addressing, moving heads and spinning platters of the hard drive, if the hard drive is formatted for APFS.
Just for comparison, in the days before spinning hard drives, computers relied on magnetic tape drives. The data was laid out linearly on the tape (think of a tape recorder or cassette tape). Tape drives could move the tape back and forth, but it took a VERY long time (comparatively) for the tape drive to wind the tape to the location of the data. Think of fast forward or backward to find a song on a cassette tape.
Hope this ramble helps.
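To put some rough numbers on why all that head movement hurts, here's a toy calculation. The seek time, RPM and transfer rate are generic ballpark assumptions for a 7200 rpm desktop drive, not measurements of any particular G-Drive or LaCie:

```python
#!/usr/bin/env python3
"""Toy numbers showing why fragmentation hurts so much on a spinning drive."""

avg_seek_ms = 9.0                     # typical average seek time (assumption)
rpm = 7200
avg_rotational_ms = 60_000 / rpm / 2  # half a revolution on average, ~4.2 ms
transfer_mb_s = 180                   # sequential rate once the head is in place

file_mb = 200                         # a 200 MB file like the one mentioned above
transfer_ms = file_mb / transfer_mb_s * 1000

for fragments in (1, 50, 500):
    # each fragment costs roughly one seek plus one rotational wait
    overhead_ms = fragments * (avg_seek_ms + avg_rotational_ms)
    total_s = (transfer_ms + overhead_ms) / 1000
    print(f"{fragments:4d} fragments: ~{total_s:5.1f} s to read {file_mb} MB")
```

With these made-up but plausible figures, a contiguous 200 MB read takes about a second, while the same file scattered into hundreds of fragments takes several times longer, purely from head movement.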
 
Last edited:
  • Like
Reactions: MacFin

MacFin

macrumors member
Original poster
Oct 17, 2015
92
325
All of the advice and tips mentioned here have really clarified what is wrong with my current external hard drive setup. So thank you everyone for taking the time to write your replies, I really appreciate all of this!

I will start the reformatting process to HFS+ shortly and hopefully, that will bring my hard drives back to what they used to be before APFS.
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,448
Europe
Ok, Now reaching back in history. [...]
You've got some solid points in there, but not everything is accurate/correct.

Early file systems just dumped the data in the next available free space on the HDD. This, as you correctly explained, can and will lead to fragmentation. With MS-DOS FAT16 and Amiga FFS it was essential to occasionally defragment your HDD. Later file systems, in particular HFS+, got smart enough to integrate at least some form of defragmentation into their everyday operation. This makes it much less necessary to periodically defragment the HDD with a dedicated tool.

The thing with file systems for SSDs is that they don't need to care about keeping data in adjacently numbered sectors of the drive, because with SSDs it doesn't matter much for read performance (writes are another story). The write issues with SSDs mostly stem from the fact that they can only be erased in chunks much larger than a unit of allocation at the file system level, and from the fact that modern SSDs can only be erased/written a pitiful number of times. Because of this, adjacently numbered sectors will not be physically adjacent in an SSD anyhow: the SSD controller moves data around the SSD and keeps a mapping table from logical addresses to physical addresses.

So APFS is less smart, it doesn't care about where to write data because on an SSD it doesn't matter. That's why APFS is slower on an HDD, it doesn't even try to keep data that belongs together in adjacently numbered sectors. Translating sector numbers has absolutely nothing to do with it: HDDs have used linear addresses (LBA) on the (S)ATA interface for a very long time. And even if the translation were done by the file system, the time taken by the CPU to perform a couple of simple arithmetic operations would be completely irrelevant in the context of the I/O operation that it belongs to and that takes many orders of magnitude more time.

One other thing to keep in mind is that an SSD's performance also suffers when it is nearly full because the wear levelling has less free space to play with. For this reason many SSDs don't even advertise their actual full capacity, i.e. an SSD with say 480GB advertised usable space might actually have 512GB with the difference being reserved by the drive controller to never run out of free space. This is also why I always emphasise the importance of TRIM for SSDs. Without TRIM the SSD will not know which data belongs to deleted files. In the absence of TRIM, sectors with deleted data (that was not overwritten by some fresh data) will not be treated as free space by the SSD controller! It will be moved around by the SSD controller wasting time, energy and write cycles because it doesn't know that it's garbage.
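To make the mapping-table point concrete, here's a toy illustration. The capacities and the "FTL" below are made-up simplifications for the sake of the example, not how any real controller works internally:

```python
#!/usr/bin/env python3
"""Toy illustration: an SSD controller keeps a mapping from logical block
addresses to physical flash locations, so "adjacent" logical sectors need not
be adjacent on the flash at all. All numbers are made up."""

# Advertised vs raw capacity: the difference is reserved for wear levelling.
raw_gb, advertised_gb = 512, 480
print(f"Over-provisioning: {(raw_gb - advertised_gb) / raw_gb:.1%}")  # ~6.2%

# A vastly simplified flash translation layer.
ftl = {}            # logical block -> physical flash page
next_free_page = 0

def write_block(logical_block: int) -> None:
    """Writing a logical block lands on whatever physical page is free next;
    rewriting the same logical block gets a brand new physical page."""
    global next_free_page
    ftl[logical_block] = next_free_page
    next_free_page += 1

for lba in (10, 11, 12):   # three "adjacent" logical blocks
    write_block(lba)
write_block(11)            # rewrite one of them

print(ftl)   # {10: 0, 11: 3, 12: 2} -- logically adjacent, physically scattered
```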
 

Danarman

macrumors newbie
May 3, 2023
1
0
Suddenly, my external SSD used for Time Machine became very slow after one Ventura update. I tried everything. APFS format, encrypted. When I connected it, Finder took a long time to read it and looked like it hung for several minutes.

I had to use Finder, right-click on my backup drive name, and choose "Decrypt"... this can take several hours or even days. My 500GB SSD with 80% of its space occupied took 3 days (connecting the disk 8 hours a day - I didn't want to keep it working overnight because it runs hot otherwise). After that, Time Machine didn't accept my disk because the "previous specified disk was encrypted, and current disk is not", so I removed it from Time Machine and added it again. It worked very fast.
But I don't want an unencrypted backup disk, so I used Finder, right-clicked on my backup disk and chose the "Encrypt" option (this requires specifying a secure password you can't forget). The process began and it took almost 3 days (at eight hours a day) to encrypt. Finally, Time Machine again didn't accept my disk (because it was now encrypted), so I removed it from Time Machine and added it again, and now, AT LAST, connecting my external SSD no longer hangs Finder and backups run much faster. Finder is also responsive right after connecting the disk.

MacBook Pro M1 now with Ventura 13.3.1
 

toke lahti

macrumors 68040
Apr 23, 2007
3,293
509
Helsinki, Finland
Defragmentation probably not necessary as handled automatically by MacOS.
HFS+'s built-in defragmentation from the system is ridiculous in 2023. It was ridiculous even in 2013.
Maybe that's why APFS has a defragmentation switch (which I just learned about), and it is - of course - disabled by default, because a soldered SSD would look even worse with it..
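For what it's worth, I believe that switch is exposed through `diskutil apfs defragment` on recent macOS versions, but I'm going from memory, so treat the sketch below as an assumption and check the verb list from `diskutil apfs` on your own system first:

```python
#!/usr/bin/env python3
"""Sketch for poking at the APFS defragmentation switch. The exact verb and
arguments are an assumption based on recent macOS versions -- run
`diskutil apfs` and check its help output before relying on this.
The volume path is a placeholder."""
import subprocess

VOLUME = "/Volumes/PhotoArchive"   # placeholder APFS volume on a spinning drive

# Non-destructive: just ask whether defragmentation is currently enabled.
subprocess.run(["diskutil", "apfs", "defragment", VOLUME, "status"], check=False)

# To turn it on (rotational drives only, and only if the verb exists on your OS):
# subprocess.run(["diskutil", "apfs", "defragment", VOLUME, "enable"], check=True)
```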
 