I have a Samsung T7 2 TB external SSD (it’s about the size of a credit card) alongside the 512 GB internal SSD in my M1 iMac, and the T7 has been rock solid. Good-quality cables and drives mean you’re much less likely to have issues with bad connectors, glitches and so on. The external drive gets about 1 GB/second read and write, which is plenty fast for what I use it for.

My strategy in buying drives is not to get the cheapest, but to buy a well-known mainstream brand like Samsung or SanDisk. It still works out a lot cheaper than buying internal storage; I paid about 150 euros for my 2 TB drive.
I have 2 TB and 4 TB Kingston XS2000 USB-C external SSDs, and use USB-IF certified* cables from Cable Matters. They still regularly disconnect.

*2.6' cables, USB-IF certified for 40 Gbps USB 4, and thus over-spec'd for my application.

I also tried a 2 TB SanDisk Extreme, and it had even worse disconnection issues, so I returned it in favor of the Kingstons, which at least don't disconnect as frequently.

Amusingly, the SanDisk Extreme is sold on apple.com, and explicitly listed as compatible with my model of Mac.

 
I suggest you try troubleshooting the rest of your setup, because I know a number of people who use this strategy of using external USB-C drives to cut costs, and none of them have ever complained to me about frequent drive disconnections. Perhaps your system logs might reveal something about the precise error?
 
I suggest you try troubleshooting the rest of your setup, because I know a number of people who use this strategy of using external USB-C drives to cut costs, and none of them have ever complained to me about frequent drive disconnections. Perhaps your system logs might reveal something about the precise error?
If you google 'external drive spontaneous disconnecting from mac', you'll get lots of hits with people complaining about this.

There are a few standard fixes (e.g., try better/shorter cable; connect drive directly to Mac instead of going through hub; or keep computer from going to sleep), which sometimes work and sometimes don't.

So bottom line: Some people experience this problem, and others (like you and those you know) don't. Unfortunately, neither of us has statistics on the relative sizes of these respective groups.
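
For what it's worth, the sleep workaround is easy to script on macOS: wrap a long copy or backup in the built-in caffeinate utility so the machine can't idle-sleep (and potentially drop the drive) mid-job. A rough sketch - the rsync source/destination paths are just placeholders for your own:

    # Keep the Mac awake for the duration of a long external-drive job.
    # -i prevents idle sleep, -m prevents disk idle sleep; both end when the job exits.
    import subprocess

    job = ["rsync", "-a", "/Users/me/Projects/", "/Volumes/ExternalSSD/Projects/"]
    subprocess.run(["caffeinate", "-i", "-m"] + job, check=True)

That doesn't fix whatever is causing the ejections, but it at least takes the sleep variable out of the equation while you test the other fixes.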
 
Workaround is to connect the SSD to a TB3 dock, which will have its own USB controller - not a TB4 dock as they ‘tunnel’ the Mac’s USB control to the SSD.
 
Will just drop this here…


Apple Intelligence is literally going to be running local models on device.

Not just LLMs but other AI models that will require a lot of GPU, NPU and RAM to hold the working set.

The majority of Apple users will never use Apple Intelligence to that extent, if at all.
 
Workaround is to connect the SSD to a TB3 dock, which will have its own USB controller - not a TB4 dock as they ‘tunnel’ the Mac’s USB control to the SSD.
Interesting—do you have a reference for this? And is this applicable when your SSD is a USB device rather than a TB device?

Alas, even if this would work, I've already spent the money for a TB4 dock (Sonnet Echo 11), so I'm not willing to spend the additional funds on a TB3 dock.
 
No reference. It just works and it’s logical.
If Apple’s USB controller doesn’t work then connect the USB SSD to a generic USB controller from the same sort of manufacturer as is in the SSD enclosure.
A TB3 dock is the way to do this.
And on AS Macs you most likely get faster USB 3.1 Gen 2 speeds than connecting directly to the Mac.

Edited to add that this doesn't sort out dodgy, badly performing USB cables...
 
The majority of Apple users will never use Apple Intelligence to that extent, if at all.

That’s exactly what Apple Intelligence is. How do you think all these Mail and Messages ML models work? The moment you turn Apple Intelligence on, you are running an on-device LLM.
 
No reference. It just works and it’s logical.
If Apple’s USB controller doesn’t work then connect the USB SSD to a generic USB controller from the same sort of manufacturer as is in the SSD enclosure.
A TB3 dock is the way to do this.
And on AS Macs you most likely get faster USB 3.1 Gen 2 speeds than connecting directly to the Mac.

Edited to add that this doesn't sort out dodgy, badly performing USB cables...
Sure, it makes logical sense to try interfacing the external SSD with a different USB controller.

But what's not a priori obvious—at least to me—is that a TB3 dock enables this, but a TB4 dock doesn't because it tunnels the connection. That's the part I'm curious about—what is it about TB3 that enables this but TB4 that doesn't?

I took a look at this discussion of TB3 vs. TB4, and it doesn't explain this.
 
If you google 'external drive spontaneous disconnecting from mac', you'll get lots of hits with people complaining about this.

There are a few standard fixes (e.g., try better/shorter cable; connect drive directly to Mac instead of going through hub; or keep computer from going to sleep), which sometimes work and sometimes don't.

So bottom line: Some people experience this problem, and others (like you and those you know) don't. Unfortunately, neither of us has statistics on the relative sizes of these respective groups.

I use the manufacturer’s supplied cable, which is quite short, to connect the drive directly to my iMac via USB-C. But I’ve had this drive connected to my iMac for years, the computer has slept many times in that period because I rarely turn it off, and it has never once spontaneously disconnected.

It only takes a few vocal people with a problem to generate a whole bunch of hits on support forums, but I think if this was a widespread problem Apple would have acted on it.

My take on it is, there is a reason for this kind of behaviour. Somewhere the error will have been logged by the system.
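
If you want to go digging, the unified log is where I'd look first. A rough sketch of how you might search it from Python (just wrapping the built-in log tool; "disk4" is a placeholder for whatever BSD name your drive gets in Disk Utility):

    # Search the last hour of the macOS unified log for messages mentioning a
    # given disk identifier, and print the ones that look like eject/disconnect
    # events. Adjust the identifier and time window to suit.
    import subprocess
    import sys

    disk = sys.argv[1] if len(sys.argv) > 1 else "disk4"  # placeholder default

    result = subprocess.run(
        ["log", "show", "--last", "1h",
         "--predicate", f'eventMessage CONTAINS "{disk}"'],
        capture_output=True, text=True)

    for line in result.stdout.splitlines():
        if any(word in line.lower() for word in ("eject", "disconnect", "terminat")):
            print(line)

Whatever shows up there (or in Console.app filtered the same way) should narrow down whether it's the drive, the cable/hub, or sleep that's triggering it.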
 
@theorist9 "...google 'external drive spontaneous disconnecting from Mac'"

@PaulD-UK "Workaround is to connect the SSD to a TB3 dock, which will have its own USB controller - not a TB4 dock as they ‘tunnel’ the Mac’s USB control to the SSD."

@theorist9 "But what's not a priori obvious—at least to me—is that a TB3 dock enables this, but a TB4 dock doesn't because it tunnels the connection. That's the part I'm curious about—what is it about TB3 that enables this but TB4 that doesn't?"

OK, here are a couple of links:

Quote @joevt in this webpage:
"USB in Thunderbolt 4 docks (using Goshen Ridge) when connected to a Thunderbolt 4 host (such as an M1 Mac) is controlled by the USB controller of the host which uses USB tunnelling to a four port USB hub in the dock (3 downstream Thunderbolt ports and one USB port). I think the hub is part of Goshen Ridge. When connected to a Thunderbolt 3 host, PCIe tunnelling is used to communicate with a USB controller..."

Quote @joevt. Link to this MR post:
"Get an old Thunderbolt 3 dock - one that uses Alpine Ridge and therefore only supports DisplayPort 1.2. These will usually have multiple USB controllers. For example, the CalDigit TS3+ has four USB controllers (two FL1100, one ASM1142, and the Thunderbolt USB controller for the downstream Thunderbolt port. The four USB controllers can use up the full bandwidth of Thunderbolt 3..."

"A Thunderbolt 3 dock that uses Titan Ridge usually has only one USB controller and all the USB ports are connected by a hub. The total limit is 9.7 Gbps for USB 3.x "

"A Thunderbolt 4 dock that uses Goshen Ridge has one USB controller but it is not used when connected to a Thunderbolt 4 computer (USB tunnelling is used). All the USB ports are connected by a hub. You can activate the USB controller of the Thunderbolt 4 dock by connecting it after a Thunderbolt 3 device. The total limit is 9.7 Gbps for USB 3.x and 480 Mbps for USB 2.0.

"While Titan Ridge and Goshen Ridge docks don't have multiple USB controllers, they may have USB 10 Gbps hubs that allow all the ports to support 9.7 Gbps USB."
 
Over time hardware and software engineers design to optimize using more RAM because RAM is a great way to compute, and we buy computers to compute with.
What actually happens is that software engineers must meet tight deadlines and have to spend most of their time on flashy new features instead of optimising the performance and memory footprint of new and old features. That's how we get "bloated" software that needs many GHz and many GB of RAM to do what much slower computers with much less RAM could do a couple of decades ago. Add to that the bloated craziness of the modern internet...
 
I never understood this type of argument. Sure, you can get more RAM, but you are stuck with the same slow CPU and same slow RAM interface. If your computational demands grow over time, or the newer OS is optimized for faster graphics or new CPU capabilities, more RAM won't help you. RAM is far from the panacea many PC users make it out to be.
Here's my anecdotal evidence. Back in the day, many people were surprised how fast my 486 was compared to their Pentiums (Pentium as in P54C). That was because I had 32MB of RAM and they had only 8MB.
RAM does not make your computer faster. RAM can only ensure that your computer does not get slower if your working set increases. If your working set does not increase, you don't need more RAM.
Yes. We also wouldn't need more RAM if software were optimised better.
UMA has nothing to do with the topic.
Yeah, not sure why everybody keeps bringing this up.
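
On the working-set point: here's a toy simulation (page counts are arbitrary) that treats RAM as an LRU cache and counts misses, i.e. how often you'd be hitting the swap file:

    # Toy illustration of the working-set argument above: simulate memory as an
    # LRU cache of "pages" and count misses for different RAM sizes.
    from collections import OrderedDict
    import random

    def misses(ram_pages, working_set_pages, accesses=100_000, seed=0):
        rng = random.Random(seed)
        cache = OrderedDict()
        faults = 0
        for _ in range(accesses):
            page = rng.randrange(working_set_pages)
            if page in cache:
                cache.move_to_end(page)
            else:
                faults += 1
                cache[page] = True
                if len(cache) > ram_pages:
                    cache.popitem(last=False)  # evict least recently used
        return faults

    for ram in (4_000, 8_000, 16_000):
        small = misses(ram, working_set_pages=3_000)   # fits even in the smallest RAM
        big = misses(ram, working_set_pages=12_000)    # only fits in the largest
        print(f"RAM={ram:>6} pages: small working set misses={small:>6}, large={big:>6}")

The small working set produces the same miss count whatever the RAM size, while the large one only stops thrashing once RAM exceeds it - which is the whole argument in one table.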
 
What actually happens is that software engineers must meet tight deadlines and have to spend most of their time on flashy new features instead of optimising the performance and memory footprint of new and old features. That's how we get "bloated" software that needs many GHz and many GB of RAM to do what much slower computers with much less RAM could do a couple of decades ago. Add to that the bloated craziness of the modern internet...

Also, what happens is that features in software expand over time, as do data sizes, both because user data gets larger and because larger memory buffers are required.

edit: below is not aimed at the quoted poster above, who gets it - but at plenty of others whining about modern software "bloat".


You might be doing most of the same basic tasks on your machine as before, but graphics are now 4K HDR instead of 256-colour 480p, sound is 24-bit at 96 kHz and/or Dolby surround instead of 22 kHz, your internet connection is 100 megabit or faster instead of dialup and hence needs MUCH larger network buffers, your disk is much faster but optimised for large reads and writes and needs huge memory buffers, etc.
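
To put rough numbers on that, here's a quick back-of-the-envelope sketch (bit depths and layouts are illustrative; real pixel and sample formats vary):

    # Rough comparison of raw, uncompressed buffer sizes then vs. now.
    # The figures mirror the examples above; exact formats differ in practice.
    def frame_bytes(width, height, bits_per_pixel):
        return width * height * bits_per_pixel // 8

    old_frame = frame_bytes(640, 480, 8)     # 256-colour 480p: 8 bits/pixel, indexed
    new_frame = frame_bytes(3840, 2160, 30)  # 4K HDR: 10 bits x 3 channels

    print(f"480p 256-colour frame: {old_frame / 1e6:.2f} MB")
    print(f"4K 10-bit HDR frame:   {new_frame / 1e6:.2f} MB (~{new_frame / old_frame:.0f}x larger)")

    # One second of raw audio samples:
    old_audio = 22_050 * 1 * 1   # 22 kHz, 8-bit, mono
    new_audio = 96_000 * 3 * 2   # 96 kHz, 24-bit (3 bytes), stereo
    print(f"Raw audio per second: {old_audio / 1e3:.0f} KB then vs {new_audio / 1e3:.0f} KB now (~{new_audio / old_audio:.0f}x)")

A single working frame buffer goes from roughly a third of a megabyte to around 31 MB, and that's before any of the intermediate buffers a compositor or video pipeline keeps around.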

Video codecs are now far more efficient in terms of compression ratio, but far heavier on CPU and memory, ditto for audio codecs, etc.

If you were to re-write a modern OS in a low level language and highly optimise it many things would happen:

  • A lot of the OS features would simply not be feasible to implement at all; they'd never make it to the real world.
  • The OS would cost a heap more due to the far higher development effort required.
  • Many applications would have fewer features or maybe not even exist, as the barrier to entry for writing them would be too high. They'd also cost more.
  • There would be a lot more security problems, because memory management at a low level is hard and people suck at it. A lot of what people consider "bloat" is a side effect of abstracting that complexity away to make these problems manageable. Managing it is a case of throwing a little more cheap resource (CPU/RAM) at the problem.
  • Writing direct to hardware at a low level instead of using OS-provided high-level libraries means your apps would simply be unable to take advantage of new hardware as it is released. Use the Apple-provided libraries like Metal and you instantly get the benefit of improved hardware features.

It's all well and good to whine about inefficient code and bloat (which, honestly, isn't the big problem most people think it is), but you need to understand it's the only way the modern internet could ever exist. A modern browser, for example, is a massive, massive code-base, and there's simply no way it would ever have come into being if written in low-level code and highly optimised.

Programmers and programmer time are expensive and scarce. CPU and memory are cheap and get exponentially cheaper over time. This is what we were taught in computer science 30+ years ago, and it's still the exact same trend today. Hardware catches up. Build the new thing that wasn't possible before; don't focus on optimising the software equivalent of the wheel, which already exists and works well enough on a reasonable machine.

It mostly comes down to:
  • Stop being cheap and buy a machine capable of running modern software, if that's the software you want to run. The requirements are what they are - if you think you can do better, try it yourself. It's hard.
  • If you want to run "optimised" (in reality: low feature set and limited capability) old school software from 1995 or whatever - do so and live within the feature set that software has. Because you sure as hell aren't getting the modern feature set for free! Enjoy your 240p crap RealVideo internet!

Seriously the glasses really are rose tinted. Go back and actually try to live for a week with a machine from 1995 or whatever. Almost nothing you want to do with the thing today will work, especially if it involves the internet.
 
I've always wondered if the doubling of RAM gives developers a get-out from doing optimisation. Imagine an app is written and it is decided to target computers with 8 GB of RAM, as that covers the majority of computers sold in the last 3-4 years. After completing the app, it is found that it requires a computer with 13-14 GB to run. A round of optimisation reduces this to requiring 12 GB. Further rounds of optimisation are then required to get this down to 8 GB, and the app can be shipped. A few years later 16 GB is now the standard. If that exact same app was built (with no increase in asset sizes/quality), what would be the motivation to spend developer time optimising from the 13-14 GB requirement down to the 8 GB requirement?

You can apply this to each increase in RAM - increases in "standard" RAM sizes have always been a doubling (my family's first PC came with 4 GB RAM, and every "standard" increase from there has been a doubling).
 
I've always wondered if the doubling of RAM gives developers a get-out from doing optimisation. Imagine an app is written and it is decided to target computers with 8 GB of RAM, as that covers the majority of computers sold in the last 3-4 years. After completing the app, it is found that it requires a computer with 13-14 GB to run. A round of optimisation reduces this to requiring 12 GB. Further rounds of optimisation are then required to get this down to 8 GB, and the app can be shipped. A few years later 16 GB is now the standard. If that exact same app was built (with no increase in asset sizes/quality), what would be the motivation to spend developer time optimising from the 13-14 GB requirement down to the 8 GB requirement?

You can apply this to each increase in RAM - increases in "standard" RAM sizes have always been a doubling (my family's first PC came with 4 GB RAM, and every "standard" increase from there has been a doubling).

Optimising isn't just about shaving RAM.

There are different things to optimise for, RAM is one of the resources, processor time and network/disk throughput/capacity are others.

Generally, shaving RAM requirements can actually hurt CPU performance, IO performance, etc.

RAM doubles over time because it gets cheaper to make. Software evolves over time to make it easier for developers to write programs, make sure their programs are secure and take advantage of new hardware features. Increasing RAM capacities helps make this possible. That's some of it.

But really the big thing causing memory consumption now is simply the increased size of the data we're working with. Be it data you create, or assets within the OS, apps, etc.

e.g. Take a photo on your phone today and in some cases it's larger than the entire hard disk capacity of my first PC from the early 1990s.
 
If you google 'external drive spontaneous disconnecting from mac', you'll get lots of hits with people complaining about this.

I don't think the point was to say this is a you thing, rather that it probably isn't that widespread. A bug as serious as external drives randomly disconnecting would likely get priority treatment if it were widespread. I've used a lot of external storage in many shapes and sizes for years and can't recall suffering a random disconnection even once that wasn't related to drive failure.

On the other hand, I do have my own set of weird issues that I know I'm not alone in suffering, as a quick online search will reveal, yet I almost never meet anyone in person who's experienced them. My best guess is that I have some combo of software I'm running that makes these issues more likely.
 
It only takes a few vocal people with a problem to generate a whole bunch of hits on support forums, but I think if this was a widespread problem Apple would have acted on it.
I don't think the point was to say this is a you thing, rather that it probably isn't that widespread. A bug as serious as external drives randomly disconnecting would likely get priority treatment if it were widespread.
Sure, but there's a middle ground between an issue that just affects a handful of users and one that's widespread (like, say, the butterfly keyboard).

While this may not be widespread, it affects more than just a few internet posters, as evidenced by the fact that you can find support articles about it from drive manufacturers. Companies don't write support articles for just a handful of one-offs. And this is an issue on both Mac and Windows computers, and it goes back to the days of spinning hard drives. For instance:


"Your drive may still eject if you manually put your Mac to sleep even after changing these settings. There is no workaround for this currently, other than to avoid putting your Mac to sleep altogether or safely ejecting your drive before you put the Mac to sleep."

And that's just one example. [That case could be due to a bad interaction between the Mac and the drive rather than just the Mac per se.]

So I still think you are more likely to have a robust system if your primary storage is internal.

Note that I'm referring to reliability for intensive daily operation, not the relative likelihood of catastrophic failure. For the latter, it's best to assume any drive can fail catastrophically, and thus have adequate backups.
 
I’d keep the M4 Pro (upgrading to 1TB of SSD) if I knew that 24GB of RAM would be plenty down the road. Of course, I know that depends on the tasks I perform on it, but the alternative is the regular M4 with 32GB of RAM. I’m not sure which machine will be better down the line, especially if I want to play with local LLMs, where more RAM is better, but on the other hand more GPU cores (16 vs 10) are also beneficial…

Those are my two alternatives for my next Mac: either the M4 Pro with “just” 24GB or the regular M4 with 32GB of RAM.

And speaking of longevity, I have a hunch that the M4 Pro machines will get at least one extra year of software support.
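
For the local-LLM side, the usual rule of thumb is that the weights need roughly (parameter count × bytes per weight at your chosen quantization), plus headroom for the KV cache, macOS and whatever else is running. A rough sizing sketch with illustrative numbers:

    # Back-of-the-envelope RAM estimate for running a local LLM.
    # The ~25% overhead factor is a crude allowance for KV cache and runtime.
    def weights_gb(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params, bits in [(8, 4), (8, 8), (14, 4), (32, 4)]:
        w = weights_gb(params, bits)
        print(f"{params}B model @ {bits}-bit: ~{w:.0f} GB weights, ~{w * 1.25:.0f} GB with overhead")

By that estimate an 8B-class model at 4-bit quantization fits comfortably in either configuration, while 30B-class models start to squeeze 24GB once macOS and my other apps are counted - which is roughly where the trade-off between the Pro’s extra GPU cores and the regular M4’s extra RAM starts to matter.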
 
I’d keep the M4 Pro (upgrading to 1TB of SSD) if I knew that 24GB of RAM would be plenty down the road. Of course, I know that depends on the tasks I perform on it, but the alternative is the regular M4 with 32GB of RAM. I’m not sure which machine will be better down the line, especially if I want to play with local LLMs, where more RAM is better, but on the other hand more GPU cores (16 vs 10) are also beneficial…

Apple is about giving you choices

Sophie's choice
 