
Apple Corps

macrumors 68030
Original poster
Apr 26, 2003
California
Wasn't sure how to best "title" this one :confused:

Just installed an OWC 6G SSD 240GB in my MP 4,1 w/ 16GB memory - easy and all works well so far. The plan is to use the SSD for OS X and applications.

Now I am wondering where the speed gain (from a system perspective) comes from, in that the SSD will launch the OS and apps super fast BUT things slow down whenever the system has to go out to the slower HD to fetch data and load it into memory. My guess is that the SSD provides way faster interaction among the OS / applications / memory - but does that account for all the raves about system performance improvements, or is there something else?

Also - are there any practical limits on how full one should load an SSD before experiencing a noticeable slowdown in response, vis-à-vis an HD at 90% capacity?

Thanks...
 
Something that might help me better understand this topic is for someone to explain how an SSD scratch disk works to speed things up (beyond the obvious speed of an SSD) - especially if one has sufficient memory installed.
 
I asked this same question last week; over 70 people read the thread in one day, and not one person responded. I should have asked about MBP heat or Lion bugs first to get a response!

Anyway, some research I did on various other sites led me to conclude that about 80% is as full as you want to let your SSD get.
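If you want to keep an eye on that threshold, here is a minimal Python sketch using the standard library's shutil.disk_usage. The "/" mount point and the 80% cutoff are just placeholders - point it at whatever volume your SSD is mounted as.

```python
# Minimal sketch: warn when a volume passes ~80% used (the rough
# threshold suggested above). Mount point and cutoff are assumptions,
# not recommendations from any vendor.
import shutil

def check_ssd_usage(mount_point="/", warn_at=0.80):
    usage = shutil.disk_usage(mount_point)
    used_fraction = usage.used / usage.total
    print(f"{mount_point}: {used_fraction:.0%} used "
          f"({usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB)")
    if used_fraction > warn_at:
        print("Over the suggested threshold - time to clean up or move data off the SSD.")

check_ssd_usage("/")
```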
 
To understand SSDs and their limitations, you need to look at how NAND memory is written, rewritten (after it's been used), and read. The important part is really how the SSD controller handles rewriting NAND blocks that have been used before: they may still contain some valid data and need more added, or they may be free (as a result of TRIM or garbage collection) and available to be reused as if they were new. There are issues with the structure of NAND that hurt rewrite performance, and then there are the (often proprietary) ways each drive manages free space and garbage collection. These issues are completely different from the ones that affect mechanical drives (e.g., fragmentation).
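To make the block-rewrite issue concrete, here is a toy Python model (not any real controller's algorithm) of the worst case: NAND is programmed in pages but erased only in whole blocks, so updating one page in a completely full block forces the controller to copy the surviving pages somewhere else and erase the block. The page count is illustrative.

```python
# Toy worst-case model of a NAND block rewrite. Real controllers remap
# pages into spare blocks and defer erases, but when free pages run low
# this is roughly the work a single small update can trigger.
PAGES_PER_BLOCK = 128  # illustrative; varies by NAND generation

def rewrite_one_page(valid_pages_in_block=PAGES_PER_BLOCK):
    """Internal work for one host page-update when the block is full."""
    internal_page_writes = 1 + (valid_pages_in_block - 1)  # new data + copies of still-valid pages
    block_erases = 1                                       # old block erased before reuse
    return internal_page_writes, block_erases

writes, erases = rewrite_one_page()
print(f"Host asked for 1 page write; controller did {writes} page writes "
      f"and {erases} block erase (write amplification ~{writes}x in this worst case).")
```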

Check out AnandTech.com's early SSD reviews for better insight than what I can type here on my iPad.

Edit: I highly recommend this article: http://www.anandtech.com/show/2738/1

The bottom line is that you will benefit most, obviously, the more of your workflow is on solid state storage. You can compromise by splitting storage duties between HDs and SSDs, but it is... a compromise. Get the biggest SSD you can afford and load it up with as much of your workflow as possible. If you can get a drive, or a RAID 0 array of smaller drives, that covers your apps, data, and scratch needs, that's ideal. The slowest SSD is at least an order of magnitude faster than the fastest HD at random reads, which is the metric that matters most in typical desktop workloads.
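If you want to see the random-read gap on your own drives, here is a rough Python micro-benchmark sketch. The file path is a placeholder, and OS caching will flatter the numbers unless the test file is much larger than your RAM.

```python
# Quick-and-dirty random-read benchmark sketch (Unix-only: uses os.pread).
# "testfile" is a placeholder path on the drive you want to measure.
import os, random, time

PATH = "testfile"    # placeholder: a large file on the target drive
READ_SIZE = 4096     # 4 KB, a typical random-read size
NUM_READS = 2000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

start = time.time()
for _ in range(NUM_READS):
    offset = random.randrange(0, size - READ_SIZE)
    os.pread(fd, READ_SIZE, offset)
elapsed = time.time() - start
os.close(fd)

print(f"{NUM_READS} random {READ_SIZE}-byte reads in {elapsed:.2f}s "
      f"({NUM_READS / elapsed:.0f} IOPS)")
```

Run it against a file on the HD and a file on the SSD and compare the IOPS figures.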

You do want to leave some free space on your SSDs for garbage collection to be most effective and to maintain rewrite performance. I think 10% is reasonable - when you get down to 10% free space it's time to clean up or get a bigger SSD. However, it really depends on how much writing/rewriting you're trying to do over that free space. If you've got a 40GB drive with 4GB free and you're constantly juggling files that are multiple GB in size, it's going to impact performance more than if the drive only needs to write a few KB every time you load a new web page. But at any rate, even a full drive will always provide max read performance (assuming it's not trying to write at the same time).
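As a back-of-the-envelope way of seeing why the two scenarios above differ, here is a tiny Python sketch. The "churn" number is just (data rewritten) / (free space) - not a real write-amplification model - but it shows how much harder the garbage collector has to work through the same free space in each case.

```python
# Back-of-the-envelope sketch using the numbers from the post above.
# "Churn" = how many times the rewrites sweep through the free space;
# this is an illustration, not a controller model.
GB = 1024**3

def free_space_churn(free_bytes, bytes_rewritten_per_day):
    return bytes_rewritten_per_day / free_bytes

# 40 GB drive with 4 GB free, juggling a 3 GB file ~5 times a day
print(free_space_churn(4 * GB, 5 * 3 * GB))          # ~3.8 passes per day

# Same drive, ~100 KB written per web page, ~300 pages a day
print(free_space_churn(4 * GB, 300 * 100 * 1024))    # ~0.007 passes per day
```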
 
Last edited: