Third-party SSDs never have TRIM enabled out of the box, correct?

Gotta get me some TRIM

----------



Would love me some Promise gear, but it's so expensive. Are they all in the same ballpark? In search of a 256-500GB SSD with at least TB1, as cheap as possible.


The J2 is only $500. Which isn't bad for an external RAID 0 array.

It's also the size of a deck of cards....very tiny.
 
well, if so, the writing is poor. they speak of 2 SATA III drives. there is no mention that i see of *two* SATA III connections. they do say the TWO thunderbolt ports are used together.

Your mental model of how SATA works is flawed. At the individual device interface level, SATA is a point-to-point link. If there are two SATA devices inside of an enclosure and they are connected/accessible, then there are at least that many SATA cables/connections in there. Four devices present, four SATA paths. They don't elaborate on that because it can't possibly be any different: the number of devices equals the number of active connections.

Those initial paths are aggregated by a SATA controller. One controller can send ATA data/commands to and get data from multiple SATA devices. That data flow is aggregated at the controller and sent back along the PCIe (via Thunderbolt) connection. That's where things funnel down to just one.

If there are two or four SATA 6Gb/s lanes hooked to a SATA controller, you don't necessarily get 12 or 24Gb/s of bandwidth back through the controller. You typically get more than just 6Gb/s, but not necessarily a linear increase. In other systems, if it is just a SATA port multiplier, then that is nothing more than a switch: you still get 6Gb/s, but to multiple destinations one at a time (though it can rapidly switch between them).

Most SATA 6Gb/s devices can't sustain 6Gb/s. SATA is 6Gb/s far more so because at 6Gb/s the link can split/share bandwidth between 2-3 (or more, slower) devices without much throttling. Three 3Gb/s devices can fill a controller that can handle 9Gb/s.
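
To put rough numbers on that sharing, here is a minimal back-of-the-envelope sketch (plain Python; the device rates and controller ceiling are illustrative, not any particular enclosure's specs):

```python
# Per-device SATA links are point-to-point; their traffic aggregates at
# the controller, but the total is capped by the controller's own limit.

def aggregate_gbps(device_rates_gbps, controller_limit_gbps):
    """Effective throughput: sum of device rates, capped by the controller."""
    return min(sum(device_rates_gbps), controller_limit_gbps)

# Three 3Gb/s devices can fill a controller that can handle 9Gb/s:
print(aggregate_gbps([3, 3, 3], 9))   # -> 9

# Two 6Gb/s links do not automatically yield 12Gb/s back through it:
print(aggregate_gbps([6, 6], 9))      # -> 9, not 12
```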


The two TB ports are so this device can be placed on a daisy chain. Another TB device (or a DisplayPort device, which ends the chain) gets placed after it. That has nothing to do with the SATA devices inside the drive enclosure, other than the SATA controller inside will be sharing bandwidth back to the host with the other device(s).

If you are inferring that the two TB ports speed things up, that is flawed. TB v1 is capable of transporting the data from one of these enclosures at 10Gb/s along the TB network. That's 1,000+ MB/s. In other words, it is faster than the enclosure's SATA controller. Using this TB device with another TB device after it (on the second port) probably means you are not losing anything in speed to these drives. So it isn't that the TB ports "speed up" the device. It is far more that they don't necessarily slow it down. The notion that these devices are external to your host computer so you have "lost all the speed" in the connection is not correct. In most contexts, you have lost nothing in bandwidth. That is what the two TB ports are enabling.
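
Extending the same sketch one hop further (again with illustrative numbers) shows why TB v1 isn't the bottleneck here:

```python
# The TB v1 link (~10Gb/s, i.e. 1,000+ MB/s) sits behind the enclosure's
# SATA controller; the slowest stage in the chain sets the speed.

GBPS_TO_MBPS = 1000 / 8   # crude line-rate Gb/s -> MB/s conversion

tb1_link_gbps = 10        # Thunderbolt v1
controller_gbps = 6       # illustrative SATA controller ceiling

effective = min(controller_gbps, tb1_link_gbps) * GBPS_TO_MBPS
print(effective)          # 750.0 MB/s -- the SATA side, not TB, is the limit
```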

"Pry open the company's new DriveStation Mini Thunderbolt SSD and you would find not one, but two 2.5-inch SSDs toting a SATA III 6Gb/s interface tucked away inside.

There are two SATA III 6Gb/s links because there are two drive devices. The implication is that the overall device is more likely to actually sustain 6Gb/s transfers by using both drives.



That's alongside two Thunderbolt ports that, when used together, can provide read/write speeds of around 615MB/s and 760MB/s respectively, according to Buffalo."

What is marginally "poor" is that "used together" refers to the TB port(s) and the internal SATA III 6Gb/s links, not to the TB ports being bonded ("used together"). The "alongside" is indicative that TB is being used with something else (the two drives mentioned earlier).

They aren't trying to diagram how it works... just noting that the parts are being used in conjunction.
 
The J2 is only $500. Which isn't bad for an external RAID 0 array.

It's also the size of a deck of cards....very tiny.

Unfortunately very true. Just hate the idea of spending on top of my already pricey nMP.
 
Hopefully the OWC 2.5" TB On The Go Drive comes in relatively cheap for the bare plastic case.
 
I know some people disagree, but I love LaCie drives. If I were to pay a little more, it would definitely be for a LaCie. I know Promise has a good rep, but I've always had fast speeds and dependability with LaCie. I just hope it's not too insanely priced.

my guess is $1k or more.

it is not dissimilar from an OWC Accelsior (their 960GB version is 2x480GB RAIDed) + a TB2 enclosure.

whether that qualifies as "not too insanely priced" is up to the individual.
 
Hmm guys, now you've made me nervous with all this SSD degradation talk here... I upgraded the nMP to a 512GB SSD just in order to use it as scratch for my After Effects and PS. I don't have any other use for an internal SSD this big.

Currently I have my scratch together with the footage on a Pegasus R6. I know it's not recommended to have both things on one volume, but I thought the speeds of the R6 are really fast enough.

Afx scratch is up to 100GB. I just don't know how much of it will be written on a daily basis. Plus I'm now really confused about how badly this will degrade the SSD.

How big will the impact be over the course of 3 years? I always have Time Machine on anyway... In case the SSD fails, it would be a case for AppleCare...
 
Plus: Is this a general issue with any SSD? Or are you just saying that it's not good to have an SSD perform many different tasks simultaneously, like OS and then scratch?

If it's a general issue, what benefit would an external SSD have, if it also fails after a while?
 
Plus: Is this a general issue with any SSD? Or are you just saying that it's not good to have an SSD perform many different tasks simultaneously, like OS and then scratch?

If it's a general issue, what benefit would an external SSD have, if it also fails after a while?

External scratch is used specifically for rewrites of WAVs and videos. Masters and sources are kept on the internal. Hence "scratch."
If that fails, then no biggie. If your internal goes, then that sucks.

If there is any noticeable degradation, I wouldn't want it to be on my main internal drive, where my files and OS are sitting.
 
Hmm guys, now you've made me nervous with all this SSD degradation talk here... I upgraded the nMP to a 512GB SSD just in order to use it as scratch for my After Effects and PS. I don't have any other use for an internal SSD this big.

Currently I have my scratch together with the footage on a Pegasus R6. I know it's not recommended to have both things on one volume, but I thought the speeds of the R6 are really fast enough.

Afx scratch is up to 100GB. I just don't know how much of it will be written on a daily basis. Plus I'm now really confused about how badly this will degrade the SSD.

How big will the impact be over the course of 3 years? I always have Time Machine on anyway... In case the SSD fails, it would be a case for AppleCare...

Plus: Is this a general issue with any SSD? Or are you just saying that it's not good to have an SSD perform many different tasks simultaneously, like OS and then scratch?

If it's a general issue, what benefit would an external SSD have, if it also fails after a while?

Please review post 10 and stop worrying. ;)
 
Hmm guys, now you've made me nervous with all this SSD degradation talk here...
.....

Afx scratch is up to 100GB. I just don't know how much of it will be written on a daily basis. Plus I'm now really confused about how badly this will degrade the SSD.

If you don't know how much scratch you are using, then it is going to be very easy to be "confused". You can go measure how much for some individual projects/files and do some approximate math to get to a daily number. You don't need numbers down to the last byte, but you need ballpark ranges. You may have 100GB allocated but only 5GB used, if you have sufficient RAM to get most project/work done in RAM.
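
One crude way to get that ballpark number (a minimal sketch; the scratch path below is a made-up example, point it at wherever your app keeps its scratch files):

```python
import os

def dir_size_gb(path):
    """Total the bytes of all files under `path`, in GB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # scratch churns; a file may vanish mid-walk
    return total / 1e9

# Run after a typical session; multiply by sessions/day for a daily figure.
print(f"{dir_size_gb('/Volumes/Scratch/AfxCache'):.1f} GB")
```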

OCZ has some enterprise SSDs that are meant for contexts where the SSD will be "used and abused": http://www.anandtech.com/show/7581/ocz-releases-intrepid-3000-first-inhouse-enterprise-ssd

The write endurance for those ranges from 184TB to 7,000TB.

Your scratch is 100GB. If you use half of that a day, that's 50GB. Even at the low end of that range, 184,000/50 is 3,680 days' worth (or about 10 years). Those drives are probably overkill for that workload.

Upper-end mainstream drives are closer to 90-100TB endurance ranges. 100,000/50 is 2,000 days (or about 5 years, typically around the length of those drives' warranties).
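
The same arithmetic as a snippet you can rerun with your own numbers (endurance figures as quoted above; the 50GB/day is the guess from this example):

```python
def lifetime_years(endurance_tb, daily_gb):
    """Days of life = rated TB written / daily GB written."""
    return (endurance_tb * 1000) / daily_gb / 365

print(lifetime_years(184, 50))   # ~10 years at the enterprise low end
print(lifetime_years(100, 50))   # ~5.5 years, upper-end mainstream
```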


Under normal (what the drive was designed for) usage, drives typically don't wear out. Larger drives typically last longer because there are more flash dies to spread the wear around. Smaller capacities typically have shorter lifetimes if you throw heavy loads at them. (E.g., it is good to buy enough capacity so that you do NOT fill the drive to the brim; keeping 10-20% empty helps both HDDs and SSDs in long-term health.)


How big will the impact be over the course of 3 years? I always have Time Machine on anyway...

The machine being on is completely immaterial to wear. (You may get some corruption on power outages, or other non-wear failures, but wear isn't an issue with just being on.) It is how much gets written that matters.


In case the SSD fails, it would be a case for AppleCare...

As a sole justification for AppleCare it is rather weak. As mentioned before, simply spreading the write workload out onto a second drive increases lifetime without much, if any, drama. If you are going to "use and abuse" the whole Mac Pro with high workloads, then AppleCare is a factor, because not only the SSD but the rest of the system will get a healthy workout.
 
Thanks a lot for the thorough information! :)
I think under my workload it's pretty safe to have the Afx scratch disk on the internal drive. Especially since I won't fill it up to the max.
 
Please review post 10 and stop worrying. ;)

TRIM is not effective on most scratch/swap file usage. If you open a file and rewrite its contents multiple times (which is exactly what happens with scratch, since it is the interim/incremental state of data being worked on), then the file (container) holding the scratch data isn't being deleted. It is being rewritten. TRIM isn't going to do diddly squat for that. Nothing. It doesn't manage any decrease in degradation at all.

On a dedicated scratch drive it is marginally useful, since even when you delete the files, the file system is relatively quickly going to go right back to writing to the same set of logical addresses when (if) you nuke the scratch file to create another one. Most often the new scratch files are going to ask the file system for the same kind of block groupings the old files had. If the file system isn't clueless, it will tend to grab from the same "free pool" it put the old cluster of blocks from the old files into. Hence those low-level blocks will get recycled and rewritten to. Again, once in a rewrite context, TRIM isn't doing much more than what the SSD does outside the scope of the "clues" TRIM gives it.

As for Samsung endurance being a myth: it is a matter of workload. Throw a 100GB/day workload at them and they will fail earlier. They aren't magical. Neither is TRIM.
 
TRIM is not effective on most scratch/swap file usage. If you open a file and rewrite its contents multiple times (which is exactly what happens with scratch, since it is the interim/incremental state of data being worked on), then the file (container) holding the scratch data isn't being deleted. It is being rewritten. TRIM isn't going to do diddly squat for that. Nothing. It doesn't manage any decrease in degradation at all.

On a dedicated scratch drive it is marginally useful, since even when you delete the files, the file system is relatively quickly going to go right back to writing to the same set of logical addresses when (if) you nuke the scratch file to create another one. Most often the new scratch files are going to ask the file system for the same kind of block groupings the old files had. If the file system isn't clueless, it will tend to grab from the same "free pool" it put the old cluster of blocks from the old files into. Hence those low-level blocks will get recycled and rewritten to. Again, once in a rewrite context, TRIM isn't doing much more than what the SSD does outside the scope of the "clues" TRIM gives it.

As for Samsung endurance being a myth: it is a matter of workload. Throw a 100GB/day workload at them and they will fail earlier. They aren't magical. Neither is TRIM.

Photoshop (for example) creates large temporary scratch files that it will delete when it's done. The delete triggers a TRIM of the drive, thus freeing those blocks so that performance degradation doesn't occur. So you're wrong in suggesting that TRIM is doing "diddly squat" on a scratch drive.

Excessive writes will kill a drive, of course, but as evidenced by endurance testing, a 10GB/day workload won't kill a Samsung drive for about 200 years. If your workload is 100GB/day, then it would be dead after 20 years. Personally, I wouldn't worry about the drive wearing out. Honestly, I can't believe we're having this debate... this is such an old myth, I can't believe anyone still clings to it. :confused:
 
Photoshop (for example) creates large temporary scratch files that it will delete when it's done.

And when does the bulk of the writing/rewriting happen in a scratch file: before it is done or after it is done? If TRIM has no impact before it is done, then in that context it isn't doing squat.



Excessive writes will kill a drive, of course, but as evidenced by endurance testing, a 10GB/day workload won't kill a Samsung drive for about 200 years. If your workload is 100GB/day, then it would be dead after 20 years. Personally, I wouldn't worry about the drive wearing out. Honestly, I can't believe we're having this debate... this is such an old myth, I can't believe anyone still clings to it. :confused:

200 years? TRIM magically cures all ills... It isn't even a debate. The perpetrators of the myths are clear.

SSDs are not fragile, but you do need to take into account the workload you will be throwing at them (as with any other drive). Different drives are meant for different workloads.
 
There are two aspects to SSD performance and life span that are getting confused here...

First, there's performance degradation which, in the absence of TRIM, is caused by the drive not being aware of which NAND blocks it can reuse, so it ends up filling the drive unnecessarily to the point where every write incurs a time-consuming read/modify/write penalty and garbage collection becomes bogged down. TRIM prevents this kind of degradation by letting the drive controller know which NAND blocks are freed by a file deletion, so it doesn't need to unnecessarily preserve that data.

Next, there's the life span of the NAND itself, which will degrade over time with heavy writes. TRIM does little to help this (that's not its purpose). Modern drives are conservatively rated for 1,000 write/erase cycles, but endurance testing has proven that they last much longer than this (typically up to 3,000 write/erase cycles).

And when does the bulk of the writing/rewriting happen in a scratch file: before it is done or after it is done? If TRIM has no impact before it is done, then in that context it isn't doing squat.

If the app (such as Photoshop) is rewriting over the same temp file time and time again during a session, then it's using the same NAND blocks every time and no performance degradation will occur. If it continues to create more and more temp files, then it will consume more and more NAND, of course. If the SSD has enough capacity for this, no problem. If not, you will see "Scratch Drive Full" errors just like you would with a mechanical drive, and you buy a bigger drive. At any rate, with the SSD, once it's done with the temp files and deletes them, OS X will issue a TRIM. No one knows exactly how OS X implements TRIM, but best practices would seem to indicate that it's done during an idle period, as it can tie up the drive for a short time. At that point, any NAND used by the scratch temp files will be freed up so the drive can use it for other things (or your next work session).

200 years? TRIM magically cures all ills... It isn't even a debate. The perpetrators of the myths are clear.

SSDs are not fragile, but you do need to take into account the workload you will be throwing at them (as with any other drive). Different drives are meant for different workloads.

As I said above, TRIM doesn't do much for drive life span - so it's no magical cure-all.

And yes, 200 years is correct. The Samsung 840 SSD tested sustained over 700TiB of writes before it reallocated a single sector, and 888TiB written before it died. So for someone who writes 10GB/day, that's around 200 years; 20 years for someone who writes 100GB/day. Even if the drive had only lasted as long as its rather conservative 1,000 write/erase cycles, that's still a life span of 70 and 7 years respectively.
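
For what it's worth, that arithmetic checks out; a quick sketch (the TiB-to-GB conversion is the only thing added):

```python
TIB_TO_GB = 1024**4 / 1e9   # ~1099.5 GB per TiB

def years_at(tib_written, daily_gb):
    return tib_written * TIB_TO_GB / daily_gb / 365

print(years_at(888, 10))    # ~267 years at 10GB/day -- "around 200" is conservative
print(years_at(888, 100))   # ~27 years at 100GB/day
```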
 
You cannot rewrite a NAND page; it can only be written to once.

If the app (such as Photoshop) is rewriting over the same temp file time and time again during a session, then it's using the same NAND blocks every time and no performance degradation will occur.

You cannot reuse a NAND page by writing new data into it - the page must be in the "erased" state for it to be written.

You cannot erase a NAND page (typically 4 KiB) - you can only erase NAND blocks (typically 256 KiB or more).

The idea that rewriting the same file will rewrite the same sections of NAND just isn't so - the drive will be writing pages into erased blocks, constantly churning through the unallocated NAND.

A good description of the underlying technology is at http://en.wikipedia.org/wiki/Write_amplification - including a description of how TRIM will extend the life of the drive.
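
A toy model of those page/block mechanics (nothing here reflects any real drive's firmware, and it skips the live-page copying that real garbage collection must do; sizes follow the typical figures above):

```python
# Pages can only be written once per erase, and erases happen a whole
# block at a time -- so "rewriting a file" never rewrites pages in place.

PAGES_PER_BLOCK = 64  # e.g. a 256 KiB block of 4 KiB pages

class ToyFlash:
    def __init__(self, blocks):
        self.free_pages = blocks * PAGES_PER_BLOCK
        self.stale_pages = 0   # written once, now obsolete
        self.erases = 0        # proxy for wear

    def rewrite(self, pages):
        """Overwrite `pages` worth of logical data."""
        self.stale_pages += pages        # old copies become garbage
        while self.free_pages < pages:   # out of erased pages? collect
            self._erase_one_block()
        self.free_pages -= pages         # new copies land in erased pages

    def _erase_one_block(self):
        reclaimed = min(self.stale_pages, PAGES_PER_BLOCK)
        self.stale_pages -= reclaimed
        self.free_pages += reclaimed
        self.erases += 1                 # each erase wears the NAND

flash = ToyFlash(blocks=16)
for _ in range(100):       # rewrite the same 256-page "file" 100 times
    flash.rewrite(256)
print(flash.erases)        # hundreds of erases, despite "the same file"
```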
 
Unfortunately, that's not how SSDs work at all. When you "re-write a block", you are most likely going to be using a different cell due to the wear-leveling mechanisms of the SSD device itself - totally transparent to the operating system. TRIM was implemented for situations like file deletion, where a catalog (directory) entry is marked as deleted, consequently causing the (x) number of blocks that were used by the file to be marked as freed by the SSD device. Without this mechanism, the device has no idea the blocks (cells) can be re-used.

You cannot reuse a NAND page by writing new data into it - the page must be in the "erased" state for it to be written.

You cannot erase a NAND page (typically 4 KiB) - you can only erase NAND blocks (typically 256 KiB or more).

The idea that rewriting the same file will rewrite the same sections of NAND just isn't so - the drive will be writing pages into erased blocks, constantly churning through the unallocated NAND.

A good description of the underlying technology is at http://en.wikipedia.org/wiki/Write_amplification - including a description of how TRIM will extend the life of the drive.

Yeah, I understand all this... I suppose I oversimplified things, if not misstated them, with respect to what's going on with a scratch volume on an SSD. However, I still think the moral of this story is that using an SSD as a scratch volume is not going to either degrade the performance of your SSD (unless it's close to full) OR reduce its life span in any significant way. In fact, an SSD is the ideal scratch volume due to the random I/O nature of scratch, which is the forte of SSDs.
 
So to get things a bit back on track, I'm curious which is the better path for a scratch disk (Mari, PS, AE, etc.).

I am in the fortunate situation that my setup will be a nMP with a 1TB internal SSD.

And a TB2 connection to a Promise Pegasus2 R4 diskless, which I plan to populate with 4 x 2TB (8TB) Seagate 7200rpm HDDs.

I figure I need somewhere between 300-500GB for scratch.

The Promise obviously has more space, and will allow me to swap out drives as/if they fail with the RAID 5 config. Not sure if the Promise also spreads the wear and tear across 4 read/write heads vs the SSD.

Discuss :)
 
So to get things a bit back on track, I'm curious which is the better path for a scratch disk (Mari, PS, AE, etc.).

I am in the fortunate situation that my setup will be a nMP with a 1TB internal SSD.

And a TB2 connection to a Promise Pegasus2 R4 diskless, which I plan to populate with 4 x 2TB (8TB) Seagate 7200rpm HDDs.

I figure I need somewhere between 300-500GB for scratch.

The Promise obviously has more space, and will allow me to swap out drives as/if they fail with the RAID 5 config. Not sure if the Promise also spreads the wear and tear across 4 read/write heads vs the SSD.

Discuss :)

My rather predictable response is to use the SSD, but you could just try it both ways and decide then.
 
And a TB2 connection to a Promise Pegasus2 R4 diskless, which I plan to populate with 4 x 2TB (8TB) Seagate 7200rpm HDDs.

Be sure to get enterprise drives approved by Promise - desktop drives are a bad idea in RAID arrays.

I figure I need somewhere between 300-500GB for scratch.

The Promise obviously has more space, and will allow me to swap out drives as/if they fail with the RAID 5 config.

You realize that RAID-5 will be painfully slow compared to RAID-0, right? And they're scratch drives - why would you be concerned about long-term reliability with a scratch drive?

The manual for the Promise lets you specify the redundancy per volume, not per array. A good solution would be to create a 500 GB RAID-0 volume, and then create the 7500 GB RAID-5 volume on the same disks. (If it permits that.)


Not sure if the Promise also spreads the wear and tear across 4 read/write heads vs the SSD.

There's no "wear and tear" on the heads comparable to the erase cycle problem with SSDs. A spinning disk doesn't have a given number of write cycles.
_________

If the Promise doesn't let you put RAID-0 volumes and RAID-5 volumes on the same disk array, consider getting three 3TB or 4TB drives in RAID-5, and a JBOD SSD for scratch. (And the three-disk RAID-5 would probably have better write performance than a four-drive volume.)
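
For the capacity side, the usual formulas make the trade-off concrete (a sketch using the drive counts/sizes from this thread; note the RAID-5 volume's usable space is less than the raw space it sits on):

```python
def raid0_usable(n_drives, tb_each):
    return n_drives * tb_each            # striping keeps all raw space

def raid5_usable(n_drives, tb_each):
    return (n_drives - 1) * tb_each      # one drive's worth goes to parity

# 4 x 2TB, carving 0.125TB per drive off for a RAID-0 scratch volume:
print(raid0_usable(4, 0.125))   # 0.5 TB of scratch
print(raid5_usable(4, 1.875))   # 5.625 TB usable from the remaining 7.5 TB raw

# The three-drive alternative:
print(raid5_usable(3, 4))       # 8 TB usable from 3 x 4TB
```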
 
Be sure to get enterprise drives approved by Promise - desktop drives are a bad idea in RAID arrays.



You realize that RAID-5 will be painfully slow compared to RAID-0, right? And they're scratch drives - why would you be concerned about long-term reliability with a scratch drive?

The manual for the Promise lets you specify the redundancy per volume, not per array. A good solution would be to create a 500 GB RAID-0 volume, and then create the 7500 GB RAID-5 volume on the same disks. (If it permits that.)




There's no "wear and tear" on the heads comparable to the erase cycle problem with SSDs. A spinning disk doesn't have a given number of write cycles.
_________

If the Promise doesn't let you put RAID-0 volumes and RAID-5 volumes on the same disk array, consider getting three 3TB or 4TB drives in RAID-5, and a JBOD SSD for scratch. (And the three-disk RAID-5 would probably have better write performance than a four-drive volume.)

The 7.5TB RAID-5 and 0.5TB RAID-0 is a cool idea, if it's possible. I've been looking at the Promise drive compatibility chart; the most recent one I have found is at least 6 months old... it doesn't mention OS X 10.9, just 10.8.4.

As for RAID 5 being painfully slow compared to RAID 0, perhaps we have different pain thresholds :) I don't plan to use the 8TB all for scratch. A good part of it will be media library and projects (hence RAID 5)... and I figure even RAID 5 will be 10x the read/write speed I have now.
 