
NoManIsland

macrumors regular
Original poster
Feb 17, 2010
I can only assume this has been done to death, but I've found nothing but contradiction in my travels, so I thought I'd put it out there: what are the real world advantages to having two SSDs in RAID 0?

Specifically, what would my practical gains be were I to choose to use two 120GB OWC SSDs in software RAID 0 as compared to buying a single 240GB OWC SSD? From HDDs I would have expected double the speed, but I have heard that because of the heavy parallelism within SSDs, the result isn't anywhere near that.

Anyone with first-hand experience or useful links for this?
 

I've just done the same thing, though my understanding is that the standard Mercury Pro SSDs don't work so well in RAID. I just placed an order for two 100GB Mercury Pro REs; every avenue I checked led me to believe the normal Mercury Pros don't perform as well in RAID.

I can't vouch for performance yet as they are still being shipped, but apparently it's a lot faster... I look forward to seeing the results. I found macperformanceguide.com to be a really GREAT help, though do read around elsewhere as well, since he does seem to be affiliated with a certain vendor.
 
I can only assume this has been done to death, but I've found nothing but contradiction in my travels, so I thought I'd put it out there: what are the real world advantages to having two SSDs in RAID 0?

Real world being what? Scratch disc? Boot disc?

Striping SSDs will give you roughly n * the sequential speed of a single drive, with n being the number of drives you stripe.

Random speeds don't scale that well, though. I've seen performance decreases compared to a single drive, as well as minimal increases of up to 10%.

To sum this up, striping the drives is great for discs that can benefit from high sequential speeds (like scratch), not so much for boot and apps.

As for the OWC RE question: the RE editions are exactly the same as the other drives, only with a little more capacity kept spare for wear levelling.
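The scaling rule of thumb above can be sketched as a toy model. The per-drive figures here (~250MB/s sequential, ~70MB/s random) are assumptions in line with numbers quoted later in the thread, not benchmarks of any specific drive:

```python
def stripe_throughput(n_drives, seq_single, rand_single, rand_gain=1.1):
    """Rule-of-thumb estimate for a RAID 0 stripe set (MB/s).

    Sequential throughput scales ~linearly with drive count;
    random access barely scales (here capped at ~10% gain).
    """
    sequential = n_drives * seq_single
    random = rand_single * min(rand_gain, n_drives)
    return sequential, random

# Two SSDs, assumed ~250MB/s sequential / ~70MB/s random each:
seq, rnd = stripe_throughput(2, seq_single=250, rand_single=70)
print(seq, rnd)  # ~500 MB/s sequential, only ~77 MB/s random
```

So for large sequential transfers (scratch) the pair roughly doubles throughput, while for random access (boot/apps) it barely moves.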
 

Good info - do you know if you can get away with the normal Mercury Pros? Is the RE overkill? I was led to believe it doesn't really work as well.

Mine is going to be used for scratch only so hopefully the difference will be justified.
 
Personally, I'd go with the non RE editions, due to the higher capacity.

Wait for nanofrog's input on this issue before you buy, though. Just to be sure. ;)
 
You gain a little speed with a pair of 120GBs in RAID 0, but not as much as double.

What happens in a RAID 0 is that the access speeds don't get twice as fast, so all the little small reads and writes don't double in speed.

But there is another factor: 2 x 249.99 is 499.98 for a pair of 120GBs, while a single 240GB costs 529.99.

If you have long pockets, a pair of 240GBs is 1059.98 and a 480GB is 1579.99. That is a big saving, so these count as factors to consider.
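The cost arithmetic above spelled out as price per GB (prices in USD as quoted in the post):

```python
# Price per GB for each option mentioned above.
options = {
    "2 x 120GB (striped)": (2 * 249.99, 240),
    "1 x 240GB":           (529.99, 240),
    "2 x 240GB (striped)": (2 * 529.99, 480),
    "1 x 480GB":           (1579.99, 480),
}
for name, (price, gb) in options.items():
    print(f"{name}: ${price:.2f} total, ${price / gb:.2f}/GB")
# The striped pair comes out cheaper per GB at both capacities,
# dramatically so at the 480GB level.
```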
 
I have both RE and non-RE from OWC, and in my testing there is no difference in performance. There might be some issues down the road with longevity, but I think even then it won't matter.

Now, this is using software-based RAID 0 only!! And I use mine for scratch setups and boot.

I will never do RAID 0 for boot; big waste IMHO. But for scratch I gained a small %, and my reason for using two for scratch was that if one dies, at least one will still be there, so my work is not cramped.
 

I was looking at two pairs of SSDs, each pair in RAID 0. One was going to be OS/Apps, to help Logic access its resources quickly, and the other pair was to hold my MIDI Library (the Vienna Symphonic Library) to allow it to be streamed from disk really effectively. I've found that on conventional disks, I get hiccups and overloads with the number of instruments I use at a time (a full orchestra with many expressions of each instrument and multiple reverb etc effects). I don't know whether these uses would fall under sequential or random reads/writes.
 
Personally, I'd go with the non RE editions, due to the higher capacity.

Wait for nanofrog's input on this issue before you buy, though. Just to be sure. ;)

That's a good point: the RE editions are touted as being superior for RAIDs, so if I am using OWC SSDs in this sort of arrangement, what are the actual advantages of the RE? I know they have more over-provisioning, so I'm assuming that this gives better performance in a RAID? Otherwise it doesn't seem to make sense to pay more for less capacity :confused: I'm not sure I understand what improved wear-leveling translates to.

From what I'm hearing from you guys, it would be a waste to use a RAID 0 for a boot drive, but would I see improvements for my MIDI libraries?
 
Striping SSDs will give you roughly n * the sequential speed of a single drive, with n being the number of drives you stripe.

Random speeds don't scale that well, though. I've seen performance decreases over a single drive, as well as minimal increases up to 10%.

To sum this up, striping the drives is great for discs that can benefit from high sequential speeds (like scratch), not so much for boot and apps.
Exactly.

The only instance where using a stripe set for an OS/applications location is valid is if you can get the desired capacity for less money and have the additional SATA port/s to spare.

As for the OWC RE question: the RE editions are exactly the same as the other drives, only with a little more capacity kept spare for wear leveling.
This is the primary difference from what I've seen, but I think the controller used is a bit different (same family and maker, but not the same P/N). The firmware will also be a bit different.

Now whether or not this will make a difference in RAID (stability), I'm not sure (we need a guinea pig or two in order to test out both versions on a RAID card, as both do work on the ICH).

but there is another factor 2 x 249.99 is 499.98 cost for a pair of 120gbs a 240gb cost 529.99.
This is the one case where a stripe set can make sense for use as an OS/applications disk IMO.

I have both RE and non-RE from OWC, and in my testing there is no difference in performance. There might be some issues down the road with longevity, but I think even then it won't matter.
Are you willing to strap both to your RAID card, make stripe sets out of them, and do some guinea pig work? :eek: :D

I'm not sure I understand what improved wear-leveling translates to.
There are more unused cells to take over for bad ones, which allows the drive to last longer before requiring replacement (BTW, this is only valid for writes).

An OS/applications disk is primarily read, not written (writes occur when updating/loading new applications and the OS).
 
Okay, so I'm going to use a single, non-RE OWC SSD for my OS/Apps, but is there value to using 2x120GB OWC SSDs for my MIDI library over using 1x240GB? Like the OS/App disk, the MIDI libraries are going to be seeing MANY reads, but very few writes, so does a non-RE RAID 0 make sense, or is RAID 0 a waste in this usage? I don't know how big the individual samples are, but I would guess they are relatively small, if that makes a difference, but Logic will be pulling thousands at a time.
 
The MIDI files will need random access performance, just as an OS/applications disk does (if you've sufficient capacity, you can place them on the same disk). Otherwise, get a second disk and split them (MIDI on one disk, OS/applications on the other).
 
Are you willing to strap both to your RAID card, make stripe sets out of them, and do some guinea pig work? :eek: :D



I did pop them in my RAID setup when I got them, and on the ICH, and there was not much difference if I remember; close enough that it did not make it worth noting ;)
 
I did pop them in my RAID setup when I got them, and on the ICH, and there was not much difference if I remember; close enough that it did not make it worth noting ;)
I meant in terms of stability, not performance (there's only 256MB of cache on the 1222, which won't make a notable difference in performance at all).
 
The only instance where using a stripe set for an OS/applications location is valid, is if you can get the desired capacity for less money and have the additional SATA port/s to spare.

True, but in the case of the Mac Pro, I'd rather spend a few more bucks on a single drive in order to avoid being limited by the ICH's bandwidth.

The other option would be a software RAID 0 for the boot drive (retaining the ability to sleep the machine), and the scratch discs on an additional controller (either hardware or software). That leaves about 160MB/s for mechanical drives on the ICH.
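A quick sanity check of the numbers in this post. The ~660MB/s ICH ceiling is an assumed figure chosen to be consistent with the ~160MB/s leftover mentioned, along with ~250MB/s per SSD:

```python
ich_limit = 660       # MB/s, assumed total usable ICH bandwidth
ssd_pair = 2 * 250    # two SSDs at ~250MB/s sequential each
leftover = ich_limit - ssd_pair
print(leftover)       # 160 MB/s left over for mechanical drives
```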
 
I meant in terms of stability, not performance (there's only 256MB of cache on the 1222, which won't make a notable difference in performance at all).

Yeah, my RAID has my regular spinning HDDs on it, so no long-term ideas :) On stability, short term they seemed to work :) But then again, I just ran a few numbers and moved on :)
 
Yeah, my RAID has my regular spinning HDDs on it, so no long-term ideas :) On stability, short term they seemed to work :) But then again, I just ran a few numbers and moved on :)
Hint: you need a bigger card (more ports) for guinea pig operations err.... future growth and performance. :D :p

An ARC-1880ix24 should do the trick. ;)
 
Went with 2 RE's in the end...just thought why not.

Can't wait. So there will be the 2x100GB REs for boot/apps/scratch in my optical drive bays, and a 12TB Hitachi Deskstar RAID 0 array for data.

Sweet.
 
Can't wait. So there will be the 2x100GB REs for boot/apps/scratch in my optical drive bays, and a 12TB Hitachi Deskstar RAID 0 array for data.

Hopefully you have a nearline backup. That is a huge amount of data to put at risk in a RAID 0 array, especially with consumer-grade drives.
 

For PS and LR, a scratch/cache on the same drive as your boot is not as good as a dedicated one :)
Not sure about audio or video, though.
 
Hopefully you have a nearline backup. That is a huge amount of data to put at risk in a RAID 0 array, especially with consumer-grade drives.

Thanks, I do. I have a system that I use that works with Hazel:
http://www.noodlesoft.com/hazel.php

When I save a file, it sorts and saves a copy to 2 other external locations.

An eSATA RAID array, and also to one of these:
http://www.storagedepot.co.uk/External-Hard-Drives/Hard-Drive-Docks/sc883/p877.aspx

The dock saves to 1TB bare SATA drives that I rotate offsite daily.

It's a really good system and works well for me.
 
For PS and LR, a scratch/cache on the same drive as your boot is not as good as a dedicated one :)
Not sure about audio or video, though.

Oh. I had understood that boot/apps + scratch on a RAID 0 was best. Can you please tell me why a dedicated drive is better?
 
Oh. I had understood that boot/apps + scratch on a RAID 0 was best. Can you please tell me why a dedicated drive is better?
Separation gets around the bottleneck that occurs when the application and scratch are running simultaneously (the disk is trying to read, say, PS, and write the scratch data at the same time, so the disk's available bandwidth is shared between them). A stripe set doesn't fix this, as random access performance (what an OS/applications disk needs) doesn't scale the way sequential throughput does (in fact, there's very little gain between one disk and n disks in the case of random access). Stripe sets scale to ~n disks * the performance of a single disk in the set for sequential access only.

Thus better performance is possible, particularly for scratch, when it has its very own location that's not shared with anything else. ;)

Another alternative for scratch is a single, inexpensive SSD. It's a bit faster at sequential access than a pair of 2 or 3TB mechanical disks (even more of a difference with smaller-capacity disks), based on the fact that most SSDs can do ~250MB/s for sustained/sequential throughputs and ~70MB/s for random access.

The same goes for the OS/applications disk, as SSDs have the best random access performance of any drive technology (a mechanical disk is doing well to reach ~40MB/s in this area); it's just using the advantage of its random access performance for this particular usage instead.
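The sharing argument above can be put in rough numbers. A toy model with an assumed single-disk bandwidth; splitting one disk between application reads and scratch writes at best halves what each workload sees:

```python
disk_bw = 250.0  # MB/s, assumed single-disk sequential bandwidth

# Shared: app reads and scratch writes contend for the same disk,
# so (at best) each gets roughly half the bandwidth.
shared_each = disk_bw / 2

# Dedicated: scratch gets a whole disk to itself.
dedicated_scratch = disk_bw

print(shared_each, dedicated_scratch)  # 125.0 vs 250.0
```

In practice the shared case is usually worse than an even split, since mixing read and write streams also costs seeks/scheduling overhead, so this is an optimistic bound.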
 