
rdsii64 (original poster):
Now that my new (to me) Mac is in my possession, I am sitting here working out which upgrades will actually do me some good and which I can do without. An SSD boot drive is on the must-have list. I was just about to decide on a 480 GB drive from OWC when I figured: why not a pair of 240s in RAID 0? If I'm willing to run software RAID, the price difference isn't enough to be bothered about. Now for the million-dollar question (as you can tell, I don't know squat about RAID):
when choosing between software RAID and a physical controller card, what should I be concerned with, and why?
 

Do you feel you need your boot drive set up as RAID for your intended usage? I'm not sure the performance difference over a single drive would be noticeable for boot time or app-launch time. Have you considered a single, smaller SSD (would you need more than 60-120 GB for the OS and apps?), another dedicated SSD as a scratch disk if your workload needs one, and yet another drive for working files (an OWC Accelsior or Sonnet Tempo Pro, perhaps)?
 

Software RAID 0 works great. It's not taxing on the CPU, it's easy to set up in OS X's Disk Utility, and it will double your sustained transfer rate. The problem, as others above me have pointed out, is that it won't really impact your day-to-day performance. OS and app I/O is largely random in nature, and most benchmarks show little to no improvement with RAID 0 under that kind of workload. Large file transfers are a different story and can definitely benefit.
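If you'd rather script it than click through Disk Utility, the same stripe can be created from Terminal. This is just a sketch: "FastStripe" is whatever name you like, and the disk identifiers disk2/disk3 are hypothetical, so check diskutil list for yours first:

    diskutil appleRAID create stripe FastStripe JHFS+ disk2 disk3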

At any rate, if you have an extra vacant drive bay and SATA connection and maintain a current backup, there's no reason not to run RAID 0. The only potential downside is not being able to easily move your drive to another machine or an external enclosure, something you can do with a single drive but not with a RAID array. Another thing to consider is that you can't partition a RAID 0 array to run Boot Camp/Windows; you would need a separate drive for Windows if you go the RAID 0 route for OS X.

Hardware RAID is only really required for RAID 5 or 6, where parity calculations can be intensive and are best offloaded from the CPU to maintain performance. The built-in cache on a hardware RAID card can also help RAID 0 performance, but this is less important with SSDs than it was with hard drives.
 
At any rate, if you have an extra vacant drive bay and SATA connection and maintain a current backup, there's no reason not to run RAID 0.

Since this is a boot volume, I think it's also important to at least consider the effects of a disk failure. For example, say a 1 Gb Ethernet NAS is used for backups: a full restore of (worst case) 480 GB of data could take ~12 hours. Since this is a boot volume, that means ~12 hours of downtime; if there are paying customers and a deadline involved, it's worth considering, IMO.
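For anyone who wants to sanity-check that figure, the arithmetic is simple. A quick sketch, where the throughput numbers are assumptions rather than measurements (gigabit Ethernet tops out near ~110 MB/s in practice, while a budget NAS of this era may only sustain ~11 MB/s):

    # restore-time estimate for a full volume restore over the network
    DATA_GB = 480  # worst case: the entire boot volume

    for label, mb_per_s in [("line-rate gigabit", 110), ("slow NAS", 11)]:
        hours = DATA_GB * 1000 / mb_per_s / 3600
        print(f"{label:>18}: {hours:4.1f} h to restore {DATA_GB} GB")
    # the slow-NAS case lands near the ~12 hours quoted above;
    # a NAS that can saturate the link cuts it to just over an hour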

Adding redundancy while maintaining the performance would take four drives for the same capacity; not really worth it in this case, IMO.

I'm not saying he shouldn't do it, just that he should be aware of the potential downsides.
 

Good point.
 
A few things to consider:
  1. RAID 0 (aka a stripe set) is less reliable than a single disk.
  2. You won't see much of a performance increase, if any, for boot or application loading on that machine, because striping does not double the random-access performance of a single SSD (for a 2-disk set; with n drives, the theoretical factor would be n). It has actually been shown to be reduced in multiple cases here on MR when users tried it.
  3. Even for sequential throughput, the SSDs will be throttled by the bandwidth limits of the chipset (which the SATA controllers are attached to; a PCIe card is the only way around this). So that isn't truly doubled either, but unlike random-access performance on striped SSDs, there is an improvement (again, for a 2-disk stripe set).
 
A few things to consider:
  1. RAID 0 (aka a stripe set) is less reliable than a single disk.

The odds of a disk failure are greater with two drives than with one, whether RAID 0 is in use or not. RAID 0 is not "less reliable" than a single disk; two drives are simply more likely to fail than one. Two drives in RAID 0 have the same chance of failure as two non-RAID drives.
 

Yes, it is less reliable, because in RAID 0 two disks are striped into one volume. If one disk fails, you lose two disks' worth of data; you have thus doubled your chance of losing the volume.

With two drives you can instead spread that risk by using a mirror, aka RAID 1; in that case both drives must fail before you lose data.
 
Here we go again ... :)

Since we all keep a backup, if a drive fails the response is exactly the same ... replace the failed drive and restore from your backup.

It doesn't matter if your single 500GB disk fails, or one of your 256GB disks in a 2 disk RAID-0 fails ... you are still going to need to restore from your backup.

The single disk is just as likely to fail as one of the RAID-0 disks ... and the outcome is the same.

The frequency with which this topic comes up on this forum would make one believe that everyone experiences a disk failure every few days. It's really not a big issue, as most users who run RAID arrays reliably will tell you. Simply keep a timely backup ... no matter how your main storage is configured.
 
It doesn't matter if your single 500GB disk fails, or one of your 256GB disks in a 2 disk RAID-0 fails ... you are still going to need to restore from your backup.

True, except if you are using RAID 1, 5, 6, 10, or Z(2-3). The response then is to replace the failed drive and continue as if nothing happened.
 

I think the OP was considering only a 2-disk RAID-0 ... I don't think he was looking for a complex, higher-end RAID system. :)

But you are correct: replace the drive and let the array rebuild.
 
I am planning on getting a current-gen Mac Pro. I want to get four OWC 480 GB Mercury Electra SSDs and use OS X's Disk Utility to set them up as a pair of RAID 0s: one for the OS and apps, and the second for VMs running in Fusion. Since the connection is only SATA II, will that be the bottleneck, keeping the SSDs from performing at their rated speeds? I was hoping for decent performance, since there will be high I/O on both RAID sets.

TIA,
Will
 

In normal OS X usage, I don't see any significant difference with even a single SSD on the SATA-II Mac Pro backplane; the random, smaller files encountered won't come close to saturating the SATA-II channels. The RAID-0 will perform great, and will afford better access speeds for larger sequential files as well, such as photo libraries, video, and most benchmark programs.

This benchmark used a pair of Samsung 840 Pro SSDs in RAID-0 mounted in standard Mac Pro disk trays. Although the same pair of drives will benchmark at higher speeds when mounted on a SATA-III PCIe card, the benchmark program really doesn't represent the true benefit you will see with random small-file access.

You could even RAID-0 all four SSDs for better performance on large sequential files, but I doubt you would notice the difference with your intended usage.

-howard
 

[Attachment: 840ProRaid0MacProTrays.png, benchmark of the two 840 Pro SSDs in RAID-0]
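The rough numbers behind "won't saturate SATA-II" are worth spelling out. A sketch, where both figures are assumptions: SATA-II signals at 3 Gb/s, which after 8b/10b encoding leaves about 300 MB/s usable per port, and ~500 MB/s is a typical spec-sheet sequential rating for a SATA-III-class SSD, not a measurement from this machine:

    # where SATA-II caps a striped pair of fast SSDs
    PORT_MB_S = 300       # assumed practical per-port SATA-II ceiling
    SSD_RATED_MB_S = 500  # assumed spec-sheet sequential speed of the SSD

    single = min(SSD_RATED_MB_S, PORT_MB_S)   # one drive, one port
    striped = 2 * single                      # each member on its own port
    print(f"single SSD on SATA-II: ~{single} MB/s (port-limited)")
    print(f"2-drive RAID-0:        ~{striped} MB/s sequential, best case")
    # random small-file I/O sits far below either ceiling, which is
    # why the day-to-day difference is hard to feel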

The odds of a disk failure are greater with two drives than with one, whether RAID 0 is in use or not. RAID 0 is not "less reliable" than a single disk; two drives are simply more likely to fail than one. Two drives in RAID 0 have the same chance of failure as two non-RAID drives.
Keep in mind, when there are multiple disks in the same striped volume, the loss of any single member causes the entire volume to fail. So my statement is entirely correct in this instance.

I see your point, but it's a bit disingenuous, as you're treating all the members as tied together (one failure affects them all) even when they're independent of one another (single-disk operation, regardless of the member count). The reason I say this is that the data on any particular single-disk volume isn't dependent on any other: if one disk dies, the others don't lose their data as a result. That's a big difference from RAID, or even, to an extent, concatenation (non-dead members can be recovered, but it takes effort to do so), and it's why failure modes are examined as single-disk cases in that configuration, not against unaffected volumes attached to the same system.
 

Disingenuous? Your statement that two disks are less reliable than one is what's disingenuous. It's comparing apples to oranges.
 
One logical volume spanning two drives is twice as likely to fail as one logical volume on one drive. Two logical volumes on two drives have the same per-volume failure rate as one logical volume on one drive. It's the spanning of data across multiple disks with no parity that creates the added possibility of failure. If one of the two independent volumes fails, you are left with one that works; if one member of the RAID fails, you have none that work. :p
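The two positions above are easier to compare with the probabilities written out. A sketch, assuming each drive fails independently with the same probability p over some period (p = 0.05 is purely illustrative):

    # chance of losing a given volume's data under each layout
    p = 0.05

    single     = p                 # one drive, one volume
    raid0_pair = 1 - (1 - p)**2    # stripe dies if EITHER member dies
    raid1_pair = p**2              # mirror dies only if BOTH members die
    print(f"single-disk volume lost:   {single:.4f}")
    print(f"2-disk RAID 0 volume lost: {raid0_pair:.4f}")  # ~2x single disk
    print(f"2-disk RAID 1 volume lost: {raid1_pair:.4f}")
    # two independent volumes see the same ~0.0975 chance that SOME
    # drive fails, but each failure takes only half the data with it

Both posters are right about different quantities: striping roughly doubles the odds of losing the volume, while the expected number of drive failures is the same either way.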
 
Not really sure there is much point in this exercise.


I don't see much gain and a lot to lose.
Maybe a RAID 10, but that still seems like a wash.

As for hardware vs. software RAID: hands down, hardware RAID.
 
I just skimmed this thread because it seems like such a rehash of an old topic all regular readers have read over and over and over and... But here are a few irrefutably true comments:

  • RAID0 using SSDs is a fantastic benefit, both for OS X installs and when used as a data/project volume.
  • RAID0 using SSDs boots nearly twice as fast as a single SSD, which is already very fast anyway.
  • RAID0 using SSDs is, practically speaking, just as safe as using a single SSD. Saying otherwise is an assault on common sense, proven statistics, and mathematics. It is a tired old argument and it's almost completely untrue, as in it's safely ignored BS and a half!
  • Almost all other RAID levels mentioned, besides RAID1, require more than two drives to function and do less than just having a backup! A dead drive in RAID5 or 6, for example, causes the same amount of unusable "downtime" as restoring a RAID0 from a fast backup.
  • Some recent posts from PCIe-card RAID0 users have shown benchmarks 30% to 40% faster than SSDs in a Mac Pro native RAID0 for small-file I/O. So yes, it's faster; you have to decide whether that cost is worth it to you.
  • The downsides of using a RAID0 volume as a boot partition are severalfold. Probably the biggest issue is that no Recovery Partition is created during the install.
 
RAID0 using SSDs is, practically speaking, just as safe as using a single SSD. Saying otherwise is an assault on common sense, proven statistics, and mathematics.

For a 2-drive RAID 0, maybe the common-sense and proven-statistics part of your argument holds, as the chance of one drive failing is not that high to begin with. For a larger RAID it is a real concern: a 10-drive RAID 0 could hold anywhere from 10-40 TB of data, and the chance of 1 in 10 drives failing over years of constant use is not insignificant.

A dead drive in RAID5 or 6, for example, causes the same amount of unusable "downtime" as restoring a RAID0 from a fast backup.

No: restoring a RAID 0 from backup means restoring all data on all drives, while restoring one drive in a RAID with parity only needs that one drive rebuilt. It may also be done in the background while the RAID is in use; that is true for RAIDZ, at least.
 
For a 2-drive RAID 0, maybe the common-sense and proven-statistics part of your argument holds, as the chance of one drive failing is not that high to begin with. For a larger RAID it is a real concern: a 10-drive RAID 0 could hold anywhere from 10-40 TB of data, and the chance of 1 in 10 drives failing over years of constant use is not insignificant.

That's true. Engineering groups around the world have concluded the critical number is seven; thus one finds in all sorts of literature and documentation that RAID sets larger than seven drives are not recommended. Seven, however, is still a relatively safe number. ;) And this is for rotational media; there is some question about how solid-state and rotational media compare when it comes to catastrophic failure rates.



No: restoring a RAID 0 from backup means restoring all data on all drives, while restoring one drive in a RAID with parity only needs that one drive rebuilt. It may also be done in the background while the RAID is in use; that is true for RAIDZ, at least.

It's true for most redundant levels, but I guess you haven't had to rebuild one. If you had, you would know that it rebuilds very slowly, slower than or about the same as restoring from a backup volume. And during the rebuild the entire RAID set operates at a fraction of its speed, making it useless for almost any kind of work one creates a RAID for. Just like a backup, you can still access files or see a directory listing, but you can't actually use it. This becomes less true with larger array sets, though.
 
No: restoring a RAID 0 from backup means restoring all data on all drives, while restoring one drive in a RAID with parity only needs that one drive rebuilt. It may also be done in the background while the RAID is in use; that is true for RAIDZ, at least.

I suspect he's thinking of the typical hardware classes discussed on here, which are not likely to remain stable during a rebuild. The rebuild itself requires reading a lot of data. It's not really an insignificant event.
 
Here we go again ... :)

Since we all keep a backup, if a drive fails the response is exactly the same ... replace the failed drive and restore from your backup. [...] Simply keep a timely backup ... no matter how your main storage is configured.
Nice posting. Couldn't agree more. I'm using an internal 2TB drive for Time Machine, as well as a 2TB Cloudbox for my pictures/music.
I still sleep well at night, and can't remember when I was last hit by a defective drive.
 
I used to have 2 x 500 GB HDDs in soft RAID 0 as my main drive in the Mac Pro.
Later I replaced the two drives with a single 1 TB HDD (no RAID config).

The newer 1 TB HDD was faster than the older 500 GB drives; the single drive performs faster than the old RAID 0.

Regarding the "chance of failure":
I agree with hfg. Just make a backup regularly. Large 3.5" HDDs are usually very reliable. HDDs inside a laptop: not so much ;-)
 
I suspect he's thinking of the typical hardware classes discussed on here, which are not likely to remain stable during a rebuild. The rebuild itself requires reading a lot of data. It's not really an insignificant event.

Well, it needs to read all the surviving blocks in the stripe and XOR them to get the missing block. As far as ZFS goes, it's supposed to be able to handle that on regular hardware.
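A toy illustration of that XOR reconstruction, with 4-byte stand-ins for blocks (a real array does this per stripe across many disks, but the identity is the same):

    # single-parity rebuild: parity = d0 XOR d1, so a lost block is
    # recoverable by XOR-ing everything that survived
    d0 = bytes([1, 2, 3, 4])
    d1 = bytes([9, 8, 7, 6])
    parity = bytes(a ^ b for a, b in zip(d0, d1))

    rebuilt = bytes(a ^ b for a, b in zip(d0, parity))  # drive with d1 died
    assert rebuilt == d1
    # a real rebuild repeats this for every stripe on the dead drive,
    # which is why it reads the surviving disks end to end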
 