
whwang

macrumors regular
Original poster
Dec 18, 2009
180
92
Hi,

I was looking for throughput test results for HDs, but it seems most of
the hardware websites are no longer interested in HDs.
The benchmarks I can find all focus on SSDs. So I am looking for opinions here.

Currently I have 5 WD RE4 2TB in my MacPro in RAID0. (I have an OWC
PCI SSD as system drive.) I am close to filling up the 10TB space, so I
am looking for larger drives. Here are the requirements, in order of priority:
1. stable in RAID0 (so WD black and green are out)
2. throughput as high as possible (I am dealing with large files.)
3. 4TB, but 3TB is also acceptable.

I chose WD RE4 2TB 2 years ago because they were the fastest 2TB
drives at that time and they could be used in RAID. Now I can no longer
find HD tests. Should I just go for the WD RE4 4TB? Or are there
better options? Any suggestions?

Thank you.
 
I was looking for throughput test results for HDs, but it seems most of
the hardware websites are no longer interested in HDs.

We're about six months into 2013 and StorageReview ( http://www.storagereview.com/reviews/enterprise/hdd ) has six reviews on enterprise HDDs. That is a pretty healthy pace.


What you may not have noticed is that the number of HDD companies has gotten smaller. That means there aren't going to be a half dozen (or more) reviews of the same product class. Similarly, everyone and their mother is slapping their brand on top of a much smaller subset of flash controllers to blanket the market with similar SSDs. (There is a shake-out coming in the SSD space; the pace will probably pick up this year.) In short, comparing the number of SSD reviews to HDD reviews is apples-to-oranges more so than "lack of interest".


Currently I have 5 WD RE4 2TB in my MacPro in RAID0. (I have an OWC
PCI SSD as system drive.) I am close to filling up the 10TB space,

One solution is to reduce the 10TB footprint. It is highly likely there are hundreds of GB of that 10TB that you haven't touched in years. Move your archive material somewhere else. Keeping files that you don't read (or write) spun up for a year in a RAID-0 set-up is a waste of resources.
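Finding that stale archive material can be scripted. Here's a minimal Python sketch that lists files not accessed in roughly a year; the volume path in the usage comment is hypothetical, and note that access times are unreliable on volumes mounted with noatime:

```python
import os
import time

def stale_files(root, days=365):
    """Return paths under `root` whose last access time is older than `days`."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except OSError:
                pass  # file vanished or unreadable; skip it
    return stale

# Example (hypothetical volume name): candidates for moving off the RAID-0 set
# for p in stale_files("/Volumes/Scratch", days=365):
#     print(p)
```

This only identifies candidates; actually moving them to an archive volume is a separate (and more careful) step.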

WD Red would work for archiving on top of a RAID (5 or 6) solution. That would probably be in another box; a single-box solution for both archiving and active working files, where both are in the significant-TB range, is a stretch.
 

Here are some throughput comparisons
 
WD 3TB Reds are a good option: they are fast, reliable (though not as much as the RE4), and a much cheaper option than the RE4. You might also want to look at getting a RAID card that supports mini-SAS so you can use the 4 hot-swap bays with SATA 3 (6 Gbit/s).
 
Out of curiosity, why are the Caviar Blacks out for a RAID 0? I was going to pick a couple up for an edit...
 
I went thru this awhile back. The Seagate 3TB and 4TB drives are the current speed kings in those sizes. 1TB platter density, three platters, at 7200RPM on the 3TB model is just tits! They have single-platter 1TB, dual-platter 2TB, triple-platter 3TB and four-platter 4TB models - all of which are the fastest in their size class. I selected the 3TB models because they were a little faster (210MB/s, 7200rpm) than the 4TB (180MB/s, 5900rpm) and also because I always like to double up on my RAID member sizes - 6TB models are already in the works and looking to be pretty fast, but 8TB drives don't seem to be on the horizon yet.

The Seagate drives have a Nonrecoverable Read Errors per Bits Read (NRE) rating of 1 per 10^14 bits, which is excellent, and a load cycle rating of 300,000. As long as you don't go too much over the 2,400-hour rating, these should be perfect candidates for 3- and 4-drive RAID members.
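That 1-per-10^14 NRE spec can be turned into a rough number for reading a whole array end to end. A back-of-envelope sketch (array size is illustrative; this ignores retries and sector granularity):

```python
import math

def p_unrecoverable_read(bytes_read, ure_per_bit=1e-14):
    """Probability of at least one unrecoverable read error while
    reading `bytes_read` bytes, given a per-bit error rate.
    Uses P(no error) = (1 - rate)^bits ~= exp(-rate * bits)."""
    bits = bytes_read * 8
    return 1.0 - math.exp(-ure_per_bit * bits)

# Reading an entire hypothetical 4 x 3 TB RAID-0 set (12 TB) once:
p = p_unrecoverable_read(12e12)   # ~0.62
```

At multi-TB array sizes the expected error count per full read approaches 1, which is one reason the NRE spec matters more the bigger the stripe set gets.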

Just check them once a month or once a week with something like Smart Utility (Best!), replace them when their time is up, and you'll never have a problem. I've been letting mine go until the first Reallocated Bad Sector (RBS) occurs, and I'm currently at 5,900 hours with no RBS - across 6 total drives. I've been doing this since HDDs first came on the scene and RAIDing since RAID was invented, and these drives are unmatched both in performance and reliability/data-integrity by anything I've used in the past. So that's a pretty good testament, methinks. :)

The specific models to look for are:
ST4000DM000 - 4TB (~ $180)
ST3000DM001 - 3TB (~ $120)
ST2000DM001 - 2TB (~ $80)
ST1000DM003 - 1TB (~ $65)
 
My four 3TB Seagates fried in less than 6 months. I've always used Seagate drives - 15+ years. I'll never go Seagate again. WD Reds for me.
 
Never ever ever do RAID 0. Your chance of losing data grows as you add more drives. The only time RAID 0 remotely makes any sense is if it's a video recording drive and you couldn't care less if it dies.

Do RAID 5 - you will thank me when you have a drive failure.

 
My four 3TB Seagates fried in less than 6 months. I've always used Seagate drives - 15+ years. I'll never go Seagate again. WD Reds for me.

Anecdotals like above are a dime a dozen for every maker and every device on the planet.

Also, I can't believe the myth about RAID0 is still being propagated like in the previous post. It's silly! The simple truth is: either you have a backup or you don't. That's it. That's all there is to it. The rest is urban legend, mythology, and just complete BS. :) Unless you're running a data center, RAID5 doesn't do anything for you. Nor does RAID6 for that matter - which is MUCH better than RAID5.
 
Really?

You're telling me that RAID 5 is worse than RAID 0 when it comes to RAID? You do know what RAID means, right? Redundant what? What does redundant mean? RAID 0 isn't even RAID.

I can give you a few case examples of where RAID 5 is far superior to RAID 0 in redundancy.

Case1:
HP DL360 Gen 5 with 4 drives. One set up as a spare, the rest as RAID 5. The server is a credit card processing server: it processes credit cards, encrypts them, and sends them to the bank. If this server goes down, the entire site cannot process credit cards.

Drive 3 Fails - Spare drive begins to rebuild

24 hours later
Drive 4 fails - System still operational but performance degraded - site can still process credit cards. No revenue loss

Your Scenario with RAID 0
HP DL360 Gen 8 with 6 drives in RAID 0. Unable to set up a spare, as RAID 0 stores no parity information across drives for redundancy. The server is used as primary storage. It houses legal documents, videos of the cases, evidence, and other state-required files that need to be kept for 7 years.

Drive 4 fails - All data lost. Not recoverable. Site sued for losing data. Backups are only as good as a snapshot in time.

Ya, you try convincing an engineer that RAID 0 is better than RAID 5, or even RAID 1. I'd rather do mirroring than RAID 0. Who cares about speed if you lose your data.

 
Really?

You're telling me that RAID 5 is worst than RAID 0 when it comes to RAID? You do know what RAID means right? Redundant what? What does redundant mean? Raid 0 isn't even RAID.

I can give you a few case examples of where RAID 5 is far superior to RAID 0 in redundancy.

Ya you try convincing an engineer that RAID 0 is better than RAID 5 or 1 even. I'd rather do mirroring then do raid 0. Who cares about speed if you lose your data.

Wow, the rhetoric from the product vendors really has you spun. It's not your fault tho - it's sometimes difficult to assess the facts and form an original opinion. I would be interested to know why you think RAID0 isn't actually RAID tho. LOL (Just because there's no actual redundancy? So we should call it "IAD0" :p )

So let's break it down (again for like the billionth time). What benefit is RAID5 in a CONSUMER or SOHO environment? Remember I already said it was good for large system or data-center types. First we need to clarify that the only scenario that counts in a MacPro is single drive failure. If two drives fail on you in a MacPro environment you're a freak of nature and should apply to the Guinness book of world records. You also need RAID 6 to protect against a two drive failure - RAID5 can't do it!

So on to the breakdown:

RAID5 Needs above RAID0:
  • Enterprise Class drives. (Many or most controllers will not allow desktop grade drives!)
  • Two extra do nothing enterpri$e class drives. (one running in the RAID Group and one to replace whatever might break)
  • A RAID Controller Card. (RAID5 is not natively supported in MacPro) - As a result your system heat and fan noise is increased too.
  • Many controller cards are power hungry as well - roughly the equivalent of running an extra 75 to 100 watt bulb for every hour your machine is on, which depending on your rates can add up to $100 or more a year just to operate it. And if we look at the power per terabyte it's even worse. And on top of both of those there's another 10 to 25 watts added for the extra do-nothing drive.
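How much a controller card really costs to run depends on its draw, your duty cycle, and your electricity rate. A quick back-of-envelope calculator (all numbers below are illustrative assumptions, not measurements of any specific card):

```python
def annual_power_cost(watts, hours_per_day=24.0, usd_per_kwh=0.12):
    """Yearly electricity cost of a component drawing `watts` while
    powered, at the given rate. Defaults are illustrative."""
    kwh_per_year = watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# A 100 W card running 24/7 at an assumed $0.12/kWh:
cost = annual_power_cost(100)   # ~$105/year with these assumptions
```

Plug in your own wattage, hours, and rate; the per-terabyte figure is then just this divided by usable capacity.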
Well, this all looks very bad, doesn't it? Golly, there must be something good about RAID5 - if nothing else, simply because so many people preach its use, like you just did. So what are the advantages?

Well, sadly, no: there are almost no advantages, and the disadvantages far, far outweigh any that might exist - as I've just demonstrated above. The real advantages of RAID5 only come into play in data-center-like environments. Those SEs have the right to hype RAID5. For them it saves time - which is money - when they can maintain data uptime and carry out the repair simply by replacing a hot-swapped drive while falling back only one of several levels of redundancy.

Certainly that's the case then on MacPro too right? Nope, unfortunately not, without the multiple levels of redundancy it doesn't apply. And without one member of the singular RAID5 array, operation is too slow to actually continue working with it in most cases. You really need to insert that extra drive you had to purchase and allow it a few hours to rebuild. Now wait a minute, that's the SAME as RAID0_with_a_backup - can't really use the data till the array is restored. So gee-wizz, you were sacrificing 1/3 speed and spending over twice as much for nearly no practical benefit at all? Well, the one other advantage that may exist is booting. IF your controller card allows booting and IF you installed your OS on the RAID for some reason (and not an SSD or SSHD) then and only then you have the advantage of still being able to boot from the RAID5 array which wouldn't be possible with RAID0 if there was no RAID0 bootable backup. Wow, that single advantage is almost meaningless then? Indeed.

Yup, that's right. So here the RAID5 list sits with mostly disadvantages and almost no advantages at all. How about RAID0 compared to RAID5, How does that list out?

RAID0 advantages over RAID5?
  • Is natively supported on MacPro (no need to shop for or purcha$e a RAID controller),
  • Can use faster per $ Desktop grade drives,
  • Can use larger Desktop grade drives,
  • More speed per number of drives used - better benefit,
  • Can configure 2-drive arrays (not possible in RAID5),
  • Uses less power from the mains ($),
  • Uses less power per terabyte (lower system heat and fan noise),
Wow, that's great! But we've all read the scary rhetoric put out by vendors peddling RAID controllers... isn't there any truth to it? Well, no, not really. A simple Time Machine volume eliminates every single one of those scenarios. Well damn, then why do engineers say RAID5 is so great? They don't!!! Not for small systems like the MacPro, and they haven't for the past 4 or 5 years - at least not the ones who can add and subtract. What? What are you talking about, what's changed? Well, basically, the very math which used to show some advantage to RAID5 now shows that there is none - for small-system implementations:


Sure, if you're running 3 or more individual arrays of 3 to 7 drives each then yes, RAID5 is a so-so good solution. RAID6 is much better however! But if you're running the 4 to 6 internal drives and maybe one other external array enclosure then RAID5 is really not for you. Both data integrity mathematically speaking, and as a matter of price|performance ratio RAID5 is inferior to RAID0.
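The capacity and throughput trade-offs being argued here reduce to simple arithmetic: RAID0 over n drives gives n times the capacity and roughly n times the streaming throughput, while RAID5 loses one drive's worth to parity. A sketch with hypothetical per-drive numbers (not benchmarks of any specific model):

```python
def raid0(n, capacity_tb, stream_mbs):
    """Usable capacity and rough streaming throughput of an n-drive RAID0."""
    return {"usable_tb": n * capacity_tb, "stream_mbs": n * stream_mbs}

def raid5(n, capacity_tb, stream_mbs):
    """Same for RAID5: one drive's worth of capacity goes to parity,
    and a conservative estimate credits (n-1) drives of streaming speed."""
    assert n >= 3, "RAID5 needs at least 3 drives"
    return {"usable_tb": (n - 1) * capacity_tb, "stream_mbs": (n - 1) * stream_mbs}

# Five illustrative 2 TB drives at ~130 MB/s each:
r0 = raid0(5, 2, 130)   # {'usable_tb': 10, 'stream_mbs': 650}
r5 = raid5(5, 2, 130)   # {'usable_tb': 8, 'stream_mbs': 520}
```

This ignores controller overhead and the RAID5 small-write (read-modify-write) penalty, both of which make the real gap larger than the streaming estimate suggests.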


EDIT:
On RAID1: this is really only useful if you are expecting a single drive to blow up. Maybe you have some ancient drive which you're replacing with an identical model. Set them up in a RAID1 array until the ancient one pops, then just throw it out (and do NOT remake the RAID1 array).

A backup like TimeMachine is superior in every way (when all things are considered) to any form of RAID redundancy. The only time RAID redundancy makes any sense at all on a MacPro like system is if you have repurposed it as a highly trafficked server system where the critical mission being sought is data availability. And then as mentioned, multiple levels of redundancy are what's needed. This is generally outside the interests of typical MacPro users/owners as they seem to profile here on MacRumors.
 
Thanks to everyone for the info. Now I think I can narrow down to two:
WD RE4 4TB
HGST Ultrastar 4TB

I know the WD RE4 is stable, and I am using them. I have four 1-year-old Seagate Barracuda 3TBs, and two of them recently died. So I won't consider Seagate again for a while. Does anyone know HGST's general reputation?

WD doesn't recommend using black and green in RAID. I am not an expert and I don't know the details exactly. But you may google with the keyword "TLER."

I am aware of the potential risk of RAID0. Because of this, I use time machine for hourly backup. I also make daily full backup to a remote site, in case the computer and the external backup disk are lost at the same time. One of my WD RE4 failed last year, and I recovered with time machine quickly. So I think the risk is manageable.

My company promised to build an archive system for everyone to put less frequently used files on. But the system is not there yet. With the budget that I control myself, it is easier to just expand the disk space inside my computer.

Thanks again, please let me know if you have more suggestions on those 3TB or 4TB drives.
 
WD doesn't recommend using black and green in RAID. I am not an expert and I don't know the details exactly. But you may google with the keyword "TLER."

You can use either in an MP system RAID configuration if you want. In fact, I think Black is better than Red, IIRC, as they're cheaper and faster. TLER is just the time the drive's firmware allows it to try to recover an error before it gives up. If it's long, the drive has a better chance of recovering the error and remapping it to one of its reserved sectors - but if it takes too long, some controllers will drop the drive from the RAID set. On the outside chance (and I mean really, really outside) that an MP system RAID drops a member for this reason, you'll have to wait some and then reboot before you can use the RAID set again. Most RAID controller cards are very intolerant, so while still quite rare, it's more of an issue there. But then again, many if not most RAID controller cards won't allow you to use those drives in the first place.

A shorter TLER means the error might not be recovered at all and you will have to replace or reformat that member but it might not get dropped either so you may be able to save your work to another volume and back up before formatting or replacing the bad drive.

You don't really need to consider TLER when selecting drives for a MP system RAID.
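The TLER interaction described above boils down to a race between the drive's error-recovery time and the controller's patience. A toy model (all timeout values are illustrative; real firmware behavior is more involved):

```python
def raid_outcome(recovery_seconds, controller_timeout_seconds):
    """What happens when a drive hits a bad sector mid-array:
    if internal recovery takes longer than the controller tolerates,
    the drive gets dropped from the set; otherwise the array just
    stalls briefly while the drive retries."""
    if recovery_seconds > controller_timeout_seconds:
        return "dropped from RAID set"
    return "recovered (array stalled %.0fs)" % recovery_seconds

# Desktop drive (may retry for a minute or more) on a strict hardware controller:
print(raid_outcome(64, 8))    # dropped from RAID set
# TLER/ERC drive capped at 7 s on the same controller:
print(raid_outcome(7, 8))     # recovered (array stalled 7s)
```

Software RAID like OS X's is far more tolerant of long recovery times, which is why TLER matters much less there, as the post above says.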
 
Anecdotals like above are a dime a dozen for every maker and every device on the planet.

Also, I can't believe the myth about RAID0 is still being propagated like in the previous post. It's silly! The simple truth is: either you have a backup or you don't. That's it. That's all there is to it. The rest is urban legend, mythology, and just complete BS. :) Unless you're running a data center, RAID5 doesn't do anything for you. Nor does RAID6 for that matter - which is MUCH better than RAID5.

Anecdotal or not - a set of drives should not have 10000+ bad sectors. The previous set of seagates were 3 years old and only two of the four had any bad sectors. The worst of those had 49 bad sectors.

The only way anecdotal information can be consensus is by people sharing their experiences or the manufacturer giving out their statistics. The latter will never happen. You can chalk my anecdotal experience up to bad luck if you wish but it did happen, it shouldn't've, and it caused lack of faith in the seagate brand for me. I won't even try warranty replacement units. It's not worth the hassle for me.
 

You might think so. But a "consensus by people" would only be accurate to any degree at all if it included a huge majority! People on this site alone constitute a tiny tiny subculture minority and ruin any chance of even a site-wide poll having much if any meaning.

I sustain a video on demand system for a chain of hotels as a side business and go thru a great number of drives per year. I like to vary brands as much as possible, am always looking for "deals", and I can show you WD, Samsung, Fujitsu, Hitachi, and Seagate drives all just as you describe and much worse. The worst by far being Samsung in my experience and the best being Seagate and Hitachi/IBM, WD pulls up somewhere in the middle. But even my experiences processing 50 to 100 drives a year for that and 6 to 12 drives a year personally is basically meaningless. I know we all like to pee on companies that supply us with something that breaks and inversely praise companies we have decided to buy something from or that we have a product from that we feel good about for some odd reason but when ya get right down to the statistical analysis maths of it our singular anecdotal opinions and experiences mean pretty much nothing at all!

When one researches drive failure of any sort, the three main variables which present as most significant are age, usage, and temperature - not manufacturer, despite our brains being wired to think so by the brand-aware consumer culture in which we live. Although models do profile differently, the manufacturer classification doesn't hold much significance. And even model may not be a significant classification determinant, for the simple fact that different models are selected specifically for different operating environments and uses. There is a fourth factor which is probably an overall controlling factor but impossible to measure, and that's environmental vibration. Bumps, jiggles, fan or nearby motor vibrations, human handling, and so on are probably of the largest significance when attempting to determine the cause of drive failure in a home or small office environment. For example, placing your MacPro on a solid cement-slab floor will more than likely reduce the chance of drive failure or error very significantly compared to the identical system placed on a plywood or even planked hardwood floor.

Here are some graphs of data from stats collected from 2001 to 2006 which show how age, usage, and temperature profile across hundreds of thousands of consumer grade drives consisting of a combination of serial and parallel ATA units, ranging in speed from 5400 to 7200 rpm, and in size from 80 to 400 GB as used in a more stable environment (than the home) such as a datacenter and in use 24/7 throughout their respective lives:


[Graphs: AFR by drive age; AFR by drive usage; AFR distribution by temperature; AFR by average drive temperature]
 
I've had a few of every brand fail over the past 30 years or so. Seagate RLL/MFM drives being the most common. Early IDE drives of every brand seemed to improve things. I've never really noticed much of a reliability difference between the 2 brands (SE/WD). I did have serious reliability issues with Maxtor drives, but that could have just been the luck of the draw.

I firmly believe that shock (including continuous vibration) and heat are the largest contributing factors. Another factor not mentioned that seems to (in my personal experience) reduce failures is a quality UPS, which reduces system-wide failures as far as I can tell.

I like RAID0 and time machine. I mean seriously! Time machine back ups are a no brainer, it couldn't be more effortless. I have ALWAYS backed up my data, and I've NEVER lost any data due to a hardware failure. I have lost data in the early days but that was due to stupid human error on my part.

Currently I'm using 1 Seagate, 1 WD Black, 2 Hitachi (Raid0) - Externals 2 Seagate & 2 WD.

I also have a couple of retired Raptors that are slower than my Barracuda... I plan to buy 4 more (larger) Barracudas to replace all the drives in my drive bays, and an SSD/PCIe card system to boot from. I'll be done with storage for a while after that.
 
The truth is being spoken here. We do large scale (petabyte) configurations at work with RAID 1. 50% capacity loss is acceptable in that environment; data loss is not. RAID-5 isn't acceptable.

The data volume in my 'Pro has been a RAID 0 for years. Yes, I've lost a drive. Then I just do a backup restore with a new drive and continue on my way. Performance of RAID-0, even on "cheap" consumer drives, is great. Get your backup strategy setup correctly and you don't have to fret the math.
 
You might think so. But a "consensus by people" would only be accurate to any degree at all if it included a huge majority! People on this site alone constitute a tiny tiny subculture minority and ruin any chance of even a site-wide poll having much if any meaning.




You do realize your entire post is "anecdotal" too, right? There is no way to back up (prove) any of the data you present. Seagate sucked for me in less than six months - I won't buy them again. I'm telling whoever will read it. Argue that it's anecdotal all you want. Drives, no matter the brand, shouldn't fail in 6 months.
 
Statistics and probabilities are very interesting topics, but there are some shortcomings in applying those numbers to RAID configurations.

I have 6 drives in my Pro.

Just by probability, I am roughly 6 times more likely to lose a drive than someone running one drive.
The same holds true if I had two drives: roughly twice as likely to lose one.

With or without RAID0, this is true.

RAID0 in and of itself, does not cause or increase failure rates, or loss of data.

The saving grace and most important part is backing up my data.

One thing is for sure, one day, no matter the numbers, RAID or not, I will lose at least one drive.
If it's backed up, all I lose is the drive.

If you buy one lottery ticket, and your friend buys two, who has a better probability of winning?

How much difference, statistically speaking, does it make?
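The lottery-ticket intuition can be made exact: with independent per-drive failure probability p, the chance of losing at least one of n drives is 1 - (1 - p)^n, and the "n times more likely" shortcut is only an approximation that holds when p is small. A sketch (the p value is illustrative, not a real failure rate):

```python
def p_any_failure(p_single, n_drives):
    """Probability that at least one of n independent drives fails."""
    return 1.0 - (1.0 - p_single) ** n_drives

p = 0.05                     # illustrative annual failure probability
exact = p_any_failure(p, 6)  # ~0.265
naive = 6 * p                # 0.30 -- the "6x more likely" shortcut
```

The gap between exact and naive grows with p and n, but for typical drive failure rates the shortcut is in the right ballpark.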
 
With or without RAID0, this is true.

RAID0 in and of itself, does not cause or increase failure rates, or loss of data.

How much difference, statistically speaking, does it make?

I agree that RAID0 does not cause failures or increase the failure rates of individual drives, but I disagree completely that it doesn't affect the probability of data loss for the span. Play with this calculator, which is based on the drive-failure research that Google published. With a single new drive, the survival probability over a span of 3 years is 83%. A 6-drive RAID0 has only a 32% survival probability.

But yes, a rigorous and tested backup strategy can take the pain out of probabilities. Once you have backups, you only need to weigh the cost of downtime to restore the data.
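The 32% figure follows directly from the 83% one: a RAID0 span survives only if every member drive survives, so the per-drive survival probabilities multiply. As a quick check:

```python
def span_survival(p_drive_survives, n_drives):
    """Probability an n-drive RAID0 span survives: all members must survive
    (assumes independent failures)."""
    return p_drive_survives ** n_drives

s = span_survival(0.83, 6)   # ~0.327, i.e. about 32%
```

Real drives in one chassis share heat and vibration, so failures aren't perfectly independent; the independence assumption makes this an optimistic bound if anything.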
 

I don't buy the 32% survival rate.
I also do not have a 6 drive array.
I have two SSD's in RAID0, the rest are data and back up. All the tests I have done suggest two drives bring the best performance per drive.
Obviously the failure rate on two drives would be lower than on 6, but only because each has its own failure probability.
The 83% seems high too, maybe I have just been lucky over the last 18 years. But then I replace drives after about 3 years too. That just seems to be when performance has degraded and newer drives have improved enough to warrant upgrading.
 
No I'm an engineer and I've seen it in person.

1 drive failure in RAID 0 means all your data is gone.

Wow, the rhetoric from the product vendors really has you spun. It's not your fault tho - it's sometimes difficult to assess the facts and form an original opinion. I would be interested to know why you think RAID0 isn't actually RAID tho. LOL (Just because there's no actual redundancy? So we should call it "IAD0" :p )

So let's break it down (again for like the billionth time). What benefit is RAID5 in a CONSUMER or SOHO environment? Remember I already said it was good for large system or data-center types. First we need to clarify that the only scenario that counts in a MacPro is single drive failure. If two drives fail on you in a MacPro environment you're a freak of nature and should apply to the Guinness book of world records. You also need RAID 6 to protect against a two drive failure - RAID5 can't do it!

So on to the breakdown:

RAID5 Needs above RAID0:
  • Enterprise Class drives. (Many or most controllers will not allow desktop grade drives!)
  • Two extra do nothing enterpri$e class drives. (one running in the RAID Group and one to replace whatever might break)
  • A RAID Controller Card. (RAID5 is not natively supported in MacPro) - As a result your system heat and fan noise is increased too.
  • Many controller cards are power hungry as well and about the equivalent of running an extra 75 to 100 watt bulb for every hour your machine is on. So this is about an extra $50 a month or $600 a year just to operate it. And if we look at the power per terabyte it's even worse. And on top of both those there's another 10 to 25 watts added for the extra do-nothing drive.
Well, this all looks very bad doesn't it? Golly, there must be something good about RAID5. If nothing else simply because so many people preach it's use - like you just did. So what are they?

Well sadly no, there are almost no advantages and the disadvantages far far outweigh any advantages that might exist - as I've just demonstrated above. The real advantages of RAID5 only come into play in data-center like environments. Those SE's have the right to hype RAID5. For them it saves them time - which is money, when they can maintain the data uptime and carry out the repair simply by replacing a hot-swapped drive while falling back only one of several levels of redundancy.

Certainly that's the case on a MacPro too, right? Nope, unfortunately not: without the multiple levels of redundancy it doesn't apply. And with one member of your single RAID5 array gone, operation is too slow to actually continue working with it in most cases. You really need to insert that extra drive you had to purchase and allow it a few hours to rebuild. Now wait a minute - that's the SAME as RAID0 with a backup: you can't really use the data till the array is restored. So gee-whiz, you were sacrificing a third of your speed and spending over twice as much for nearly no practical benefit at all? Well, the one other advantage that may exist is booting. IF your controller card allows booting and IF you installed your OS on the RAID for some reason (and not an SSD or SSHD), then and only then do you have the advantage of still being able to boot from the RAID5 array, which wouldn't be possible with RAID0 unless there was a RAID0 bootable backup. Wow, that single advantage is almost meaningless then? Indeed.

Yup, that's right. So here the RAID5 list sits with mostly disadvantages and almost no advantages at all. How about RAID0 compared to RAID5 - how does that list out?

RAID0 advantages over RAID5?
  • Is natively supported on MacPro (no need to shop for or purcha$e a RAID controller),
  • Can use faster per $ Desktop grade drives,
  • Can use larger Desktop grade drives,
  • More speed per number of drives used - better benefit,
  • Can configure 2-drive arrays (not possible in RAID5),
  • Uses less power from the mains ($),
  • Uses less power per terabyte (lower system heat and fan noise),
Wow, that's great! But we've all read the scary rhetoric put out by vendors peddling RAID controllers... isn't there any truth to it? Well, no, not really. A simple Time Machine volume eliminates every single one of those scare scenarios. Well damn, then why do engineers say RAID5 is so great? They don't!!! Not for small systems like the MacPro, and they haven't for the past 4 or 5 years - at least not the ones who can add and subtract. What? What are you talking about, what's changed? Well, basically the very math which used to show some advantage to RAID5 now shows that there is none any longer - for small system implementations.
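To put rough numbers on the capacity and speed points in that list, here's a minimal sketch. The per-drive size and throughput figures are illustrative assumptions, and the RAID5 line is a simplified best case (real RAID5 writes are typically worse because of parity updates):

```python
# Usable capacity and best-case streaming throughput for n identical drives.
# Per-drive figures are illustrative assumptions, not benchmarks.
DRIVE_TB = 4.0     # e.g. one 4TB desktop drive
DRIVE_MBS = 150.0  # sequential throughput of one drive, MB/s

def raid0(n):
    """All n drives hold data; I/O stripes across every member."""
    return n * DRIVE_TB, n * DRIVE_MBS

def raid5(n):
    """One drive's worth of capacity goes to parity; needs n >= 3.
    Simplified: writes in practice suffer a further parity penalty."""
    return (n - 1) * DRIVE_TB, (n - 1) * DRIVE_MBS

for n in (3, 4, 5):
    c0, s0 = raid0(n)
    c5, s5 = raid5(n)
    print(f"{n} drives: RAID0 {c0:.0f}TB @ {s0:.0f}MB/s"
          f" | RAID5 {c5:.0f}TB @ {s5:.0f}MB/s")
```

With 3 drives you give up a full third of both capacity and streaming speed to parity; the gap narrows with more drives, but on a MacPro's 4 to 6 bays it never disappears.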


Sure, if you're running 3 or more individual arrays of 3 to 7 drives each, then yes, RAID5 is a so-so solution (RAID6 is much better, however!). But if you're running the 4 to 6 internal drives and maybe one other external array enclosure, then RAID5 is really not for you. Both in terms of data integrity (mathematically speaking) and price/performance ratio, RAID5 is inferior to RAID0.


EDIT:
On RAID1: this is really only useful if you are expecting a single drive to blow up. Maybe you have some ancient drive which you're replacing with an identical model. Set them up in a RAID1 array until the ancient one pops, then just throw it out (WITHOUT remaking the RAID1 array at all).

A backup like Time Machine is superior in every way (when all things are considered) to any form of RAID redundancy. The only time RAID redundancy makes any sense at all on a MacPro-like system is if you have repurposed it as a highly trafficked server where the critical mission is data availability. And then, as mentioned, multiple levels of redundancy are what's needed. This is generally outside the interests of typical MacPro users/owners as they profile here on MacRumors.


----------

6 Drives in RAID 0 has 6 times the failure rate.

1 drive failure = no data.

I don't buy the 32% survival rate.
I also do not have a 6 drive array.
I have two SSDs in RAID0; the rest are data and backup. All the tests I have done suggest two drives give the best performance per drive.
Obviously the failure rate on two drives would be lower than on 6, but only because each drive carries its own failure probability.
The 83% seems high too; maybe I have just been lucky over the last 18 years. But then I replace drives after about 3 years anyway - that just seems to be when performance has degraded and newer drives have improved enough to warrant upgrading.


----------

If you want speed and true RAID go with RAID 50.

I agree that RAID0 does not cause failures or increase failure rates of individual drives, but disagree completely that it doesn't affect the probability of data loss for the span. Play with this calculator, which is based on the research of drive failures that Google published. With a single new drive, survival probability over a span of 3 years is 83%. A 6-drive RAID0 has only a 32% survival probability.
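The arithmetic behind those two numbers is just independent survival probabilities multiplied together: a RAID0 span survives only if every member drive survives. Taking the 83% per-drive figure as given:

```python
# Probability that a RAID0 span survives: every member must survive,
# so the per-drive survival probability is raised to the drive count
# (assuming independent failures, as the calculator does).
def span_survival(per_drive_survival, n_drives):
    return per_drive_survival ** n_drives

p = 0.83  # per-drive 3-year survival probability (figure from the thread)
print(f"1 drive:  {span_survival(p, 1):.3f}")  # 0.830
print(f"2 drives: {span_survival(p, 2):.3f}")  # 0.689
print(f"6 drives: {span_survival(p, 6):.3f}")  # 0.83**6 is about 0.327
```

So the ~32% figure isn't a scare tactic, it's just 0.83 raised to the sixth power; note this is the chance of the *span* surviving intact, which says nothing about data loss once a backup exists.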

But yes, a rigorous and tested backup strategy can take the pain out of probabilities. Once you have backups, you only need to weigh the cost of downtime to restore the data.
 
No, I'm an engineer and I've seen it in person.
Same here.
1 drive failure in RAID 0 means all your data is gone.
It would be rather foolish (especially of an engineer) not to have a backup. So no, all the data would NOT be gone.
6 Drives in RAID 0 has 6 times the failure rate.
This is true of the aggregate drive failure rate no matter how the drives are configured.
1 drive failure = no data.
Again, completely untrue. These are the scare tactics of vendors peddling their wares, trying to widen their customer base to people who normally would have no need of them.
If you want speed and true RAID go with RAID 50.
That would be good for someone not on a budget of any kind. It certainly does NOT deliver the best bang for the buck tho! Anyone who can add, subtract, and flip a software switch to turn on Time Machine knows this. For most people's MacPro systems that's an economically unsound configuration. Just read this thread if you don't believe me. I'm a CS major; if I were employed as an SE or SA, there's maybe one among the 70+ posters in this thread to whom I would recommend RAID 5 or 50.
 