Good news, upgrading time again!

I am looking to fill my array with all 8 slots, which leads me to two questions:

1) Now that I have extra drives, I would like to increase the redundancy; if one drive fails, I don't want to be out of commission for very long.

Right now I have a RAID 5, so if one drive fails, can I still use the array until I get a replacement? Or is it in limbo until the drive is replaced?

When I upgrade to 8 drives, would RAID 5 still be a sound solution? It would obviously mean an extra 2TB of valuable storage space compared to RAID 6.

Or would RAID 6 be better? If one drive of a RAID 6 fails, does it just run as a RAID 5?

Or should I run it as a 7-drive RAID 5 with a hot spare in the extra slot? Would that give me better performance than an 8-drive RAID 6?

Also, I see building a RAID 50 is an option; how would the performance be on that configuration?


2) You had mentioned online expansion of the drives was slower (for obvious reasons), but how much slower? Doing it manually seems like it would take quite a long time: delete the whole array, create a new one, and then transfer 3.5TB of data from backup. That seems like it would take more than a day. Even if online expansion is slower, it may be the way to go so I am not totally out of commission for so long, or does it make things so slow that it is even worse?
 
Right now I have a RAID 5, so if one drive fails, can I still use the array until I get a replacement? Or is it in limbo until the drive is replaced?
The RAID 5 array would still work with a single disk failure (called a degraded state). It runs slower, but the data is still intact at this point.

It's when a second failure occurs before the array is rebuilt that the data would be lost.

When I upgrade to 8 drives, would RAID 5 still be a sound solution? It would obviously mean an extra 2TB of valuable storage space compared to RAID 6.
Balancing this is very difficult to do. In regard to 8 members, it's possible to have a stable RAID 5 array, but when you consider the disks are 2+TB, the risk goes up (an additional failure occurring during a rebuild, causing lost data).

Or would RAID 6 be better? If one drive of a RAID 6 fails, does it just run as a RAID 5?
In regard to redundancy, a level 6 would be better, but you do lose speed as the compromise.

As per data rates in a degraded state: no, a RAID 6 does not become a RAID 5 in any fashion. The fundamentals are the same (read parity data, reconstruct the missing data, then send it on to the system), but the parity data is spread out differently due to the additional redundancy (there's more of it), so there's more reconstruction of data going on. It's slower than a level 5 in a degraded state, let alone a healthy level 5.
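If it helps to picture what "reconstruct data" means, here's a minimal sketch of the single-parity XOR a RAID 5 relies on in a degraded state (the byte values are made up; a real controller does this in hardware across much larger stripes):

```python
# Tiny illustration of single-parity reconstruction: the missing member's
# block is just the XOR of all surviving blocks (data + parity).
# RAID 6 adds a second, differently computed parity, which is part of why
# its degraded reads cost more work.
from functools import reduce

blocks = [b"\x10\x20\x30", b"\x0f\x0e\x0d", b"\xaa\xbb\xcc"]  # data blocks in one stripe
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

lost = blocks[1]                                # pretend this member failed
survivors = [blocks[0], blocks[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == lost
print("reconstructed block:", rebuilt.hex())
```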

Or should I run it as a 7-drive RAID 5 with a hot spare in the extra slot? Would that give me better performance than an 8-drive RAID 6?
Performance-wise, the RAID 5, with or without a hot spare, will be faster than an 8-member RAID 6.

It's a reasonable compromise, but you do have to be careful with not only the member count, but the capacity of each drive.

For example, 8x 2TB disks are riskier than 8x 500GB disks due to the additional data per drive. Yet 8x 500GB disks would still be riskier than getting the same capacity from fewer, larger disks, since every additional member is another chance for a failure.

Also, I see building a RAID 50 is an option; how would the performance be on that configuration?
Roughly double that of one of the RAID 5 sets it's built from.

2) You had mentioned online expansion of the drives was slower (for obvious reasons), but how much slower? Doing it manually seems like it would take quite a long time: delete the whole array, create a new one, and then transfer 3.5TB of data from backup. That seems like it would take more than a day. Even if online expansion is slower, it may be the way to go so I am not totally out of commission for so long, or does it make things so slow that it is even worse?
It's a lot slower, but it doesn't require the hands-on attention of doing things manually.

As for doing it manually, it's actually faster, as the set never runs in a degraded state (which would mean the controller is trying to do more than one thing simultaneously). It's healthy, then deleted, then a new one is created. Once the initialization process is finished (which is where the time goes, as you now know), the data is restored from the backup that was made just prior to deletion.

Best to do this over a weekend on a work machine, just in case it takes more time than you expect (more than 24 hrs).
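For a rough sense of the manual route's timing, here's a back-of-envelope sketch; the transfer rates are assumptions about the backup device, not measurements from this thread, and initialization time is on top of this:

```python
# Rough restore time for ~3.5TB copied back from backup at assumed rates.
data_tb = 3.5
for label, mbs in [("~100 MB/s (single backup disk)", 100),
                   ("~300 MB/s (fast backup array)", 300)]:
    hours = data_tb * 1e6 / mbs / 3600   # TB -> MB, divide by rate, to hours
    print(f"{label}: about {hours:.1f} hours")
```

At ~100MB/s the copy itself is roughly 10 hours; it's the array initialization that eats most of the weekend.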
 
When I replaced a failing drive in my RAID 6, I think it took 4 hours to rebuild to normal state, and was fast enough to continue to edit native DSLR footage (H.264) without rendering it. If you end up doing something different, you should test that scenario and post your results for comparison! :)
 
Balancing this is very difficult to do. In regard to 8 members, it's possible to have a stable RAID 5 array, but when you consider the disks are 2+TB, the risk goes up (an additional failure occurring during a rebuild, causing lost data).


In regard to redundancy, a level 6 would be better, but you do lose speed as the compromise.

As per data rates in a degraded state: no, a RAID 6 does not become a RAID 5 in any fashion. The fundamentals are the same (read parity data, reconstruct the missing data, then send it on to the system), but the parity data is spread out differently due to the additional redundancy (there's more of it), so there's more reconstruction of data going on. It's slower than a level 5 in a degraded state, let alone a healthy level 5.


Performance-wise, the RAID 5, with or without a hot spare, will be faster than an 8-member RAID 6.

It's a reasonable compromise, but you do have to be careful with not only the member count, but the capacity of each drive.

For example, 8x 2TB disks are riskier than 8x 500GB disks due to the additional data per drive. Yet 8x 500GB disks would still be riskier than getting the same capacity from fewer, larger disks, since every additional member is another chance for a failure.

So it seems, then, that it would be better to do a 7x 2TB RAID 5 with a 2TB hot spare, over an 8x 2TB RAID 6? Either way I have a total of about 12TB, and I would still have protection against two drive failures (as long as the second one doesn't fail while the RAID 5 is rebuilding onto the hot spare). Is this accurate, or am I misunderstanding something?

Roughly double that of one of the RAID 5 sets it's built from.

Well, if I had an 8x 2TB RAID 50, effectively two 4x 2TB RAID 5s, I would still get redundancy, and it seems the speed would be faster, but is a config like this dangerous?


Best to do this over a weekend on a work machine, just in case it takes more time than you expect (more than 24 hrs).

That is probably what I will end up doing. Less downtime. Plus, with battery backup, it should keep going even if we lose power.


When I replaced a failing drive in my RAID 6, I think it took 4 hours to rebuild to normal state, and was fast enough to continue to edit native DSLR footage (H.264) without rendering it. If you end up doing something different, you should test that scenario and post your results for comparison! :)


I think my concern here is that if I run a RAID 6 and a drive fails, I will have to wait for a new one to be shipped to me before I can rebuild the array, even though the rebuild won't take that long once it arrives.
 
So it seems, then, that it would be better to do a 7x 2TB RAID 5 with a 2TB hot spare, over an 8x 2TB RAID 6?
It depends on your needs as to which is better.

For example, in regard to redundancy alone, 6 has the upper hand (need 3 disks to fail before data loss, while this occurs with 2 drives in RAID 5, hot spare or not).

Performance-wise (data rate), RAID 5 will be faster for the same member count (it's even possible for the active member count to be one short and still beat level 6).

Either way I have a total of about 12TB, and I would still have protection against two drive failures (as long as the second one doesn't fail while the RAID 5 is rebuilding onto the hot spare). Is this accurate, or am I misunderstanding something?
This is correct.

Well, if I had an 8x 2TB RAID 50, effectively two 4x 2TB RAID 5s, I would still get redundancy, and it seems the speed would be faster, but is a config like this dangerous?
Yes, redundancy still exists in each RAID 5. Now I think I understand what has you concerned, as they're striped together.

But what actually happens is this: a disk fails in one of the RAID 5 sets, so that set runs in a degraded state. Even though it's degraded, it's still active, so the stripe is still usable and the data intact.

Now if there's a second failure in that set before it's stable again, your data is toast.

Where it *can* seem a bit more redundant is if one disk fails in each set (2 failures): both sets are degraded, but the data is still intact. Loss in that case occurs in the event of a 3rd failure. But keep in mind, this is a specific condition, and not what you should base any planning upon. Always figure on worst case with nested sets.
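To see why worst case is the right planning basis, here's a quick sketch that enumerates every two-disk failure for an 8-disk RAID 50 (assuming disks 0-3 form one RAID 5 set and 4-7 the other):

```python
# Count how many two-disk failure combinations lose data in a
# 2x (4-disk RAID 5) nested set. Data is lost only when both failed
# disks land in the same RAID 5 set.
from itertools import combinations

sets = [set(range(0, 4)), set(range(4, 8))]
total, fatal = 0, 0
for pair in combinations(range(8), 2):
    total += 1
    if any(set(pair) <= s for s in sets):
        fatal += 1

print(f"{fatal} of {total} two-disk failure combos lose data")  # 12 of 28 (~43%)
```

Roughly 43% of the two-disk combinations are fatal, so you can't count on getting the lucky "one per set" case.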

I think my concern here is that if I run a RAID 6 and a drive fails, I will have to wait for a new one to be shipped to me before I can rebuild the array, even though the rebuild won't take that long once it arrives.
It will run, but Yes, it will be in a degraded state until the bad disk is replaced and the set completes the rebuild process.

However, there is a very simple strategy to help with this, which is to buy another disk and keep it on hand. It's not exactly a hot spare, as there's no slot for it, but it eliminates the need to wait for shipping. See the error message, pull the bad one, toss in the replacement, and the card does the rest. You're back up to Active/Stable in a lot less time (24 hrs or less should be possible).
 
So after reading this article
http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805
and the related article written three years prior, it pretty much scared me into thinking RAID 6 is my only option. A 4-drive array is one thing, but when you double the disks, the chance of an unrecoverable read error is much higher, especially with large drives. So even though it is pretty unlikely two drives will fail at once, the probability of an unrecoverable read error during a rebuild is more likely than not. So it seems RAID 6 would be my best option, and I should still see a performance increase over my current 4-drive RAID 5, right? Or I could just make a RAID 10; obviously it's more secure, but would the r/w speeds be comparable? It may be worth losing that 4TB of storage for peace of mind, or I guess I could always have two 4TB RAID 0 arrays and have them back up every evening. Basically I want the best read/write speeds while protecting my data... but then again, doesn't everyone?
 
So after reading this article
http://www.zdnet.com/blog/storage/why-raid-6-stops-working-in-2019/805
and the related article written three years prior, it pretty much scared me into thinking RAID 6 is my only option. A 4-drive array is one thing, but when you double the disks, the chance of an unrecoverable read error is much higher, especially with large drives. So even though it is pretty unlikely two drives will fail at once, the probability of an unrecoverable read error during a rebuild is more likely than not. So it seems RAID 6 would be my best option, and I should still see a performance increase over my current 4-drive RAID 5, right? Or I could just make a RAID 10; obviously it's more secure, but would the r/w speeds be comparable? It may be worth losing that 4TB of storage for peace of mind, or I guess I could always have two 4TB RAID 0 arrays and have them back up every evening. Basically I want the best read/write speeds while protecting my data... but then again, doesn't everyone?
Keep in mind that it's based on a UBE of 1E14, which is typical of a consumer drive.

Enterprise drives are either 1E15 or 1E16 (some SAS; I've not yet seen a SATA disk that reports this level of UBE), and IIRC you went with WD RE4 models, which are 1E15. This order of magnitude makes a difference, and you'd be OK (it becomes 120TB rather than 12TB using his example of a seven-member RAID 5 with one member already failed, i.e. the math of hitting another UBE during a rebuild), thus escaping the disaster he was pointing out based on consumer drives (which tend to have a UBE of 1E14).
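To put rough numbers on that article's math, here's a sketch only; it treats the UBE spec as an independent per-bit error rate and assumes the rebuild reads the six surviving 2TB members in full:

```python
# Back-of-envelope odds of hitting an unrecoverable read error (URE)
# while rebuilding a degraded 7-member RAID 5 of 2TB disks.
from math import exp

DISK_TB = 2
SURVIVORS = 6                                 # 7 members, one failed
bits_read = SURVIVORS * DISK_TB * 1e12 * 8    # ~9.6e13 bits

for ube in (1e14, 1e15):                      # consumer vs. enterprise (e.g. RE4) spec
    expected_errors = bits_read / ube
    p_failure = 1 - exp(-expected_errors)     # Poisson approximation
    print(f"UBE 1 in {ube:.0e}: ~{p_failure:.0%} chance of a URE during rebuild")

# Roughly 62% for 1E14 vs. ~9% for 1E15 -- the order of magnitude is
# what makes the enterprise drives survivable here.
```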

Yes, 6 is safer, and if you can take the performance hit, worth doing, particularly if you want to expand the set later on (get an expander and another enclosure in order to add more disks).

As per 10, it's slower than 5, and given the disk count, it should also be slower than 6 on that particular card. On top of that, there's a notable hit on usable capacity (half the total capacity would be usable, vs. n-1 or n-2 times the capacity of a single member in the set; this assumes all drives have the same capacity, otherwise it's based on the smallest member).
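A quick sketch of that usable-capacity math for 8x 2TB members (assuming equal-size drives; with mixed sizes every member counts as the smallest one):

```python
# Usable capacity for 8x 2TB members under the levels discussed above.
n, size_tb = 8, 2

capacity = {
    "RAID 5  (n-1)": (n - 1) * size_tb,                      # 14 TB
    "RAID 6  (n-2)": (n - 2) * size_tb,                      # 12 TB
    "RAID 10 (n/2)": (n // 2) * size_tb,                     # 8 TB
    "RAID 50 (two 4-disk RAID 5s)": 2 * (4 - 1) * size_tb,   # 12 TB
}
for level, tb in capacity.items():
    print(f"{level}: {tb} TB usable")
```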

Now you perhaps understand a bit more about my insistence on using Enterprise grade HDD's. ;) :p
 
Keep in mind that it's based on a UBE of 1E14, which is typical of a consumer drive.

Enterprise drives are either 1E15 or 1E16 (some SAS; I've not yet seen a SATA disk that reports this level of UBE), and IIRC you went with WD RE4 models, which are 1E15. This order of magnitude makes a difference, and you'd be OK (it becomes 120TB rather than 12TB using his example of a seven-member RAID 5 with one member already failed, i.e. the math of hitting another UBE during a rebuild), thus escaping the disaster he was pointing out based on consumer drives (which tend to have a UBE of 1E14).

Yes, 6 is safer, and if you can take the performance hit, worth doing, particularly if you want to expand the set later on (get an expander and another enclosure in order to add more disks).

As per 10, it's slower than 5, and given the disk count, it should also be slower than 6 on that particular card. On top of that, there's a notable hit on usable capacity (half the total capacity would be usable, vs. n-1 or n-2 times the capacity of a single member in the set; this assumes all drives have the same capacity, otherwise it's based on the smallest member).

Now you perhaps understand a bit more about my insistence on using Enterprise grade HDD's. ;) :p

Wow, well that makes a huge difference. Since I am currently running a 4-drive RAID 5, even if the RAID 6 is a performance hit compared to RAID 5, it should still be a significant increase compared to what I am running now, right? The new array should be running somewhere around 500-600MB/s read/write, right? I think at that point it won't matter as much, and my processor won't be able to render that fast anyway; however, I could be wrong.
 
Wow, well that makes a huge difference. Since I am currently running a 4-drive RAID 5, even if the RAID 6 is a performance hit compared to RAID 5, it should still be a significant increase compared to what I am running now, right? The new array should be running somewhere around 500-600MB/s read/write, right? I think at that point it won't matter as much, and my processor won't be able to render that fast anyway; however, I could be wrong.
By doubling up on the member count, Yes on both counts.

It will be a lot faster than a 4-member RAID 5, and should be able to hit your target of 500-600MB/s (faster is actually realistic if you are using the 2TB WD RE4's). 650MB/s+ should actually be possible with under 50% of the usable capacity filled.
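A rough way to sanity-check those numbers (a sketch; the ~110MB/s per-disk figure is an assumed sustained rate for a 2TB RE4 on its outer tracks, and real results depend on the controller, stripe size, and fill level):

```python
# Crude streaming-throughput estimate for parity RAID:
# data disks x per-disk sustained rate.
def parity_raid_estimate(members, parity_disks, per_disk_mbs=110):
    return (members - parity_disks) * per_disk_mbs

print("4x RAID 5:", parity_raid_estimate(4, 1), "MB/s")   # ~330 MB/s (current setup)
print("8x RAID 6:", parity_raid_estimate(8, 2), "MB/s")   # ~660 MB/s
print("8x RAID 5:", parity_raid_estimate(8, 1), "MB/s")   # ~770 MB/s
```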

Keep in mind that throughput decreases as you fill up the capacity.

If you're not sure which way to go, I highly urge you to install the disks and test each level using AJA before making a choice and installing real data on it (test data is fine, as it's not critical, and you can use real data, just make sure you have a master copy).

If the throughputs of 6 are acceptable for your use, then stick with it, as it is more redundant than 5. And I suspect it will be. ;)
 
So I got the drives and will be doing this at the end of the day over the weekend. I will be doing a RAID 6, just to be safe, and doing an online expansion so I don't have to worry about copying all the data back onto the new array from backup. The computer will be uninterrupted for about 64 hours; I assume that will be enough time? Even if it's not, I can still access my data Monday morning while the online expansion is running. Also, should I make sure my computer does not fall asleep while this is happening? Will that screw it up?
 
So I got the drives and will be doing this at the end of the day over the weekend. I will be doing a RAID 6, just to be safe, and doing an online expansion so I don't have to worry about copying all the data back onto the new array from backup. The computer will be uninterrupted for about 64 hours; I assume that will be enough time? Even if it's not, I can still access my data Monday morning while the online expansion is running. Also, should I make sure my computer does not fall asleep while this is happening? Will that screw it up?
Yes, turn off Sleep Settings.
 
Going from a 4x 2TB RAID 5 to an 8x 2TB RAID 6 via online expansion will take a whole lot longer than doing a normal initialization of the 8 disks in RAID 6 and then transferring the data over from your backups. My RAID 6 build took 5 hours to complete, if I remember right.
 
I'm running an Areca-based RAID 6 array now. I've had to replace (enterprise) hard drives twice. I have a 6x 2TB array, and as I recall it took over 24 hours to reinitialize the array.

I can tell you that during that whole time my worry level was high, and if I had been running RAID 5 and knew that another failure, or even a drive dropping out, would have wiped my data, I'd have been in near panic mode.

It's one thing to discuss it on a board; it's another thing to realize that all your photographs and video data are sitting on an array that is rebuilding and could fail at any moment.

RAID 6 for me (and I've got an extra drive on hand) ANY time vs. RAID 5.
 
I'm running an Areca-based RAID 6 array now. I've had to replace (enterprise) hard drives twice. I have a 6x 2TB array, and as I recall it took over 24 hours to reinitialize the array.

I can tell you that during that whole time my worry level was high, and if I had been running RAID 5 and knew that another failure, or even a drive dropping out, would have wiped my data, I'd have been in near panic mode.

It's one thing to discuss it on a board; it's another thing to realize that all your photographs and video data are sitting on an array that is rebuilding and could fail at any moment.

RAID 6 for me (and I've got an extra drive on hand) ANY time vs. RAID 5.
Huh, that's interesting that it took 24 hours. I wonder if it's related to cache memory. My Areca 1880ix-12 has only 1GB. I thought about upping it to 4GB.

I pulled two drives from my 8x2TB RAID 6, and I think it took 10 hours... five for each disk pulled. My array was half filled, 6 of 12TB. (I had one disk being iffy on cold bootup, and then pulled the wrong one when I went to replace it, thus two disks down by accident. Thankfully I was in RAID 6 and not 5!)

A full 8-disk format to RAID 6 took only 5 hours, and maybe another five to reload the data? I forget.
 