
VirtualRain

macrumors 603
Original poster
Aug 1, 2008
6,304
118
Vancouver, BC
I'm curious who's using large RAID arrays (12+TB) of HDs?

What RAID level are you using? RAID5, 6, 10, ???

Have you ever had to rebuild the array? Has that been problematic?

Of course, the common wisdom is that RAID5 is dead (too risky with large arrays), but I'm curious if anyone's defying this in practice.

In my case, my media collection has outgrown my current collection of JBOD drives, and it would be a lot of effort to rebuild if I lost a disk, so I'm going to implement an external RAID array for my Mac Mini Plex media center. I think I'm pretty much settled on RAID10 due to the issues with parity RAID on large arrays, but I'm interested in others' experiences.
 
I have a 16TB set of disks, 8x2TB WD RE-4 (WD2003FYYS) in RAID 6, giving me 12TB of space.

I've had to rebuild the array three times in the last five years, and it takes about five hours each time. That is with about 7TB of data in the array.

Each rebuild was due to a disk dropping out of the RAID. Each time, I've gone online to Western Digital's support website, and set up an Advance RMA, which has been really nice. I just wait a few days for the new replacement drive to show up, then swap out the bad for the good, and send the bad one back in their box. The enterprise disks are 5-year warranty, so I've not had to pay for any of these replacements yet. I just had a disk drop from the RAID yesterday, in fact, and the new one is already being shipped, so I'll have rebuild #4 to do soon! :p

I get 715-815MB/sec read/write speeds in RAID 6 with an Areca 1880ix-12. I like that any two disks can fail without any loss of data. Once, I mistakenly pulled the wrong drive from the array in a failure, and I was very happy that the array was still intact! I have regular backups to rebuild it with, but still, it was nice not having to use the backups.

I added four more drive slots to my array for a total of 12 slots, but have not yet added any new disks to the slots. I'm wondering how fast it would be with 12x2TB... mmm!
 

Attachments

  • R6 16GB disab.png

Interesting... thanks. How old are the drives that are dropping out? Are they all about the same age but failing at very different times, or are they a variety of ages? Any conclusions about the longevity of drives in an array that runs 24/7? Are the dropped drives toast, or just flaky in RAID?

I need to do some more reading, but maybe someone will know the answer to this... does the risk of a URE increase with the size of the array or the size of the drives in the array? In other words, is a 16TB array of 2TB drives less risky to rebuild than the same array made up of 4TB drives?
 
My understanding is that the risk increases with the size of the drives.

None of my drive failures were toast, just flaky in the RAID. They were varying ages, and the one that failed yesterday was one of the oldest ones I bought, so I think about 4½ years old... however, I recall one failed after only a year of use.

My conclusion is that drives last longer when the RAID is running 24/7 than when it is turned on and off. It seems like all my failures have occurred after a long shutdown period, such as after a month of travel. This is only my personal observation, and should not be considered fact.
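On the URE question above, some back-of-envelope arithmetic helps. This is only a sketch, assuming the common consumer-drive rating of one unrecoverable read error per 10^14 bits (enterprise drives like the RE-4 are rated 10^15) and a RAID5-style rebuild that reads every surviving drive end to end; the helper name is purely illustrative:

```python
# Back-of-envelope odds of hitting an unrecoverable read error (URE)
# during a parity-RAID rebuild. Assumes the common consumer spec of
# one URE per 1e14 bits read; enterprise drives are typically 1e15.

def rebuild_ure_probability(surviving_drives, drive_tb, ure_per_bit=1e-14):
    """Chance of at least one URE while reading every surviving
    drive end to end (a RAID5-style rebuild)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ure_per_bit) ** bits_read

# Same 16TB of raw capacity, two drive sizes:
print(rebuild_ure_probability(7, 2))  # 8x2TB RAID5, 7 survivors read
print(rebuild_ure_probability(3, 4))  # 4x4TB RAID5, 3 survivors read
```

With these assumptions, both cases come out above 60%, and the risk is driven almost entirely by total bits read. Larger drives mostly hurt because arrays built from them hold more data per rebuild; a 10^15 enterprise rating cuts the exposure by roughly 10x.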
 

That is a valid observation in my experience. I worked for a large aerospace firm which had hundreds of engineering workstations that ran 24/7 and rarely had a disk drive failure. However, during holidays the building maintenance crews would require a total shutdown of the plant electrical systems (except for test and burn-in rooms) to do facilities work, so we performed an orderly shutdown of the workstation systems and network prior to the electrical outage. Upon return, when we brought the systems back up, we would see a large number of failed drives which had to be replaced, usually from bearing failures or "stiction" issues with the drive heads/platters. This was a recurring pattern every shutdown.
 

Hi

I used to run an 8-disk (1TB RE3s) RAID6 array in a ProAvio box, via an Areca RAID card (MP 3,1 and 5,1), and for a while it was glitchy, which was very worrying whilst it was rebuilding, though it always rebuilt without issue. I tried everything until I finally found the solution, which was to swap the 8088 cables round (!). After that, no problems for the next couple of years.

I've now gone to an Areca 8050T2 box that effectively has the RAID card in the box, connected to the nMP via TB2. I'm running that with 2TB HGST drives, making a 12TB volume, after reading the blog post from Backblaze... HGST did much better than the other brands they'd bought.

So far so good. I too had read about RAID 5 being too much of a strain during a rebuild, and thought I should err on the side of caution.

I do love the RAID. I know it's not a backup plan, just a way to mitigate a drive failure, so I have a cheap JBOD enclosure to back up the RAID, and try to keep an offsite backup too!

Good luck with it!
Cheers :)
 
14.4 TB in RAID-50 with hot spares

Fifteen 1.2TB 2.5" Seagate Enterprise SAS drives in RAID-50 with multiple hot spares. Dual controllers on dual 16Gbps Fibre Channel with dual 4 GiB flash-backed write cache.
 

Attachments

  • esx-r50.jpg
I thought you might have something to add to this thread :)

RAID50 is obviously more resilient than RAID5, but it requires a lot of drives. It's also not as simple a rebuild as RAID10... correct?

What would you recommend for a media library storage solution?

----------

I've now gone to an Areca 8050T2 box that effectively has the RAID card in the box, connected to the nMP via TB2. I'm running that with 2TB HGST drives, making a 12TB volume, after reading the blog post from Backblaze... HGST did much better than the other brands they'd bought.

So your current system is also RAID6? Have you had to do any rebuilds on this setup? If so, how long did it take?

----------

EDIT: from what I continue to read, RAID10 is definitely the way to go... It's a bit more expensive in number of drives than RAID5 or 6 but it's a lot simpler...

Here's a good summary of the rebuild advantage of RAID10 over parity RAID...

Mirrored RAID, like RAID 10, has three advantages for rebuild performance:

1. Rebuilds are isolated to a single mirrored pair, not the entire array. So the largest rebuild is only ever the size of a single drive.
2. Rebuilds have a write factor of one, so the performance is excellent.
3. No parity recalc needed on rebuild, it is just a copy. So the controller can move at maximum speed.

Parity RAID has to rebuild the entire array. So the bigger the array and the slower the disks, the slower it goes.
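A rough sketch of what that means for rebuild duration, assuming ~150 MB/s sustained per drive. The function names and the 60 MB/s parity-engine cap are made-up illustrations, not figures from this thread:

```python
# Rough rebuild-duration comparison, mirrored vs parity RAID.
# Assumes ~150 MB/s sustained per drive; the 60 MB/s parity-engine
# cap below is a made-up illustration of a slow controller.

def mirror_rebuild_hours(drive_tb, drive_mbps=150):
    # RAID10: copy one surviving mirror onto the replacement.
    return drive_tb * 1e6 / drive_mbps / 3600

def parity_rebuild_hours(drive_tb, drive_mbps=150, parity_mbps=None):
    # Parity RAID: every surviving drive is read end to end in
    # parallel, so the slower of drive speed and parity engine wins.
    effective = min(drive_mbps, parity_mbps or drive_mbps)
    return drive_tb * 1e6 / effective / 3600

print(mirror_rebuild_hours(4))                    # ~7.4 hours
print(parity_rebuild_hours(4, parity_mbps=60))    # ~18.5 hours
```

With a hardware controller whose parity engine keeps up with the disks, the wall-clock times converge; the gap opens up when parity computation or a busy array throttles the rebuild.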
 
RAID50 is obviously more resilient than RAID5, but it requires a lot of drives. It's also not as simple a rebuild as RAID10... correct?

The array has three 5 drive RAID-5 arrays in a three-way stripe. If a disk fails (or has a severe "hiccup"), it will be thrown out of its 5 drive set, and a hot spare will immediately be grabbed to start the rebuild. It would be a standard RAID-5 rebuild - meaning that the 4 surviving drives would be read completely and the new drive would be completely rewritten.

The controller has an independent 6 Gbps SAS channel to each drive (288 Gbps bandwidth in all), so all 4 reads could be done in parallel - meaning that the time to rebuild would be the same as RAID-10.

The main advantages of RAID-50 are better write performance and less chance of a failed rebuild, at a small loss of space compared to 15 drives in RAID-5.

Better write performance because a random small write only requires reading 3 drives and writing 2 - vs. reading 13 drives and writing 2. Less chance of a failure because you're only reading 3 drives instead of 13.
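The parity bookkeeping behind those read/write counts is just XOR. A minimal sketch with toy 4-bit blocks (nothing from the actual controller) showing the read-modify-write shortcut for a small write:

```python
# Toy sketch of RAID-5 parity math: parity is the XOR of the data
# blocks, so a small write can be done as a read-modify-write
# (P' = P ^ D_old ^ D_new) without touching the other data drives.
import functools
import operator

def xor_parity(blocks):
    return functools.reduce(operator.xor, blocks)

data = [0b1010, 0b0110, 0b0001]   # three 4-bit data blocks, one stripe
parity = xor_parity(data)

# Small write to block 0: read old data + old parity, write new
# data + new parity.
old, new = data[0], 0b1111
parity = parity ^ old ^ new
data[0] = new

assert parity == xor_parity(data)  # matches a full-stripe recompute
```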


What would you recommend for a media library storage solution?

Well, that HP MSA2040 that I'm using is very good, but the $33,750.00 price tag is probably outside your planned budget. ;)

For my home system, I keep media on a 4-drive RAID-5 (using RAID edition drives), and copy it nightly to a 3-drive RAID-0 (using robocopy, a utility similar to rsync). The nightly copy only adds or updates files on the mirror - if I accidentally delete something, the nightly copy won't delete the mirror copy.

There are no backups per se - because media files can be recreated, re-downloaded, re-ripped or whatever. I'm protected from double-disk failures on the RAID-5 copy, and accidental deletions.

The rest of the files on the home server are also on RAID-5. They're backed up twice a day to a RAID-5 backup drive using a backup system with incrementals, deduplication and point-in-time restore capabilities.
 
Interesting... Why RAID5 with a RAID0 backup? If you were doing it over, would you consider RAID10, if not why not?
 
RAID 1+0

That is my setup. I have an Areca 1882ix with six 3TB drives in an 8-disk enclosure. One of those drives is a hot spare, so I ended up with just about 10TB of drive space. I know it is a lot of waste, but at least if one of those drives fails, I will have time to go to the store and get another one to replace the failed drive.
And interestingly enough, I did have another RAID 1+0 Promise R6 enclosure fail after a brownout. It was bad. 4 out of the 4 drives showed as dead. I was able to force 3 of the 4 back to life, so I will be able to rebuild the array, but the observation that power outages, intentional or not, are bad for these arrays really holds water in my unscientific experience.:eek:
 
I have a 16TB set of disks, 8x2TB WD RE-4 (WD2003FYYS) in RAID 6, giving me 12TB of space.

I've had to rebuild the array three times in the last five years, and it takes about five hours each time. That is with about 7TB of data in the array.

Hardware RAID controllers are file-system agnostic - they will rebuild all 16 TB blissfully unaware that some of the blocks are free space.

The software RAID systems that I've used do the same. They don't track filesystem usage (especially for a volume that's actively being modified while being rebuilt), and many filesystems and partitioning schemes reserve metadata pools in unpartitioned space. If the RAID system doesn't rebuild every single byte, corruption can occur.

----------

Interesting... Why RAID5 with a RAID0 backup? If you were doing it over, would you consider RAID10, if not why not?

If money (and power and air conditioning and concern for the health of the planet) were not a concern - everything would be on hardware RAID-60.

However, as I said, "media files can be recreated, re-downloaded, re-ripped or whatever". In the unlikely event that the RAID-5 array suffers dual drive failures at the same time that the RAID-0 has a single failure, I'll need to go back to sources and rebuild the media library.

And, of course, no RAID configuration is a backup solution. RAID protects from the failure of one (or two with RAID-6/60) drives. Backups protects from drive failure plus software and wetware failures. RAID can't restore a file that was mistakenly deleted - but backups can.

My photos, on the other hand, are on RAID-5 with twice daily backups to a different RAID-5 array.

----------

...but the observation that power outages, intentional or not, are bad for these arrays, really holds water in my unscientific experience.:eek:

My home office has three 1500VA UPS systems. Did you not have UPS systems configured to automatically shut down your systems when the UPS drops to 15 minutes of runtime?
 
I'm curious who's using large RAID arrays (12+TB) of HDs?

In my case, my media collection has outgrown my current collection of JBOD drives, and it would be a lot of effort to rebuild if I lost a disk, so I'm going to implement an external RAID array for my Mac Mini Plex media center. I think I'm pretty much settled on RAID10 due to the issues with parity RAID on large arrays, but I'm interested in others' experiences.

This isn't that big a deal anymore. Particularly with 6TB drives selling for less than $300.

I'm using Synology:

http://www.synology.com/en-us/products/overview/DS1813+

They have their own RAID (SHR) that's pretty close to what Drobo uses, but it's a touch less flexible:

https://www.synology.com/en-us/support/tutorials/492
https://www.synology.com/en-us/support/tutorials/512#t2
https://www.synology.com/en-us/support/tutorials/512

You can add/swap in drives, but the drives must always be as big or bigger than the current set of drives. Realistically, that's not really a limitation. Anyway, slap in 3 6TB drives, and you're in a good place. An amazing array of features. An app store so you can serve so many things from the box.
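A rough sketch of how single-redundancy SHR sizing works out: the usual rule of thumb is "total capacity minus the largest drive". Real SHR carves drives into chunks rather than computing this directly, and `shr_usable_tb` is just an illustrative helper:

```python
# Rule-of-thumb for single-redundancy SHR capacity: usable space is
# roughly total capacity minus the largest drive. Real SHR carves
# drives into chunks; this helper is just an illustration.

def shr_usable_tb(drive_sizes_tb):
    return sum(drive_sizes_tb) - max(drive_sizes_tb)

print(shr_usable_tb([6, 6, 6]))   # 12 TB from three 6TB drives
print(shr_usable_tb([2, 2, 6]))   # 4 TB: half the 6TB drive sits idle
```

The second case shows why adding a much larger drive to a small set doesn't pay off until its peers catch up in size.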
 
Sorry... what are you referring that's not a "big deal anymore"?

BTW, I looked at Synology also, but it's almost the same cost as a Mac Mini and an external enclosure... (and I already have a Mac Mini for my Plex/NAS needs) and the Mac Mini comes with an App Store too :p :D

----------

If money (and power and air conditioning and concern for the health of the planet) were not a concern - everything would be on hardware RAID-60.

However, as I said, "media files can be recreated, re-downloaded, re-ripped or whatever". In the unlikely event that the RAID-5 array suffers dual drive failures at the same time that the RAID-0 has a single failure, I'll need to go back to sources and rebuild the media library.

And, of course, no RAID configuration is a backup solution. RAID protects from the failure of one (or two with RAID-6/60) drives. Backups protects from drive failure plus software and wetware failures. RAID can't restore a file that was mistakenly deleted - but backups can.

My photos, on the other hand, are on RAID-5 with twice daily backups to a different RAID-5 array.



Interesting... you still use a lot of parity RAID.

All the reading I've done seems to indicate RAID10 is the path forward these days (esp. with 4 and 5TB drives). It's only incrementally more expensive in small arrays (it only requires an extra drive or two) and it's a lot simpler.
 

I mean that since you can get a single 6TB drive, that's a lot of storage, and for many it kills the need for a RAID altogether.

Also, the Drobo RAID format and SHR are, IMO, more robust than the crud that is RAID5/6. They are significantly faster, and let up to 2 drives fail. Just significantly better solutions.

True, it costs, but they are way more robust than a Mac mini at running a RAID/NAS. Further, the apps on Synology are not Numbers/Pages; they are server-based apps, like running your own DNS server, antivirus, VNC, FTP, media servers, photo servers, iTunes servers.

I also have a Mac mini, now collecting dust, but it's not really comparable to the Synology, at least for my purposes. It will depend on your usage.

Good luck with it.
 
So your current system is also RAID6? Have you had to do any rebuilds on this setup? If so, how long did it take?

----------

EDIT: from what I continue to read, RAID10 is definitely the way to go... It's a bit more expensive in number of drives than RAID5 or 6 but it's a lot simpler...

Yup, RAID 6. No rebuilds so far. The previous system had lots of rebuilds, but every single one was down to the 8088 cables. The Areca system seems very stable - and they have great email support. Each rebuild was about 12 hours long. You can set the system so that you can keep working with the RAID while it's rebuilding.

RAID 10 looks interesting, but for me I need a large disk volume, and the mirroring aspect of RAID 10 would decrease it too much. Thanks for the tip though.
 
That is my set up. I have an ARECA 1882ix with six 3TB drives on a 8 disk enclosure. One of those drives is a hot spare so I ended up with just about 10TB of drive space. I know it is a lot of waste, but at least if one of those drives fail, I will have time to go to the store and get another one to replace the failed drive.
And interestingly enough, I did have another RAID 1+0 Promise R6 enclosure fail after a 'brownie' power outage. It was bad. 4 out of the 4 drives showed as dead. I was able to force 3 out of the 4 back to life, so I will be able to rebuild the array, but the observation that power outages, intentional or not, are bad for these arrays, really holds water in my unscientific experience.:eek:

Get a UPS? I have 2x 5-drive NAS units, each with a UPS. It has been very useful when the incompetents dug up the street and broke the power line 3 times in the same day.
 
I am planning to get a Pegasus2 R4 and run it in RAID6. Everything will be backed up to an external tape-based backup system, but it would be good to have maximal reliability in-house as well (because dealing with those external services is always a mess). Does anyone have experience with this setup? Or would you recommend something different? The RAID array should be external and should work with the new Mac Pro. There won't be too much traffic; it's mostly for archiving data, although the computer will host a number of large databases (I am unsure yet whether they should be put on the main SSD or on the RAID array). I'd be grateful for any suggestions :)
 
Interesting... you still use a lot of parity RAID.

All the reading I've done seems to indicate RAID10 is the path forward these days (esp. with 4 and 5TB drives). It's only incrementally more expensive in small arrays (it only requires an extra drive or two) and it's a lot simpler.

I use RAID-1(0) for small arrays (e.g. for system drives) and loads that have a high random write ratio.

For capacity, always 5/6/50/60. For the 18TB (14.4 usable) array that I posted, I'd need 24 disks (at $900 each) instead of 15. That's the entire chassis. I'd need to buy an expansion chassis (up to seven 25-disk expanders at $3.4k empty can be chained from the controller chassis) for the hot spares. That's quite an increment to the cost.

Much of the cost increase for RAID-10 comes when the disk count forces you to get more or larger cabinets, and controllers with more ports. Except for small arrays those costs are significant.
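The disk-count arithmetic above can be made concrete as a quick sketch (1.2TB drives, 14.4TB usable; helper names are illustrative):

```python
# Disk-count arithmetic behind the cost comparison above
# (1.2TB drives, 14.4TB usable; helper names are illustrative).
import math

def raid10_disks(usable_gb, drive_gb):
    # Every data drive needs a mirror partner.
    return 2 * math.ceil(usable_gb / drive_gb)

def raid50_disks(usable_gb, drive_gb, raid5_sets):
    # One drive per RAID-5 set is consumed by parity.
    return math.ceil(usable_gb / drive_gb) + raid5_sets

print(raid10_disks(14400, 1200))      # 24 drives
print(raid50_disks(14400, 1200, 3))   # 15 drives
# At $900 per drive, that's $21,600 vs $13,500 before chassis costs.
```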
 
I am planning to get a Pegasus2 R4 and run it in RAID6. Everything will be backed up to an external tape-based backup system, but it would be good to have maximal reliability in-house as well (because dealing with those external services is always a mess). Does anyone have experience with this setup? Or would you recommend something different? The RAID array should be external and should work with the new Mac Pro. There won't be too much traffic; it's mostly for archiving data, although the computer will host a number of large databases (I am unsure yet whether they should be put on the main SSD or on the RAID array). I'd be grateful for any suggestions :)

RAID6 is a very odd choice in this situation. If I'm not mistaken, you have 4 bays? RAID6 will consume two drives for parity, leaving you with two drives worth of capacity. RAID10 offers the same capacity, better write performance, somewhat similar protection from drive failure, but is simpler and will rebuild a helluva lot faster if a drive is replaced.

All that said, if you have a backup, you might as well run RAID0 for the ultimate in capacity and performance, as long as you can afford the downtime to restore in the event of a drive failure. Surprisingly, restoring from a backup probably takes less time than a large RAID6 array would take to rebuild.

I use RAID-1(0) for small arrays (e.g. for system drives) and loads that have a high random write ratio.

For capacity, always 5/6/50/60. For the 18TB (14.4 usable) array that I posted, I'd need 24 disks (at $900 each) instead of 15. That's the entire chassis. I'd need to buy an expansion chassis (up to seven 25-disk expanders at $3.4k empty can be chained from the controller chassis) for the hot spares. That's quite an increment to the cost.

Much of the cost increase for RAID-10 comes when the disk count forces you to get more or larger cabinets, and controllers with more ports. Except for small arrays those costs are significant.

I see... Makes sense.

In my case, I have purchased a 5-bay enclosure along with four 5TB Seagate drives. This will give me 10TB in RAID10. For slightly more, I could have purchased five 4TB RAID drives and run RAID6 for 12TB, but the complexity and super-long rebuild times didn't appeal to me for an extra 2TB of storage. I discounted RAID5 as I will not be running backups (media files), and everything I read said a 12-16TB RAID5 would likely not rebuild without a URE... which prompted this thread. It doesn't seem like anyone here is running RAID5 with 4TB+ consumer drives.
 
RAID6 is a very odd choice in this situation. If I'm not mistaken, you have 4 bays? RAID6 will consume two drives for parity, leaving you with two drives worth of capacity. RAID10 offers the same capacity, better write performance, somewhat similar protection from drive failure, but is simpler and will rebuild a helluva lot faster if a drive is replaced.

Oops, sorry, that was supposed to be an R8, with 8 bays. I am thinking of RAID6 instead of RAID10 because the capacity hit with RAID10 is quite big and I don't think it has any additional properties which would make it more attractive...
 
Just a simple related question for you experts here about RAID-10.

Is there any advantage, when using the Apple Disk Utility soft RAID, to the order in which you build the array? Is it better to build a pair of RAID-1 arrays and then stripe them as RAID-0, or the reverse?

I am using a Pegasus J4 (four HGST 1TB 7200rpm 2.5" drives) with a Mac Mini as a media server, and want to put my iTunes movie/music library on the Pegasus. I do, of course, have the library backed up multiple times for safety; it is just low-maintenance availability I was thinking of here... possibly a gross overkill in the long run.

Thanks for any suggestions...


-howard
 
Oops, sorry, that was supposed to be an R8, with 8 bays. I am thinking of RAID6 instead of RAID10 because the capacity hit with RAID10 is quite big and I don't think it has any additional properties which would make it more attractive...

Ah, I see. That makes more sense.

Just a simple related question for you experts here about RAID-10.

Is there any advantage, when using the Apple Disk Utility soft RAID, to the order in which you build the array? Is it better to build a pair of RAID-1 arrays and then stripe them as RAID-0, or the reverse?

I am using a Pegasus J4 (four HGST 1TB 7200rpm 2.5" drives) with a Mac Mini as a media server, and want to put my iTunes movie/music library on the Pegasus. I do, of course, have the library backed up multiple times for safety; it is just low-maintenance availability I was thinking of here... possibly a gross overkill in the long run.

Thanks for any suggestions...


-howard

Yes... You want to mirror pairs of drives first, and then stripe.

Instructions here... http://pietrzyk.us/raid-10-using-mac-disk-utility/

The key advantage of RAID10 vs RAID01 is that RAID10 will rebuild quicker in the event of a drive failure... the new drive simply needs to be restored from its mirror. In RAID01, the entire striped set that the new disk is part of needs to be rebuilt. Apparently, there are also advantages to RAID10 in terms of the probability of surviving two-disk failure scenarios and performance when degraded, but I don't believe those differences are as significant as the rebuild advantage.
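The two-disk-failure point can be made concrete for a 4-drive set with a toy calculation (illustrative helpers; a uniformly random second failure is assumed):

```python
# Toy odds that a second random drive failure kills a 4-drive array:
# RAID10 (stripe of mirrors) vs RAID01 (mirror of stripes).
# Assumes the second failure hits a uniformly random surviving drive.

def second_failure_fatal_raid10(pairs):
    # Fatal only if it hits the dead drive's mirror partner.
    return 1 / (2 * pairs - 1)

def second_failure_fatal_raid01(stripe_size):
    # The first failure already killed one whole stripe set, so any
    # drive in the surviving set is fatal.
    return stripe_size / (2 * stripe_size - 1)

print(second_failure_fatal_raid10(2))   # 1/3 for four drives
print(second_failure_fatal_raid01(2))   # 2/3 for four drives
```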
 
UPS

Get a UPS? I have 2x 5-drive NAS units, each with a UPS. It has been very useful when the incompetents dug up the street and broke the power line 3 times in the same day.

Exactly. Perhaps the most important thing after setting up the RAID array. Live and learn -- I will be buying a huge one for my office server. I lost 10 years of memories (priceless) at home -- but losing financial and patient data from my business? NO WAY!!!!!! :p
 
While I wait for my enclosure to show up in the mail, I've been thinking that maybe the better way to go is just a RAID0 array of my 4 new HDs, with periodic (weekly?) backups to some JBOD drives. Since this is a media library, downtime isn't a huge concern, so maybe RAID10 is just throwing money at uptime (in the event of a drive failure) that I don't really need.

Pros to RAID0 vs RAID10
- Twice the capacity
- Twice the performance

Cons
- Requires manual periodic backups to some JBOD drives
- Need to recopy everything in the event of a drive failure
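A quick sanity check on the "twice the capacity, twice the performance" framing, assuming four identical drives; the 5TB size and ~150 MB/s figure are illustrative assumptions only:

```python
# Sanity check on "twice the capacity, twice the performance" for a
# 4-drive RAID0 vs RAID10; 5TB drives at ~150 MB/s are assumptions.

def raid0(drives, drive_tb=5, drive_mbps=150):
    return {"capacity_tb": drives * drive_tb,
            "write_mbps": drives * drive_mbps}

def raid10(drives, drive_tb=5, drive_mbps=150):
    # Mirroring halves both usable capacity and write throughput.
    return {"capacity_tb": drives * drive_tb / 2,
            "write_mbps": drives * drive_mbps / 2}

print(raid0(4))    # 20 TB, 600 MB/s
print(raid10(4))   # 10 TB, 300 MB/s
```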
 