For my home media server (Music and Movies) I have 16 3TB drives split into two RAID 5 arrays. One array is used for Time Machine and the other is the media server. I run OS X Server on a MacPro 3,1.
In addition to AidenShaw's questions above... So each RAID5 array is 8x3TB? Have you ever had a drive failure? What was the rebuild time like?
Which drives, chassis and controllers do you use?
My valuables are on four 3TB Constellation ES.3 drives internal to the system, with a 3ware (LSI) 9650SE with battery-backed cache. Media is on NAS drives in SanDisk PM cabinets with software RAID-5. Backups go to four 2TB Constellation ES.3 drives, also internal, on the Intel chipset RAID-5.
I'm curious who's using large RAID arrays (12+TB) of HDs?
What RAID level are you using? RAID5, 6, 10, ???
Have you ever had to rebuild the array? Has that been problematic?
Of course, the common wisdom is that RAID5 is dead (too risky with large arrays), but I'm curious if anyone's defying this in practice.
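(For anyone wondering where that common wisdom comes from, here's the usual back-of-the-envelope version. It's only a sketch: it assumes the often-quoted consumer-drive spec of one unrecoverable read error per 10^14 bits, a hypothetical 8x3TB RAID5, and independent errors, which real drives don't strictly give you.)

```python
import math

# Hypothetical array: 8 x 3 TB drives in RAID5, one drive failed.
# A rebuild has to read every bit on the 7 surviving drives.
surviving_drives = 7
bits_per_drive = 3e12 * 8          # 3 TB = 3e12 bytes = 2.4e13 bits
bits_to_read = surviving_drives * bits_per_drive

# Often-quoted consumer-drive spec: 1 unrecoverable read error per 1e14 bits.
ure_rate = 1e-14

expected_errors = bits_to_read * ure_rate
# Treating errors as independent (a simplification), the chance of at
# least one URE during the rebuild is:
p_at_least_one = 1 - math.exp(-expected_errors)

print(f"bits to read during rebuild: {bits_to_read:.2e}")
print(f"expected UREs:               {expected_errors:.2f}")
print(f"P(at least one URE):         {p_at_least_one:.0%}")
```

With those numbers the rebuild is more likely than not to trip over an unreadable sector, which is the whole "RAID5 is dead" argument; drives rated at 1 per 10^15 make the same sum look a lot friendlier.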
This is old, but I figured I'd throw my experiences out there anyway.
For a couple of years, up until a month or so ago, I was running two RAIDs in one Linux workstation:
1) 5x3TB in RAID5 (was about 11 TB usable)
2) 2x4TB in RAID1 (about 3.6 TB usable)
My RAID1 was the "archive" for the important stuff on the RAID5, plus of course I had an off-site backup. These were all Seagate drives.
Then, about a month ago, I rebooted for an update and got a degraded-RAID warning; it was one of the drives in the RAID5, which had accumulated a lot of bad blocks, something like 40K of them. I had a spare drive around, so I swapped it in and began the rebuild. When I came in to work the next morning, IT DIDN'T WORK! I panicked a bit, and after trying a few things the rebuild was a no-go. One of my 4 remaining disks was starting to gather some bad blocks too. This time it was only 8, but as soon as it tried to read from those blocks, mdadm would fail the drive and the rebuild would stop.
So... I kept the RAID up in degraded mode and copied everything I could off it to a new set of disks. Every time it read over a bad sector, mdadm would fail that drive and the RAID would stop working. But I'd just bring the array back up, then start copying again, taking note of where it had been reading when the failure happened. Luckily, there were so few bad sectors that I got everything off that RAID.
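For what it's worth, the "copy and take note of where it failed" part is easy to script. Here's a rough sketch of what I mean; the mount points and log file name are made up, and it assumes the degraded array is mounted read-only. Since a failed read took my whole array offline, in practice you re-assemble, re-run, and the already-copied check lets each pass pick up where the last one stopped.

```python
import os
import shutil

SRC = "/mnt/raid5"            # degraded array, mounted read-only (hypothetical path)
DST = "/mnt/new"              # the new set of disks (hypothetical path)
FAILED_LOG = "failed_files.txt"

failed = []
for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    os.makedirs(os.path.join(DST, rel), exist_ok=True)
    for name in files:
        src_path = os.path.join(root, name)
        dst_path = os.path.join(DST, rel, name)
        if os.path.exists(dst_path):
            continue          # already copied on a previous pass
        try:
            shutil.copy2(src_path, dst_path)
        except OSError as exc:
            # A read over a bad sector shows up here as an I/O error;
            # drop any partial copy and note the file so I know what was hit.
            if os.path.exists(dst_path):
                os.remove(dst_path)
            failed.append(f"{src_path}: {exc}")

with open(FAILED_LOG, "w") as log:
    log.write("\n".join(failed) + "\n")

print(f"{len(failed)} files failed; see {FAILED_LOG}")
```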
Now my current set up is this:
1) 3x3TB RAID5 (5.5 TB usable)
2) 4x4TB RAID10 (7.3 TB usable)
This is actually quite a bit more space than I currently need, so I have a cron job resyncing the RAID5 to the RAID10 (not everything, but the important stuff).
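The cron job itself is nothing fancy; something along these lines does it (the paths and directory names are placeholders, and it just leans on rsync -a --delete to keep the mirror exact):

```python
#!/usr/bin/env python3
"""Mirror the important directories from the RAID5 to the RAID10.

Meant to be run from cron (e.g. nightly); the paths below are placeholders.
"""
import subprocess

SRC_BASE = "/mnt/raid5"
DST_BASE = "/mnt/raid10/mirror"
IMPORTANT = ["photos", "documents", "projects"]   # "the important stuff"

for d in IMPORTANT:
    # -a preserves permissions/ownership/times; --delete drops anything
    # that no longer exists on the source so the mirror stays exact.
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"{SRC_BASE}/{d}/", f"{DST_BASE}/{d}/"],
        check=True,
    )
```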
Anyway, after all this, I still don't mind RAID5, so long as you have a trustworthy backup, or two. RAID6 has its own problems (beyond just the extra cost per usable TB), mostly that twice the parity calculations can mean twice the slowdown if you're writing lots of stuff to disk (and I ran into a few jobs where I was writing thousands of small files, during which the RAID became otherwise unusable). And frankly, a lot of the risk of losing any RAID comes from problems that will affect all the drives at once, so you're going to need backups elsewhere anyway. That leaves it up to the user to figure out the best mix and match. Though I suppose I probably wouldn't exceed a 4-disk RAID5 again. Even if you don't lose data, it's not worth the hassle.
I'm curious why you would bother with RAID5 and RAID10 in your situation. If you have a copy on two different arrays, RAID0 is faster and cheaper, and really exposes you to no risk of loss or downtime, since you've essentially got a hot-standby backup should a disk fail in either array.
I think that you're mixing binary and decimal notation, and including minor file system overhead.
5x3TB RAID5 = 12 TB usable (10.9 TiB usable)
2x4TB RAID1 = 4 TB usable (3.64 TiB usable)
3x3TB RAID5 = 6 TB usable (5.46 TiB usable)
4x4TB RAID10 = 8 TB usable (7.28 TiB usable)
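If anyone wants to check the arithmetic, it's just "usable drives × decimal capacity", then divide by 2^40 for the TiB figure that most OS tools report:

```python
TB = 1e12        # drive makers' decimal terabyte
TiB = 2 ** 40    # binary tebibyte, what most OS tools actually show

def raid5_usable(n, size_tb):
    return (n - 1) * size_tb * TB    # one drive's worth of capacity goes to parity

def raid1_usable(size_tb):
    return size_tb * TB              # a mirror's usable space is one drive

def raid10_usable(n, size_tb):
    return (n // 2) * size_tb * TB   # striped two-way mirrors

arrays = [
    ("5x3TB RAID5",  raid5_usable(5, 3)),
    ("2x4TB RAID1",  raid1_usable(4)),
    ("3x3TB RAID5",  raid5_usable(3, 3)),
    ("4x4TB RAID10", raid10_usable(4, 4)),
]
for label, usable in arrays:
    print(f"{label}: {usable / TB:.0f} TB = {usable / TiB:.2f} TiB")
```

Those TiB numbers are what line up with the ~11 / 3.6 / 5.5 / 7.3 figures quoted above, which is exactly the decimal-versus-binary mix-up.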
Have you considered using OpenZFS?