For my home media server (Music and Movies) I have 16 3TB drives split into two RAID 5 arrays. One array is used for Time Machine and the other is the media server. I run OS X Server on a MacPro 3,1.
 
For my home media server (Music and Movies) I have 16 3TB drives split into two RAID 5 arrays. One array is used for Time Machine and the other is the media server. I run OS X Server on a MacPro 3,1.

Which drives, chassis and controllers do you use?

My valuables are on four 3TB Constellation ES.3 internal to the system, with a 3ware (LSI) 9650SE with battery-backed cache. Media is on NAS drives in SanDisk PM cabinets with software RAID-5. Backups to four 2 TB Constellation ES.3 internal on the Intel chipset RAID-5.
 
For my home media server (Music and Movies) I have 16 3TB drives split into two RAID 5 arrays. One array is used for Time Machine and the other is the media server. I run OS X Server on a MacPro 3,1.

In addition to AidenShaw's questions above... So each RAID5 array is 8x3TB? Have you ever had a drive failure? What was the rebuild time like?
 
In addition to AidenShaw's questions above... So each RAID5 array is 8x3TB? Have you ever had a drive failure? What was the rebuild time like?

Or a "drive hiccup"? Not all "drive failures" are hard head crashes with loss of data.

A drive's firmware is really a small realtime multi-threaded operating system (sometimes running on multi-core controllers).

Sometimes the firmware hangs (visible to the user as the system or disk being offline until power-cycled), or it does a crash/reboot (sometimes invisible; it usually shows up as an isolated, non-reproducible I/O error).

This morning at 02:42 the 3ware 9650SE kicked Constellation ES.3 #2 out of the array due to a command timeout - and then immediately put it back into the array and started rebuilding. (For the curious - this 12TB physical (9TB usable) RAID-5 array rebuilt in 80 minutes.)

This happens every 3 or 4 months. (I have some $300K EMC arrays that do the same thing - although they'll rebuild on a hot spare and put the failed drive offline.)

The lesson to be learned is that you should pay as much attention to keeping the firmware in your drives up-to-date as you pay to app updates, OS updates, and system firmware updates.
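A quick way to audit that is to pull the firmware revision from each drive with smartmontools. This is only a sketch: it assumes smartctl is installed, the device paths are placeholders for your own disks, and the field names match smartctl's SATA/ATA output.

```python
#!/usr/bin/env python3
# Rough sketch: report the model and firmware revision of each drive so
# the versions can be checked against the vendor's current release.
# Assumes smartmontools is installed; the device paths are examples.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholders

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    info = {}
    for line in out.splitlines():
        key, sep, val = line.partition(":")
        if sep:
            info[key.strip()] = val.strip()
    print(dev,
          info.get("Device Model", "unknown model"),
          info.get("Firmware Version", "unknown firmware"))
```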
 
Nobody using ZFS?

I have a modest 6TB (4x2TB) RAIDZ setup using internal drives. ZFS feels like overkill for that few drives, but on a big setup it seems like it could be nice: you can have two or three parity disks (RAIDZ2/RAIDZ3), the snapshots are handy, and you're not tied to the hardware, since you could migrate the disks to a new machine (even a Linux box), install ZFS, and access the data. There's a ton of other features that I haven't tried (compression, dedup, copies=2, etc.).

The write speeds seem very fast compared to what I remember with the software RAID5 setups I've toyed with in the past - it was keeping up with my SATA2 SSD when moving data from the SSD to the array.

You can't boot off the ZFS array, nor Time Machine to it directly (though a sparse bundle on the array will work for Time Machine).
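For anyone who hasn't played with it, this is roughly what the parity and snapshot side looks like. It's only a sketch: the pool name, dataset, and device paths are made-up examples, it needs root and empty disks, and the stock zpool/zfs commands are just wrapped in Python here.

```python
#!/usr/bin/env python3
# Rough sketch of the ZFS features mentioned above: a double-parity
# (raidz2) pool plus a snapshot. Pool/dataset names and device paths
# are made-up examples; this needs root and real, empty disks.
import subprocess

def run(*cmd):
    subprocess.run(list(cmd), check=True)

# Six-disk pool that survives any two simultaneous drive failures.
run("zpool", "create", "tank", "raidz2",
    "/dev/disk1", "/dev/disk2", "/dev/disk3",
    "/dev/disk4", "/dev/disk5", "/dev/disk6")

# A dataset for media, a snapshot to roll back to, and a listing.
run("zfs", "create", "tank/media")
run("zfs", "snapshot", "tank/media@before-reorg")
run("zfs", "list", "-t", "snapshot")
```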
 
2 arrays here.

1. Windows 7 hardware RAID
5x3TB RAID5
Have not needed to rebuild this yet either. Once I update the firmware on the RAID controller I will be able to add 3 more disks and run it as RAID6.
Windows 7. Rsync runs every 12 hours to FreeNAS.


2. ZFS in FreeNAS
6x4TB RAID6
Used for backups of primary RAID array
Crashplan jail, VirtualBox Jail in the future.
No snapshots, deduplication, or compression.
Have not had a need to rebuild yet.

Might be slight overkill, but this one became necessary when I found out about the RAID5 write hole and that my data could be gone at any minute, plus the amount of time it took me to convert all my physical media to an iTunes-friendly format. To me, it has been 100% worth it.
 
Which drives, chassis and controllers do you use?

My valuables are on four 3TB Constellation ES.3 internal to the system, with a 3ware (LSI) 9650SE with battery-backed cache. Media is on NAS drives in SanDisk PM cabinets with software RAID-5. Backups to four 2 TB Constellation ES.3 internal on the Intel chipset RAID-5.

I use 2 HighPoint RocketRAID 642L cards, 2 Sans Digital TowerRAID TR8M Enclosures, and 16 3TB RED NAS drives.

Each enclosure's set of 8 drives is a RAID 5 array. I've not yet had a drive failure, so I haven't had to rebuild.

It would be nice to have a better-performing caching RAID card, but the above is sufficient for my needs.

I've gone back and forth a few times between FreeNAS (FreeBSD with ZFS), OS X Server with MacZFS, and OS X Server with hardware RAID5. I keep coming back to OS X Server over FreeNAS because I can use the Mac Pro for other things than just a NAS. The MacPro 3,1 that is my NAS is also responsible for running my model train layout. There's no FreeNAS jail for that :)
 
I'm curious who's using large RAID arrays (12+TB) of HDs?

What RAID level are you using? RAID5, 6, 10, ???

Have you ever had to rebuild the array? Has that been problematic?

Of course, the common wisdom is that RAID5 is dead (too risky with large arrays), but I'm curious if anyone's defying this in practice.

This is old, but I figured I'd throw my experiences out there anyway.

For a couple of years, up until a month or so ago, I was running two RAIDs in one Linux workstation:

1) 5x3TB in RAID5 (was about 11 TB usable)

2) 2x4TB in RAID1 (about 3.6 TB usable)

My RAID1 was the "archive" for the important stuff in the RAID5, plus of course I had an off-site backup. These were all Seagate drives.

Then about a month ago, I restarted with an update and got a degraded RAID warning; it was one of the drives in the RAID5, which was accumulating a lot of bad blocks, like 40K of them. So, I had a spare drive around, swapped it in, and began the rebuild. When I came in to work the next morning, IT DIDN'T WORK! I had a bit of a panic, and after trying a few things the rebuild was a no-go. One of my 4 remaining disks was starting to gather some bad blocks too. This time it was only 8, but as soon as the rebuild tried to read from those blocks, mdadm would fail the drive and the rebuild would stop.

So.... I kept the RAID up in degraded mode and copied everything I could off it to a new set of disks. Every time it read over a bad sector, it would fail that drive and the RAID would stop working. But I'd just rebuild it, then start copying again, taking note of where it was reading when the failure happened. Luckily, there were so few bad sectors that I got everything off that RAID.
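For what it's worth, the degraded check itself is simple; this sketch just looks for a missing member (an "_" in the [UUUU_] status string) in /proc/mdstat. It's only an illustration of the idea, since mdadm --monitor already does this properly and can email you when a drive fails.

```python
#!/usr/bin/env python3
# Illustration only: flag degraded md arrays by reading /proc/mdstat.
# A "_" in the member-status string (e.g. [UUUU_]) means a failed or
# missing member. mdadm --monitor does this properly and can email you.
import re

def degraded_arrays(path="/proc/mdstat"):
    degraded, current = [], None
    with open(path) as f:
        for line in f:
            name = re.match(r"^(md\d+)\s*:", line)
            if name:
                current = name.group(1)
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    print("Degraded arrays:", ", ".join(bad) if bad else "none")
```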

Now my current set up is this:

1) 3x3TB RAID5 (5.5 TB usable)

2) 4x4TB RAID10 (7.3 TB usable)

This is actually quite a bit more space than I currently need, so I have a cron job resyncing the RAID5 to the RAID10 (not everything, but the important stuff).
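For the curious, the resync can be as simple as a wrapper around rsync. This is only a sketch; the mount points and directory list are placeholders, not my real layout.

```python
#!/usr/bin/env python3
# Sketch of a resync job (meant to be called from cron). The mount
# points and directory names are placeholders.
# rsync -a --delete makes the destination an exact mirror of the source.
import subprocess

SRC = "/mnt/raid5"            # example: where the RAID5 is mounted
DST = "/mnt/raid10/mirror"    # example: mirror area on the RAID10
IMPORTANT = ["photos", "documents", "projects"]  # "the important stuff"

for name in IMPORTANT:
    subprocess.run(["rsync", "-a", "--delete",
                    f"{SRC}/{name}/", f"{DST}/{name}/"], check=True)
```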

Anyway, after all this, I still don't mind RAID5, so long as you have a trustworthy backup, or two. RAID6 has its own problems (other than just the extra cost per usable TB), mostly that twice the parity calculations can mean twice the slowdown if you're writing lots of stuff to disk (and I ran into a few jobs where I was writing thousands of small files, during which the RAID became otherwise unusable). And frankly, a lot of the risk of losing any RAID comes from problems that will affect all the drives at once, so you're going to need backups elsewhere anyway. This then leaves everything up to the user to figure out the best mix and match. Though I suppose I probably wouldn't exceed a 4-disk RAID5 again. Even if you don't lose data, it's not worth the hassle.
 
1) 5x3TB in RAID5 (was about 11 TB usable)
2) 2x4TB in RAID1 (about 3.6 TB usable)

Now my current set up is this:

1) 3x3TB RAID5 (5.5 TB usable)
2) 4x4TB RAID10 (7.3 TB usable)

I think that you're mixing binary and decimal notation, and including minor file system overhead.

5x3TB RAID5 = 12 TB usable (10.91 TiB usable)
2x4TB RAID1 = 4 TB usable (3.64 TiB usable)
3x3TB RAID5 = 6 TB usable (5.46 TiB usable)
4x4TB RAID10 = 8 TB usable (7.28 TiB usable)
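If you want to check the arithmetic, here's a quick sketch; drives are sold in decimal TB (10^12 bytes) while most tools report binary TiB (2^40 bytes), which is where the difference comes from.

```python
#!/usr/bin/env python3
# Quick check of the arithmetic: drives are sold in decimal terabytes
# (10**12 bytes) but most tools report binary tebibytes (2**40 bytes),
# which is where the "missing" space comes from.
TB, TiB = 10**12, 2**40

def usable_bytes(n_drives, drive_tb, level):
    size = n_drives * drive_tb * TB
    if level == "RAID5":
        size -= drive_tb * TB          # one drive's worth of parity
    elif level == "RAID6":
        size -= 2 * drive_tb * TB      # two drives' worth of parity
    elif level in ("RAID1", "RAID10"):
        size //= 2                     # mirrored
    return size

for n, tb, lvl in [(5, 3, "RAID5"), (2, 4, "RAID1"),
                   (3, 3, "RAID5"), (4, 4, "RAID10")]:
    s = usable_bytes(n, tb, lvl)
    print(f"{n}x{tb}TB {lvl}: {s / TB:.0f} TB usable ({s / TiB:.2f} TiB)")
```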
 
Who's using large RAID arrays of HDs?

Now my current set up is this:

1) 3x3TB RAID5 (5.5 TB usable)

2) 4x4TB RAID10 (7.3 TB usable)

This is actually quite a bit more space than I currently need, so I have a cron job resyncing the RAID5 to the RAID10 (not everything, but the important stuff).

And frankly, a lot of the risk of losing any RAID comes from problems that will affect all the drives at once, so you're going to need backups elsewhere anyway.


I'm curious why you would bother with RAID5 and RAID10 in your situation. If you have a copy on two different arrays... A pair of RAID0 arrays with one backing up the other would be simpler, faster, and cheaper, and really exposes you to no risk of loss or down time since you've essentially got a hot standby backup should a disk fail in either array. And as you yourself say, the key problem beyond a single drive failure is something affecting an entire array, which you would also have covered with a pair of RAID0 arrays (one backing up the other). RAID5 and RAID10 here are just wasting drive capacity on redundancy you don't seem to need.
 
I'm curious why you would bother with RAID5 and RAID10 in your situation. If you have a copy on two different arrays, RAID0 is faster, cheaper and really exposes you to no risk of loss or down time since you've essentially got a hot standby backup should a disk fail in either array.

Part of the reason is that I'm not particularly stressing the drives and am unwilling to move everything over to the RAID10, at least at the moment. Some of what I have on the RAID5 is also other people's stuff. It's also just a legacy thing at this point. Anyway, I don't have a great answer. I thought about RAID0 for a more or less straight scratch space, then a RAID10 for the important stuff. And I still might do that. I've only had this configuration up for about a week and a half now.

----------

I think that you're mixing binary and decimal notation, and including minor file system overhead.

5x3TB RAID5 = 12 TB usable (10.91 TiB usable)
2x4TB RAID1 = 4 TB usable (3.64 TiB usable)
3x3TB RAID5 = 6 TB usable (5.46 TiB usable)
4x4TB RAID10 = 8 TB usable (7.28 TiB usable)

Quite right. Whoops.
 
Using RAID 1 (2x3TB) in a cMP. Has been fine so far. I can easily do 4 drives since I use PCIe SSD.
 
Have you considered using OpenZFS?

I have a Mac Pro 1,1, which means Lion, and OpenZFS does not officially support 32-bit. So I have been using MacZFS for my media storage: 5x3TB.

But I decided it was very problematic in OS X for what I am using it for. Access times are very long on the pool, so trying to edit movies in iTunes or pulling up Finder can take 10-20 seconds while it figures out where the files are. That doesn't even include when the drives are spun down and have to spin back up sequentially, which adds another 15-20 seconds.

I took my old array (6x2TB) and copied everything over; I then destroyed the ZFS pool and made it an Apple Concatenated drive. OS X accesses it MUCH quicker, and this should keep the wear down on the drives. The bottleneck is the gigabit NIC, so just having the speed of one drive at a time is no big deal.
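For anyone wanting to do the same, the concatenated set is created with diskutil. This is only a sketch: the set name and disk identifiers are examples (check diskutil list for yours), and creating the set erases the member disks.

```python
#!/usr/bin/env python3
# Rough sketch of what "Apple Concatenated drive" means in practice:
# a concatenated (JBOD-style) AppleRAID set built with diskutil.
# WARNING: this erases the member disks. The set name and the disk
# identifiers are examples; check `diskutil list` for your own.
import subprocess

DISKS = ["disk2", "disk3", "disk4", "disk5", "disk6", "disk7"]  # examples

subprocess.run(["diskutil", "appleRAID", "create", "concat",
                "MediaConcat", "JHFS+", *DISKS], check=True)
```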

I will probably keep my 2TB drives as ZFS and use rsync or something to use it as a periodic backup.
 
my input...

I have a 4-disk RAID 5 array of 1TB RE4s; no problems so far. I've had it for about a year, and it's used for my media library and a tertiary backup of important files (done once an hour for some files and once daily for others).

My library is currently not enormous, so the 3TB of space is more than enough. My RAID box is an OWC rack-mount one (it was the most affordable at the time of purchase), which means that, since its RAID controller is an SoC, I can only get about 250MB/s read/write. That's still fine, as my Mac mini server on a LAG can only send about 250MB/s of data at a time anyway...

I must confess, though, that the new LaCie TB2 8-drive boxes are very tempting... (may just have to save up some $$$)
 
It's interesting reading everyone's different setups.

I've got a 16TB LaCie Quadra for my media, stock, and old projects. It defaults to RAID 5; I'm not sure if I could change it if I wanted to. My work uses tons of Drobos, and they sort of soured me on using them: the redundancy and expansion options seem to work well, but they are often slow, which computers on the network would see them was a crapshoot that changed week to week, and installing a Drobo Dashboard update somehow bricked my computer and I lost a day restoring from a backup.

I'm interested in ZFS because some of the data I'm backing up is of the non-replaceable sort, namely my photos and old animations, but even for me it just feels like too much of a hassle at this stage to implement.

Apple at some point soon is going to have to replace the filesystem, and I'm kind of holding out for that at this point. I would say I'd be waiting forever but their adoption of Swift makes me think it'll happen sooner rather than later :)
 