
cbt3

macrumors member
Original poster
Dec 14, 2011
97
0
I have some RAID 0s set up that I back up constantly, but I am running out of space and need a new storage solution.

I want the stability of RAID 5, but I also want RAID 0 speeds.

I was thinking that if I had two external RAID 5s on eSATA and then put them in a RAID 0 through software RAID, that would be really fast (right?), and I would still have one huge drive for all my stuff, with redundancy. Does that sound like a bad idea?

Or should I say, would that be stable and fast? And either way, what products would you recommend? I am on a 2008 Mac Pro and have a 4-port eSATA card in my PCIe slot (I believe SATA II; I would absolutely get a SATA III card if it's not too expensive). I don't have a specific budget, but I need to give multiple options to my higher-ups and they will decide how much money I get for this.

Any help is greatly appreciated!


ps:
also I am not looking for anything rackmounted or networked
and the final array should be 6-10TB
 
Pretty sure you can't combine 2 raid 5 arrays into 1 Raid 0 array. That would completely defeat the purpose of a Raid 5 array.

Raid 5 is almost the same speed as Raid 0 for reads and you have the added security of redundancy.
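A quick back-of-the-envelope model makes the comparison concrete. This is a sketch only: the per-disk rate is an assumed figure, and real controllers add overhead, so actual numbers will be lower.

```python
def raid_throughput(n_disks, per_disk_mb_s):
    """Rough sequential-throughput estimates in MB/s (ignores controller
    and parity-computation overhead; real numbers will be lower)."""
    return {
        "raid0_read":  n_disks * per_disk_mb_s,
        "raid0_write": n_disks * per_disk_mb_s,
        "raid5_read":  n_disks * per_disk_mb_s,        # all members serve reads
        "raid5_write": (n_disks - 1) * per_disk_mb_s,  # one disk's worth goes to parity
    }

# 4x disks at an assumed ~130 MB/s each
est = raid_throughput(4, 130)
print(est)
```

Reads are close between the two levels in this simple model; it's the writes where RAID 5 gives up a disk's worth of bandwidth to parity.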
 

If the RAID 5s are hardware arrays, wouldn't Disk Utility just read each of them as one disk? Then I would be able to make a RAID 0 array with two of them. Why would this defeat the purpose of RAID 5? If one of those drives crashes, I could still replace it, couldn't I?
 

I was actually wrong. What you need is RAID 50 (5+0). This stripes two RAID 5 sets together as a RAID 0. I'm not sure if it's worth it though.

This card will do it though along with other Raid levels.

http://eshop.macsales.com/item/NewerTech/MXPRMS6G1E1I/
 
That card only has one slot? How could I do what I'm looking to do with that?
Not sure what you mean for certain, but if you're wondering about how it connects to disks, each of those ports will connect to 4x disks (they're MiniSAS ports).

In order to create a 50 on that card, you would need to use an external enclosure attached to the external port (minimum of 6x disks, as 5 requires a minimum of 3).

BTW, that card will not boot OS X.
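To put numbers on that 6-disk minimum, the RAID 50 capacity math is easy to sketch (the function and group layout here are just illustrative; one disk per RAID 5 group is lost to parity):

```python
def raid50(total_disks, disk_tb, groups=2):
    """Usable capacity and fault tolerance of a RAID 50 built from
    `groups` RAID 5 sets striped together. Each set needs >= 3 disks."""
    per_group, rem = divmod(total_disks, groups)
    if per_group < 3 or rem:
        raise ValueError("need equal groups of at least 3 disks each")
    usable = groups * (per_group - 1) * disk_tb  # one disk per group -> parity
    return {
        "usable_tb": usable,
        "survives": f"one failed disk per group ({groups} groups), "
                    f"but not two failures in the same group",
    }

print(raid50(8, 2))  # 8x 2TB drives in two RAID 5 groups
```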
 

Booting is not necessary; I need these drives for scratch disks only.
edit: When I say scratch, I mean Final Cut Pro scratch disks. The data is not temporary, and I would like redundancy in between full backups.

Alright, let's forget that card for a minute and focus on eSATA, which I already have.
With Mac OS X, I can create a RAID array regardless of the card; I have already done this, creating a very fast RAID 0 array on three 1TB eSATA drives. However, before I upgrade, I wanted to do something more stable. No one really answered my main concern: can I nest hardware RAID enclosures this way? Nesting RAID 5s would seem to me to give the best speed with redundancy, though I may not be able to do this anyway due to the cost. As I research, I have come up with another question.

Which is faster, RAID 10 or RAID 5? And another question: which is faster, RAID 10 in one enclosure, or two RAID 1 enclosures in a software RAID 0 array (if that is even possible)?
 
OSX lets you make raid 0 or 1 (or spanned, but you are probably not interested in that).

To do raid 5 you'll need either additional software (often provided with the card) or hardware raid.

If you make two raid 5 sets, you can use osx software to combine them in either raid 0 or 1.

It sounds like you are getting paid to provide advice. So I assume you are researching, reading, learning, contacting vendors, etc.

Speed depends as much on how it is implemented as what you choose.

Unfortunately now is not a good time to be buying disks - so you will also have to balance the cost now with the convenience/cost of expanding later.

A simple solution would be two OWC RAID 5 boxes connected to your existing eSATA card, using OS X software to stripe them into RAID 0 if you need more speed or capacity. But there are many options, and likely not many people will be willing to do the research for you without collecting a consulting fee.
 
booting is not necessary, I need these drives for scratch disks only.
Then why do you need a 50? :confused:

I ask, since it's temporary data, a striped set will make more sense. The reasoning behind this is threefold:
  1. if a disk dies, you replace it and don't worry about restoring the data, as it was temporary to begin with (any work being processed at the time of the failure has to be re-done anyway).
  2. it's faster than a 10, 5, or 50 for the same member count (or any other level, for that matter).
  3. it's the cheapest way to go for sequential performance if redundancy isn't necessary (which is the case for temp data).
Alright, let's forget that card for a minute and focus on eSATA, which I already have.
With Mac OS X, I can create a RAID array regardless of the card; I have already done this, creating a very fast RAID 0 array on three 1TB eSATA drives. However, before I upgrade, I wanted to do something more stable. No one really answered my main concern: can I nest hardware RAID enclosures this way? Nesting RAID 5s would seem to me to give the best speed with redundancy, though I may not be able to do this anyway due to the cost.
Yes, you can do a software nest of hardware based levels (i.e. 2x 5's on a hardware controller, then stripe via Disk Utility).
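For reference, the same stripe can also be created from the command line. The sketch below only builds the invocation: the disk identifiers are hypothetical (check `diskutil list` on the actual machine first), and the `appleRAID create` syntax shown is from later macOS releases, while 2011-era OS X used `diskutil createRAID` for the same job.

```python
import shlex

def stripe_command(set_name, filesystem, members):
    """Build the `diskutil appleRAID create stripe` invocation that
    stripes two already-hardware-RAID5 volumes into one set."""
    return ["diskutil", "appleRAID", "create", "stripe",
            set_name, filesystem, *members]

# "Scratch50", "disk2", "disk3" are placeholder names for illustration.
cmd = stripe_command("Scratch50", "JHFS+", ["disk2", "disk3"])
print(shlex.join(cmd))
# One would then run this via subprocess.run(cmd) on the real system.
```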

That said however, I don't see a need for redundancy for scratch (see above).

You could even consider a small, fast SSD for scratch (= low cost), as they can match a 2 - 3 drive mechanical stripe set for sequential performance.

And to top it all off, if you've enough memory, there shouldn't be much, if any need, to actually page data to the scratch volume in the first place (if you don't recall, it was a low-cost, stop-gap solution for insufficient memory back when RAM was very expensive). Photoshop does search for a scratch volume each time it's launched, but if you've enough memory in the system, it won't actually need to use it.

Which is faster, RAID 10 or RAID 5? And another question: which is faster, RAID 10 in one enclosure, or two RAID 1 enclosures in a software RAID 0 array (if that is even possible)?
First off, software implementations are usually slower than their hardware counterparts, particularly for redundant levels, as software is not as able to leverage parallelism for performance (it comes down to the algorithm design used for its implementation).

Second, there are software-based RAID 5 implementations, which you should avoid like the proverbial plague. The reason is that software implementations cannot deal with the write hole issue (you can read up on this on Wikipedia). The solution requires hardware, which is why you need a proper hardware RAID controller for parity-based levels.
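In short, the write hole is this: a stripe update is two separate writes (data, then parity), and software RAID can't make them atomic. A toy XOR model shows what goes wrong if power fails between them:

```python
def parity(blocks):
    """XOR parity, as used by RAID 5."""
    p = 0
    for b in blocks:
        p ^= b
    return p

# A consistent stripe: three data blocks plus their parity XOR to zero.
data = [0b1010, 0b0110, 0b0101]
p = parity(data)
assert parity(data + [p]) == 0

# Step 1 of an update: the new data reaches the disk...
data[0] = 0b1111
# ...power fails here, so the matching parity write never happens.
assert parity(data + [p]) != 0        # stripe is now silently inconsistent

# If a disk later dies, rebuilding block 1 from the stale parity
# reconstructs the wrong value:
rebuilt = parity([data[0], data[2], p])
assert rebuilt != 0b0110              # original block 1 is unrecoverable
```

Hardware controllers close this gap with battery- or flash-backed cache, so an interrupted stripe update can be completed after the power comes back.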

Now in regards to which level is faster, RAID 5 surpassed 10 about five years ago for both sequential and random access throughput (10 used to be the rule for relational databases, but 5 has taken over since ~2006, as RAID cards became faster).

As per which way would be faster in creating a 10, it would depend on the enclosures used, particularly the RAID 1 boxes. Some have simple hardware controllers, while others are just software. Either way however, there won't be much difference if the drives used are identical, and they're run over the same eSATA card (keeping all other things equal, otherwise performance data will also reflect variances between disks used as well as the SATA controller).

To do raid 5 you'll need either additional software (often provided with the card) or hardware raid.
The simplest advice with a software based RAID 5 = do not go there, as you will get burnt (eventually will see corrupted writes due to the write hole).
 
Thank you for your detailed response, let me clear a few things up

Then why do you need a 50? :confused:

I ask, since it's temporary data, a striped set will make more sense. The reasoning behind this is threefold:
  1. if a disk dies, you replace it and don't worry about restoring the data, as it was temporary to begin with (any work being processed at the time of the failure has to be re-done anyway).
  2. it's faster than a 10, 5, or 50 for the same member count (or any other level, for that matter).
  3. it's the cheapest way to go for sequential performance if redundancy isn't necessary (which is the case for temp data).

I am a video editor; a scratch disk is where I write all my ingested video files, as well as renders. It is not temporary data. I currently use a RAID 0 for speed, but back everything up often for fear that one drive may crash and take the whole RAID down. I would like some extra stability.

Yes, you can do a software nest of hardware based levels (i.e. 2x 5's on a hardware controller, then stripe via Disk Utility).

thank you this was my main question.

You could even consider a small, fast SSD for scratch (= low cost), as they can match a 2 - 3 drive mechanical stripe set for sequential performance.
This is not an option, as HD video is very large and I would like at least 6TB in the array.

And to top it all off, if you've enough memory, there shouldn't be much, if any need, to actually page data to the scratch volume in the first place (if you don't recall, it was a low-cost, stop-gap solution for insufficient memory back when RAM was very expensive). Photoshop does search for a scratch volume each time it's launched, but if you've enough memory in the system, it won't actually need to use it.

I do not use photoshop enough for this to be an issue.


First off, software implementations are usually slower than their hardware counterparts, particularly for redundant levels, as software is not as able to leverage parallelism for performance (it comes down to the algorithm design used for its implementation).

Second, there are software based RAID 5 implementations, which you should avoid like the proverbial plague. The reason, is software implementations cannot deal with the write hole issue (you can find out what this is in Wiki). That solution requires hardware, which is why you need a proper hardware RAID controller for parity based levels.

Now in regards to which level is faster, RAID 5 has surpassed 10 about 5 or so years ago for both sequential and random access throughputs (10 used to be the rule for relational databases, but 5 has taken over since ~ 2006, as RAID cards became faster).

As per which way would be faster in creating a 10, it would depend on the enclosures used, particularly the RAID 1 boxes. Some have simple hardware controllers, while others are just software. Either way however, there won't be much difference if the drives used are identical, and they're run over the same eSATA card (keeping all other things equal, otherwise performance data will also reflect variances between disks used as well as the SATA controller).


The simplest advice with a software based RAID 5 = do not go there, as you will get burnt (eventually will see corrupted writes due to the write hole).

That's all great info, thank you! I was not even considering a software RAID 5. It's good to hear that RAID 5 > 10 in terms of speed; I had been doing some research to the contrary, but then again I don't know how old that information is.
 
Then why do you need a 50? :confused:

It really depends on how you define scratch space, and how long it has to last. If it lasts a few minutes, then redundancy is silly. If it lasts hours, or days, then redundancy makes more sense.

The simplest advice with a software based RAID 5 = do not go there, as you will get burnt (eventually will see corrupted writes due to the write hole).
I agree with this for currently available Mac-based solutions. It is possible to negate this with a combination of hardware and software (think NetApp, or recent competitors, which is mostly software but requires dedicated hardware). Nothing like this is available for Macs as far as I know, and not likely to be. And it is probably not needed, or affordable, by the OP. But it is possible to have software-controlled RAID that is very robust.
 
It sounds like you are getting paid to provide advice. So I assume you are researching, reading, learning, contacting vendors, etc.
No, I am getting paid to edit videos, and I need more space, so I am researching, reading, learning, etc. And I consider asking questions on different forums to get other people's experiences and perspectives part of that.

Unfortunately now is not a good time to be buying disks - so you will also have to balance the cost now with the convenience/cost of expanding later.

Why is now not a good time?

A simple solution would be two OWC RAID 5 boxes connected to your existing eSATA card, using OS X software to stripe them into RAID 0 if you need more speed or capacity. But there are many options, and likely not many people will be willing to do the research for you without collecting a consulting fee.

This is what I was thinking of doing in the first place. I see there are many options, which is why I'm here looking for advice in addition to my own research.
 
It really depends on how you define scratch space, and how long it has to last. If it lasts a few minutes, then redundancy is silly. If it lasts hours, or days, then redundancy makes more sense.
For me, it comes down to the member count. If it's say up to 3x disks, it shouldn't be a problem, as I don't expect temp data to remain active for more than a couple of hours (less in most cases, as you indicate).

Now if the member count is higher, and/or the temp data will remain for a longer period of time (i.e. 24+ hrs), then it makes sense to me.

But I didn't get this impression from cbt3.

For some strange reason, it really does come down to the specific details when dealing with storage implementations... :D :p

I agree with this for currently available Mac-based solutions. It is possible to negate this with a combination of hardware and software (think NetApp, or recent competitors, which is mostly software but requires dedicated hardware). Nothing like this is available for Macs as far as I know, and not likely to be. And it is probably not needed, or affordable, by the OP. But it is possible to have software-controlled RAID that is very robust.
You're getting into proprietary solutions though, not straight-forward software implementations of parity levels. And they also tend to be pricey for what you get.

One I do like, and would be an option for the OP's NAS needs, is ZFS (RAID-Z and RAID-Z2). It's also inexpensive to build a NAS based on such an implementation (no expensive hardware required past a system and disks), which makes it very attractive, as well as a lot of available online help from various forums (how-to's to trouble-shooting if there's a problem). :)
 
No, I am getting paid to edit videos, and I need more space, so I am researching, reading, learning, etc. And I consider asking questions on different forums to get other people's experiences and perspectives part of that.

when you said

I don't have a specific budget, but I need to give multiple options to my higher ups and they will decide how much money I get for this.

I assumed you were putting together options for higher ups to decide. I guess I was wrong, or, confused.


Why is now not a good time?

Parts of the world that make critical components for most hard-drive vendors were flooded (the late-2011 floods in Thailand).


This is what I was thinking of doing in the first place. I see there are many options, which is why I'm here looking for advice in addition to my own research.

No doubt you can do better, or worse, but it seemed like a good solution for you, which is why I suggested it. No affiliation; I don't even have one of their boxes. It just seems like a good fit based on what you said here.

Given the new information, I think you need to decide what you need to maximize your workflow. Explain it to your bosses in terms of hours saved vs. new equipment costs. These days people almost always cost more than the computers they use. And it is the people who make money for the company. Get what you think you need to maximize your profit for the company.
 
As a fellow paid-to-edit-video guy to another, I feel I've done a lot of the same research you're now undertaking.

I know how it is to have to ask the boss for money to build a system, and usually you have to come up with two or three options that you can work with (1. bare minimum, 2. decent and 3. ideal) from which the boss chooses which he will fund. Totally been there. It seems you're scraping by with a system now, but it stresses you out and you want it to be safer... I get it.

When you say you have some RAID0 drives set up, and they're very fast, how fast are they? 200MB/sec sustained? 300MB/sec? Or is it more like 60-100MB/sec? I'm curious to see how much speed you're getting vs. what speeds you want/need.

I ran a 3-disk RAID0 internally on my 2009 Mac Pro for over a year before finally building a good hardware RAID card system. I had sustained 330MB/sec speeds, and it worked great for P2 DVCProHD footage. 3x1TB was fine for a while, but then I needed 3x2TB when I got more projects. When I started to edit Canon 5DMkII movies, problems started showing up.

I finally settled on buying an Areca 1880ix-12 RAID card and an 8-bay drive tower filled with 2TB drives for 16TB total. I've created a RAID6 that gives me 12TB of space with two-disk redundancy, and I get sustained speeds of 816MB/sec write, 714MB/sec read. That's with the cache off, to measure more accurately what the array itself is moving. I also use my old 1TB drives in a RAID0 internally for another scratch disk. This whole setup is also backed up daily to external single drives that I can 'grab in case of fire' and rebuild all my data after a catastrophe.
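That 12TB figure follows from the standard RAID 6 capacity formula, which is a one-liner to check:

```python
def raid6_usable_tb(n_disks, disk_tb):
    """RAID 6 usable space: two disks' worth of every stripe holds parity."""
    assert n_disks >= 4, "RAID 6 needs at least four members"
    return (n_disks - 2) * disk_tb

print(raid6_usable_tb(8, 2))  # 8x 2TB drives -> 12TB usable
```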

The benefit of a card like the Areca is that it can push one 8-bay tower now, or I can add a second one later with the ports on the card for 24TB at full speed. You said you need 6-10TB now, so it's a very nice fit today and tomorrow.

I didn't catch if you're using FCP7 or FCPX, but either way, your stress will be lower and your workflow will benefit from a true high-speed RAID with actual redundancy. (I don't know why they put an 'R' in RAID0, because there is no redundancy.)

So, drives are expensive today, but that will be reflected in any solution you choose at the moment, so you might as well build for today and tomorrow, instead of throwing money at a half-baked arrangement that you will more than likely outgrow soon. Get a real RAID card by Atto or Areca, a cheap box that connects via mini-SAS SFF-8088 cables, buy some decent enterprise disks and be done with it. I use Western Digital's RE-4 disks, and they're super.

I think that if you can articulate to your boss(es) why you need $700 for a RAID card, $300 for an 8-bay tower, and $2000+(?) or so for disks, they will be sold on the performance and expandability into the future that a system like this provides. If that's way over budget, then your boss(es) may not consider your editing system a priority, and I hope that's not the case.
 
wonderspark - many brands / models of hard drives are out there - the easiest part of the decision for me.

What box do you use and would you expand a bit on any bandwidth issues associated with mini sas?
 
I have to give credit to nanofrog for helping me learn about setting this all up. :)

I have a Sans Digital TR8X. Their website description says, "The TowerRAID TR8X is an 8-bay tower with two mini-SAS connections, for high transfer rates of up to 750MB/s when used with a high performance RAID controller card. Utilizing high bandwidth mini-SAS (SFF-8088) cabling, the TR8X supports 3Gb/s SAS or SATA hard drives to deliver superior read / write speeds..." and so on. I don't know why they say "up to 750MB/s" because I have not seen any such limitation. Maybe they mean each cable (there are two) has that speed limitation, but I don't know for sure. I can say that my speeds are awesome on it:

[benchmark screenshot: RAID 6, cache disabled]


With this same setup in RAID 0 instead of RAID 6, I had both read and write of just over 1100MB/second on the same test. I don't see any kind of bandwidth issue at all... neither with mini-SAS nor the box. To expand a bit further, I'm using eight WD2003FYYS drives inside, and on Western Digital's site, they list the sustained data rate specs of that disk as 138MB/second. If you divide 1100MB/sec by 8, you get 137.5MB/sec, which is pretty much spot on with my test speed and the max the disks will do.
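That per-disk arithmetic can be checked in two lines:

```python
# An n-disk RAID 0 tops out near n x (per-disk sustained rate).
per_disk = 138      # WD2003FYYS spec sheet, MB/s sustained
n_disks = 8
print(n_disks * per_disk)  # right at the ~1100 MB/s measured above
```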

There are a few threads that I've polluted (haha) on the whole matter of RAID 3 vs RAID 6, but the Cliff's Notes on that is: RAID 6 wins hands down.
 
Thanks for that detailed response. How noisy or quiet is the fan? I'm not in a recording studio but had some fan noise issues with the MacGurus Burly Box units in the past - so I'm alert to that topic now.
 
It's much, much quieter than the Mac Pro blowing all fans at full speed, but much louder than the Mac Pro when all fans are at minimum. I don't know how else to put it. It's about halfway?

It's on my desktop right beside my Mac pro, which is right by my monitor and keyboard... arm's length. I thought about putting it under the tabletop on a small riser (minimize dust intake) or perhaps behind the Mac Pro (swap places) but it hasn't bothered me enough to motivate the rearrangement... yet. :p

I'll put it this way... when the Mac Pro fans kick on during a longer render, I eventually start to hear those fans over the RAID fans when the Mac Pro fan speeds reach about 2000 or so RPM.
 
That description works well for me :D

I've been looking at PROAVIO, CalDigit, Promise, Sonnet and have just about concluded I will build my own with what appears to be the best components.

Thanks for all your input !!!
 
I'll envy you if you end up with something with custom large silent fans and only spend $400 or so. I paid $299 for my Sans Digital (though they seem to have gone up now) and it's not bad. The disks were only $204 each at the time, too. Yikes!
 
Thanks for that detailed response. How noisy or quiet is the fan? I'm not in a recording studio but had some fan noise issues with the MacGurus Burly Box units in the past - so I'm alert to that topic now.
External boxes vary wildly, but worst case, you can swap out the fans (some use 80mm, some use 120mm, which is becoming more common for pedestal boxes such as the Sans Digital and Enhance Box products).

Enhance makes good products as well, but they're more expensive (don't include the cables either, which Sans Digital does = even better value). And even they needed a fan swap for audio use. Fortunately, it wasn't that much effort to do.

It's much, much quieter than the Mac Pro blowing all fans at full speed, but much louder than the Mac Pro when all fans are at minimum. I don't know how else to put it. It's about halfway?
I assume it's not a problem, but you should be able to swap the fan for something else, such as a Noctua if needed (yeah, I'm a fan of Noctua for quiet, long lasting fans).

It's on my desktop right beside my Mac pro, which is right by my monitor and keyboard... arm's length. I thought about putting it under the tabletop on a small riser (minimize dust intake) or perhaps behind the Mac Pro (swap places) but it hasn't bothered me enough to motivate the rearrangement... yet. :p
Well, you are limited by the 1.0 meter cable length limit for SATA disks, so you can't place it too far away from the system.

Now if you ran SAS disks, that would be another matter (cable length increases to a max of 10 meters). But the disk costs would be horrible for that reason alone. Better to swap out the fan if it comes down to it. ;)

That description works well for me :D

I've been looking at PROAVIO, CalDigit, Promise, Sonnet and have just about concluded I will build my own with what appears to be the best components.
Stay away from CalDigit. Their RAID products are total smelly brown stuff...

ProAvio and Promise are good, but you can "build" it yourself for less money using Areca and Sans Digital, and have total control over every piece of hardware that goes in as well (depending on what you've been looking at). Particularly over hard drives used, and I can't stress enough how important this is.

I've seen ready-made systems crap out due to poor drive choices. Particularly with CalDigit's junk (they ran consumer grade Hitachi's in the HD Element boxes :rolleyes: morons :mad:).

So I'm a big fan of total control when it comes to hardware selection. ;)
 
Sans Digital specs indicate the fan on back is 4.7 inches, which would be about 120mm. It has a blue LED in it, which is swell, but what do you think of this fan as a replacement?
 
Look at the blade tips on those Noctua fans - sonic stealth design :D

wonderspark - make sure you look at the power leads / connectors. When I was trying to swap out the fans on my Burly box, I found the leads were buried in the power supply, which had to be removed. What should have been a simple 3-minute task was going to take 30 minutes or so with too much risk, so I sold it to someone who was not worried about the noise.
 
nanofrog - yep, I've read enough web chatter about CalDigit to stay away.

My bigger question remains the best protocol for bandwidth - these 8-bay RAID boxes are capable of almost saturating current Thunderbolt, I believe (~850 MB/s).

At what RAID card output (MB/s) does the slot in my Mac Pro 4,1 become the bottleneck? I'm still flopping around a bit on mini-SAS vs. Thunderbolt.
 