Sans Digital specs indicate the fan on back is 4.7 inches, which would be about 120mm. It has a blue LED in it, which is swell, but what do you think of this fan as a replacement?
Noctua's the first place I go for fans, particularly for 120mm. :D

wonderspark - make sure you look at the power leads / connectors. When I was trying to swap out the fans on my Burly box, I found the leads were buried in the power supply, which had to be removed. What should have been a simple 3-minute task was going to take 30 minutes or so with too much risk - I sold it to someone who wasn't worried about the noise.
It varies wildly, but I've not had issues. Then again, I have supplies and tools that most probably don't.

My bigger question remains the best protocol for bandwidth - these 8-bay RAID boxes are capable of almost saturating current Thunderbolt, I believe (850MB/s).
Thunderbolt?

I presume you mean PCIe bandwidth...

In terms of protocol, you won't have a choice in the matter if it's SATA or SAS, as it's dictated by the disks used (SAS cards can run either SATA or SAS disks). SAS tends to be a bit faster, but that also has to do with the mechanics (i.e. 10k or 15k rpm based disks). SAS uses the SCSI protocol, which is preferable to SATA (more robust and more fine control), but SAS is still pricey.

At what RAID card output (MB/s) does the slot in my Mac Pro 4,1 become the bottleneck? I'm still flopping around a bit on mini-SAS vs. Thunderbolt.
PCIe 2.0 specification = 500MB/s per lane (used in all slots on your MP as well as the 1880 series cards, which are 8x lane products).

  • Now if you're running the card in slots 1 or 2, then the limit will be the card (8x lanes * 500MB/s = 4GB/s).
  • But if you're running it in slots 3 or 4, then the slot will be the limitation (4x lanes * 500MB/s = 2GB/s).
Though it's technically possible you'd throttle in slots 3 or 4, 2GB/s is still a lot of bandwidth, and not likely to be an issue any time soon. ;)
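For anyone who wants to sanity-check the math, here's a quick sketch (it assumes the 500MB/s-per-lane PCIe 2.0 figure quoted above and ignores protocol overhead, so real-world numbers will be lower):

```python
# Rough PCIe 2.0 bottleneck check for a Mac Pro 4,1.
# Assumes 500MB/s per lane (the figure quoted above); overhead ignored.
PCIE2_MBS_PER_LANE = 500

def slot_ceiling_mbs(slot_lanes: int, card_lanes: int) -> int:
    """The effective ceiling is set by whichever side has fewer lanes."""
    return min(slot_lanes, card_lanes) * PCIE2_MBS_PER_LANE

# An x8 card (e.g. an Areca 1880-series) in an x16 slot (slots 1/2):
print(slot_ceiling_mbs(16, 8))  # 4000 MB/s -> the card is the limit
# The same card in an x4 slot (slots 3/4):
print(slot_ceiling_mbs(4, 8))   # 2000 MB/s -> the slot is the limit
```

Either way, both ceilings are far above what a handful of spinning disks can push.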
 
The Pegasus RAID using Thunderbolt can achieve up to 1000MB/s in RAID 0 (and presumably also close to that in RAID 5).

http://www.anandtech.com/show/4489/promise-pegasus-r6-mac-thunderbolt-review/6

The RevoDrive X2 might also be interesting for you. Maybe they'll become less pricey once they start gaining volume. I got a barely used 240GB for ~$340, so I'm very satisfied with these results; I've attached the results from an AJA run similar to what you did above.

This might be a good solution for you if you want to run RAID5: it makes the normal drive bays in the Mac Pro RAID5-able, but you'll have to boot the machine from a drive not connected to those (for example, an SSD coupled to one of the two spare SATA ports on the motherboard).

http://blog.macsales.com/12247-upgrade-your-06-08-mac-pro’s-internal-bays-to-sata-3-0
 

Attachments

  • Screen Shot 2011-12-15 at 8.32.10 PM.png (65.3 KB)
nanofrog - thank you for all the info - it positions me to make a much more informed decision on my storage design.
:cool: NP. :)

The Pegasus RAID using Thunderbolt can achieve up to 1000MB/s in RAID 0 (and presumably also close to that in RAID 5).

...
You have to be careful though, as these products tend to use consumer grade drives, which tend to be unstable for parity based RAID levels (the Pegasus units definitely do).
 
Wow, thanks for all the valuable discussion.

As I do more research, I am getting dizzy with all the different types of hard drives, eSATA vs. SAS, etc.

When you say you have some RAID0 drives set up, and they're very fast, how fast are they? 200MB/sec sustained? 300MB/sec? Or is it more like 60-100MB/sec? I'm curious to see how much speed you're getting vs. what speeds you want/need.

I was getting upwards of 250MB/sec sustained on my 3x1TB eSATA array with Disk Utility software RAID0, and I would like to have at least that for this new array. (I don't remember the exact numbers, and the array is no longer being used.)

I also have an internal RAID0 with the Apple RAID card and 3x500GB HDs; that gets 240 read and 280 write. I do remember, though, that the eSATA RAID was faster somehow.

If I could get upwards of 300MB/s, that would be fantastic. 80-90% of my work is with P2 DVCProHD footage, and I use FCP7.

I think that if you can articulate to your boss(es) why you need $700 for a RAID card, $300 for an 8-bay tower, and $2000+(?) or so for disks, they will be sold on the performance and expandability into the future that a system like this provides. If that's way over budget, then your boss(es) may not consider your editing system a priority, and I hope that's not the case.

That Areca card looks nice and those 750+MB/s speeds look AMAZING! But I don't think they would go anywhere near $3k for the whole system.
As I do research and see what's available, I'm thinking I'll probably have a cheap 4-drive RAID5 box for like $800 as the low end, something around $1.2-1.4k as the middle road (and what they will probably give me), and around $2k for an 8-bay tower system.

And I have a question about getting a new RAID card. My Mac's PCIe slots are as follows:
Slot1 - 4-port eSata Card
Slot2 - GPU
Slot3 - empty
Slot4 - :apple: Apple Raid card:apple:

Would these SAS cards fit in slot 3? It seems kind of cramped. I saw "low profile" on some of the cards I was looking at, and I assume that means they will fit. Or is slot 1 faster, and am I better off nixing the eSATA in favor of SAS?

As far as the HDDs themselves: those WD2003FYYS drives are pretty expensive. Obviously I want the fastest drive possible, and I'm not going to go with the WD Green or Blue drives, but are the WD Blacks good enough for this? And are the enterprise Hitachi or Seagate drives sufficient?

I am probably looking to spend around $1k on the disks themselves if at all possible, whether it's 8 1TB or 2TB drives; I'll let them decide, though obviously I'll try to convince them to spend more money. I like the RAID6 option, especially at those speeds, and if I can do it for around the price two RAID5 boxes would cost, that would be preferable.

I'll stay away from CalDigit. I keep seeing Sans Digital come up, so I'll probably go with one of those, or this http://eshop.macsales.com/item/Other World Computing/MEQX2KIT0GB/

There are also several 2-drive RAID0/1 boxes that I was thinking of making a RAID10 array out of. There are really so many options, it's hard for me to make up my mind.
 
I was getting upwards of 250MB/sec sustained on my 3x1TB eSATA array with Disk Utility software RAID0, and I would like to have at least that for this new array. (I don't remember the exact numbers, and the array is no longer being used.)

I also have an internal RAID0 with the Apple RAID card and 3x500GB HDs; that gets 240 read and 280 write. I do remember, though, that the eSATA RAID was faster somehow.

If I could get upwards of 300MB/s, that would be fantastic. 80-90% of my work is with P2 DVCProHD footage, and I use FCP7.
If you only need 300MB/second, this is very easy. I edited an entire feature-length movie shot on P2 DVCProHD 1080-60i/24p using the stock three Hitachi HDE721010SLA330 1TB drives that shipped in my Mac. With the standard software RAID0 via Disk Utility, they get 330MB/sec sustained read/write.

Sell that Apple Mac Pro RAID card on eBay for $300-400, buy three of those Hitachi disks for ~$130 each on Amazon here, and be done with it. I had that Apple RAID card and finally got smart and sold it on eBay; I think I got about $350 for it. That would almost pay for the drives at today's crazy prices, and you're set for less than $100 out of pocket.

You can use all your other drives for backup and be pretty safe for a while. Regardless of whether your boss gives you money or not, I'd do that right away.

That Apple RAID card isn't doing you any good unless it's in RAID5. You don't need it for the RAID0 sets you're currently running, and using it for RAID5 with three internal disks won't get you the 300MB/sec you need. Ditch it!

That Areca card looks nice and those 750+MB/s speeds look AMAZING! But I don't think they would go anywhere near $3k for the whole system.
As I do research and see what's available, I'm thinking I'll probably have a cheap 4-drive RAID5 box for like $800 as the low end, something around $1.2-1.4k as the middle road (and what they will probably give me), and around $2k for an 8-bay tower system.

And I have a question about getting a new RAID card. My Mac's PCIe slots are as follows:
Slot1 - 4-port eSata Card
Slot2 - GPU
Slot3 - empty
Slot4 - :apple: Apple Raid card:apple:

Would these SAS cards fit in slot 3? It seems kind of cramped. I saw "low profile" on some of the cards I was looking at, and I assume that means they will fit. Or is slot 1 faster, and am I better off nixing the eSATA in favor of SAS?

As far as the HDDs themselves: those WD2003FYYS drives are pretty expensive. Obviously I want the fastest drive possible, and I'm not going to go with the WD Green or Blue drives, but are the WD Blacks good enough for this? And are the enterprise Hitachi or Seagate drives sufficient?

I am probably looking to spend around $1k on the disks themselves if at all possible, whether it's 8 1TB or 2TB drives; I'll let them decide, though obviously I'll try to convince them to spend more money. I like the RAID6 option, especially at those speeds, and if I can do it for around the price two RAID5 boxes would cost, that would be preferable.

I'll stay away from CalDigit. I keep seeing Sans Digital come up, so I'll probably go with one of those, or this http://eshop.macsales.com/item/Other World Computing/MEQX2KIT0GB/

There are also several 2-drive RAID0/1 boxes that I was thinking of making a RAID10 array out of. There are really so many options, it's hard for me to make up my mind.
For the sake of argument, say they give you $1200 to beef it up. You can get something like the Areca card and an empty tower for about $700, and an 8-bay box for $400 = $1100. Selling the Apple RAID card gives you at least another $300; plus your $100 still left over, you can buy disks. If you end up with only four, that's OK, because you can still make a nice RAID5 with them, and you'll have your 300MB/second for now. As you get another $130 here and there, buy another identical disk and add it to the box. Each time you do, your speeds will increase on the RAID. Use all those other disks to back up your RAID.

You can put your GPU in slot #1 and, if you want, the eSATA 4-port in slot #2 if it uses more than x4 lanes. Pulling out that Apple RAID card leaves you two open slots. If you get the Areca, it will work in any slot, but it's an x8 PCIe 2.0 card.
 
As an Amazon Associate, MacRumors earns a commission from qualifying purchases made through links in this post.
As I do more research, I am getting dizzy with all the different types of hard drives, eSATA vs. SAS, etc.
RAID gets complicated quick, especially when a hardware solution is involved. So be prepared to put the time in to figure enough of it out to get a suitable solution for your needs.

I was getting upwards of 250MB/sec sustained on my 3x1TB eSata, with Disk Utility HD RAID0 I would like to have at least that for this new array.(I don't remember the exact numbers and the array is no longer being used)
That's about the limit for eSATA, as there's likely a Port Multiplier chip involved (a 3.0Gb/s, aka SATA II, link is 375MB/s raw, closer to 300MB/s after encoding overhead, and real-world throughput is lower still).
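For reference, a quick sketch of where the SATA II link numbers come from (the 8b/10b encoding factor, ten bits on the wire per data byte, is standard for SATA):

```python
# SATA II link budget: 3.0Gb/s on the wire; 8b/10b encoding means
# 10 bits are transmitted per data byte.
line_rate_mbps = 3000                # 3.0 Gb/s expressed in megabits/s
raw_mbs = line_rate_mbps / 8         # 375.0 MB/s, ignoring encoding
encoded_mbs = line_rate_mbps / 10    # 300.0 MB/s after 8b/10b overhead
print(raw_mbs, encoded_mbs)          # real transfers land lower still
```

Which is why ~250MB/s sustained through a Port Multiplier is already close to the practical ceiling of a single eSATA link.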

I also have an internal RAID0 with the apple raid card with 3x500GB HDs, that has a 240 read and 280 write. I do remember though that the eSata Raid was faster somehow.
That card is crap, so as wonderspark mentions, sell it for whatever you can get out of it, and put the proceeds into a new storage system.

If I could get upwards of 300MB/s....
That's very doable, and much easier on the wallet (fewer drives required for a specific level).

that Areca card looks nice and those 750+MB/s speeds look AMAZING! but I don't think they would go anywhere near $3k for the whole system.

As I do research and see what's available, I'm thinking I'll probably have a cheap 4-drive RAID5 box for like $800 as the low end, something around $1.2-1.4k as the middle road (and what they will probably give me), and around $2k for an 8-bay tower system.
Given the need for enterprise drives when using a hardware RAID card such as those from Areca, this may be a problem, particularly if you go for more members than would be needed to generate your target throughput level (or need a lot of capacity).

Keep in mind that disk prices are still high right now, and consumer models on these types of cards do not work well at all (extremely unstable = you must get enterprise disks to solve it anyway). So save yourself the hassle and get the right disks from the beginning...

But you still haven't answered a question for me, and it's critical to determine if you actually need a RAID 5 or not...
  • Is the RAID 5 for scratch or data?
  • If this is for scratch, what is the specific need behind using RAID5 vs. RAID 0?
The reason behind the questions is that you may be way over-spending for a temporary storage volume, and could put the funds to better use elsewhere.

And I have a question about getting a new RAID card. My Mac's PCIe slots are as follows:
Slot1 - 4-port eSata Card
Slot2 - GPU
Slot3 - empty
Slot4 - :apple: Apple Raid card:apple:
Toss the Apple RAID Pro.

Place cards in the following slots:
  • Slot 1 = GPU
  • Slot 2 = New RAID card
  • Slot 3 = eSATA card (usable for connecting to a Port Multiplier based enclosure for backup)
  • Slot 4 = empty
The reason is, slots 1 & 2 are 16x lanes, and 3 & 4 are 4x lanes. This allows enough lanes for both the GPU and the RAID card without throttling (if you ever pushed them hard enough to even get near it).

..."low profile"...SAS vs. eSATA...
Don't worry about Low Profile or not, as that's not an issue in a Mac Pro (LP models matter for rack-mounted servers, which have limited space for PCIe cards).

As per SAS v. SATA, the SAS cards can run both SAS and SATA drives, so it's not an issue there.

It's only an issue with SATA-only controllers, as those will not run a SAS disk. But given the costs involved and your budget, SAS disks are off the table (too expensive).

As far as the HDDs themselves: those WD2003FYYS drives are pretty expensive. Obviously I want the fastest drive possible, and I'm not going to go with the WD Green or Blue drives, but are the WD Blacks good enough for this? And are the enterprise Hitachi or Seagate drives sufficient?
Greens (consumer versions, which don't do well in RAID at all, including software implementations), Blues, and Blacks are a non-starter for a hardware RAID card. Period.

You will have to use enterprise models in order for the array to be stable due to the different recovery timings programmed into the disk firmware (and it's no longer possible to change them on the consumer models).

I am probably looking to spend around $1k on the disks themselves if at all possible, whether it's 8 1TB or 2TB drives; I'll let them decide, though obviously I'll try to convince them to spend more money. I like the RAID6 option, especially at those speeds, and if I can do it for around the price two RAID5 boxes would cost, that would be preferable.
This will be a tall order with enterprise grade. They're always more expensive than their consumer counterparts, but it's much worse ATM due to the flooding in Thailand.

Now to give you an idea: the 1TB WD RE4 (WD1003FBYX) is $250 most places. So 8x of those is $2k (twice what you're willing/able to spend right now).

Performance wise, you won't need 8 anyway (4x will do).

And using that, you get the following:
You can find those drives for less (seen them here for $128 per, but I've never used the vendor), but even at that low price, it's still going to push your budget to the limit, I suspect.

Performance will exceed 300MB/s (usable capacity = 3TB in a RAID 5 configuration), and you will be able to add disks in the future which will allow you to increase both capacity and performance simultaneously.
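As a rough sketch of how member count trades off against capacity and throughput: the ~130MB/s-per-disk figure below is an assumption for RE4-class drives, and sequential reads are estimated from the n-1 data members, which is on the conservative side; real numbers depend heavily on the controller.

```python
def raid5_estimate(n_disks: int, disk_tb: float, disk_mbs: float):
    """Back-of-the-envelope RAID5 estimate: one disk's worth of
    capacity goes to parity, and streaming throughput scales roughly
    with the n-1 data members."""
    usable_tb = (n_disks - 1) * disk_tb
    approx_read_mbs = (n_disks - 1) * disk_mbs
    return usable_tb, approx_read_mbs

print(raid5_estimate(4, 1.0, 130))  # (3.0, 390): 3TB usable, ~390MB/s
print(raid5_estimate(8, 1.0, 130))  # (7.0, 910): both grow as disks are added
```

That's the point made above: adding members later raises capacity and performance at the same time.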

If you only need 300MB/second, this is very easy. I edited an entire feature-length movie shot on P2 DVCProHD 1080-60i/24p using the stock three Hitachi HDE721010SLA330 1TB drives that shipped in my Mac. With the standard software RAID0 via Disk Utility, they get 330MB/sec sustained read/write.
This may be all that's needed, as cbt3 still hasn't answered whether or not the RAID 5 is for scratch or data. And if it's for scratch, if there's a specific reason the redundancy is necessary for temporary data.

Sell that Apple Mac Pro RAID card on eBay for $300-400, buy three of those Hitachi disks for ~$130 each on Amazon here, and be done with it.
I agree with selling the card.

But the drive linked is a consumer model, which will have stability issues on a hardware RAID controller = really bad advice.
 
This may be all that's needed, as cbt3 still hasn't answered whether or not the RAID 5 is for scratch or data. And if it's for scratch, if there's a specific reason the redundancy is necessary for temporary data.

I agree with selling the card.

But the drive linked is a consumer model, which will have stability issues on a hardware RAID controller = really bad advice.
Yeah, I was saying that if he just needed to edit DVCProHD footage, he could get three of those disks and sell the Apple RAID Card to offset the purchase. If he goes with a real RAID card, then yeah, don't waste any money on consumer disks at all.

It sounds like he has what he needs already, if it was just rearranged.

- 3x500GB in RAID0 internally would make a good scratch disk and should see close to 300MB/sec, although he mentioned only 240/280. Guess they're slow disks. Still, 240/280 would work for scratch media. I had ONLY my 3x1TB RAID0 internally, which held my media, scratch files, renders and everything, at only 330MB/sec. Having the media spread across what is already there would work fine.

- 3x1TB in RAID0 for media should get about 300MB/sec, or you could break those up into #1 for media, #2 for media backup, and #3 for render outputs. That will depend on how much space you need at a time for editing. I had my 104 minute finished movie in 1920x1080, and it all fit on 1.5TB. 500GB of raw DVCProHD footage and audio files, and the rest was stuff I rendered out or created in the process.

- Any money from the boss plus money recovered from selling Apple RAID Card can be used for backup, real RAID card, etc. for future implementation or small-scale immediate implementation.
 
Yeah, I was saying that if he just needed to edit DVCProHD footage, he could get three of those disks and sell the Apple RAID Card to offset the purchase. If he goes with a real RAID card, then yeah, don't waste any money on consumer disks at all.

It sounds like he has what he needs already, if it was just rearranged.

- 3x500GB in RAID0 internally would make a good scratch disk and should see close to 300MB/sec, although he mentioned only 240/280. Guess they're slow disks. Still, 240/280 would work for scratch media. I had ONLY my 3x1TB RAID0 internally, which held my media, scratch files, renders and everything, at only 330MB/sec. Having the media spread across what is already there would work fine.

- 3x1TB in RAID0 for media should get about 300MB/sec, or you could break those up into #1 for media, #2 for media backup, and #3 for render outputs. That will depend on how much space you need at a time for editing. I had my 104 minute finished movie in 1920x1080, and it all fit on 1.5TB. 500GB of raw DVCProHD footage and audio files, and the rest was stuff I rendered out or created in the process.

- Any money from the boss plus money recovered from selling Apple RAID Card can be used for backup, real RAID card, etc. for future implementation or small-scale immediate implementation.
Even with 4x WD 1TB RE's @ $128 per, the total with a card and 8 bay enclosure is $1627, and that doesn't include any shipping costs (or taxes if applicable).

So unless it's a software implementation, there won't be enough money for a backup solution that requires additional hardware (just re-arrange what's already in-hand).
 
Even with 4x WD 1TB RE's @ $128 per, the total with a card and 8 bay enclosure is $1627, and that doesn't include any shipping costs (or taxes if applicable).

So unless it's a software implementation, there won't be enough money for a backup solution that requires additional hardware (just re-arrange what's already in-hand).
Wait, so there's not enough to make a real RAID work?... lemme rethink with some optimism:

PLAN A:
Say the boss OKs $1400 (he said $1.2-1.4k). Add $300 from selling the Apple RAID card = $1700 total.
Buy the Areca card, 8-bay box and 4x WD 1TB RE-4 disks for about $1627, which leaves $73 for shipping and such. In reality, we hope he already has a beefy UPS at least, so a BBU could be put off.

Now he still has existing 3x500GB + 3x1TB disks from what he's using now for backups and/or scratch. It could work out, yes?!

PLAN B:
Just buy some WD RE-4 disks for $512, install them internally in RAID0 via Disk Utility, pull the Apple card and sell it. Use existing disks for backups and scratch via that eSATA card he's currently using. I know that with 3x RE-4 disks in RAID0 he will get over 300MB/second, because all those disks read at over 100MB/s apiece.

After that, build up funds until there's enough for a better solution.
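A quick sanity check on the Plan B numbers (assuming ~110MB/s sustained per RE-4, which is on the conservative end for those drives):

```python
def raid0_estimate(n_disks: int, per_disk_mbs: float) -> float:
    # RAID0 stripes data across all members, so sequential throughput
    # scales roughly linearly with the number of disks.
    return n_disks * per_disk_mbs

print(raid0_estimate(3, 110))  # 330 MB/s, comfortably over the 300 target
```

No parity math, no controller needed, which is why the software RAID0 route is so much cheaper for the same target speed.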

I guess this is why so many of those cheap eSATA RAID5 boxes sell... it's right between the price point of a real solution and a limited budget. Oh, well... gotta do what you can!
 
Remember that if you want to utilize your internal bays for a RAID0, you could extend it to 4 drives by using one of the two spare SATA ports for the boot drive (preferably an SSD).
These ports can be found directly on the motherboard beneath the fans. It's not much work to run a SATA cable between one of those spare ports and an SSD housed in one of the 5.25" DVD-drive bays. I just ordered a http://eshop.macsales.com/item/OWC/MM352A52MP/ to fit two SSDs in that space. Works very well.
 
Good point. I never think of that because I have a Blu-ray burner under the standard burner. Plus, I'm waiting for SSDs to get cheaper and more refined. :)
 
Wait, so there's not enough to make a real RAID work?... lemme rethink with some optimism:
That's not what I'm saying.

But it will depend on whether or not the existing equipment can be re-configured in a manner suitable to the remaining requirements, such as boot, scratch, and backup (still have no idea what the OP has for external equipment either).

PLAN A:
Say the boss OKs $1400 (he said $1.2-1.4k). Add $300 from selling the Apple RAID card = $1700 total.
Buy the Areca card, 8-bay box and 4x WD 1TB RE-4 disks for about $1627, which leaves $73 for shipping and such. In reality, we hope he already has a beefy UPS at least, so a BBU could be put off.
Assuming the existing drives can be re-configured to fill the remaining requirements (handle boot, scratch and backup), then the storage aspects would be doable for $1700 ($1400 from the boss + $300 resulting from the sale of the existing RAID card).

And as you mention, hopefully they have a suitable UPS on hand already. If not, this will very likely be a fly in the ointment, as they say (they'd need to cough up another $187 for a CyberPower CP1500PFCLCD).

There may be ways of saving a bit of cash, such as using an internal card with the HDD bays, but that could run into issues as well that consume the "savings" (figuring the internal bays will be used for boot and scratch, and the existing external eSATA equipment for backup).

Now he still has existing 3x500GB + 3x1TB disks from what he's using now for backups and/or scratch. It could work out, yes?!

PLAN B:
Just buy some WD RE-4 disks for $512, install them internally in RAID0 via Disk Utility, pull the Apple card and sell it. Use existing disks for backups and scratch via that eSATA card he's currently using. I know that with 3x RE-4 disks in RAID0 he will get over 300MB/second, because all those disks read at over 100MB/s apiece.
If using a software RAID implementation, there's no absolute need to run enterprise grade HDDs; those are only necessary when they're attached to a hardware RAID controller, such as an Areca. That allows the user to save money on the HDDs purchased (use Caviar Blacks instead ;)).

I guess this is why so many of those cheap eSATA RAID5 boxes sell... it's right between the price point of a real solution and a limited budget. Oh, well... gotta do what you can!
Exactly.

There's a sizable gap between software based implementations, or very inexpensive RAID on a Chip based enclosures (i.e. OWC's Qx2 = cheap RoC based 4 bay RAID 5 box) vs. professional grade gear for a DAS implementation (either a box like the ARC-8040 or putting something together with a RAID card and other bits needed).
 
There's a sizable gap between software based implementations, or very inexpensive RAID on a Chip based enclosures (i.e. OWC's Qx2 = cheap RoC based 4 bay RAID 5 box) vs. professional grade gear for a DAS implementation (either a box like the ARC-8040 or putting something together with a RAID card and other bits needed).

Is the Qx2 that bad? Was considering it as an online backup for another more robust RAID tower.
 
My rule of thumb: if the enclosure looks "Mac like" and aluminum, à la LaCie and OWC, avoid it. Cheap chipsets and components sold to the Mac masses purely for aesthetics. I have piles of e-waste like this at my desk, all ripped apart waiting for some sort of use. Also tons of those ridiculous "Rugged" LaCie 2.5" drives. All dead. Meanwhile, my SmartDisk FireLites keep on going. Still have a G4 build on one of them.
 
If using a software RAID implementation, there's no absolute need to run enterprise grade HDDs; those are only necessary when they're attached to a hardware RAID controller, such as an Areca. That allows the user to save money on the HDDs purchased (use Caviar Blacks instead ;)).
I knew I was going to build my current setup long before I could afford it, so I started by buying three of the RE-4 disks and replacing the three Hitachis in my internal RAID0. It doubled my space and bought me time to sell my Apple card while saving and researching.
 
Sans Digital specs indicate the fan on back is 4.7 inches, which would be about 120mm. It has a blue LED in it, which is swell, but what do you think of this fan as a replacement?

That fan is very quiet. I replaced the fan in my Rosewill 8-bay unit with a Noctua; the Rosewill is a clone of the Sans Digital. One thing to know: the Sans Digital has 2 fans, one for the HDDs and one for the power supply.

Here is the eBay listing for the one that I sold (I kept one).


My unit is the lower-cost eSATA type, so max speed is 2x SATA, or about 540MB/s. The inside fan setup should be the same. So here is the listing with some clear photos of the fans. (Note to mods: this has been sold, and I am not trying to sell one now.)


http://www.ebay.com/itm/15068325731...X:IT&_trksid=p3984.m1559.l2649#ht_2823wt_1299
 
Is the Qx2 that bad? Was considering it as an online backup for another more robust RAID tower.
No, it was a matter of "You get what you pay for...".

The Qx2 would be fine for a backup source if you want to use RAID 5 for a little redundancy. Which isn't a bad idea.

But they had to make compromises to meet the price point (which is why they use an inexpensive RoC rather than a full-on hardware implementation, as is the case with the ARC-8040). You'll see it in both throughput and robustness (i.e. areas such as possible RAID levels, recovery capabilities, Online Expansion, and Online Migration are all reduced or missing).

My rule of thumb: if the enclosure looks "Mac like" and aluminum, à la LaCie and OWC, avoid it. Cheap chipsets and components sold to the Mac masses purely for aesthetics. I have piles of e-waste like this at my desk, all ripped apart waiting for some sort of use. Also tons of those ridiculous "Rugged" LaCie 2.5" drives. All dead. Meanwhile, my SmartDisk FireLites keep on going. Still have a G4 build on one of them.
According to MR members that use Qx2's for backup, I only recall one that had an issue.

Some of their other products have had more issues however, as well as LaCie's single and dual disk enclosure units from what I recall.

I knew I was going to build my current setup long before I could afford it, so I started by buying three of the RE-4 disks and replacing the three Hitachis in my internal RAID0. It doubled my space and bought me time to sell my Apple card while saving and researching.
Keep in mind, however, that you had full control of both what was going to be put into your system and the purse strings (i.e. you researched what you needed, and were willing to pay for it).

Perfect way to go IMO when in this position, but unfortunately, that's not usually the case when you're an employee of someone else's company (make a case for the intended solution, and hope they approve the funding :eek: :p).

Ah, thanks! Didn't think about that.
As you can see from the pics, there's not much to them. So it's not that bad to replace fans in my experience. ;)

And there's little difference between eSATA and MiniSAS boxes either (the basics are the same: box, PSU, backplane boards, and wiring; connections may be on a board that includes enclosure monitoring capabilities beyond the LEDs on the front).
 
Thank you all for your continued commentary...

In response to a few comments, let me clear some stuff up

As I have said before, I need this array as a Final Cut Pro scratch disk; however, this does not fit the notion of scratch disks people here have described. This is NOT TEMP data, and it needs to be stored as I convert footage and render sequences.

Also, let me clear up EXACTLY what I have right now in terms of hard drives, because I have a lot, but they're all over the place and a mess, which is why I want ONE GIANT drive and backup elsewhere.

When I started working here in 2008, my computer was purchased by our IT guy (who is no longer with us). It had the Apple RAID card and 4x 500GB HDs; without taking them out of my computer, I can't check exactly what they are right now - Apple RAID Utility only lists them as STS3500320NS, which only tells me they are some kind of Seagate drive. They were originally in a RAID5 array (which was about 150-175MB/s r/w to my recollection; honestly, I'm not sure of the speeds, but I'm POSITIVE it was less than 200MB/s), which was the only drive on the computer. However, after a few years of editing experience, I found out you should not use the boot drive as your scratch disk (again, this is NOT TEMP DATA), so eventually I had some externals set up and converted the 4 drives to 1 boot drive and 3 in RAID0.
The three 1TB drives I had in an eSATA RAID0 array were these:
http://www.amazon.com/AcomData-PureDrive-Desktop-External-PHD10000USE-72/dp/B000YUFUCO
There is some kind of Hitachi drive in them, but I'm fairly sure they are not enterprise drives or anything.
I also have several FireWire/eSATA 1TB externals. I specifically need two additional 1TB drives for 'travel' reasons, as footage is shot somewhere else and needs to be sent back to me on two drives (redundant RAID1, just in case someone drops one in transit). However, during my business trip last month, with an unplanned up-convert to HD, I ended up needing more drive space, and last minute all I could get was some overpriced G-Drives, 2TB each, which as of right now are reconfigured in RAID0 over eSATA and get 140MB/s write / 160MB/s read.
With the extra space, and hopes of getting an external RAID5 or two, I was planning on taking the 3x 1TB drives out of the external cases, putting them internally, and running a RAID0 there.

To sum up, I have my BOOT and BACKUP covered; I need this for DATA and heavy large-file rendering work. Also, since I know wonderspark will ask: I probably don't need more than 500-600GB on my active drive at once. I don't work on any hour-and-a-half movies, but I do work on several small projects simultaneously, and sometimes things go on the backburner for a while, then come back, and sometimes I need to reference older projects, so it would be much easier to have enough space to keep the most current projects and all their files in one place securely. Currently I have all the data for several projects copied on at least two drives, and then archived later, once I know I probably won't need to reference it for a long time (or ever, really), on a slow Drobo network storage that I share with 5 other people, although they probably use 1/4 of the space on it and I use about 1/2 (the other 1/4 is free space). I won't get into the NAS details because it's not relevant to what I'm doing on my system itself.



So ALL my HDs are as follows

4x 500GB internal (1 Boot, 3 on RAID0)

3x 1TB RAID0 (not currently in use, info on drives is transferred to other drives and they can be repurposed)[edit: these are Samsung HD103UH drives]

2x2TB RAID0 eSata http://www.g-technology.com/products/g-drive.cfm

3X1TB LACIE http://www.amazon.com/LaCie-FireWire800-FireWire400-External-301442U/dp/B001KFH6K6/ref=pd_cp_e_3 (wow price sure went up!)
2 are currently on a RAID1 and I need to keep separate for travel
1 is on eSata holding some backups

1x1TB, an older version of the WD MyBook Studio; it has eSATA and FireWire, though I don't think they make it anymore. eSATA used to give me trouble with this drive, though.
I have it on FW800 right now, holding some backups

1x500GB eSATA (Rosewill eSATA/FW case, one of those "Mac-like" designs, with a consumer-level Seagate drive). It's in 2 partitions (I know, stupid): one is about 128GB to hold a boot clone, and the rest of the drive is more backup; I didn't want to waste the whole drive as a boot clone, so I partitioned only part of it for that.

So I hope that clears everything up. I need to have all my **** together by tomorrow to present them some stuff. I won't get the money to do this till Jan or Feb, so hopefully prices will go down, but I need to put this in so it can be in the 2012 budget. As far as money goes, I have no idea what they will let me have until I give them options; they might say "hey, here is $3k, go nuts," so I want to have all options available. Considering how I am already functioning now, I doubt it, though.

If you only need 300MB/second, this is very easy. I edited an entire feature-length movie shot on P2 DVCProHD 1080-60i/24p using the stock three Hitachi HDE721010SLA330 1TB drives that shipped in my Mac. With the standard software RAID0 via Disk Utility, they get 330MB/sec sustained read/write.
It seems really ironic that Disk Utility is faster than the hardware-based RAID.
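For reference, that kind of Disk Utility software stripe can also be created from the command line. This is only a sketch; the set name is hypothetical and disk1/disk2/disk3 are placeholder identifiers (run `diskutil list` to find the real ones):

```shell
# Build a 3-disk software RAID 0 (stripe) via Disk Utility's CLI.
# WARNING: this erases the member disks. disk1/disk2/disk3 are placeholders.
diskutil appleRAID create stripe ScratchStripe JHFS+ disk1 disk2 disk3
```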


That Apple RAID card isn't doing you any good unless it's in RAID5. You don't need it for the RAID0 sets you're currently running, and using it for RAID5 with three internal disks won't get you the 300MB/sec you need. Ditch it!

wow I didn’t know that, I will be ditching it even if I don’t get a new RAID card



For the sake of argument, say they give you $1200 to beef it up. You can get something like the Areca card and an empty tower for about $700, and an 8-bay box for $400 = $1100. Selling the Apple RAID card gives you at least another $300; plus the $100 still left over, you can buy disks. If you end up with only four, that's ok, because you can still make a nice RAID5 with them, and you'll have your 300MB/second for now. As you get another $130 here and there, buy another identical disk and add it to the box. Each time you do, your speeds will increase on the RAID. Use all those other disks to back up your RAID.

You can put your GPU in slot #1, and if you want, the eSATA 4-port in slot #2 if it uses more than x4 lane speeds. Pulling out that Apple RAID card leaves you two open slots. If you get the Areca, it will work in any slot, but it's an x8 PCIe 2.0 card.

I am thinking of getting an 8-drive box either way, so I can make it bigger later. However, is it a big procedure to add a drive to a RAID5? (As in, needing to back up the whole array, delete it, and rebuild?) Or is it straightforward? (Just pop it in and let the drives reconfigure for a few dozen hours?)



But you still haven't answered a question for me, and it's critical to determine if you actually need a RAID 5 or not...
  • Is the RAID 5 for scratch or data?
  • If this is for scratch, what is the specific need behind using RAID5 vs. RAID 0?
The reason behind the questions, is you may be way over-spending for a temporary storage volume, and could put the funds to better use elsewhere.

Please see my detailed explanations further up this post, but I am fairly certain I mentioned this before in the thread.

I am a video editor; a scratch disk is where I write all my ingested video files, as well as renders. It is not temporary data. I currently use a RAID 0 for speed, but back everything up often for fear that one drive may crash and the whole RAID goes down. I would like some extra stability.

Toss the Apple RAID Pro.
will do

Place cards in the following slots:
  • Slot 1 = GPU
  • Slot 2 = New RAID card
  • Slot 3 = eSATA card (usable for connecting to a Port Multiplier based enclosure for backup)
  • Slot 4 = empty
The reason is, slots 1 & 2 are 16x lanes, and 3 & 4 are 4x lanes. This would allow enough lanes for both the GPU and RAID card without throttling (if you ever pushed them hard enough to even get near it).
This explains why my internal RAID0 was slower than my eSATA RAID0. Why would the Apple RAID card be in the slow lane? It is next to the hard drives, I guess.
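To put numbers on the slot layout discussed above (a rough sketch, assuming PCIe 2.0's ~500 MB/s per lane):

```shell
# PCIe 2.0 bandwidth: roughly 500 MB/s per lane, so slot bandwidth = lanes * 500.
per_lane=500
for lanes in 4 16; do
  echo "x${lanes} slot: $((lanes * per_lane)) MB/s"
done
```

Even the x4 slots' ~2000 MB/s is far above what a handful of spinning disks can deliver, which is why throttling there is unlikely any time soon.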

As per SAS v. SATA, the SAS cards can run both SAS and SATA drives, so it's not an issue there.

Only for SATA-only controllers, as those will not run a SAS disk. But given the costs involved and your budget, SAS disks are off the table (too expensive).


Now to give you an idea, the 1TB WD RE4 (WD1003FBYX), is $250 most places. So 8x of those is $2k (twice what you're willing/able to spend right now).

Performance wise, you won't need 8 anyway (4x will do).

And using that, you get the following:


Performance will exceed 300MB/s (usable capacity = 3TB in a RAID 5 configuration), and you will be able to add disks in the future, which will allow you to increase both capacity and performance simultaneously.
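The capacity figure follows from RAID 5 reserving one disk's worth of space for parity; a quick sketch, using the 4x 1TB example above:

```shell
# RAID 5 usable capacity = (member count - 1) * per-disk size;
# one disk's worth of space is consumed by distributed parity.
disks=4
size_tb=1
echo "usable: $(( (disks - 1) * size_tb )) TB"
```

Adding a fifth identical disk later would give 4TB usable, which is why expansion grows capacity and performance together.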

You say at first SAS is out of the question, then recommend a SAS drive. $2k is probably doable for what they MIGHT give me. Would I get 300 MB/s if I had that array on eSATA? Also, would I need a separate RAID card? I know my current eSATA card has some sort of RAID functionality, but I don't know a damn thing about it; I got that card at least two years ago and have only ever used Disk Utility RAID.


And as you mention, hopefully they have a suitable UPS on hand already. If not, this will very likely be a fly in the ointment, as they say (need to cough up another $187 for a CyberPower CP1500PFCLCD).

I do not have a UPS, I have a regular surge protector.

There's a sizable gap between software based implementations, or very inexpensive RAID on a Chip based enclosures (i.e. OWC's Qx2 = cheap RoC based 4 bay RAID 5 box) vs. professional grade gear for a DAS implementation (either a box like the ARC-8040 or putting something together with a RAID card and other bits needed).


I was actually looking at that OWC as an option. Good to know it's crap, but how crap is it? Would I get those 300 MB/s speeds out of it?

And there's little difference between eSATA and MiniSAS boxes either (basics are the same; box, PSU, backplane boards, and wiring; connections may be on a board that includes enclosure monitoring capabilities above the LED's on the front).

Isn’t there a significant speed difference?

----------

also, as far as RAID cards go...

this case
http://www.newegg.com/Product/Product.aspx?Item=N82E16816111175

comes with a SAS card and cables and such. Is this sufficient? It says it is a RAID5 tower with hardware RAID, so why would I need a $700+ RAID card?
 
As an Amazon Associate, MacRumors earns a commission from qualifying purchases made through links in this post.
As I have said before, I need this array as a Final Cut Pro scratch disk; however, this does not fit the notion of scratch disks people here have been describing. This is NOT temp data, and it needs to be retained as I convert footage and render sequences.
You're storing your raw data on the scratch space? Or are you trying to re-use the temp data (that's what true scratch data actually is, as it's mid-processed data, with the final output stored to whatever volume it's directed to)?

Please understand I realize you've stated it's for scratch, but the way you're talking about things, it doesn't sound like it's purely for scratch data (temp data created by your applications).

If you're storing both data (raw or completed) and scratch on the same disk/volume, that's not a good idea at all.

Take a look at this post (Digital Skunk is a professor in this field BTW).

Now assuming this is the case (scratch is set to the same volume as your primary data/where you're storing any raw data), then you need to use a separate disk/volume for scratch. Boot should also be on its own volume.

There's not only a performance penalty for this, it's also expensive. Now that's not to say you shouldn't implement a RAID 5 on a hardware controller, but use it exclusively for working data (any raw footage as well as final output). I'd actually recommend this, as your most secure data location for the entire system.

Even if your scratch data is there for an extended period of time, which you still haven't clarified, there are other ways to go about it that would be much cheaper (i.e. a single SSD is safe enough, but worst case, a software-based RAID 10 would be sufficient purely for scratch).

For example, a fast and reliable editing system can look something like this:
  • SSD for boot (nothing else on it)
  • SSD for scratch (nothing else on it)
  • RAID 5 for working data
  • eSATA card + Port Multiplier based enclosure for backup and OS clone
As per the physical installation, there are options that can be gone through later.

But the point is that spending over $1600 on scratch data, not working data, is beyond foolish, even if you actually do need redundancy for it, as your working data is the most critical.

You'd be better off putting an inexpensive SSD in there for scratch (example), and putting the RAID to use for your working data. Now if you can explain why you need redundancy for scratch (it really is temp data), then you can set up either a RAID 1 or RAID 10 much cheaper, and put the cost savings into another part of the system, such as RAID for the working data and/or more memory (you shouldn't actually need to go to the scratch space that often if the system has sufficient memory).

...STS3500320NS....
Yes, they're Seagates. And they're enterprise-grade Seagates, not the consumer models.

Consumer models end in AS, while the enterprise editions end in NS.

Seagate is my least favorite brand in the enterprise world these days due to high failure rates, but it could be a leg-up in terms of saving you some cash (can be put into another hardware based RAID, and you don't need identical disks).

The three 1TB drives I had on an eSata RAID0 array were these
http://www.amazon.com/AcomData-PureDrive-Desktop-External-PHD10000USE-72/dp/B000YUFUCO
There is some kind of Hitachi drive in them but I'm fairly sure they are not enterprise drive or anything.
Definitely not enterprise grade...

One of these can be usable for making an OS clone though, so they're not totally useless. ;) Heck, you could pull them and put them into a software-based RAID volume of some sort, or put them into a Port Multiplier enclosure and use them as part of a backup volume (i.e. concatenation puts different disks together end-to-end so the system sees them as a single volume).

The same goes for any FW drive that you don't need for portability.

Just a thought, given the budget.
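If the concatenation route appeals, OS X can build one from the command line. This is only a sketch; the set name is hypothetical and disk2/disk3 are placeholder identifiers (check `diskutil list` for yours first):

```shell
# Create a concatenated (JBOD) AppleRAID set that the system sees as one volume.
# WARNING: this erases the member disks. disk2/disk3 are placeholders.
diskutil appleRAID create concat BackupConcat JHFS+ disk2 disk3
```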

To sum up, I have my BOOT and BACKUP covered; I need this for DATA and heavy large-file rendering work.
Aha!!!

Finally. :D :p As this is actually working data, it definitely isn't temporary data at all. Scratch data, however, is temporary, which is why using a RAID 5 for scratch would be both foolish and a HUGE waste of funds for your stated usage. ;)

So ALL my HDs are as follows

4x 500GB internal (1 Boot, 3 on RAID0)

3x 1TB RAID0 (not currently in use, info on drives is transferred to other drives and they can be re-purposed)[edit: these are Samsung HD103UH drives]

2x2TB RAID0 eSata http://www.g-technology.com/products/g-drive.cfm

3X1TB LACIE http://www.amazon.com/LaCie-FireWire800-FireWire400-External-301442U/dp/B001KFH6K6/ref=pd_cp_e_3 (wow price sure went up!)
2 are currently on a RAID1 and I need to keep separate for travel
1 is on eSata holding some backups

1x1TB an older version of WD mybook studio, it has eSata and firewire, I don’t think they make it anymore, although eSata used to give me trouble with this drive
I have it on FW800 right now, holding some backups
Use the slowest externals for clone disks (whether you keep them in their enclosures or not); the faster ones (7200 rpm) can be added to a Port Multiplier enclosure and used for backup.

Use one for a boot volume. RAID 0 does not improve booting or loading applications; it just increases the sequential throughput and capacity of the volume (what the system sees), at the cost of reliability.

I am thinking of getting an 8-drive box either way, so I can make it bigger later. However, is it a big procedure to add a drive to a RAID5? (As in, needing to back up the whole array, delete it, and rebuild?) Or is it straightforward? (Just pop it in and let the drives reconfigure for a few dozen hours?)
On a hardware RAID controller, you have the option of either method.

Backing up, creating a new array, and restoring data will be faster than using Online Expansion (toss in another disk, and the card does the rest). Software implementations don't tend to offer Online Expansion (Disk Utility does not, so it's a matter of restoring data from backups after creating a larger array).

You say at first SAS is out of the question, then recommend a SAS drive. $2k is probably doable for what they MIGHT give me. Would I get 300 MB/s if I had that array on eSATA? Also, would I need a separate RAID card? I know my current eSATA card has some sort of RAID functionality, but I don't know a damn thing about it; I got that card at least two years ago and have only ever used Disk Utility RAID.
The disks linked are SATA.

SAS cards can run both SAS and SATA drives. SAS controllers are also cheaper these days, and that's all RAID card makers are using now (last dedicated SATA card from Areca was in 2006; they're still really fast though).

I do not have a UPS, I have a regular surge protector.
If you go with a hardware RAID, then a UPS isn't really optional, as you'll get burnt without one (lost/corrupt data).

I was actually looking at that OWC as an option. Good to know it's crap, but how crap is it? Would I get those 300 MB/s speeds out of it?
It's not total crap, but it's limited.

Great for a backup/secondary array, but not that good for a primary array (still doesn't have the same robustness of an Areca in terms of recovery and features such as Online Expansion/Online Migration, which you really need with a primary RAID volume).

Speeds out of it would top out at ~250MB/s or so in a RAID 0. You might see ~175MB/s in RAID5.

also, as far as RAID cards go...

this case
http://www.newegg.com/Product/Product.aspx?Item=N82E16816111175

comes with a SAS card and cables and such. Is this sufficient? It says it is a RAID5 tower with hardware RAID, so why would I need a $700+ RAID card?
The card that comes with it is a pile of crap (software based, so it's only good for 0/1/10), so don't waste your time and money with that card.

The enclosure itself, is the same one I linked previously. It's just the card that you need to avoid.
 
Thank your for your continued help;

You're storing your raw data on the scratch space? Or are you trying to re-use the temp data (that's what true scratch data actually is, as it's mid-processed data, with the final output stored to whatever volume it's directed to)?

Please understand I realize you've stated it's for scratch, but the way you're talking about things, it doesn't sound like it's purely for scratch data (temp data created by your applications).

If you're storing both data (raw or completed) and scratch on the same disk/volume, that's not a good idea at all.

Take a look at this post (Digital Skunk is a professor in this field BTW).

Now assuming this is the case (scratch is set to the same volume as your primary data/where you're storing any raw data), then you need to use a separate disk/volume for scratch. Boot should also be on its own volume.

There's not only a performance penalty for this, it's also expensive. Now that's not to say you shouldn't implement a RAID 5 on a hardware controller, but use it exclusively for working data (any raw footage as well as final output). I'd actually recommend this, as your most secure data location for the entire system.

Even if your scratch data is there for an extended period of time, which you still haven't clarified, there are other ways to go about it that would be much cheaper (i.e. a single SSD is safe enough, but worst case, a software-based RAID 10 would be sufficient purely for scratch).

The term "scratch," as used in Final Cut, refers to where you write files converted from a camera format (P2, AVCHD, etc.) to an editable format (DVCPROHD, ProRes, etc.). In that case, the drive you are "scratching" to does not hold temp data; it is where you save the files and where you use them when working in Final Cut. Rendering sequences with different effects and filters, such as green screen, also saves to the scratch disk. Again, this is not temporary: when you go to watch or output your sequence, this data will be referenced. I thank you for your persistence, but I don't know how to make it any more clear; this is raw and processed data, it is not temp data.

The link you reference explains that your scratch disk (by Final Cut's terms; not temp data) should not be the same as the boot drive.


Definitely not enterprise grade...

You may not have seen this, as I edited it in later, but I opened the cases; they are Samsung HD103UH drives.




Use the slowest externals for clone disks (whether you keep them in their enclosures or not); the faster ones (7200 rpm) can be added to a Port Multiplier enclosure and used for backup.

as far as I know all my drives are 7200rpm

Use one for a boot volume. RAID 0 does not improve booting or loading applications; it just increases the sequential throughput and capacity of the volume (what the system sees), at the cost of reliability.

My boot drive is not RAID 0; of my 4 internal drives, 3 are on RAID0 and one is boot.
 
The term "scratch," as used in Final Cut, refers to where you write files converted from a camera format (P2, AVCHD, etc.) to an editable format (DVCPROHD, ProRes, etc.). In that case, the drive you are "scratching" to does not hold temp data; it is where you save the files and where you use them when working in Final Cut. Rendering sequences with different effects and filters, such as green screen, also saves to the scratch disk. Again, this is not temporary: when you go to watch or output your sequence, this data will be referenced. I thank you for your persistence, but I don't know how to make it any more clear; this is raw and processed data, it is not temp data.

The link you reference explains that your scratch disk (by Final Cut's terms; not temp data) should not be the same as the boot drive.
The finished file (data conversion) is working data. I get that.

But the general definition of scratch space is temporary data (and is how Adobe uses it). So either FCP's definition is wrong, or there's been a misunderstanding on your part (not picking, just trying to educate).

The reasoning for not sharing volumes is for performance reasons (disk has a limited bandwidth, and the more tasks it's being used for, the slower each task, technically an I/O operation, will be completed). So splitting things onto dedicated drives is a means of speeding things up.

You may not have seen this, as I edited it in later, but I opened the cases; they are Samsung HD103UH drives.
Actually, I did.

Now if you search, you'll find that the HD103UH is in fact a consumer-grade disk. But another significant indicator was the cost, as enterprise models are much more expensive. That was true even before the flooding in Thailand, and it's worse now. :eek: :p

For example, if you found a 2TB Caviar Black for say ~$150, the RE4 version would be around the $250 mark. Mechanically, they're based on the same components, but the enterprise versions add additional sensors, use different firmware, and have cherry-picked platters. WD at least uses the same platters in both versions (enterprise disks are typically rated for one unrecoverable bit error per 1E15 bits read, vs. 1E14 for most consumer disks).
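To put those error-rate specs in perspective, here's a rough back-of-envelope for the chance of hitting an unrecoverable read error (URE) while reading a full 2TB disk end-to-end; this is a simplification that treats the rating as a flat per-bit probability:

```shell
# Expected URE odds over a full 2 TB read:
# bits read = 2e12 bytes * 8; chance ≈ bits read / rated bits-per-error
awk 'BEGIN {
  bits = 2e12 * 8
  printf "consumer   (1 in 1E14): %.1f%% chance of a URE\n", 100 * bits / 1e14
  printf "enterprise (1 in 1E15): %.1f%% chance of a URE\n", 100 * bits / 1e15
}'
```

That gap is part of why a big RAID 5 rebuild on consumer disks (which must read every surviving disk end-to-end) is noticeably riskier than the same rebuild on enterprise disks.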

as far as I know all my drives are 7200rpm
Best to be sure, and check each drive by model number.

My boot drive is not RAID 0; of my 4 internal drives, 3 are on RAID0 and one is boot.
Again, I'm just trying to educate (many make the mistake of thinking RAID 0 will speed up everything, which isn't the case).

It's also the reason why I gave you a basic storage configuration of what a good editing system looks like (and there are plenty of members that are using that configuration for both Adobe and FCP rigs; AVID too).
 
I am thinking getting an 8-drive box either way, so I can make it bigger later, however is it’s a big procedure to add a drive to a raid5? (as in need to backup the whole drive and delete and rebuild the array?) Or is it straightforward?(just pop it in and let the drives reconfigured for a few dozen hours?)
I know with an Areca card, it is really as simple as inserting the new drive into an empty bay, and running the Expand function. It rebuilds automatically and takes a few hours. Whether it's faster or not to delete the raidset, rebuild with the added disk(s) and reload data from backup is hard to say. It's faster to delete and re-initialize a fresh RAID, but when you add the time to reload the data, it could be a wash. If you expand it, the benefit would be that you can still work off the RAID with some speed degradation.
 
I know with an Areca card, it is really as simple as inserting the new drive into an empty bay, and running the Expand function. It rebuilds automatically and takes a few hours. Whether it's faster or not to delete the raidset, rebuild with the added disk(s) and reload data from backup is hard to say. It's faster to delete and re-initialize a fresh RAID, but when you add the time to reload the data, it could be a wash. If you expand it, the benefit would be that you can still work off the RAID with some speed degradation.
A lot of it depends on the level used and size of the array (not only capacity, but member count). Card settings matter as well, particularly background and foreground settings.

But on large arrays, all things being equal, it's faster to do it manually with parity-based levels if you can (you may not be able to take the array off-line though, which is where Online Expansion and Migration functions allow a RAID card to "earn its keep" in a visible/noticeable way, as it were).
 