
sarge-

macrumors newbie
Original poster
Jan 24, 2011
I'm looking at purchasing a Sans Digital TowerRAID TR8XP, which comes with a mini-SAS card. It's offered in their 'Storage 4 Mac' store, so presumably it works with current 64-bit Macs.



I'd fill it with these Seagate 2TB SAS drives and run it in RAID 5.

The purpose is to use this as an external backup for the Mac Pro 3.33GHz 6-core, which I'm planning to configure as follows:

SSD in optical bay 2
3TB drives in bays 1-4 (WD Caviar Green... maybe Hitachi). One drive will be primarily for Aperture, one for video projects, one for iTunes (large library, and I rip everything lossless), and one for DVDs (I rip to Video_TS). In order to keep the drives happy (not crammed), I need to upgrade to 3TB drives.

I'm replacing my current Mac Pro 2,1 w/ 2x 3.0GHz quad-core Clovertown processors. It is linked via fibre channel to an XServe RAID (14x 750GB) which is reaching capacity, and there's no way to upgrade the drives (BOO APPLE!).

My most intensive application is Aperture, which runs faster on the 3.33GHz 6-core than on any other machine (Aperture won't use more than 6 cores). I'm getting into video, both due to the DSLR/video convergence and my 9-month-old son... so video editing is becoming more important as well.

In any event, I need at least an 8-bay enclosure for use with Time Machine.

My question is whether there's a better card to use with the TR8XP (i.e. four-port external, in case I want to add a second external enclosure later) that works quickly and efficiently (takes all the RAID load off the main CPU) and doesn't suffer kernel panics in 64-bit mode.

I'd love to use my fibre channel cards with a new setup, but it seems that gear is all quite expensive (and I can't spend the kind of money I did several years ago when I bought the current XRAID setup; I'm planning to eBay my existing gear to subsidize the new purchase).

If there's a better solution that I'm overlooking, I'd appreciate that too... Thanks in advance for any insight.

- Sarge
 
TowerRAID TR8X-B with an Areca card and HDDs that would, of course, be on the Areca approved list.

Is this the best? Yes and no?
There is no 'best'; it's what works for you and what you like, etc.

You need to decide what you want to do with RAID, and whether you can get by with PM (port multiplication) instead.

I would look into the 1222x from Areca and this case above as a nice setup.

If you want more, and plan to spend more, look into the 1880-series Areca cards.
 

Any 64-bit kernel panic problems with any of them?

Also, does anyone have a good idea how fast transfer rates on the eSATA version of the enclosure would be with a new 6G-capable card vs. the mini-SAS?

The cost of going with eSATA vs SAS seems quite substantial (about double) when you compare the box, SAS drives, and mini-SAS controller to a SATA setup.

The SAS option:

Throughput = 650MB/s (per Sans Digital)

Areca 1880i = $542 (high fan failure rate?)
Sans Digital TowerRAID TR8XP = $680
Seagate Constellation ES 2TB SAS $258 x 8 = $2,064
TOTAL: $3,286

The eSATA option:

Firmtek SeriTek e6G = $98
Sans Digital TowerRAID TR8MP = $319

To mess everything up, I've just run across a new 3TB enterprise option from Hitachi: Hitachi 3TB SATA Enterprise-class = $280.

I could make do with a 5-drive array in RAID 5 to back up my 4 internal 3TB drives (+ small SSD). Newer technology, but 'enterprise rated' with a 5-year warranty, and fewer drives to fail. (SAS 3TB drives won't be available until later this year, but the SATA version is rated at 6Gb/s.)

As a third option, the price comes out like this:

Firmtek SeriTek e6G = $98
Hitachi 3TB SATA Enterprise-class $280 x 5 = $1,400
TowerRAID TR5UTP = $310
TOTAL: $1,808

I'm not sure of the specific compatibility of the new Hitachi drive with any of the controllers mentioned, and I'm not sure the Sans Digital is the best box (USB 3.0 = SATA 6.0Gb/s?). If the included Sans Digital card works, then this option nets 15TB of enterprise drive storage at SATA 6.0Gb/s for only $1,710 - which is pretty good. The Firmtek card, for an additional $98, offers 300MB/s throughput, but I'm not clear whether the Sans Digital 5-bay box would take advantage of it.

Comparatively, the SAS 8-bay route offers 16TB at 650MB/s for $3,200.

Since I'm using this for Time Machine purposes, it seems like the 5-bay with the Hitachi 3TB enterprise-class drives is a pretty good, reliable solution.
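Since the totals above quote raw capacity, it may help to compare usable capacity, as RAID 5 gives up one drive's worth of space to parity. A quick back-of-the-envelope Python sketch using the thread's own prices (not vendor figures):

```python
# Rough comparison of the two options above, using the prices quoted in
# this thread. RAID 5 usable capacity is (n - 1) drives' worth, since one
# drive's worth of space goes to distributed parity.

def raid5_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 1) * size_tb

options = {
    "8-bay SAS, 8x 2TB":   (8, 2, 3286),
    "5-bay eSATA, 5x 3TB": (5, 3, 1808),
}

for name, (drives, size_tb, cost) in options.items():
    usable = raid5_usable_tb(drives, size_tb)
    print(f"{name}: {usable:.0f} TB usable, ${cost / usable:.0f} per usable TB")
    # -> 14 TB usable at ~$235/TB for the SAS box,
    #    12 TB usable at ~$151/TB for the 5-bay eSATA box
```

So on a cost-per-usable-terabyte basis, the 5-bay eSATA option comes out well ahead for a backup target.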

Lastly, I'm assuming the Sans Digital box could be swapped out if it failed, and the 5 drives could be swapped into a new box without data loss. Am I correct?
 
.....
Lastly, I'm assuming the Sans Digital box could be swapped out if it failed, and the 5 drives could be swapped into a new box without data loss. Am I correct?


I did testing with 3x 3TB Western Digital drives in a 9TB RAID with the FirmTek card. I simply moved the 3 HDDs from one eight-bay unit to the other, and all was well. I used non-enterprise HDDs to run the RAID 0, and it moved from one unit to the other with ease. I have decided to go a completely different route for backup, but I liked the Sans Digital units and their clone:



http://www.newegg.com/Product/Produ...6&cm_re=8_bay_rosewill-_-16-132-016-_-Product
 
Ahhh, it's for Time Machine :) I thought it was for your main storage :)


I can say one of the reasons I went with the 1222x is that the track record is good and the drivers are solid. I do PS and image work (no video, no audio), and for me the 1222x provides plenty of speed, so I went with the long-term track record :)
No KPs, nothing; it's been rock solid :)

My second card can be an 1880, since I can wait a bit for things to get worked out dependability-wise :) Not saying they are not good; I'm just tired of living near the cutting edge, too much going on in life ;)

So for Time Machine, I think that would be overkill. Some thoughts below:


I had some Venus T5s, which are 5-disc RAID 5 cases I used for backup and for TM (Time Machine). Recently one started giving me kernel panics, I mean big time; then it started dropping drives, then just dropped and never came back.
OK, freak-out time!!! So I swapped the drives into my other case, and no issues for now, but my confidence is shaken. I am going to get rid of them and might try the Sans Digital, as I have about 3 of their PM cases as well and have had good luck.

To recap: I had a 5-bay Venus RAID 5 for my Time Machine and another one for my 3rd-layer backup.
When the first failed, I bought 5 more Samsung 2TB drives, took a 5-bay Sans Digital case I had on another computer and used that for my 3rd-layer backup, and then used the other Venus case for my Time Machine!

Now, I'm not saying you will have issues with the Sans Digital 5-bay RAID 5 cases, but for me, I'd pick up two of them because they are cheap, and I can keep the second one for a 3rd-layer backup.

I did not have enterprise drives in them. I've had a lot of PMs with nanofrog, but I'd also be curious about his thoughts if he sees this thread.

I do have drives that are on the list for my Areca, but for the 5-bay stuff, I was always told you don't need enterprise drives.

So that might be a good call, but for some reason the controller (or something on the green board, I believe) went out on the Venus, and I haven't had time to play with it yet to see what's up. I think they were $279 or something, so it's a bummer, but not a huge amount gone.


Usually you can take drives out of one case and put them in an identical case. I am pretty sure I could not take them out of my Venus and put them in a Sans Digital case, but I might try it for fun :)


Part of me wants to go back to simple for Time Machine. While I really wanted the redundant, large, safe feeling of a RAID 5 for Time Machine, part of me is freaked out and off of cheaper RAID 5 boxes. I should have taken a dose of my own advice: you cannot do RAID for cheap!!!

So part of me might be going back to RAID 10 setups and/or just JBOD.

Let's take this setup you posted:
Firmtek SeriTek e6G = $98
Hitachi 3TB SATA Enterprise-class $280 x 5 = $1,400 (the link went to a 2TB HDD)
TowerRAID TR5UTP = $310
TOTAL: $1,808

Let's say a 2TB enterprise drive is about $250, so about $150 cheaper overall than what you had; call it $1,650 for the above setup, giving us 8TB.


I can get an 8-bay PM Sans Digital case for $280,
8 regular 2TB HDDs at $80 on sale = $640,
the same card = $100,
about $1,020 total,
and set them up as RAID 10.


I might be going this way, even though I am trying to avoid too many HDDs spinning (heat, noise, etc.).
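For what it's worth, the usable capacity of the two layouts being weighed here works out as follows; this is just an illustrative Python sketch using the drive counts and sizes from the posts above:

```python
# Usable capacity: RAID 10 mirrors pairs of drives and then stripes them
# (half of raw), while RAID 5 gives up one drive's worth of space to parity.

def raid10_usable_tb(drives: int, size_tb: float) -> float:
    return (drives // 2) * size_tb

def raid5_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 1) * size_tb

print(raid10_usable_tb(8, 2))  # 8x 2TB in RAID 10 -> 8 TB usable
print(raid5_usable_tb(5, 2))   # 5x 2TB in RAID 5  -> 8 TB usable
```

Same usable space either way with these counts; RAID 10 buys simpler rebuilds and tolerance of one failure per mirror pair, at the cost of three more spinning drives.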

Hope these rambling thoughts have helped a bit.
 
Ahhh, it's for Time Machine :) I thought it was for your main storage :)

Hitachi 3TB SATA Enterprise-class $280 x 5 = $1,400 (the link went to a 2TB HDD)

It seems that site has mixed up its listings a bit. There is a second listing at $419 for the same drive. I haven't seen it for sale anywhere else yet, so that may be the initial price... which changes the economics of the 5-bay argument a bit:

5-bay w/ 3TB Enterprise drives:

Firmtek SeriTek e6G = $98
3TB Hitachi enterprise SATA drives, $419 x 5 = $2,095
TowerRAID TR5UTP = $310
TOTAL: $2,503

At that point it looks less worthwhile, being only about $780 less than the 8-bay SAS system with 2TB enterprise drives, and $350 more than an 8-bay 16TB eSATA system.

8-bay eSATA w/ enterprise drives:

Firmtek SeriTek e6G = $98
Sans Digital TowerRAID TR8MP = $319
WD 2TB RE4 = $217 x 8 = $1,736
TOTAL: $2,153

What type of 2TB drives are you buying for $80? Where are you finding those, and the case for $280? $1,020 is cheap for 16TB... too cheap to last 3-4 years, even in home-backup use?

I'm a little concerned about getting that far away from enterprise, as I've had my current Apple XServe RAID gear for almost 4 years without a single hitch, ever. That said, since it's just for Time Machine, maybe anything 'enterprise-level' is overkill...

(I was using part of my XServe RAID to host movies for other TV's (via Mac Minis) around my house, but will serve them from the new Mac Pro internal drives instead. I may later use a Mac Mini server and connect it to another FW800 attached storage unit, or maybe NAS.)
 
Lastly, does anyone know if the 2 meter max length on SATA cables is 'total' for the SATA network, or 'per run'?

I.e., the Sans Digital TowerRAID TR8MP requires TWO eSATA cables to connect all 8 drives to the host, so does that mean each cable can't be more than ONE meter, or TWO?

If the whole SATA network can't be more than 2 meters, maybe even that is pushing it, as there's surely some length of cabling inside the SD TowerRAID too...

(I'm trying to keep the box in a different room, running the cable through a wall, to minimize noise.)

Thanks again for further replies...
 
Not sure about the run length, but part of it will depend on the cable a bit.

Newegg is where I got the HDDs; they routinely have them for $79 on deals.
Last round I got the Samsung F4; before that, the Hitachi.

The case is from my local Fry's; Newegg has them for $300.

Not sure if you are in the US or UK or elsewhere, but the prices I have from Newegg are of course US.

The enterprise question is a tough one for Time Machine.

Once 3TB HDDs hit about $150, chances are I will buy some up. My next HDD purchase is some RE4 drives in 1.5 or 2TB for my RAID, or the Hitachi equivalent.
 
Wouldn't 7200RPM drives (WD RE4, Blacks, etc.) be better for bays 1-3? Then a WD Green for everything else, including the backup drives, seeing as you don't need super-fast, kick-ass performance if you're doing incremental backups :) and the noise + power savings would be quite nice.
 
You're looking at some drastically different setups, so what exactly are you after in terms of RAID level and performance?

A couple of notes though:
  1. Highpoint's RAID products tend to be junk, and the support is lousy, so I'd recommend staying away from them.
  2. You not only need to run enterprise disks, but also make sure they're on the HDD Compatibility List (Areca in this case; .pdf file) if one is available for that card and vendor. I would avoid cards that don't have one, as incompatible drives result in instability, if they'll initialize at all, which is beyond a PITA to deal with, as RMAs will be involved (assuming you don't wait too long).
Now if the enclosure is solely for Time Machine (backup), you'd be best going with a Port Multiplier enclosure such as the TR8MP (kit that includes a 6.0Gb/s card), which will work with standard disks. It will provide the greatest raw capacity at the lowest price.
 
Not sure about the run length, but part of it will depend on the cable a bit.

According to OWC the limit is 2 meters PER CABLE. This is good, as it will allow me to locate the RAID tower in a different room for noise/temp purposes.
 
You're looking at some drastically different setups, so what exactly are you after in terms of RAID level and performance?

I'm moving from a bulletproof fiber channel XServe RAID setup (spoiled by speed) to a far less expensive setup, mainly because Apple abandoned support for the XServe RAID and I can't install higher capacity drives (Would it have killed them to come out with a firmware/hardware update to allow 2TB drives for all the $millions people invested in their product?)...

I'm looking at a couple of things - my itunes, video and photo files take up about 7TB of space. I'm purchasing a new Mac Pro with 4x3TB drives, and plan to use the RAID to back up the internal drives.

I could use my existing XServe RAID (9 TB total) to back up my existing files, but I'm basically out of space - no room to grow (and really 7TB on a 9TB system is overloaded). So my thought was to ebay the XServe RAID (worth a paltry $2,500) and replace it with an 8 bay tower with more capacity.

Being as I'm used to the speed and bulletproof reliability of the XServe and fiber channel (never a single problem in about four years), I started looking at the mini-SAS tower with enterprise drives as an acceptable step down. The reality is I'm probably fine without the speed for backup purposes. So maybe an eSATA 8-bay is fine, and maybe I don't even need enterprise drives.

I'm presently trying to get comfortable with the thought of a Sans Digital TowerRAID TR8MP 8-bay eSATA with non-enterprise drives... After all, what's the probability of multiple drive failures in the RAID AND in the Mac Pro concurrently?
 
Wouldn't 7200RPM drives (WD RE4, Blacks, etc.) be better for bays 1-3? Then a WD Green for everything else, including the backup drives, seeing as you don't need super-fast, kick-ass performance if you're doing incremental backups :) and the noise + power savings would be quite nice.

The only bay I'd need a WD Black for is the Aperture drive, but my Aperture library is too large to house on one 2TB drive. The other drives are mainly 'servers' for my music and video, and I'd need 3TB drives in each of those to hold everything.

Alternatively, I may just get a second external 5-bay RAID to house the 'home media library' and install multiple WD 2TB Black drives for photo and video work. My thought was to initially go with 4x 3TB drives in the Mac Pro and, if unhappy with the performance, move them to a 5-bay RAID (RAID 5) and install 2TB WD Blacks in the Mac Pro (moving the home media to the external RAID). Part of the problem is I've not yet found a 5-bay or 8-bay RAID case that supports 3TB drives.

All of it would be backed up to the 8-bay RAID.
 
How about a 10-bay hardware RAID tower?

It transfers at over 400MB/s, and it claims NO DRIVER is required.

Interesting option. $1k for the box. 10x 2TB WD enterprise drives for $2,100. $3,100 for 20TB of enterprise-drive storage.

The product description notes:

"Hardware raid 2x RAID5"

Does this mean you wind up with two 10TB RAID 5 partitions - i.e. vs. a single 20TB RAID 5 partition?
 
I would not go for the WD enterprise drives, but the Hitachi Deskstar 2.0TB, which costs much less: 20TB @ $2,689.00 or 30TB @ $3,789.00, based on the website.

Here is what I see. The test environment: Mac Pro 2006, 1x eSATA_PCIe8, OS 10.6.3
- 1x T10-HR5 with 10x Hitachi Deskstar 2.0TB, model HDS722020ALA330
- Configured as RAID 50
- Connected to the Mac Pro via 2x eSATA cables
So yes, two 10TB RAID 5 partitions plus a stripe - now it is a SINGLE volume, with speed.

That's why it's so fast!
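The capacity math of that 2x RAID 5 + stripe arrangement (effectively RAID 50) can be sketched as follows; this is an illustrative Python snippet, not anything from the unit's documentation:

```python
# RAID 50 stripes across multiple RAID 5 groups; each group loses one
# drive's worth of capacity to parity, and each group can independently
# survive one drive failure.

def raid50_usable_tb(groups: int, drives_per_group: int, size_tb: float) -> float:
    return groups * (drives_per_group - 1) * size_tb

# The 10-bay unit: two 5-drive RAID 5 sets striped into one volume
print(raid50_usable_tb(2, 5, 2))  # 16 TB usable out of 20 TB raw
```

Striping across the two groups is also where the speed comes from, since reads and writes are spread over both controllers at once.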
 
According to OWC the limit is 2 meters PER CABLE. This is good, as it will allow me to locate the RAID tower in a different room for noise/temp purposes.
This is the case when a PM chip is in the enclosure (known as an active signal). So the TR8MP or any other PM based enclosure will allow for 2.0 meters between it and the eSATA card it's connected to.

It's when there's a direct connection between the card and drive that you'll end up with half the distance (1.0 meters, as it's a passive signal - nothing in the path to up the voltage for stability over the increased distance). This is usually only seen when running internal disks or running SATA disks in a MiniSAS based enclosure attached to a proper RAID card, such as an Areca (why I even mentioned this).

If you go with a SAS card, MiniSAS enclosure, and SAS based disks, the distance limit is 10 meters. SAS disks are expensive though, and not financially viable for what you're trying to do (backup for a DAS system).

I'm moving from a bulletproof fiber channel XServe RAID setup (spoiled by speed) to a far less expensive setup, mainly because Apple abandoned support for the XServe RAID and I can't install higher capacity drives (Would it have killed them to come out with a firmware/hardware update to allow 2TB drives for all the $millions people invested in their product?)...
FC is overkill (meant primarily for SAN), as a single system = DAS (Direct Attached Storage, just in case you're not familiar with the term). DAS is even sufficient for small single user clusters (you can get very high speeds out of them when done right).

But typical high speed DAS systems are also for primary storage (working data), not backups for the most part (there are exceptions, such as when capacity is large enough that it can't complete a full backup process within the time frame of a nightly run period, of say 8 hrs).

I'm looking at a couple of things - my itunes, video and photo files take up about 7TB of space. I'm purchasing a new Mac Pro with 4x3TB drives, and plan to use the RAID to back up the internal drives.
How are you planning to configure the internal 12TB?

I ask, as using a stripe set isn't the best way to go for that kind of capacity IMO, given the time it will take to restore it when a failure occurs. Even a software-implemented JBOD (concatenation) has its issues: even though the data on the remaining disks is fine, you still have to use recovery software to get at it (assuming no backup).

This all translates to time needed to execute a full data restoration with either implementation for your primary data.

To reduce the time, and particularly the user effort involved, you'd be better off using your existing RAID (or a new one if you must have more capacity) for your primary data (faster, and you can implement a redundant level - this is where you save effort on your part, and potentially time, depending on the specifics of the array: capacity, members, and the card used). BTW, Apple's RAID Pro (4-port SAS-based RAID card) is junk, so avoid it if you decide you need such a card.

OS X is only good for 0/1/10 and JBOD, so for a parity-based array (or if you want the additional recovery features possible), you'd need a hardware card. Areca is a good brand, and works well in MPs. ATTO as well, but they're more expensive.

Pairing up Areca and Sans Digital makes for a cost-effective solution (especially for external arrays), as the cables will be included (Areca includes internal cables for internal ports - in the case of the MP, only usable if the disks are in an optical bay - and Sans Digital includes external cables; neither type is cheap by most people's standards). For an internal array, you'll need an adapter kit (here) to make it work with a 3rd-party card (no choice, as the HDD data is run via PCB traces - the kit gets around this).

I could use my existing XServe RAID (9 TB total) to back up my existing files, but I'm basically out of space - no room to grow (and really 7TB on a 9TB system is overloaded). So my thought was to ebay the XServe RAID (worth a paltry $2,500) and replace it with an 8 bay tower with more capacity.
I'd recommend trying to use this, or a newer one that can handle an increased capacity, for your primary data, and use a PM-based enclosure for your backup purposes (an external hardware unit if you're paranoid/don't want to use a software implementation for a RAID, given the capacity). 10 isn't efficient in terms of capacity, so JBOD would be the best software implementation, or parity on true hardware. There are other external RAID-in-a-box solutions that only use a single controller to manage all disks (eSATA and SAS examples respectively; neither of these comes with interface cards). Either of these will allow you 8x disks in a single array though, unlike the DAT Optic solution that's been linked.

Being as I'm used to the speed and bulletproof reliability of the XServe and fiber channel (never a single problem in about four years), I started looking at the mini-SAS tower with enterprise drives as an acceptable step down. The reality is I'm probably fine without the speed for backup purposes. So maybe an eSATA 8-bay is fine, and maybe I don't even need enterprise drives.
I understand. Once a user becomes familiar with the reliability and recovery options in the event of a failure, they don't want to go back to half-baked software solutions (even for level 10, due to the online expansion and migration features alone). :eek: :p

I'm presently trying to get comfortable with the thought of a Sans Digital TowerRAID TR8MP 8-bay eSATA with non-enterprise drives... After all, what's the probability of multiple drive failures in the RAID AND in the Mac Pro concurrently?
Exactly, which is why it's actually a viable solution. ;) Much easier on the wallet too. :D

Interesting option. $1k for the box. 10x 2TB WD enterprise drives for $2,100. $3,100 for 20TB of enterprise-drive storage.

The product description notes:

"Hardware raid 2x RAID5"

Does this mean you wind up with two 10TB RAID 5 partitions - i.e. vs. a single 20TB RAID 5 partition?
Yep.

It's running a pair of inexpensive hardware controllers (I'm betting on them being Oxford 936 series), so they'd each only control 5x disks.

For what you're doing, you won't be required to run enterprise-grade disks, though I do prefer to do that myself (data replacement is damn expensive). But I usually use consumer-grade disks for non-hardware-RAID backup systems, as they're not accessed all that often vs. the primary data location.

It's an acceptable compromise of reduced costs and performance for the intended usage.

If you do decide you want a RAID-in-a-box, see the solutions I linked above, as they don't suffer from this problem (the eSATA version is the one to compare; it is $300 USD more, but worth it IMO, as you can create a single large array).

I would not go for the WD enterprise drives, but the Hitachi Deskstar 2.0TB, which costs much less: 20TB @ $2,689.00 or 30TB @ $3,789.00, based on the website.

Here what i see:

So yes, two 10TB RAID 5 partitions plus a stripe - now it is a SINGLE volume, with speed.

That's why it's so fast!
But this is done by 2x RAID 5s being created on the unit, and striping via OS X. Part hardware, part software = hybrid, as it's not all controlled by the unit's controller chips.

It will work however. I'm just pointing out the details, as it could have bearing on recovery in the event of a failure (ideally, you fix the degraded RAID5, and OS X will be fine - but it may not; I've not tested this, nor do I recall anyone that has).
 
Sans Digital TowerRAID TR8MP 8-bay eSATA with RR622 RAID Controller

That is Fake RAID (a Linux term). The RR622 is a SOFTWARE RAID CARD. I doubt this can get over 300MB/sec - does someone have test data?

It's running a pair of inexpensive hardware controllers (I'm betting on them being Oxford 936 series)

I don't think so; the Oxford 936 can only support 4x drives with its quad interface, and the RAID GUI does not look like Oxford's.
 
For $1,000 I could buy an Areca 1222x, a battery BU for the RAID card, and a Sans Digital box, and have something that is, I am willing to bet, much, much better quality than some of these less expensive standalone setups.

After having both these less expensive RAID boxes and high-quality stuff, I am a bit leery now of the cheap RAID boxes for my TM and am going back to simple.


One thing I have been looking into is the Areca ARC-5040:
http://www.areca.com.tw/products/esatafirewire800iscsiaoeusb.htm
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151070
It's $1,300; Tekram has it for $1,099.
Knowing their quality and dependability (and since I am throwing away, scrapping, or giving my buddy two other Venus RAID 5 boxes, because I am once bitten and don't plan on having that happen again), I should have just bought this in the first place!!!!!

But it was not out two years ago.
Also, having a variety of interfaces is nice, and for secure TM, knowing my archives are there is nice; since I can throw it on a JBOD, I prefer it not to go down.

The penalty is the speed; no way is it going to be close to a regular RAID card setup, which is fine for its use.

Also, in the rare chance a RAID card dies, I don't like putting all my eggs in one basket; being able to move the array quickly to another machine is nice, even to a laptop.
 
That is Fake RAID (a Linux term). The RR622 is a SOFTWARE RAID CARD. I doubt this can get over 300MB/sec - does someone have test data?
Yes, it's a Fake RAID controller (nothing but a 6.0Gb/s SATA chip and software to handle the RAID functions). They're fine for 0/1/10 and JBOD, but not parity-based arrays, as there's no provision to solve the write hole issue.

For OpenSolaris and the Linux distros that support it, you can use such cards with RAID-Z1 and Z2 successfully as well, as the implementation of those levels eliminates the write hole issue itself. Unfortunately, Apple dropped the ZFS support that was previously promised to OS X users.
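The write hole mentioned above can be shown with a toy model: RAID 5 parity is the XOR of the data blocks in a stripe, and a crash between the data write and the parity write leaves the two inconsistent, so a later rebuild silently reconstructs bad data. This is only a conceptual Python sketch, not how any real controller is implemented:

```python
from functools import reduce

def parity(blocks):
    # RAID 5 parity is the XOR of all data blocks in the stripe
    return reduce(lambda a, b: a ^ b, blocks)

stripe = [0b1010, 0b0110, 0b1100]   # data blocks on three drives
p = parity(stripe)                  # parity block, consistent with the data

stripe[0] = 0b0001                  # new data hits drive 0...
# ...power fails here, before the parity block is rewritten

# Drive 1 later dies; rebuild its block from the (now stale) parity:
rebuilt = p ^ stripe[0] ^ stripe[2]
print(rebuilt == 0b0110)            # False: the reconstructed block is corrupt
```

Battery-backed cache on a proper hardware RAID card (or the transactional full-stripe writes in RAID-Z) is what closes this gap.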

As for speed, it can go past 300MB/s when using fast disks and enough of them. We've already had users place SSDs on such cards (NewerTech's for sure, and it's using a Marvell controller chip from the same series, if not the same P/N, depending on which version of the NewerTech card).

I don't think so; the Oxford 936 can only support 4x drives with its quad interface, and the RAID GUI does not look like Oxford's.
Good catch.

But it's going to be something similar, as a custom design would have done better to use a single RAID chip to manage the entire unit.

Honumaui - I ended up linking the same unit. Good to know it can be had for less than $1,300 (Tekram's link).
Found a few others at that price too, and one a bit cheaper (CostCentral - here). Never used them as a source though.
 
I bet that RAID box that has ten drives uses that POS!!!!! A Silicon Image RAID setup is my bet, as that seems to be a common cheap 5-disc RAID setup, and it's the one that lasted about 2 years on me :)
 
Firstly, thanks nanofrog for the extensive, knowledgeable response - I learned quite a bit in there. You are one of the 'star' contributors here and I definitely appreciate your input.

Based on yours and the replies above, I'm considering the following structure:

Needs (recap):
(1) Workstation for photo/video work (work-from-home)
(2) Media Server for mac minis at 3 TVs in-home.
(3) Backup of the above

Currently, I have one MP 1,1 acting as a media server and a second MP 2,1 acting as a photo/video workstation connected to an XServe RAID. I'm planning to eBay all of it and replace it with a more energy-efficient system, as follows:

MP 5,1 w/ single 3.33Ghz 6-core (2010), ($3,515)
- Radeon 5770 GPU (included)
- 16GB RAM (OWC) ($300)
- 120GB SSD boot (OWC) ($250)
- 60GB SSD scratch disk (might be a waste of money, given the high speed of the boot SSD? Not clear on that yet) ($150)
- 1x 3TB WD Caviar Green (quiet, cool, $200) or 1x 3TB Hitachi Deskstar (noisy? $200) for Aperture library
- 2x 2TB WD Caviar Green (no RAID, just for docs, working video files, laptop backup, etc, xfers from existing MP) ($free)
TOTAL: $4,315

Media Center File Server:
DATOptic 5-bay hardware RAID5 NAS
(media server: use for sharing iTunes and other video files with mac minis on local home TVs) ($615)
- eSATA connection to MP (card included)
- Gb ethernet connection to network (i.e. mac mini client access)
- hardware RAID5
- 5x 3TB WD Caviar Green drives (15TB = room to grow) ($1,000)
- card included
TOTAL: $1,615

If I understand correctly, I do not need to leave the MP turned on to serve the files - the mac minis can access the drive directly via ethernet even when the MP is turned off. Yes?

Backup Storage:
DATOptic 10-bay Rack Mount
($1,170)
- hardware RAID5
- eSATA connection to MP, card included
- 10x 3TB Hitachi Deskstar drives ($2,000) (DATOptic is the only company I've seen advertise support for WD 3TB drives so far - Sans Digital just says 'testing'.)
TOTAL: $3,170

TOTAL SYSTEM COST: $9,100

This uses the eSATA cards included with the external RAID boxes and no enterprise drives, but given the unlikely event of simultaneous failure of the original and the backup, it seems safe. (I'm unclear whether it would be worth purchasing a different eSATA card vs. the ones included with the DATOptic boxes, which have apparently been tested/designed to work with Snow Leopard - would a 'better' eSATA card only add more variables?)

I have about 10TB of raw data at the moment, so 30TB of raw backup (24TB usable as two 5-drive RAID 5 sets) gives room to grow, and low stress on 'new to market' non-enterprise 3TB drives. That will allow at least 3 years of use before more capacity problems, and presumably DATOptic will support 3TB enterprise drives when they come along (if I want to upgrade in a couple of years).
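As a sanity check on the usable figure (a quick Python sketch; whether the 10-bay unit presents one array or two striped RAID 5 sets determines which number applies):

```python
def raid5_usable_tb(drives: int, size_tb: float) -> float:
    # one drive's worth of capacity per RAID 5 group goes to parity
    return (drives - 1) * size_tb

print(raid5_usable_tb(10, 3))     # 27 TB if all 10 drives form one RAID 5
print(2 * raid5_usable_tb(5, 3))  # 24 TB as two 5-drive RAID 5 sets, striped
```

Either way, that comfortably covers the ~10TB of current data with room to grow.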

Thoughts?
 
I bet that RAID box that has ten drives uses that POS!!!!! A Silicon Image RAID setup is my bet, as that seems to be a common cheap 5-disc RAID setup, and it's the one that lasted about 2 years on me :)

Where did you see that a Silicon Image RAID 5 controller has an LCD controller module?

How much do you want to bet?! J/K, I don't gamble :) Look at the RAID GUI; it does not look like the HPT, SiI, or Areca GUI.

Only a high-end RAID would have an LCD controller, right?

Let me call the manufacturer.

If I understand correctly, I do not need to leave the MP turned on to serve the files - the mac minis can access the drive directly via ethernet even when the MP is turned off. Yes?
Yes! As long as the NAS is ON, you can use any of these protocols to access it: FTP, SMB, NFS, or AFP. The other system does not need to be ON.

I think it is a good choice for a Media Server; it has BitTorrent and iTunes support built in.
 
I bet that RAID box that has ten drives uses that POS!!!!! A Silicon Image RAID setup is my bet, as that seems to be a common cheap 5-disc RAID setup, and it's the one that lasted about 2 years on me :)
You're right; it's using a pair of SiI4726s.

Firstly, thanks nanofrog for the extensive, knowledgeable response - I learned quite a bit in there. You are one of the 'star' contributors here and I definitely appreciate your input.
:cool: NP. :)

MP 5,1 w/ single 3.33Ghz 6-core (2010), ($3,515)
- Radeon 5770 GPU (included)
- 16GB RAM (OWC) ($300)
- 120GB SSD boot (OWC) ($250)
- 60GB SSD scratch disk (might be a waste of money, given the high speed of the boot SSD? Not clear on that yet) ($150)
- 1x 3TB WD Caviar Green (quiet, cool, $200) or 1x 3TB Hitachi Deskstar (noisy? $200) for Aperture library
- 2x 2TB WD Caviar Green (no RAID, just for docs, working video files, laptop backup, etc, xfers from existing MP) ($free)
TOTAL: $4,315
Using separate boot and scratch SSDs is fine, and will reduce the wear on the boot disk. You don't even need 60GB though, as 40GB will do, and it's cheap. Up to you of course, but it's easier to toss that way ($100 is more "disposable", if you will, as it will need to be replaced from time to time due to dead cells).

BTW, do you really need 120GB for a boot disk (i.e. do your applications and libraries consume that much capacity)?

I ask, as 60GB seems to be sufficient for most from what I'm seeing (~25GB seems to be typical for OS X, and it can be trimmed down if you need to = 35GB or so for applications and any libraries they use).

Media Center File Server:
DAT Optic 5-bay hardware RAID 5 NAS
(media server: used for sharing iTunes and other video files with Mac minis on local home TVs) ($615)
- eSATA connection to MP (card included)
- Gb Ethernet connection to network (i.e. Mac mini client access)
- hardware RAID 5
- 5x 3TB WD Caviar Green drives (15TB raw = room to grow) ($1,000)
TOTAL: $1,615

If I understand correctly, I do not need to leave the MP turned on to serve the files; the Mac minis can access the drive directly via Ethernet even when the MP is turned off. Yes?
Correct. You do not need to leave the MP on in order to access the NAS (it's actually a computer built from an Atom D510, which is sufficient to run a small NAS).

Backup Storage:
DAT Optic 10-bay Rack Mount ($1,170)
- hardware RAID 5
- eSATA connection to MP (card included)
- 10x 3TB Hitachi Deskstar drives ($2,000) (DAT Optic is the only company I've seen advertise support for WD 3TB drives so far; Sans Digital just says 'testing'.)
TOTAL: $3,170

TOTAL SYSTEM COST: $9,100

This uses the eSATA cards included with the external RAID boxes and no enterprise drives, but given how unlikely a simultaneous failure of the original and the backup is, it seems safe. (I'm unclear whether it would be worth purchasing a different eSATA card vs. the ones included with the DAT Optic boxes, which have apparently been tested/designed to work with Snow Leopard. Would a 'better' eSATA card only add more variables?)
First things first: it's the same hardware as the other 10-bay unit from DAT Optic you linked earlier (5x disks per SIL4726 controller), so you won't be able to use the hardware alone to make a single array.

You can if you're willing to run them together via OS X's software RAID capabilities as a RAID 50 (make a pair of RAID 5's in the DAT Optic unit, then stripe them via Disk Utility).

The usable capacity of RAID 5 = (n - 1) disks * capacity of a single disk. So using 3TB disks (30TB raw = 10 x 3TB), you get 2x RAID 5's @ 12TB each of usable capacity (24TB total, though not as a single array yet; that's fine if you don't have a problem with software striping).
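As a worked check of that formula (a small sketch; the numbers below match the 10-bay unit discussed here, i.e. two 5-disk groups of 3TB drives):

```python
def raid5_usable_tb(disks: int, disk_tb: float) -> float:
    """RAID 5 usable capacity: one disk's worth of space goes to parity."""
    return (disks - 1) * disk_tb

# The 10-bay unit: two 5-disk RAID 5 groups of 3TB drives
per_group = raid5_usable_tb(5, 3)   # 12 TB usable per group
raid50_total = 2 * per_group        # 24 TB once the two groups are striped
print(per_group, raid50_total)      # 12 24
```

That is where the 24TB figure comes from: 30TB raw, minus one parity disk per 5-disk group.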

Now, to get that 24TB into a single array, you'd go into Disk Utility and stripe the 2x RAID 5's you created in the DAT Optic unit.

But you really don't need this for a backup solution. RAID 50 is beyond overkill.

You'd still be better off using a DAS (Areca card + enterprise disks) for primary storage, with a less extensive backup configuration.

I have about 10TB of raw data at the moment, so 30TB of raw backup (25TB in RAID 5?) gives room to grow and keeps the stress low on 'new to market' non-enterprise 3TB drives. That should allow at least 3 years of use before more capacity problems, and presumably DAT Optic will support 3TB enterprise drives when they come along (if I want to upgrade in a couple of years).

Thoughts?
I still don't like the idea of the primary data locations being single disks while the backup system is an extensive RAID configuration. It's the reverse of what you should be doing: use a redundant RAID array for your primary data, and a simpler RAID for backup (especially if it's archival = no other source it's stored on).

Where did you see a Silicon Image raid5 controller has LCD controller module?
Take a look at the SIL4726 data sheet and examine the diagram. Pay attention to the arrows in the bottommost portion; you'll notice that one of them is labeled GPIO.

This is the interface that is used to connect the LCD display (of which there are 2x; one per chip). ;)
 