Where did you see that a Silicon Image RAID 5 controller has an LCD controller module?

How much do you want to bet?! J/K, I don't gamble :) Look at the RAID GUI; it does not look like an HPT, SiI, or Areca GUI.

Only a high-end RAID would have an LCD controller! Right?

Let me call the manufacturer.


Yes! As long as the NAS is ON you can use any of these protocols - FTP/SMB/NFS or AFP - to access it, and the other systems do not need to be ON.

I think it is a good choice for a media server; it has BitTorrent and iTunes built in.
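For example, mounting one of its shares from a Mac is a couple of lines in Terminal (the server and share names below are made up; use Finder's "Connect to Server" if you prefer a GUI):

    # Mount an SMB share from the NAS at a local mount point
    mkdir /Volumes/nas
    mount_smbfs //user@nas-hostname/Media /Volumes/nas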


No worries, just my thoughts :) and it seems Nanofrog found out what it was :)

So I did not see it? But it seems you get what you pay for. The fact that it's two sets of RAID 5 means it's a 5-disc RAID 5 solution, and SiI is a big maker of those. It also means it's some form of lower-quality RAID setup; it could be an RoC, but that does not mean it's good. Again, it's at a price point, but for very close to that money I can get a lot nicer stuff, like the Areca box or other options.
If it was $650, maybe, but it's overpriced for what it is? That's all :)

Just because it has an LCD front end does not mean it's high end?
Kinda like those people that get the Bentley-looking grille for cars that try to look like a Bentley? Forgot the car, but they were rental cars on the islands :)
 
For the scratch SSD:
PS - it will help out.
LR - it really helps. I have not tried to optimize Aperture, so I can't say, but I still play with Aperture to keep up on it in case it comes out with features I can use :)

If you use LR and PS: I have two 40s in RAID 0, but a single 40 I think would not be enough. For PS alone, yes; not for PS and LR together, I am finding, UNLESS you dump your LR cache before you go to PS. Just a good heads-up.
The thing that's filling my scratch/cache disc setup is Bridge!! But with the Bridge cache set to the SSD, Bridge is much nicer and actually a useful tool.
So depending on what you are going to point at that scratch/cache SSD, you might want to stay with a 60?

For boot I like 100 at least, because I have the full Adobe suite and a few other things going on and keep things down to a minimum, but it expands and contracts a bit around 50 gigs, up or down 10 either way, depending on what I am doing etc.
But I might not be a normal user?
 
Nanofrog
You're right, it's using a pair of SIL4726's.
Where did you learn that the SIL4726 can do RAID 3/5/CLONE???!!!
If you look around the DATOptic website you will see it uses the SPM394 controller, which uses the JM394 chipset - here is the JM394 chipset brief - Gotcha!

Just testing your knowledge :) Please accept my apology if I offended you.

I have one of the DATOptic T5_R5-eSUF Quad Interface RAID/JBOD units :)
 
Nanofrog

Where did you learn that the SIL4726 can do RAID 3/5/CLONE???!!!
If you look around the DATOptic website you will see it uses the SPM394 controller, which uses the JM394 chipset - here is the JM394 chipset brief - Gotcha!

Just testing your knowledge :) Please accept my apology if I offended you.

I have one of the DATOptic T5_R5-eSUF Quad Interface RAID/JBOD units :)

Very good :) I will stand corrected :) as I thought it was the SiI.
I still feel this is an overly expensive box for what you get, though.
JMicron or SiI? Either way, it's a cheap RoC setup.
 
Nanofrog

Where did you learn that the SIL4726 can do RAID 3/5/CLONE???!!!
If you look around the DATOptic website you will see it uses the SPM394 controller, which uses the JM394 chipset - here is the JM394 chipset brief - Gotcha!

Just testing your knowledge :) Please accept my apology if I offended you.

I have one of the DATOptic T5_R5-eSUF Quad Interface RAID/JBOD units :)
Where are you getting that the linked DAT Optic 10-bay units are capable of level 3 and Clone (it does explicitly state it's level 5 capable)?

Neither of those levels is listed from what I'm reading... :confused: But to be fair, the site seems to be missing information, as it should be able to do 0/1/10 if it can do level 5 per chip, and there's the ability to implement 0/1/10 via Disk Utility if needed.
 
You're right, it's using a pair of SIL4726's.

So if DATOptic is using really cheap (short lifespan) hardware, then what's 'better' in the 'beer budget/champagne taste' category - i.e. a reliable NAS that accepts 3TB disks?

Using separate boot and scratch SSD's is fine, and will reduce the wear on the boot disk. You don't even need a 60GB though, as the 40GB will do, and is cheap. Up to you of course, but it's easier to toss that way ($100 is more "disposable" if you will, as it will need to be replaced from time to time due to dead cells).

I currently use about 30GB for my system files (I presently use a Velociraptor). 40GB would work for scratch. My initial thought was I could use the system disk for scratch use too, since it's so fast anyway (or is it still better to use a separate scratch disk?)

First things first; it's the same hardware as the other 10-bay unit from DAT Optic you linked earlier (5x disks per SIL4726 controller), so you won't be able to use the hardware alone to make a single array.

You can if you're willing to run those together via OS X's software RAID capabilities as a 50 (make a pair of RAID 5's in the DAT Optic unit, then stripe them via Disk Utility).

Now to get that 24TB in a single array, you'd then go into Disk Utility and stripe the two RAID 5's you created in the DAT Optic unit.
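If you'd rather script it than click through Disk Utility, the stripe step from Terminal looks roughly like this (a sketch only; the set name and disk identifiers are placeholders, so check "diskutil list" first for your actual devices):

    # Find the two hardware RAID 5 volumes presented by the enclosure
    diskutil list

    # Stripe (RAID 0) them into a single volume = RAID 50
    # "Media50", disk2, and disk3 are hypothetical identifiers
    diskutil appleRAID create stripe Media50 JHFS+ disk2 disk3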

But you really don't need this for a backup solution. 50 is beyond overkill.

You'd still be better off using a DAS (Areca card + enterprise disks for primary), with a less extensive backup configuration.

I don't necessarily like the idea of using OSX to run RAID 50. Seems like not a great idea? What if the MP boot drive fails? What happens to the software RAID - still ok? Better to use an 8-bay Sans Digital tower with a Firmtek Seritek e6G card (cheaper than Areca cards - is that b/c it's not actually a RAID controller, just a 'port multiplier'? If so, better to spring for the Areca raid controller?) Jumbled mess of a series of questions... sorry.

I still don't like the idea of primary data locations = single disk, while the backup system is an extensive RAID configuration. It's backwards of what you should be doing. Use a redundant RAID array for your primary data, and a simpler RAID for backup (especially if it's archival = no other source it's stored on).

Take a look at the SIL4726 sheet, and examine the diagram. Pay attention to the bottom most portion's arrows. You'll notice that one of them is labeled GPIO.

This is the interface that is used to connect the LCD display (of which there are 2x; one per chip). ;)

So what would you recommend for managing a 2TB Aperture library, about 1TB of raw video, 2TB iTunes library, about 6TB of Media Files (serving other mac minis), and <1TB of local docs? How would you spend your own money to accomodate and provide backup for all that, with room to grow it about 2TB/yr?
 
Very good :) I will stand corrected :) as I thought it was the SiI.
I still feel this is an overly expensive box for what you get, though.
JMicron or SiI? Either way, it's a cheap RoC setup.

So what's the 'better' solution? I have about 5TB of local data files (audio, video, photo) and about 6TB of Media files to serve to mac minis in my house. I'd like 10TB+ of NAS RAID5 for the Media Files, and need about 20TB to back up everything.

How would you go about it, for $3-4K?
 
nanofrog
Look at the RAID GUI USER GUIDE and the JM394 product brief link! You know the drill: just because it is not mentioned does not mean you cannot do it :D

sarge-
What you have found at DATOptic is the best value.

Here is my 40TB NAS, which streams BD ISOs in the house - built with the SPM394, the same controller used in the DATOptic product:

40TB Media Server

Sorry for going off topic a bit.
 
Sarge, a few thoughts from my own testing:
Scratch and boot combined is not as good as dedicated. The problem is you can't tell if you are going to fill the scratch up with some programs, and then you will really stall out your system.
For an SSD boot I still believe in the 50% rule of not filling your disc more than 50%.

Again, I had failures. I am sure most have not, but mine are on 24/7 and working, as we go through a lot of data. So I guess if funds are tighter, you might be throwing those out when/if they fail, so sometimes it's better to spend a bit more and get something nicer.

Yes, the Firmtek are just PM cards.

Who wrote "don't use RAID for the BU when your main is basic"? That is true.
I use RAID for BU because I have RAID for storage, and it's another layer of security.

I agree a NAS for your media would be nice.
http://www.smallnetbuilder.com/
has good info to read.
I have had 2 NAS boxes; both went back because I did not like the performance etc. So while I like the idea, I think I would have to spend $1000 for the box? Then I figure if I am watching TV I can just network off my computer, so for me a NAS is not needed.
But my boxes stay on 24/7. I think I read you wanted it to work with the computer off, so it sounds like a NAS might be good for you.
A 6-bay Netgear or QNAP might be good.

I would get some 7200 RPM 3TB HDDs for main storage and run them in RAID 10 in the 4 main slots, giving you 6TB of pretty fast and reliable storage for your Aperture and video etc. Figure about $800.

Get an 8-bay Sans Digital case with 2TB HDDs for your Time Machine; figure about $1100. Set 3 of them in RAID 0 for daily BU, and in case you ever need to work off it, that 6TB will BU your main working HDD.
Then put the other 5 discs in JBOD for backing up your media files off a NAS.
You then can get a NAS box.
Get a standalone Sans Digital 5-bay box for Time Machine with 2TB discs.
The reason for the separate one: if one of the power supplies failed in your external boxes, you can still be backing up your data daily!

The NAS box? That's the tough one. A good one is $1000 empty. If you fill a 6-bay case with 2TB drives you only have 10TB; that's close to your data now, so you want to put 3TB HDDs inside, and that puts it over budget?

You're in a tough spot in some ways to do a quality NAS, but do what the leftover budget allows, I guess?


If the 2TB iTunes is along with the 6TB of media, meaning 8TB total, I would put all that on a NAS box, since it's all kinda media?
The other reason: I would want to leave that RAID 10 40% open all the time; the more the better for performance! iTunes does not need performance; your Aperture does :)

So thinking more like 3TB of data and 8TB of media, 11TB total.


Not sure this is best, but some ideas?

Really, there are a million ways to configure things :)
 
So if DATOptic is using really cheap (short lifespan) hardware, then what's 'better' in the 'beer budget/champagne taste' category - i.e. a reliable NAS that accepts 3TB disks?
The quality between the parts used in either unit isn't much off, but the DAT Optic unit can at least handle RAID 5 without issue. The NAS cannot (purely software based).

So it really comes down to which product is a better fit for what you will be doing. I've had the impression you would be using both (NAS to serve movies to multiple sets, and a backup system for the MP).

Now if you're looking to use a single solution for both, then the NAS would be the way to go, so long as it can handle the capacity and come within budget. 3TB disks aren't cheap after all...

I currently use about 30GB for my system files (I presently use a Velociraptor). 40GB would work for scratch. My initial thought was I could use the system disk for scratch use too, since it's so fast anyway (or is it still better to use a separate scratch disk?)
Separate disks are better, as scratch will wear out an SSD far faster than OS/applications use (SSD cells have a limited number of write cycles before they die). OS/applications usage is almost exclusively reads by comparison, so it doesn't wear the cells like scratch writes will. This is why you should separate them (cheaper to replace a single, smaller drive than one disk that combines both). There's other advantages too, such as increased bandwidth per usage (due to separate disks, each on their own SATA port), and ease of operation (i.e. scratch dies, you replace it - no need to re-install the OS and applications as would be the case if sharing a drive).

I don't necessarily like the idea of using OSX to run RAID 50. Seems like not a great idea? What if the MP boot drive fails? What happens to the software RAID - still ok? Better to use an 8-bay Sans Digital tower with a Firmtek Seritek e6G card (cheaper than Areca cards - is that b/c it's not actually a RAID controller, just a 'port multiplier'? If so, better to spring for the Areca raid controller?) Jumbled mess of a series of questions... sorry.
The RAID set's configuration data is stored on the GPT partitions of the member disks themselves, so if the boot drive dies, the RAID is still there (IIRC, the re-installation will pick it up, but it's best to keep a clone of the OS/applications disk on a cheap HDD anyway to avoid any possible issues - it's also faster to restore this way = I'm a big fan of OS/application clones).
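As a sanity check after a re-install, you can confirm OS X still sees the set from Terminal:

    # Lists existing AppleRAID sets, their members, and their status
    diskutil appleRAID list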

That said, I don't really care for a hybrid setup like this myself, but there are instances that it gets used (mostly due to budget issues).

So what would you recommend for managing a 2TB Aperture library, about 1TB of raw video, 2TB iTunes library, about 6TB of Media Files (serving other mac minis), and <1TB of local docs? How would you spend your own money to accomodate and provide backup for all that, with room to grow it about 2TB/yr?
How much throughput do you need for each?

I ask, as some of it may be better to be split rather than going for a single solution (thinking in terms of Aperture Libraries = may be better served via a stripe set <say 2x 1TB disks, as it's too big for SSD>, as I presume you have that on media of some sort in the event it needs to be restored). You can also keep that on inexpensive mass storage (backup copy).

Another way (better IMO) would be to use a proper DAS implementation (i.e. based on an Areca card) for both the Aperture Libraries and raw video (working data). Local docs too, if they're not shared with other systems. At least this way, you can get redundancy + speed, without having to put in a massive amount of time in front of the system in the event of a failure (you could keep this to internal disks to keep clutter and costs down too, so it's not that bad). There's a few cards that can do the trick, but ideally the ARC-1880i would be a good one, and it has room to grow (including the ability to run SSD's, as it's a 6.0Gb/s model). Assuming this card + internal kit from MaxUpgrades, it should be ~$670 + disks (you will need enterprise grade for this).

Then use an inexpensive mass storage system for media, which the NAS would be a good fit for, as it can function as the Media Server you're after (ready-made or DIY - your choice).

What you need to keep in mind, is do not make the backup more extensive in terms of redundancy than your primary data location/s. Put your redundancy and speed efforts into the primary locations, and keep the backup the same or simpler (fundamental rule that always applies). This way, your data is better protected and you won't need to put in as much effort to fix a problem (i.e. not having to sit in front of the system the entire time).

I'm still not sure what your budget is, as the numbers that have been listed are costs you found (not sure if you can actually cover pricing like that). But the level of safety you seem to be familiar/comfortable with isn't going to be cheap if applied to all storage. This is part of the reason for making the backup system simpler than the primary as well (keeps it cheaper, as the backup usually doesn't have to be as robust or fast as the primary locations for a DAS system).

nanofrog
Look at the RAID GUI USER GUIDE and the JM394 product brief link! You know the drill: just because it is not mentioned does not mean you cannot do it :D
Damn did they bury it.... I had to read down to page 15, and it was in the RAID Manager GUI image, not the text.

Why on earth couldn't they have listed "0/1/Large/3/Clone/5/10" in the specifications in the first place?!?!?!?

That's why I wasn't convinced that it was the JM394 over the Silicon Image 4726.
 
So if DATOptic is using really cheap (short lifespan) hardware

If they did, they would not offer a 3-year warranty; they would have gone out of business a long time ago LOL
 
The quality between the parts used in either unit isn't much off, but the DAT Optic unit can at least handle RAID 5 without issue. The NAS cannot (purely software based).

So it really comes down to which product is a better fit for what you will be doing. I've had the impression you would be using both (NAS to serve movies to multiple sets, and a backup system for the MP).

So then, this?:

MP 5,1 3.33 6-core
Areca 1880i, connected to the below:
120GB SSD (system) (will this connect to the Areca as a basic disk (non-RAID), but make use of the 6Gb/s connection to the motherboard?)
60GB SSD (scratch, non-RAID)
4x 3TB internal, RAID1, Use for all working files and docs. Use Hitachi 3TB Deskstar drives (noisy, hot, loud?)

Then, just for media serving:
5x 3TB NAS RAID5 (DATOptic 5-Bay) (15TB) Serving media files to network, connected to Areca 1880i

Backup EVERYTHING on what? Sans Digital 8-bay on RAID5 once they approve the 3TB drives, also connected to the Areca 1880i?
 
Sarge,

The NAS is a turn-key system: just turn it on, change the IP, and use it. You don't need to add ANY additional card or adapter.

You access the data via Gb Ethernet.

For the backup you have a few options:

Add an additional 15TB to the DATOptic NAS (it is capable of handling up to 10 drives),
so you can see TWO volumes of up to 15TB each:
- One for Media
- One for BU - run Rsync to back up automatically (see the sketch below this list)
Or
14TB Sans Digital 8-bay on RAID 5 with a SAS RAID card; wait until 3.0TB drives, then move up
Or
26TB - DATOptic ten-bay - if using 3.0TB HDDs
Or
http://eshop.macsales.com/item/Sans Digital/TR8MB/
which uses the SiI 3726, which nanofrog and Honumaui do not like.
This box should work with 3.0TB HDDs.
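Since Rsync came up as the automation piece, a bare-bones sync would look roughly like this (the volume paths are hypothetical; substitute your actual share names):

    # Mirror the media volume onto the backup volume.
    # -a preserves permissions/timestamps; --delete removes files
    # on the backup that no longer exist on the source.
    rsync -a --delete /Volumes/Media/ /Volumes/Backup/Media/

Schedule it via cron or launchd and it runs unattended.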
 
It's the RAID 5 implementation they do that I am not a fan of. Not sure of the exact chips; don't care too much :) I've had them fail, which is fine, because sometimes you have to have stuff to try and use.

My Areca stuff has never let me down, 3ware never let me down in the old days, and my Firmtek stuff has never let me down. Some other cheap cases have, and SiI running RAID has.

For PM, SiI works just fine :)
 
Did you mean RAID 0 for the internal?
RAID 1 with 4 discs would be overkill :)


If I were to get an 1880 card I would be putting my main storage on it, putting my SSD boot in an Icy Dock adapter in sled 1 of the Mac Pro,
and my 60 gig scratch with an Icy Dock adapter in sled 2.

I would get a Sans Digital case with SFF-8088 connectors on the back, and you might need an SFF-8087 to SFF-8088 cable to run the 4 extra drives inside the Sans Digital box.
But I would for sure go that route. Put some 2TB enterprise drives inside, figure about $250 each; 8 of them in RAID 5 will give you 14TB of storage, plenty of room to grow.
You could get 1.5TB drives instead, figure about $180 each, so you end up with about 10.5TB of storage, which would be enough for your main working stuff.

Create a JBOD case out of a Sans Digital PM case and some 2TB HDDs
and use that for Time Machine.
I would say create another one for your NAS backup also.

My reason for two cases for BU, again: if a power supply ever fails on one case, you are out of BU for a while :) With two you can at least still be going and get data on or off if you need to, depending on how quickly you can get a new case in.

The NAS won't hook up to the Areca; it will be its own unit.
If you put 3TB HDDs inside it and run it as RAID 5 you are going to have about 12TB of storage.
If you ran it as RAID 0 or JBOD you could then get 15TB out of it.
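The capacity math behind those numbers, for reference (RAID 5 gives up one disk's worth of space to parity):

    RAID 5:      usable = (n - 1) x disk size -> (5 - 1) x 3TB = 12TB
    RAID 0/JBOD: usable =  n      x disk size ->  5      x 3TB = 15TB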
 
If they did, they would not offer a 3-year warranty; they would have gone out of business a long time ago LOL
They're inexpensive parts, regardless of the warranty offered.

Service life has as much to do with the design as the actual parts used (i.e. cheap IC part with clean and well regulated power will be more stable and run longer than the same part with crappy power that damages the chip). Of course, these differences are typically seen in the rest of the parts selection as well (i.e. better PSU = need to use better parts to begin with, and don't cut any of the sections).

To put the JMB394 into perspective in terms of quantity pricing, I'll compare it to an ARM9 chip (the JMB394 is likely ARM based, but such chips are usually only ARM7 + some additional circuits to make an RoC). In quantity, an ARM9 can be had for just under $11 USD each (here - this isn't a specialized version such as an RoC, but it should give a good idea as to quantity pricing).

For actual pricing, it would take an email to JMicron's sales dept (everywhere I looked didn't even offer them as special order).

So there's a lot of markup in DAT Optic's units compared to Areca in terms of the PCB and its parts (or similar), which use far more extensive (= expensive) processors for their RAID designs. The enclosure, fans, PSU, internal cables, and backplane PCB's add cost of course, but the bare bones Sans Digital units can give an idea as to what that retails for (just missing the RoC controller PCB is all).

So then, this?:

MP 5,1 3.33 6-core
Areca 1880i, connected to the below:
120GB SSD (system) (will this connect to the Areca as a basic disk (non-RAID), but make use of the 6Gb/s connection to the motherboard?)
60GB SSD (scratch, non-RAID)
4x 3TB internal, RAID1, Use for all working files and docs. Use Hitachi 3TB Deskstar drives (noisy, hot, loud?)
You can boot off of the Areca once it's been flashed with the EFI (EBC actually) if you go that route, but I'd leave the boot disk on the ICH (ICH = system's SATA ports). Even though it's 3.0Gb/s, it's still fast enough for a single OWC SSD right now (can sustain ~275MB/s, and the disk is a bit slower @ ~250MB/s). The 285MB/s figure you see on OWC's site is a burst rate, not sustained. If you do this, you'd use one of the included internal cables and connect it to the DATA side of the drive. You'd also need to make/get an extension cable and tie in another power cable to power it.

How To (retains the DATA signal on the optical bay cable): You'll need to cut off the male SATA end, and splice the power cables together (tie the wires you just cut to those on the Backplane Extension Cable; don't leave any power lines un-connected). Just follow the wire colors and locations, and connect using solder + heatshrink tubing, crimp connectors, or wire nuts (any of these will work; cleanest = solder + heatshrink tubing IMO). No matter the splicing method, it gets power without sacrificing the DATA line on the original cable to the optical bay, and it won't void the warranty. :)

However, I'd go with the following:
As per where to fit the SSD and leave it on the ICH, you can either place it in an Icy Dock adapter (here), or place it in the empty optical bay. Both will fit actually, but there's only a single data connection per optical bay. So one of the SSD's will have to go in an HDD bay, unless you pull the OEM optical disk, or take the SSD external, but there's no need to do this (read on).

As per the 4x disks in a RAID 1, that's really not the best way to go. A RAID 1 is really only meant to be 2x disks, where one of them is an identical copy of the other (you need a backup here too, as any mistake is duplicated on both disks, such as an accidental file deletion).

So you can save 2x HDD bays as well as the cost of 2x 3TB disks too.

Or, if you're after some additional performance and want to use 4x disks, you can use a level 10.

But as you're looking at running the Areca, you can place 3 or 4x disks in a RAID 5, get n = 1 redundancy, and performance that exceeds a level 10 (stripe set is faster, but no redundancy).

You do have options as to how you can set up the Areca, as there's 8x ports (can attach 8x disks without using SAS Expanders - up to 128 disks with SAS Expanders - it's a serious RAID card that offers a lot of bang-for-the-buck, as it's the fastest line of cards out last I checked, which was recent).

In this instance, if you do use 4x HDD's in the internal bays for a RAID (primary data location), then you can go ahead and hook up one of the SSD's to the card via one of the internal cables that come with the Areca (it's just that 3x of the ports aren't used, and there's insufficient room to do much with them, save a unit that can hold 4 or 8x 2.5" disks in a single ODD bay). This is why I didn't recommend going this way, as it puts some limits on future expansion, particularly with 3.5" disks (you can take a port external to a MiniSAS enclosure, but you need the whole port to do that = you'd lose the SSD on that port if you had to do this).

I know, this stuff can get confusing. But read carefully, and it should make sense. :eek: :p

There's other cards in that series as well that offer more ports (up to 24, but they're more expensive as a result).

BTW, you also need to run a proper UPS that has a pure sine wave output inverter. A line interactive model will be fine, but it does need the inverter type I just listed to keep from causing problems (damage potentially) to the MP's PSU, as switched types don't play well with Active PFC PSU's (what the MP uses). Refurbished work fine, and is a good option IMO, as you can get such a unit for quite a discount (i.e. half or more off). A 1500VA unit such as the SUA1500 by APC can be had for ~$250 this way (SMT1500 is similar, but has an LCD display which adds to the cost).

Then, just for media serving:
5x 3TB NAS RAID5 (DATOptic 5-Bay) (15TB) Serving media files to network, connected to Areca 1880i
No. You don't connect the NAS to any controller at all.

Just connect power and get it on the network (make sure it's configured correctly).

Backup EVERYTHING on what? Sans Digital 8-bay on RAID5 once they approve the 3TB drives, also connected to the Areca 1880i?
You'd use a simple eSATA card that supports Port Multipliers, which comes with the enclosure (this unit is being used successfully by other MR members, so we know the cards that come in the kit work properly via driver support). Those cards do not have boot support, but you don't need it. Booting will be done via the ICH (system's SATA ports).
 
I am using a Drobo Pro via iSCSI on a dedicated Ethernet port for Time Machine on my Mac Pro. I have 12 TB physical in the Drobo Pro, soon to be 20 TB when the 3 TB hard disk firmware update ships next month. Both reads and writes to the Drobo Pro run upwards of 90-110 MB/s, which I am happy with for Time Machine--this is close to the max of GigE. Firewire was slower for me. The uncached read I/O performance is very bad with 4K blocks. I have WD green drives in it, which doesn't help.

The Drobo supports thin-provisioning, so you can grow the volume under the file system without reformatting. There's a lot to like in this approach. I have two 8 TB volumes provisioned, even though I only have 7 TB of logical blocks available for those file systems with dual-disk redundancy on.

My primary storage is an 8-bay SAS RAID that nanofrog helped me sort out (thanks!). Areca 1680x card, 8x 1 TB drives, 1 is a spare, 5 TB usable, and faster performance than SSDs except for uncached 4K random reads. :) Generally it runs at ~800 MB/s for reads and 120-700 MB/s for writes.

This combination has worked pretty well for me and I've done plenty of recovers with Time Machine.
 
So there's a lot of markup in DAT Optic's units compared to Areca in terms of the PCB and its parts (or similar), which use far more extensive (= expensive) processors for their RAID designs. The enclosure, fans, PSU, internal cables, and backplane PCB's add cost of course, but the bare bones Sans Digital units can give an idea as to what that retails for (just missing the RoC controller PCB is all).

You can boot off of the Areca once it's been flashed with the EFI (EBC actually) if you go that route, but I'd leave the boot disk on the ICH (ICH = system's SATA ports). Even though it's 3.0Gb/s, it's still fast enough for a single OWC SSD right now (can sustain ~275MB/s, and the disk is a bit slower @ ~250MB/s). The 285MB/s figure you see on OWC's site is a burst rate, not sustained. If you do this, you'd use one of the included internal cables and connect it to the DATA side of the drive. You'd also need to make/get an extension cable and tie in another power cable to power it.
Thanks - that's good info.

As per the 4x disks in a RAID 1, that's really not the best way to go. A RAID 1 is really only meant to be 2x disks, where one of them is an identical copy of the other (you need a backup here too, as any mistake is duplicated on both disks, such as an accidental file deletion).

So you can save 2x HDD bays as well as the cost of 2x 3TB disks too.

Or, if you're after some additional performance and want to use 4x disks, you can use a level 10.

But as you're looking at running the Areca, you can place 3 or 4x disks in a RAID 5, get n = 1 redundancy, and performance that exceeds a level 10 (stripe set is faster, but no redundancy).

I'm thinking about RAID5 with two partitions.

You do have options as to how you can set up the Areca, as there's 8x ports (can attach 8x disks without using SAS Expanders - up to 128 disks with SAS Expanders - it's a serious RAID card that offers a lot of bang-for-the-buck, as it's the fastest line of cards out last I checked, which was recent).

In this instance, if you do use 4x HDD's in the internal bays for a RAID (primary data location), then you can go ahead and hook up one of the SSD's to the card via one of the internal cables that come with the Areca (it's just that 3x of the ports aren't used, and there's insufficient room to do much with them, save a unit that can hold 4 or 8x 2.5" disks in a single ODD bay). This is why I didn't recommend going this way, as it puts some limits on future expansion, particularly with 3.5" disks (you can take a port external to a MiniSAS enclosure, but you need the whole port to do that = you'd lose the SSD on that port if you had to do this).

I know, this stuff can get confusing. But read carefully, and it should make sense. :eek: :p

There's other cards in that series as well that offer more ports (up to 24, but they're more expensive as a result).

BTW, you also need to run a proper UPS that has a pure sine wave output inverter. A line interactive model will be fine, but it does need the inverter type I just listed to keep from causing problems (damage potentially) to the MP's PSU, as switched types don't play well with Active PFC PSU's (what the MP uses). Refurbished work fine, and is a good option IMO, as you can get such a unit for quite a discount (i.e. half or more off). A 1500VA unit such as the SUA1500 by APC can be had for ~$250 this way (SMT1500 is similar, but has an LCD display which adds to the cost).

Also helpful, as two of my three UPS just recently kicked the bucket.

No. You don't connect the NAS to any controller at all.

Wouldn't it be faster to have a direct connection? For instance, my XRAID is currently connected via fiber channel. Wouldn't an eSATA connection also outpace Gigabit Ethernet?

You'd use a simple eSATA card that supports Port Multipliers, which comes with the enclosure (this unit is being used successfully by other MR members, so we know the cards that come in the kit work properly via driver support). Those cards do not have boot support, but you don't need it. Booting will be done via the ICH (system's SATA ports).

Isn't that the same as a direct connection?

My present thought, after considering all that's been said/advised, is to set up the MP with SSD system and scratch disks, 4x 3TB internal two RAID0 with two partitions (one backing up the SSD system), connected to my XRAID (RAID5) via fiber channel. This would give me working capacity of 9TB internal and 9TB XRAID, both in RAID5.

I've started ripping a lot of my Video TS folders to high quality mkv files, and I can reduce my media library to under 5TB total (slightly over half the 9TB XRAID capacity). I can serve all that from my XRAID. I can install 4x3TB in my MP in RAID5 with a small partition to back up my SSD system (using the Areca 1880i), and just forgo a 'real' third backup array (i.e. 8-bay tower) until the 3TB enterprise drives come down in price and are more widely tested/supported by either Areca or Sans Digital.

I've never had a drive failure in the XRAID or any MP, so perhaps I'm not as 'afraid' as I should be, but it seems as long as everything is running in RAID5 I'm safe enough to roll the dice another six months, until the 3TB drives get a little better track record and RAID box/vendor support.

Six months from now it will probably cost $1500 for a 24TB 8-bay RAID with 3TB consumer drives, and $2500 for enterprise drives. eSATA is good enough for backup speed. If I need to replace my XRAID to go to a NAS with more capacity, the same argument will hold true for 3TB support there, too.

The issue that drove me to this problem in the first place was insufficient storage/backup capacity, and between shrinking my present collection (mainly via HQ ripping) and expanding my internal MP storage it seems more reasonable to 'risk it' by running everything RAID5 and waiting about six months for better 3TB testing/support/pricing.

I could tack on a cheap 8-bay setup with Sans Digital + 2TB WD Cav Green drives for about $1200 that would connect via eSATA to the MP. That 16TB (12TB in RAID5) would be enough to back up near-term needs, and I could replace it with a 3TB drive array later. It just seems I need 3TB drives, and for the most part they're a bit of an issue.

Which reminds me of one other problem: the MP will see 3TB drives installed individually, but not with the Apple-sanctioned RAID card. I don't actually know if the Areca 1880i will recognize the 3TB internal drives as 3TB...?

Oi vey. :confused:
 
I'm thinking about RAID5 with two partitions.
What are the partitions for?

Also helpful, as two of my three UPS just recently kicked the bucket.
Glad it was timely then. :)

Wouldn't it be faster to have a direct connection? For instance, my XRAID is currently connected via fiber channel. Wouldn't an eSATA connection also outpace giga ethernet?
The eSATA port on the back of the NAS is so the unit can access an additional 5x drives (Port Multiplier based enclosure), not via an eSATA card to the Mac Pro. So for the computers it's to make data available to, you'll have to use Ethernet (10/100/1G for the DAT Optic unit you're looking at to a switch/router, which can be WiFi if you wish from that point to each of the systems).

Personally, I prefer wired, as it's less of a security risk (in the case of personal use, it keeps bandwidth thieves off your home network).

Isn't that the same as a direct connection?
Yes, but it's for the BACKUP system, not the NAS. Big difference, as you're looking at 2x different storage systems for different purposes.

My present thought, after considering all that's been said/advised, is to set up the MP with SSD system and scratch disks, 4x 3TB internal two RAID0 with two partitions (one backing up the SSD system), connected to my XRAID (RAID5) via fiber channel. This would give me working capacity of 9TB internal and 9TB XRAID, both in RAID5.
This is a bit confusing, so see if this helps (not sure if you're intending to use a DAS, or just recycle your XRAID system).
  1. I don't recommend using a RAID 0 as a backup. For the SSD's, just use a single disk as a clone for the OS/applications disk.
  2. You do not need to backup the scratch disk, as that's just temporary data (waste of time, effort, and money).
  3. To use FC, you'll have to install an FC card in the MP (not sure what you actually have on hand you can recycle).

I've started ripping a lot of my Video TS folders to high quality mkv files, and I can reduce my media library to under 5TB total (slightly over half the 9TB XRAID capacity). I can serve all that from my XRAID. I can install 4x3TB in my MP in RAID5 with a small partition to back up my SSD system (using the Areca 1880i), and just forgo a 'real' third backup array (i.e. 8-bay tower) until the 3TB enterprise drives come down in price and are more widely tested/supported by either Areca or Sans Digital.
Assuming you've got the DVD's/BD's of your movies, you don't have to keep a backup, as you have the original disks they came from. But the advantage to having a backup system for such a database is to keep from having to re-perform all those rips if the primary location loses that data (sounds like you'd have a massive amount of time invested in moving your original sources to HDD).

I've never had a drive failure in the XRAID or any MP, so perhaps I'm not as 'afraid' as I should be, but it seems as long as everything is running in RAID5 I'm safe enough to roll the dice another six months, until the 3TB drives get a little better track record and RAID box/vendor support.
You have to be careful, as a good RAID system can make you think you'll never have problems. This isn't always the case, even with proper configurations (i.e. hardware can still fail, no matter who made it or how the array was implemented).

Six months from now it will probably cost $1500 for a 24TB 8-bay RAID with 3TB consumer drives, and $2500 for enterprise drives. eSATA is good enough for backup speed. If I need to replace my XRAID to go to a NAS with more capacity, the same argument will hold true for 3TB support there, too.
I go for more, smaller-capacity disks (i.e. use 1TB models, as they're cheap). It usually works out for enclosures as well, as the largest disk capacity is always expensive.

This is where the larger port-count cards come in, and if the speed requirements are less than insane, SAS Expanders can work as well (more than adequate in your case - think of them as Port Multiplier enclosures for SAS based cards).

I could tack on a cheap 8-bay setup with Sans Digital + 2Tb WD Cav Green drives for about $1200 that would connect via aSATA to the MP. That 16TB (12TB in RAID5) would be enough to backup near-term needs, and I could replace it with 3TB drive array later. It just seems I need 3TB drives, and for the most part they're a bit of an issue.
Use additional enclosures and smaller disks (i.e. 4 port card and 2x 8 or 10 bay enclosures per). Cheaper too.

Please note that slots 3 and 4 actually share the same 4x PCIe lanes, so if used simultaneously, things will slow down (done via a PCIe switch soldered down to the backplane board in the MP = the board with the PCIe slots on it for 2009/10 model MP's).

Which reminds me of one other problem: the MP will see 3TB drives installed individually, but not with the Apple-sanctioned RAID card. I don't actually know if the Areca 1880i will recognize the 3TB internal drives as 3TB...?
At some point, Yes. They may actually work now, but you'd be a guinea pig if you tried it. But as mentioned already, more disks of smaller capacity will be the cheaper way to go so long as you're making proper comparisons, such as MiniSAS to MiniSAS, PM to PM, ... (say 2TB disks max - greens don't have the best track record in RAID, save the Western Digital RE-4GP's). Greens would be fine in PM enclosures in a JBOD configuration though, and more than fast enough for movies (I still prefer 7200rpm disks or better for primary locations though).

And you don't need super fast speeds for movies either (40MB/s is all you need for uncompressed 1080p).
 
What are the partitions for?

I was thinking of two partitions, one backing up the other. This would only really be to protect against accidental file deletion type situations.

The eSATA port on the back of the NAS is so the unit can access an additional 5x drives (Port Multiplier based enclosure), not via an eSATA card to the Mac Pro. So for the computers it's to make data available to, you'll have to use Ethernet (10/100/1G for the DAT Optic unit you're looking at to a switch/router, which can be WiFi if you wish from that point to each of the systems).

On units like the Sans Digital, the eSATA ports are the only connection to the MP. My thought was to use a simple system like that for backups.

This is a bit confusing, so see if this helps (not sure if you're intending to use a DAS, or just recycle your XRAID system).
  1. I don't recommend using a RAID 0 as a backup. For the SSD's, just use a single disk as a clone for the OS/applications disk.

  1. I would back up the System SSD on the internal drives (def not worried about scratch) on a separate small partition.
3. To use FC, you'll have to install an FC card in the MP (not sure what you actually have on hand you can recycle).

I have two Apple-supplied four port FC cards right now, one each in my MP 1,1 and 2,1. I assume I can swap one into a 5,1 MP without any problem? I could hang two XRAIDs off one MP (the XRAID has two FC ports, one on each 7x bank). Given a loaded XRAID (14x 750gb) can be had for $2500 on ebay, I may just buy another one of those. I was thinking I'd sell mine and buy a new NAS and DAS (as per prior conversation), but maybe its best to wait another 6 months.

the advantage to having a backup system for such a database is to keep from having to re-perform all those rips if the primary location loses that data (sounds like you'd have a massive amount of time invested in moving your original sources to HDD).
Exactly. Many, many hours went into ripping the DVD's and now into ripping the mkv files. I would NOT want to have to do that over...

I go for more, smaller-capacity disks (i.e. use 1TB models, as they're cheap). It usually works out for enclosures as well, as the largest disk capacity is always expensive.

I always think 'more drives = more chance for failure' - less is more (reliability)... no? Also more power consumption.

This is where the larger port-count cards come in, and if the speed requirements are less than insane, SAS Expanders can work as well (more than adequate in your case - think of them as Port Multiplier enclosures for SAS based cards).

Use additional enclosures and smaller disks (i.e. 4 port card and 2x 8 or 10 bay enclosures per). Cheaper too.
Again, though, more parts to fail = higher probability of failure. :( No?

Please note that slots 3 and 4 actually share the same 4x PCIe lanes, so if used simultaneously, things will slow down (done via a PCIe switch soldered down to the backplane board in the MP = the board with the PCIe slots on it for 2009/10 model MP's).

I'm planning to use a Radeon 5770 (single PCIE), four port FC card, the Areca 1880i, and a USB PM card (could do without it, but I don't like external USB hubs). How would you configure it all (i.e. what in what slots?)

You'd be a guinea pig if you tried it.
Always a good time ;)
But as mentioned already, more disks of smaller capacity will be the cheaper way to go so long as you're making proper comparisons, such as MiniSAS to MiniSAS, PM to PM, ... (say 2TB disks max - greens don't have the best track record in RAID, save the Western Digital RE-4GP's). Greens would be fine in PM enclosures in a JBOD configuration though, and more than fast enough for movies (I still prefer 7200rpm disks or better for primary locations though).
I'd like to run the 4x 3TB internal disks in RAID5, but I'd also like a quiet machine. What are the quietest 3TB 7200rpm drives? WD Black? Hitachi reviews sound 'noisy'... Others? I've not had much success finding a direct 3TB drive comparison test. Any experience with 3TB Hitachi or Seagate vs WD Black?

And you don't need super fast speeds for movies either (40MB/s is all you need for uncompressed 1080p).

I get that. The main reason I'm thinking I should stick with my XRAID right now is it probably won't go down in value much in the next six months, but everything related to 3TB drive RAID systems should be better ironed out and cheaper.

The only problem then is hoping I don't have an unrecoverable RAID failure in the interim. :(
 
I was thinking of two partitions, one backing up the other. This would only really be to protect against accidental file deletion type situations.
It's not a proper backup solution as the data is still on the same disks. As per doing what you're indicating, it's doable, but it's a waste of resources and time IMO, particularly as you need a proper backup anyway.

Partitioning can be used to improve performance (aka a short-stroke partition), as it keeps your data off of the slowest tracks on the disk (drives slow down as you fill them). As a result, you don't want to go past the 50% full mark if performance is critical. But if you plan around the worst case conditions, you're fine throughout the capacity; it's just more expensive to do so (best to build a system where you can add disks and keep the capacity at less than full).
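A short-stroke setup is nothing exotic; it's just a partition that occupies the first (fastest) portion of the disk. From Terminal it would look something like this (the disk identifier and volume names are illustrative only):

    # Split the disk: a fast 50% partition up front, remainder behind it
    diskutil partitionDisk disk3 2 GPT JHFS+ Fast 50% JHFS+ Rest R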

On units like the Sans Digital, the eSATA ports are the only connection to the MP. My thought was to use a simple system like that for backups.
This is fine for backups (why I've directed you towards it).

It seems the confusion is a result of asking about both a backup system and a NAS system, as the NAS has to be networked since it's connected to multiple systems. The PM enclosure + eSATA card backup configuration is just attached to a single system (= DAS), which is why it's suitable for that.

I would back up the System SSD on the internal drives (def not worried about scratch) on a separate small partition.
For OS/applications disks, make a clone. It will save you a lot of time and effort in restoring if needed (including software issues).
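Carbon Copy Cloner or SuperDuper! are the usual tools for this, though OS X's built-in asr can do it too; a rough sketch (the volume names are placeholders):

    # Block-copy the boot volume onto the clone disk (erases the target!)
    sudo asr restore --source /Volumes/Boot --target /Volumes/Clone --erase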

I have two Apple-supplied four port FC cards right now, one each in my MP 1,1 and 2,1. I assume I can swap one into a 5,1 MP without any problem? I could hang two XRAIDs off one MP (the XRAID has two FC ports, one on each 7x bank). Given a loaded XRAID (14x 750gb) can be had for $2500 on ebay, I may just buy another one of those. I was thinking I'd sell mine and buy a new NAS and DAS (as per prior conversation), but maybe its best to wait another 6 months.
The newer equipment is the better way to go IMO. Just get 3TB disks out of your mind. It's actually cheaper to get the additional bays and use cheaper drives (smaller capacity, but the cost/GB is actually lower).

Exactly. Many, many hours went into ripping the DVD's and now into ripping the mkv files. I would NOT want to have to do that over...
That's a lot of work, and I wouldn't want to do it over again either in your position.

I've not yet bothered because of the number of DVD's and BD's I've collected over the years due to the hours of work (just did my music collection, and that was a job...). As the movie collection is actually larger, and each disk is much larger than CD's, I'm definitely not looking forward to that at all. Unfortunately, I keep adding disks, so the pile won't ever be easier in the future (I can't seem to bring myself to sell any of them). :p

I always think 'more drives = more chance for failure' - less is more (reliability)... no? Also more power consumption.
Yes and No. It depends on the configuration (number of arrays and members in each) and power management settings (i.e. there's something called MAID with Areca's cards, which spins down disks when left idle for a period of time). Of course, you have to wait for all of the disks in the set to spin up before they can be accessed once they've timed out.

There is a feasible limit on the member count for any RAID level, and parity based arrays are no different (i.e. I've gone as high as 12 members, but prefer to keep them to 8x). Rebuilds take too long past that, as well as the increased odds of another disk failure during the rebuild process (the age of the disks matters as well).

I'm planning to use a Radeon 5770 (single PCIE), four port FC card, the Areca 1880i, and a USB PM card (could do without it, but I don't like external USB hubs). How would you configure it all (i.e. what in what slots?)
Skip the FC card.

Slot 1 = 5770
Slot 2 = RAID card (8x lane card, so you really want it to run on all lanes; not an absolute necessity in your case right now, but if you add disks, you could bottleneck if you were running on a 4x lane slot such as 3 or 4).
Slot 3 = eSATA card (for the PM enclosure used for Backup)
Slot 4 = USB card (use this sparingly if running the PM enclosure at the same time, as those are sharing the same 4x lanes via a PCIe switch).​

I'd like to run the 4x 3TB internal disks in RAID5, but I'd also like a quiet machine. What are the quietest 3TB 7200rpm drives? WD Black? Hitachi reviews sound 'noisy'... Others? I've not had much success finding a direct 3TB drive comparison test. Any experience with 3TB Hitachi or Seagate vs WD Black?
I've not run 3TB disks yet, as the cost/GB is too high, and worse, they're problematic on some hardware (i.e. firmware can't address the entire capacity yet = need to partition the disks in order to do so). Not my idea of usable yet.

But with the number of disks you're looking at, you're going to have some noise. Especially the NAS and PM enclosure IMO. Using Green models can help, but you've still got a fair few that will be running.

I get that. The main reason I'm thinking I should stick with my XRAID right now is it probably won't go down in value much in the next six months, but everything related to 3TB drive RAID systems should be better ironed out and cheaper.

The only problem then is hoping I don't have an unrecoverable RAID failure in the interim. :(
You can do it cheaper by using additional members of smaller capacity disks NOW.

You can wait for 3TB disks if you wish, but you'll always be in the same boat when you need to upgrade capacity (you'd need the largest capacity disks at the time to make it work, as you can't just add disks). At least with additional bays, you have additional options.

Ideally, you create what you need now, and have additional ports on the card so you can just add the hardware you need later. This keeps you from having to swap out all the disks each time you need additional capacity. It also can increase your speed and allow for migration to other RAID levels if you need more redundancy (speed too, such as moving from 5/6 to 50/60).

As per total data loss, that's why you need a proper backup system. The odds are reduced using a redundant level of RAID, but it's not statistically impossible to encounter total data loss either (I've seen it happen with too many members in an array, due to additional aging disks dying during the rebuild process). This is why you have to keep the member count in check.

There really is a lot involved when creating RAID, and your capacity requirements are dictating it's needed to prevent a massive amount of time restoring by hand (i.e. re-ripping all your movies and music).
 
To put the JMB394 into perspective in terms of quantity pricing, I'll compare it to an ARM9 chip (the JMB394 is likely ARM based, but such chips are usually only ARM7 + some additional circuits to make an RoC). In quantity, an ARM9 can be had for just under $11 USD each (here - this isn't a specialized version such as an RoC, but it should give a good idea as to quantity pricing).

This is the 2nd time you have assumed something incorrectly! I am just at odds with someone like you rambling about a chipset that you know nothing about.

Think about it, man. Could an ARM9 calculate a parity RAID (RAID 5) at over 200MB/sec with FIVE drives?

Where is the ARM that can control a RAID 5 at over 200MB/sec??!!!!

IO processors have improved a lot: FIFO per channel, a more efficient PHY layer, independent command fetch, scatter/gather and command execution, NCQ... All of these make RAID 5 more affordable now.

Just like in the old days: when we mentioned a server we thought of an Xserve, but now we have the Mac mini server. Would that be cheap and not usable?

FYI, I recommend ARC cards to my clients all the time, but I also try to save my clients money :)
 
This is the 2nd time you have assumed something incorrectly! I am just at odds with someone like you rambling about a chipset that you know nothing about.
How so?

I've asked you for some proof, and all you've done is write a post calling me an idiot. :rolleyes:

What I did do is use the listed specifications of DAT Optic's product in question. Hell, I had to dig in the GUI User Guide to find the additional levels (3 and Clone mode - which, BTW, are only supported if the EEPROM contains the necessary code, so it's possible to use the JM394 and not support the additional levels)...

If there's more to it, then post it (proof of why you've come to your conclusions - that's what I've not seen from you yet). That's all I asked for.

Think about it, man. Could an ARM9 calculate a parity RAID (RAID 5) at over 200MB/sec with FIVE drives?

Where is the ARM that can control a RAID 5 at over 200MB/sec??!!!!
First off, the ARM9 was mentioned as a price comparison only. But in fact, the ARM7 can do 200MB/s in RAID 5 for 4x disks (specifically, the Oxford/PLX 936QSE).

As the ARM9 is a more advanced design (faster clocks don't hurt either), it's not unreasonable that it could produce 200MB/s with an additional disk.

IO processors have improved a lot: FIFO per channel, a more efficient PHY layer, independent command fetch, scatter/gather and command execution, NCQ... All of these make RAID 5 more affordable now.

Just like in the old days: when we mentioned a server we thought of an Xserve, but now we have the Mac mini server. Would that be cheap and not usable?
I never argued to the contrary here.

But JMicron's no different than their competitors in that respect (Oxford/PLX, Silicon Image, and a few others). This is how technology goes.... it gets cheaper over time (rather quickly compared to other industries). They're low cost parts for entry level hardware solutions.

FYI, I recommend ARC cards to my clients all the time, but I also try to save my clients money :)
So do I. The combination of Areca and Sans Digital for external enclosures make for a nice price/performance ratio (particularly over the included cables, as those can add up quickly).

But this has nothing to do with any discussion of inexpensive RoC's. Of which, JMicron competes in the same market segment as companies such as Oxford/PLX, Silicon Image, the lower end Marvell's, and even LSI's 1064 (it's getting long in the tooth these days, and the newer offerings by companies such as JMicron, Marvell, ... are competing, even exceeding the performance it can produce without an added processor).
 
Skip the FC card.

Slot 1 = 5770
Slot 2 = RAID card (8x lane card, so you really want it to run on all lanes; not an absolute necessity in your case right now, but if you add disks, you could bottleneck if you were running on a 4x lane slot such as 3 or 4).
Slot 3 = eSATA card (for the PM enclosure used for Backup)
Slot 4 = USB card (use this sparingly if running the PM enclosure at the same time, as those are sharing the same 4x lanes via a PCIe switch).​

The USB card is not critical. If I ditch the FC card, I also ditch the XServe RAID.

For this purpose, what box would you use for NAS and DAS, knowing that current data uses 6TB of NAS (movies and music served to the house) and 3TB of working files (photo and video files, some docs as well), and that I need to grow at least 1TB/yr on the NAS and DAS, each?

That means I need at least one 8-bay NAS and one 10+ bay DAS.

I've not run 3TB disks yet, as the cost/GB is too high, and worse, they're problematic on some hardware (i.e. firmware can't address the entire capacity yet = need to partition the disks in order to do so). Not my idea of usable yet.

But with the number of disks you're looking at, you're going to have some noise. Especially the NAS and PM enclosure IMO. Using Green models can help, but you've still got a fair few that will be running.

The NAS and DAS will be located in my basement, directly beneath my office, so I will only potentially hear the MP RAID, not the NAS or DAS (I only need about 3' of cable, potentially less but I prefer not to make things too tight for maintenance purposes).

The problem is still cost though. My existing XRAID NAS has 9TB of RAID5 capacity and a very fast FC connection. I can sell that and my two quad port FC cards for around $3,000-$3,500 on ebay.

The least expensive Sans Digital EliteNAS EN208L+BXE is $2,500. If you install 8x 2TB Hitachi Ultrastar drives, it's $4,300 for 14TB of RAID 5 capacity. Basically that's an increase of 5TB for $800. Sort of worth it, but how much slower is the connection from the MP vs my current FC connection?

As I understand it, that box is set up as two separate RAIDs, so if you're running RAID5 you really have to fill bays in sets of 4 when you're using 2TB drives.

Is there a less expensive 8-bay NAS solution that will provide a sufficient level of performance and excellent reliability?
 