
mackpro

macrumors member
Original poster
Feb 1, 2008
77
0
Indiana University
build your own

Might be worth a read

This looks like a very real possibility. I've got three main questions.

1. How on earth can I build a case like that, or what options exist for building a case like this?
2. What type of software is available to run something like this, assuming I have a working knowledge of Linux (GUI Versions)?
3. What are the pros/cons of a setup like this compared to just getting a storage array and connecting it to a Mac Pro considering the fact that I want to be able to pull data from this remotely.

...And doesn't 4GB of ram seem like too little?
 

jtara

macrumors 68020
Mar 23, 2009
2,008
536
This looks like a very real possibility. I've got three main questions.

1. How on earth can I build a case like that, or what options exist for building a case like this?

In a machine shop. They provide the 3D drawings. If they're open-licensing (or even closed-licensing) this, then there might be somebody building and selling the case, as well.

Might also want to look at the recently open-sourced Facebook plans. They've open-licensed and open-sourced their entire datacenter, including the servers they build themselves.

But I suspect there are plenty of products available like this off the shelf. It's just some sheet metal and off-the-shelf circuit boards.

2. What type of software is available to run something like this, assuming I have a working knowledge of Linux (GUI Versions)?

Any Linux distribution. All it is is a high-density PC with a whole BUNCH of SATA drives plugged-in. If you want a GUI, plug in a video card.

3. What are the pros/cons of a setup like this compared to just getting a storage array and connecting it to a Mac Pro considering the fact that I want to be able to pull data from this remotely.

It IS a storage array. Implemented as a Linux server.

However, any RAID is just going to be software RAID. The box itself is just a JBOD (= "just a bunch of disks").

...And doesn't 4GB of ram seem like too little?

They're just using it as a file server. It isn't doing anything else. I'm assuming they determined that 4GB is sufficient for filesystem cache, given the possible throughput of the drives and network interface.

BTW, hot-plugging SATA drives works just fine in current Linux kernels. I do it all the time. I have a couple of carrier-less SATA docks on the front of my Linux system. You open a little door, pop in the bare SATA drive, close the door and in a few seconds the system recognizes the drive. I'm not using RAID - I just do this for making backups. SATA is designed for hot-plugging, assuming the backplane is designed for that. Of course, you should flush and dismount any filesystems on the drive before removing. I'm assuming in a software RAID configuration, you'd be able to hotplug the drives and that would be recognized and the array would be re-built.
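For illustration, replacing a failed member of a Linux md software RAID after a hot-swap is only a few commands (a sketch; the array and device names below are placeholders, not from any specific build):

    # Mark the dead member as failed and remove it from the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # Hot-plug the replacement, partition it to match, then add it back;
    # md starts rebuilding onto it automatically
    mdadm --manage /dev/md0 --add /dev/sdc1

    # Watch rebuild progress
    cat /proc/mdstat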

If you do this, make sure you use very, very, very reliable fans! The fans fail, and this system is toast. Literally!

Nice thing is, this all fits in 4U, so it's cheap colo.
 

robzr

macrumors member
May 4, 2006
92
18
Portland, OR
If you do this, make sure you use very, very, very reliable fans! The fans fail, and this system is toast. Literally!

Nice thing is, this all fits in 4U, so it's cheap colo.

That's pretty cool, but you'd probably be better off with an off-the-shelf case for this project. Fans are an area where you will see a difference in the quality of case design. If you look at mid- to high-range rackmount equipment, you'll see they often have rows of fans two deep for redundancy: if one fan fails, there is another one right behind it, and the ducting is designed to provide sufficient and even airflow over all critical components. As jtara says, if a fan goes and you don't catch it, components can heat up, and excessive heat is a killer.

If you go the Linux route, you'd be wise to run smartd to monitor hard drive temperatures and have it email/SMS you if the temp hits an upper limit.
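For example, a single line in /etc/smartd.conf covers this (a sketch only; the thresholds and address are placeholders you'd tune for your drives):

    # Monitor SMART attributes on all drives; warn if temperature changes by 4C,
    # reaches 45C, or hits the 55C critical limit, and email the admin.
    DEVICESCAN -a -W 4,45,55 -m admin@example.com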

You should also choose a chassis with good airflow design (i.e. ducting such that the air blows evenly over the drives and also covers the other components appropriately). Too low a temperature is just as bad for drives as too high; a poor chassis design will result in some drives running too warm and some too cool. Check out Google's study for details on this; it's a must-read for anyone interested in DIY RAID.

You can also look into fan speed controllers to keep airflow from being excessive; some are thermostatically regulated. This is the type of thing that is built into a high-quality rackmount system, and one of the reasons they are so expensive.

Rob
 

jtara

macrumors 68020
Mar 23, 2009
2,008
536
If you look at mid- to high-range rackmount equipment, you'll see they often have rows of fans two deep for redundancy: if one fan fails, there is another one right behind it, and the ducting is designed to provide sufficient and even airflow over all critical components.
If you go the Linux route, you'd be wise to run smartd to monitor hard drive temperatures and have it email/SMS you if the temp hits an upper limit.

This is one of the reasons I'm a fan of IBM servers. They do the double row of fans, they have a dual power supply option, and both the fans and power supplies are hot-pluggable. If you do the cabling right (and they supply a bracket for this), you can just slide the box out of the rack on its rails, take the top off (flip a lever), pull a fan (flip a little latch and it comes right out), plop a new one in, and never skip a beat.

Of course, the system will notify you if something requires service. There is a separate network connection for the onboard management processor, and you can do remote reboots (watching the BIOS messages), even update the BIOS remotely.

Similar set-up for power supplies.

Their on-site service is also great. I had a couple of servers installed in a Wall St. location and required service. (OK, Broad St. to be exact.) When IBM says two hours, they mean two hours. (You pay more for better response time.)

The thermal design is of course excellent. Beautiful clear plastic ducts for the CPUs. Not the typical box where they just blow a bunch of fans over the components and pray.
 

Consultant

macrumors G5
Jun 27, 2007
13,314
36
This looks like a very real possibility. I've got three main questions.

1. How on earth can I build a case like that, or what options exist for building a case like this?
2. What type of software is available to run something like this, assuming I have a working knowledge of Linux (GUI Versions)?
3. What are the pros/cons of a setup like this compared to just getting a storage array and connecting it to a Mac Pro considering the fact that I want to be able to pull data from this remotely.

...And doesn't 4GB of ram seem like too little?

Does not include labor and support costs.

Custom built case will be more expensive for 1 copy. Diagram is included on that page.
 

Transporteur

macrumors 68030
Nov 30, 2008
2,729
3
UK
Whoops, missed that.

Well, why wouldn't a Mac recognise 32TB? Of course it will. However, you only get 32TB of usable storage if you configure that device as JBOD or RAID 0 (which would be bonkers).

But to be honest, the probability that anyone here has a Mac with that very unit and can tell you specific details is almost zero.
So your best bet is to call Dulce and ask them anything you want to know.

For $5000 (does it actually include drives? If not, add another $3000), I definitely wouldn't buy that thing without knowing every little detail about it.
 

mackpro

macrumors member
Original poster
Feb 1, 2008
77
0
Indiana University
Whoops, missed that.

Well, why wouldn't a Mac recognise 32TB? Of course it will. However, you only get 32TB of usable storage if you configure that device as JBOD or RAID 0 (which would be bonkers).

But to be honest, the probability that anyone here has a Mac with that very unit and can tell you specific details is almost zero.
So your best bet is to call Dulce and ask them anything you want to know.

For $5000 (does it actually include drives? If not, add another $3000), I definitely wouldn't buy that thing without knowing every little detail about it.

In this particular case I was planning on using RAID 5, but the Mac would still need to be able to handle managing 32TB of data.

I am in the process of researching this device and several other mass-storage solutions. This very thread is actually a part of my research because crowd sourcing on a board like this is great for solving a complex IT problem.
 

Transporteur

macrumors 68030
Nov 30, 2008
2,729
3
UK
In this particular case I was planning on using RAID 5, but the Mac would still need to be able to handle managing 32TB of data.

I am in the process of researching this device and several other mass-storage solutions. This very thread is actually a part of my research because crowd sourcing on a board like this is great for solving a complex IT problem.

Why exactly do you want to use the Quad Pack Rack? It features exchangeable quad drive boxes, which aren't useful unless you need to transfer them from one rack to another across different locations.
Check out the other Dulce systems instead.

If you configure a 16-drive unit as RAID 5 (which I personally wouldn't do, as you'd have only a single parity drive), the Mac would have to handle 30TB if you use 2TB drives.

Anyhow, practically there is no limitation to volume sizes in OS X. Well, in fact there is, about 8EB, but you most likely won't be able to go over this limit in the next 20 years. ;)

Edit: And now the most important recommendation:
If you want really good recommendations and background information, drop the user nanofrog a message. He's the storage expert in this forum.
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
If you insist on doing this in a single box, consider running Linux. There are quite a number of good vendors for reliable, high-density Linux boxes and storage systems.
Linux is definitely the way to go. Unfortunately, using a ready-made solution will definitely exceed the budget.

DIY would be better cost wise, but it comes with the compromise of self-support. And it's likely to exceed the budget as well. :(

I'm surprised nobody has mentioned 3TB drives. You can get 7200 RPM 3TB drives for $179 or 5400 RPM 3TB drives for $149 these days, so why not reduce the number of bays you need by ~50%?
At this level, enterprise disks would be required with a hardware RAID card for stability reasons (RAID 6 given the member count), and they're recommended even for a software implementation (i.e. multiple arrays in RAID-Z2 + 2 hot spares per array, due to both the member count and the fact that it's to be used for remote access <I'm treating it as a remote system too = no person available 24/7 to fix a fault>).

A Hitachi 3TB Ultrastar goes for $350 each from newegg (here).

2 arrays @ 7 members each + 2 hot spares per array (RAID 6 or RAID-Z2) gives a usable capacity of 30TB.
  • Cost = $6300.
That doesn't leave much, and it certainly won't cover both the server and a backup solution (assuming a JBOD configuration, there's another $1800 in 3TB consumer grade Hitachi Deskstars <not my favorite brand either for consumer disks due to failures>).
  • Total disks = $8100 (no boards, enclosures, or cables). :eek: :(
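Spelling out the arithmetic behind those figures (the backup disk count is my rough inference from the $1800 figure, at ~$150 per consumer 3TB disk):

    Usable capacity: 2 arrays x (7 members - 2 parity) x 3TB = 30TB
    Primary pool: 2 x (7 members + 2 hot spares) = 18 disks x $350 = $6,300
    Backup pool (JBOD): ~12 consumer 3TB disks x ~$150 = ~$1,800
    Disks total: ~$8,100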
Keep in mind RAID is not a replacement for backup...
Absolutely.

Given the amount of data involved, I'd skip trying to use anything but a proper backup (there are inexpensive ways of doing it for the requirements, but eSATA may not be an option; i.e. single eSATA port would take ~2 days @ 200MB/s). Adding more disks (consumer units would work here due to the lower duty cycle) on a second set of non-RAID controllers would probably be the cheapest way to go (may need another disk enclosure; say 3TB disks and a 24 bay Norco enclosure).
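Rough numbers behind that ~2 day estimate:

    30TB ≈ 30,000,000 MB; at ~200MB/s over a single eSATA link:
    30,000,000 / 200 = 150,000 s ≈ 41.7 hours ≈ 1.7 days for a full copy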

There's also a bit of a concern with SATA due to the cable length limitations (1.0 meters), as the SFF-8088 to fanout adapters used tend to introduce too much contact resistance (results in unstable arrays - seen this before). In such cases, shorter cables are needed, which means a custom order (say 0.5 to 0.75 meters).

RAID rebuild time is going to be affected more by RAID-6 vs. RAID-5, and by software RAID vs. hardware RAID, than by 3TB vs. 2TB/1TB drives. The double-parity overhead of RAID-6 makes it very slow, but for a budget 30TB file system I can't think of a better trade-off.
RAID 6 or RAID-Z2 would be the way to go in terms of the redundancy vs. performance trade-off for the given capacity requirement, IMO.

Advantages of 3 TB drives - chassis needs fewer bays, future expandability with the same bays is better, administration is easier (fewer drives, fewer potential points of failure), power draw is significantly less - if you've shopped for colo's lately you'll find that power is pretty expensive these days, moreso than rackspace or arguably bandwidth.
There are cases where using smaller-capacity drives has an advantage (no need to keep buying enclosures), as well as additional performance due to the parallelism from having more members in a set.

But in this case, the budget may not work out (need to go back and see if an additional enclosure + 2TB disks is cheaper, but I don't think it will be).

RAID-5 would be ridiculous, especially if you were to go with 31 1TB drives; that would be so risky it's ludicrous. Two drives go out and you just lost all your data. That's only ~3% of your drives that can fail before you're SOL.
Exactly. Even if someone was there 24/7 to catch a failed disk (i.e. no hot spare), there's still the risk of a second failure during the rebuild process, which is becoming more critical with increased platter densities.

If you want to save some money, build your own ZFS box based on OpenSolaris or FreeBSD.
Given the budget, I don't see a way around this approach.

Edit: And now the most important recommendation:
If you want really good recommendations and background information, drop the user nanofrog a message. He's the storage expert in this forum.
Gee. Thanks for the mess you got me into... :eek: :D :p
 

mrichmon

macrumors 6502a
Jun 17, 2003
873
3
I am wishing for help and advice deciding what type of hardware to purchase for a 30TB server. It must meet the following requirements:

• 30TB RAID (Redundancy important for backup purposes)

One thing you should consider is that using a RAID array is not a backup strategy. A RAID array will help guard against a failed hard drive, though at the expense of an increased overall chance that some drive in the array fails.

Individual hard drives have a probability of failure that is expressed as a "Mean Time Between Failures" (MTBF). Typically the MTBF for a hard drive is in the hundreds of thousands of hours (e.g. MTBF = 500,000 operating hours). With a single volume stored on one hard disk, the volume depends on that one drive, so the MTBF for the volume is equal to the MTBF of the disk. With a volume stored on a RAID array, the volume depends on some number of hard drives, and with every additional drive in the array the probability of a drive failure somewhere in the array increases.
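As a rough worked example (illustrative numbers, not measured figures for any particular drive):

    Single drive:   MTBF ≈ 500,000 hours ≈ 57 years between failures, statistically
    14-drive array: expected time to the first member failure ≈ 500,000 / 14
                    ≈ 35,700 hours ≈ 4 years

So at array scale a drive failure stops being a rare event, which is exactly what the chosen RAID level has to absorb.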

With the right RAID levels you can guard against these hard drive failures and provide recoverability of the data. But being able to recover the data by rebuilding an array is not a backup strategy. RAID will also provide much faster average data rates than the bare storage media will provide. (This was the original motivation for developing RAID approaches.)

If you have 30TB of data that you care about then you might want to consider a backup strategy that will protect against:
  • catastrophic hardware failure (fire/over-voltage/etc) that would eliminate more drives than your RAID level can rebuild from,
  • building damage/theft of hardware.

Not trying to come down on your plans. I know from experience designing multi-TB and small-PB storage systems that TB-scale storage without a backup strategy is a very bad day waiting to happen.

If this 30TB storage system is a scratch store then maybe you don't need a backup strategy. I've rarely seen a store of 10+TB where the users could afford to lose the contents. Sure, you can configure a large data store and have it run successfully for a long time. But hardware failure does happen; it is only a matter of time.
 

kram96

macrumors newbie
Apr 18, 2011
2
0
Synology

Have you considered a Synology NAS?
It can do everything that you've asked for, and more, for way less.

http://www.synology.com/us/products/RS2211RP+/

Capable of 30TB and scalable to 66TB.
Link aggregation with throughput of 198MB/sec

$3k for the server
$140 3TB Drives x 10 = $1400

A VPN could be a good solution for remote access; you can install OpenVPN on the box or get a hardware VPN router.

With your budget, buy a second one and use it as a backup. The units can sync automatically and encrypted, either on the same network or to a remote location.

This is assuming that you have a switch that will handle two sets of aggregated links.
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
$140 3TB Drives x 10 = $1400.
Even though it's only 10x disks (which would only reach 30TB as a JBOD or RAID 0, i.e. no redundancy), consumer grade disks aren't the way to go for this much capacity used for primary storage. As the backup pool, consumer grade disks are fine due to the lower duty cycle (say 8 hrs per day on average).

It's cheap, but it's not going to be reliable enough for a primary pool. Enterprise grade 3TB disks aren't cheap ($350 per, which is more than double the 5400rpm consumer versions, not quite that much for the 7200rpm consumer units; all 3TB units).

Some might consider this splitting hairs, but I see it as a critical difference for the uptime requirement.
 

robzr

macrumors member
May 4, 2006
92
18
Portland, OR
Even though it's only 10x disks (which would only reach 30TB as a JBOD or RAID 0, i.e. no redundancy), consumer grade disks aren't the way to go for this much capacity used for primary storage. As the backup pool, consumer grade disks are fine due to the lower duty cycle (say 8 hrs per day on average).

It's cheap, but it's not going to be reliable enough for a primary pool. Enterprise grade 3TB disks aren't cheap ($350 per, which is more than double the 5400rpm consumer versions, not quite that much for the 7200rpm consumer units; all 3TB units).

Some might consider this splitting hairs, but I see it as a critical difference for the uptime requirement.

http://storagemojo.com/2007/02/20/everything-you-know-about-disks-is-wrong/

From what I've seen, there isn't much of a benefit to "enterprise" hard drives within the same family, that is to say a consumer SATA 3TB 7200 rpm vs. an enterprise SATA 3TB 7200 rpm. Google's study shows they use "consumer" grade, and numerous studies have shown that there is no significant difference in failure rate.

I say save the money and spend it on a few more hot spares; you'll be better off, cheaper and safer. If money's no object, or you know what firmware tweaks the enterprise-class drives have and whether or not they'll affect you, then this isn't your thread ;)

Also the common trend seems to indicate that MTBF/MTTF is pretty meaningless in the real world...

The OP indicated that he has a Mac Pro he can use; if that's the case then go DAS by all means. Linux will be a compromise...

Rob
 

Matty-p

macrumors regular
Apr 3, 2010
170
0

OP,
Have you thought about a $1k Mac mini / eBay Xserve with a $5k 16-bay Promise array filled with 16x 2TB drives (about $1k)? That's still only $7k and will fit in 4U of colo.
http://www.google.com/m/products/detail?gl=uk&client=ms-android-google&source=mog&q=promise+16+bay&cid=12414427445159617004
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
From what I've seen, there isn't much of a benefit to "enterprise" hard drives within the same family, that is to say a consumer SATA 3TB 7200 rpm vs. an enterprise SATA 3TB 7200 rpm. Google's study shows they use "consumer" grade, and numerous studies have shown that there is no significant difference in failure rate.
I realize where you're coming from.

Disks nowadays are definitely made from the same primary components (enterprise models used to provide different mechanicals, such as spindle motors and servos). There are still some differences (whether they matter or not is debatable, such as "cherry picked" platters and additional sensors), and as you mention, the firmware is significant in some cases (definitely with hardware RAID cards).

If running a Linux box and using ZFS, consumer disks could be used. But I've seen too many failures lately, particularly of consumer grade disks (Seagate and Hitachi in particular), so I'm hesitant to trust them for high duty cycle requirements (I definitely agree that stated specifications aren't what happens in the real world; for example, ~31% failure rates with Seagate 7200.12's just off newegg's user reviews <qualified them by actual failure rates I could determine from the reviews, just presume the 1 or 2 egg ratings = failures, as some got 5 stars due to newegg's return policy>). I even take the time to surface-scan disks out of the bag when possible, and find that the enterprise models don't exhibit as many bad sectors in general (which lends one to think the "cherry picked" platters do have a positive influence).

I'm not convinced by the previous heat arguments (sudden death; though in general with electronics, heat over time tends to have a negative effect on overall lifespan). As for vibration, I'm a bit torn: with multiple arrays in the same enclosure and no rubber dampening, I'm concerned that vibration could be a problem on newer disk types, as platter densities are higher and the heads ride closer to the surface than in past disk designs. I did note that the articles have some age to them and don't reflect disks made in the last few years (the article is 4 years old).

The OP indicated that he has a Mac Pro he can use; if that's the case then go DAS by all means. Linux will be a compromise...
Then you're talking about a RAID card, enclosures, and enterprise disks (need them for the recovery time settings), where the disks alone would eat the majority of the budget ($8100 of it). That would leave $1900 for card/s and enclosures (SAS expander for backup disks and 1:1 port/disk ratio for the primary pool would likely be more cost effective).

Linux is a compromise, but based on the stated budget, it seems to be the way to go.
 

robzr

macrumors member
May 4, 2006
92
18
Portland, OR
I realize where you're coming from.

Disks nowadays are definitely made from the same primary components (enterprise models used to provide different mechanicals, such as spindle motors and servos). There are still some differences (whether they matter or not is debatable, such as "cherry picked" platters and additional sensors), and as you mention, the firmware is significant in some cases (definitely with hardware RAID cards).

I'll believe that when I see a study that shows it, but with 200,000 disks in the Google & CMU studies, and NetApp's rep stating that there's not a reliability advantage, I think I'll save the extra cash unless I know I need to outlay it.


I'm not convinced by the previous heat arguments (sudden death; though in general with electronics, heat over time tends to have a negative effect on overall lifespan). As for vibration, I'm a bit torn: with multiple arrays in the same enclosure and no rubber dampening, I'm concerned that vibration could be a problem on newer disk types, as platter densities are higher and the heads ride closer to the surface than in past disk designs. I did note that the articles have some age to them and don't reflect disks made in the last few years (the article is 4 years old).

If you check out the Google study (http://labs.google.com/papers/disk_failures.pdf), there's a pretty clear correlation between temperature, both too high and too low, and drive failure trends.

Then you're talking about a RAID card, enclosures, and enterprise disks (need them for the recovery time settings), where the disks alone would eat the majority of the budget ($8100 of it). That would leave $1900 for card/s and enclosures (SAS expander for backup disks and 1:1 port/disk ratio for the primary pool would likely be more cost effective).

Linux is a compromise, but based on the stated budget, it seems to be the way to go.

Sans Digital AS316X2HA, 16 disk RAID 6 DAS, $3295 @ Amazon including a RAID card and all cables, redundant power supplies. 14x Hitachi 0S03230's gives you 30 TB RAID-6 with 2 hot spares, and you still have 2 empty drive bays for expansion. Grand total $5255, and you can get a second identical setup for backup and you're only $510 over the $10k budget.
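For reference, the math (using the ~$140 consumer 3TB price quoted earlier in the thread):

    14 x $140 (Hitachi 3TB) = $1,960
    $3,295 enclosure/card/cables + $1,960 = $5,255 per unit
    2 units (primary + backup) = $10,510, i.e. $510 over a $10k budget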

Run the Mac's native AFP server and it will humiliate the Linux box performance-wise, while being easier to maintain and integrate. Netatalk is great, but its afpd has some serious performance issues, and avahi has some stability issues in my experience.

Rob
 

nanofrog

macrumors G4
May 6, 2008
11,719
3
I'll believe that when I see a study that shows it, but with 200,000 disks in the Google & CMU studies, and NetApp's rep stating that there's not a reliability advantage, I think I'll save the extra cash unless I know I need to outlay it.

If you check out the Google study (http://labs.google.com/papers/disk_failures.pdf), there's a pretty clear correlation between temperature, both too high and too low, and drive failure trends.
There are instances where this logic makes sense (definitely for backup or some software based storage implementations <no parity based arrays in order to avoid the write hole issue associated with 5/6/...>).

But the links you've provided are 4 - 5 years old, and they don't take a couple of things into consideration.
  1. Higher platter densities and smaller distances between the platters and heads.
  2. Cost cutting (i.e. Hitachi shifting their primary manufacturing from Malaysia to China). IIRC, they spun the Shenzhen facilities into high gear ~2007, and it's become the sole source of their consumer disks (haven't seen a Malaysian labeled consumer disk in years now). QC's never been the same since...

Sans Digital AS316X2HA, 16 disk RAID 6 DAS, $3295 @ Amazon including a RAID card and all cables, redundant power supplies. 14x Hitachi 0S03230's gives you 30 TB RAID-6 with 2 hot spares, and you still have 2 empty drive bays for expansion. Grand total $5255, and you can get a second identical setup for backup and you're only $510 over the $10k budget.
Have you verified that those disks will work with the controller that comes in that package?

Given it's a RAID card, I wouldn't just presume they would (dealt with too many consumer disks causing stability issues on RAID cards). BTW, the photo of that card resembles Areca (they do offer ODM services, and have provided products to Sans Digital and Highpoint), and I've seen first hand how picky Areca's can be over disks.

If they won't work, they're moot on that controller (or any other they don't work with, and RAID cards tend to be difficult with consumer models due to the recovery timings coded into the firmware). The days of running WDTLER.exe on consumer grade WD disks are over.

BTW, not sure if you've heard, but WD is attempting to acquire HitachiGST (article).
 

rcorbin

macrumors newbie
Jul 10, 2011
1
0
I'm surprised nobody has mentioned 3TB drives. You can get 7200 RPM 3TB drives for $179 or 5400 RPM 3TB drives for $149 these days, so why not reduce the number of bays you need by ~50%?

It doesn't sound like you need incredibly high performance given that it's for archiving video and your internet uplink is only 1.5 MB.

You didn't mention your budget; if it's on the low side, I'd go with a Linux NAS. You could build the whole thing for under $4k.

$335 20-bay SATA chassis http://www.newegg.com/Product/Product.aspx?Item=N82E16811219033
$300 for 2x Intel SASUC8I 8 port SATA controllers
$80 for 4x SFF-8087 cables
$500 for a Mobo, RAM, CPU
$2100 for 14x 3TB 5400 RPM drives @ $149 each
$150 ~800watt quality power supply
$150 misc cooling fans & wiring, dvd drive

Run it RAID-6 with 2x hot spares on Linux. If afpd meets your performance expectations, you're good to go. I have a 10x 5200rpm 2TB RAID-6 Linux NAS media server on a 3- or 4-year-old low-end Athlon X2, and I get 250 MBps on reads with software RAID-6. afpd is single-threaded and will likely be a frustrating bottleneck for writes; it will hang during writes while it syncs to disk periodically. The above config would have no problem saturating gig ethernet during a read and would come close to it during a write. Using it as a SAN via iSCSI may allow for higher performance, although you'd need to bounce incoming connections off another server.
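As a sketch of what that software RAID-6 layout might look like (device names and filesystem are placeholders, not a recommendation for specific hardware):

    # 14 data-pool disks: 12 active members (10 data + 2 parity) + 2 hot spares
    mdadm --create /dev/md0 --level=6 --raid-devices=12 --spare-devices=2 /dev/sd[b-o]
    mkfs.xfs /dev/md0
    mkdir -p /srv/storage && mount /dev/md0 /srv/storage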

Hey Rob,

Was looking into my next project (home media storage) and I came across this thread and your post. I put together a quick wishlist for what you described above

http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=20749087

If you have time, can you take a quick glance? I'm trying to see if I'm missing anything like cables/fans, or whether there are any gotchas like hardware incompatibilities. I only want 30TB in RAID 6, but I wanted two extra drives at home in case of failures, plus 2 drives for the OS in RAID 1... so 14 drives total plugged in.

Thanks!

-r
 

snakebitesmac

macrumors newbie
Nov 4, 2010
11
0
I would recommend something similar to my setup.

My server runs on OpenIndiana (Solaris). I have about 10TB of ZFS RAID storage.

I'm usually a little paranoid about data integrity.

This might work for you.
2x 150GB 2.5" SATA drives for mirrored system disks
14x 3TB HDDs (2x 7-drive arrays in dual parity)

You can expose the shares to Mac users via Netatalk.
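A minimal sketch of that layout (pool name, device names, and share path are placeholders):

    # Two 7-disk RAID-Z2 (dual parity) vdevs in one pool
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    zfs create tank/media

    # Netatalk 2.x: export the dataset to Macs via AFP (line in AppleVolumes.default)
    # /tank/media "Media" options:usedots,upriv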
 