
Just wondering, if I were to have 2 separate PCI Express RAID controllers, each with a RAID 0 of SSD's attached, could I do a software RAID 1 on those and have redundancy?
 
You could, but you wouldn't even need a hardware RAID controller to run the stripe sets (RAID 0).

I can put you onto cards that will work in a MP and have sufficient bandwidth for 6.0Gb/s SATA disks, RAID or non-RAID. Neither is as inexpensive as, say, a NewerTech 6.0Gb/s 2-port eSATA card, as they offer additional bandwidth (performance always costs money). But additional details would be needed to get you into the right configuration without over-spending or ending up with an insufficient configuration for your specific needs.

Another thing to note is that the configuration you've described is a level 0+1. Level 10, aka 1+0, is better (make a pair of RAID 1's first, then stripe those 2x sets together; hence 1+0). For the details as to why, take a look at the RAID Wiki, and pay close attention to those two levels (under Nested RAID levels).
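
If it helps to see why, here's a quick sketch (Python, a hypothetical 4-drive example, not from the Wiki) counting which two-drive failures take each layout down:

# Hypothetical 4-drive example of why 1+0 beats 0+1: count the
# two-drive failure combinations that take each array down.
from itertools import combinations

drives = ["A", "B", "C", "D"]

def raid01_dead(failed):
    # 0+1: stripe (A,B) mirrored with stripe (C,D); dead once both stripes lose a member
    return bool({"A", "B"} & failed) and bool({"C", "D"} & failed)

def raid10_dead(failed):
    # 1+0: mirror (A,B) striped with mirror (C,D); dead only if a whole mirror is lost
    return {"A", "B"} <= failed or {"C", "D"} <= failed

pairs = [set(p) for p in combinations(drives, 2)]
print("0+1 fatal two-drive failures:", sum(map(raid01_dead, pairs)), "of", len(pairs))  # 4 of 6
print("1+0 fatal two-drive failures:", sum(map(raid10_dead, pairs)), "of", len(pairs))  # 2 of 6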

Hope this helps get you started. :)
 
I ask this actually in the context of a Windows machine, perhaps within a Mac Pro. I have an Areca ARC-1880-ix-12 running 4 OWC 6G SSD's in RAID 0 in a Mac Pro, and have had great success with it.

For background, we are building a server that is going to use 24 SSD's in RAID 10 and are considering the above card's big brother, the ARC-1880-ix-24. I have 2 concerns with this, though.

1) We may hit a wall in regard to IOPS with 24 OWC 6G SSDs on the LSI2108 ROC.

2) We may hit a throughput wall on the PCI-E x8 bus.

We have a Windows workstation here that has an LSI 9265-8i in it, which has also been great; it's based on the newer dual-core LSI2208, enabling higher IOPS.

Since there is, to my knowledge, not yet a 24 port card using the LSI2208, I was wondering whether the better option would be to:

1) go with the ARC-1880-ix-24 and consider upgrading it when a 24 port LSI2208 comes to market.

2) go with 2 or 3 9265-8i's and some sort of RAID implementation on top.

We need this server to be able to saturate a 6-port 10Gb Ethernet card from Small Tree.
 
What is the exact machine you'll be running this on?

I ask, as 6 * 10Gb ports will generate ~6GB/s sustained (assuming worst case here), and an 8x lane PCIe 2.0 slot is good for 4GB/s. So you will need at least 2x cards running on PCIe 2.0 lanes (each slot wired @ 8x lanes per). This would also allow you to reach the theoretical limit of the 6x 10Gb ports, which would be 7.5GB/s (1.25GB/s per port), and gives you some additional headroom for bursts (2x 8 lane PCIe 2.0 slots = 8GB/s).
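
Rough math behind that, if you want to sanity-check it (a sketch only, assuming ~500 MB/s usable per PCIe 2.0 lane):

# Back-of-the-envelope check of the slot math above (PCIe 2.0 assumed
# at ~500 MB/s usable per lane, 6 NIC ports at 10Gb each).
import math

ports = 6
line_rate_gbs = ports * 10 / 8          # 7.5 GB/s theoretical (1.25 GB/s per port)
sustained_gbs = 6.0                      # worst-case sustained estimate used above

x8_slot_gbs = 8 * 0.5                    # 4 GB/s per 8-lane PCIe 2.0 slot
slots_needed = math.ceil(line_rate_gbs / x8_slot_gbs)

print(f"theoretical NIC load: {line_rate_gbs} GB/s, sustained estimate: {sustained_gbs} GB/s")
print(f"x8 PCIe 2.0 slots needed to cover line rate: {slots_needed}")   # 2 (= 8 GB/s total)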

This would be a problem in a MP, unless you remove the graphics card. As it would be far easier to accomplish this with PC systems, particularly if you're going to build it, I'd recommend the PC route instead.

As for cards, I'd go with the Areca's over LSI (they're faster and have a better cost/performance ratio). And as you've used one, I expect you're familiar with it.

BTW, how many SSD's do you plan on running, and why not use RAID 5 (it can actually exceed 10 for database use on recent cards)?
 
What is the exact machine you'll be running this on?

I ask, as 6 * 10Gb ports will generate ~6GB/s sustained (assuming worst case here), and an 8x lane PCIe 2.0 slot is good for 4GB/s. So you will need at least 2x cards running on PCIe 2.0 lanes (each slot wired @ 8x lanes per). This would also allow you to reach the theoretical limit of the 6x 10Gb ports, which would be 7.5GB/s (1.25GB/s per port), and gives you some additional headroom for bursts (2x 8 lane PCIe 2.0 slots = 8GB/s).

This would be a problem in a MP, unless you remove the graphics card. As it would be far easier to accomplish this with PC systems, particularly if you're going to build it, I'd recommend the PC route instead.

As for cards, I'd go with the Areca's over LSI (they're faster and have a better cost/performance ratio). And as you've used one, I expect you're familiar with it.

BTW, how many SSD's do you plan on running, and why not use RAID 5 (it can actually exceed 10 for database use on recent cards)?

We're still getting everything planned out but it's going to likely be based on a Supermicro board with 5520 chipset. Most of them have built in graphics, so we'll have:

1x PCI-E x16 Taken up by x16 NIC
1x PCI-E x16 open
1x PCI-E x8 open
1x PCI-E x8 open

As for LSI vs Areca, I don't really have a preference; at 8 ports the LSI seems to be faster, at least on paper, as it has the newer RoC chip, while at 24 ports the Areca seems to be better with its bigger cache. The Areca cards are based on LSI chips, so it's probably cache and firmware making the difference.

We could live with not fully saturating that NIC for now, until a 24-port card comes out based on the newer RoC. I haven't found any reviews online of people pushing the 24 port card this hard, so I have no idea if it'll top out at its theoretical peak or lower. The main concern here is whether there is anything risky about putting a software RAID layer on top of a hardware one.

Not necessarily set on RAID 10 vs 5 though. We're looking at either Intel 510 or OWC 6G 120GB SSDs and want ~1.2 TB usable. I was thinking of RAID 10 with a couple of hot spares.

The system drive is separate and not part of this consideration.

If it makes a difference, this server is a file server for a roughly 50 client render farm.
 
I'm using an ARC-1210 and its processor is too slow for my SSD's.

Each of my 4 SSD's can handle 230/75 MB/s so it should be about 920/300 MB/s in total, but I got only 455/329 MB/s.
 
We're still getting everything planned out but it's going to likely be based on a Supermicro board with 5520 chipset. Most of them have built in graphics, so we'll have:

1x PCI-E x16 Taken up by x16 NIC
1x PCI-E x16 open
1x PCI-E x8 open
1x PCI-E x8 open
This or a similar board would be the way to go.

As for LSI vs Areca, I don't really have a preference; at 8 ports the LSI seems to be faster, at least on paper, as it has the newer RoC chip, while at 24 ports the Areca seems to be better with its bigger cache. The Areca cards are based on LSI chips, so it's probably cache and firmware making the difference.
Which LSI's are you talking about?

I ask, as what I've seen so far, is the Areca's are beating LSI's products (have been for a few years at least for top tier models).

BTW, the 1880 series is using a custom ASIC designed by Areca (not sure who they have fabbing them, but if I had to guess, it's probably TSMC).

We could live with not fully saturating that NIC for now, until a 24-port card comes out based on the newer RoC. I haven't found any reviews online of people pushing the 24 port card this hard, so I have no idea if it'll top out at its theoretical peak or lower. The main concern here is whether there is anything risky about putting a software RAID layer on top of a hardware one.
I don't think a software layer on top of a hardware one will be a problem, but I've not tried it with that many SSD's (it's not a problem with mechanical disks, and I stuck to 0 or 1 for the software layer as they're not heavy on overhead; i.e. 5 on hardware, 0 in software to make a 50 out of a couple of sets on cards that were only good for 0/1/10/5/JBOD).

Not necessarily set on RAID 10 vs 5 though. We're looking at either Intel 510 or OWC 6G 120GB SSDs and want ~1.2 TB usable. I was thinking of RAID 10 with a couple of hot spares.
Since it seems you're interested in MLC based drives, I'd recommend sticking with 10 then, as a parity configuration would wear the drives too quickly (not unrealistic to see drives fail before the planned replacement schedule under such usage IMO).

The system drive is separate and not part of this consideration.
That's good.

BTW, if the board has a small hardware RAID controller, it might be a good idea to use it for a RAID 1, given what you're trying to do.

If it makes a difference, this server is a file server for a roughly 50 client render farm.
It helps. I was getting the impression you were after a high performance relational database (which would have needed SSD's for their random access performance).

I'm using an ARC-1210 and its processor is too slow for my SSD's.

Each of my 4 SSD's can handle 230/75 MB/s so it should be about 920/300 MB/s in total, but I got only 455/329 MB/s.
That particular card is definitely underpowered for SSD's (was designed only for mechanicals).
 
Which LSI's are you talking about?

I ask, as what I've seen so far, is the Areca's are beating LSI's products (have been for a few years at least for top tier models).

BTW, the 1880 series is using a custom ASIC designed by Areca (not sure who they have fabbing them, but if I had to guess, it's probably TSMC).


The SSD Review's article is saying the 1880 series is based on the LSI2108.

The article then goes on to say the LSI 9265-8i is the first card on the market using the new LSI2208, basically a dual-core version of the LSI2108.

What I glean from this is that right now, at 8 ports, the LSI 9265-8i is the fastest on paper.

At all other port counts, both LSI and Areca products are using the 2108, so the Arecas may be faster.

No idea how reputable a site this is, it's rare to find RAID card reviews at the high end :(

I'm hoping for a 24 port single card, PCI-E x16, that has the 2208 and a big fat cache :)

Since it seems you're interested in MLC based drives, I'd recommend sticking with 10 then, as a parity configuration would wear the drives too quickly (not unrealistic to see drives fail before the planned replacement schedule under such usage IMO).

I'd love to go SLC but we need the entire storage system (card+drives) to come in under 10k, while providing at least 1TB usable and as much speed as possible.


BTW, if the board has a small hardware RAID controller, it might be a good idea to use it for a RAID 1, given what you're trying to do.

That's the plan. System will be 2 SSD's in RAID 1 on the motherboard.
 
The SSD Review's article is saying the 1880 series is based on the LSI2108.
Odd.

I say this, as it seems each of the chips on the Areca's handles 16x ports, not 8 (LSI's are 8x per chip, which is why their 8 port cards only have a single chip). And Areca is big on systems engineering within a series (just add additional chips to bring the port count up to the desired quantity).

ATTO does use the LSI's though (note the 4x heatsinks on the 24 port model = big clue), and the Areca does a bit better with the same port count and cache capacity (R6xx series vs. 1880 series). Firmware differences can explain some of it, but I'm not sure it's all equal in terms of hardware between these two products.

The article then goes on to say the LSI 9265-8i is the first card on the market using the new LSI2208, basically a dual-core version of the LSI2108.
This one could be interesting, but they need to get more than an 8 port version out.

As for their statement to the effect that the only company that can out-do LSI is LSI, I disagree. Intel's IOP341 (dual core ARM based IIRC) beat LSI's comparable product at the time, and both ATTO and Areca used it. So I see this as a strong bias, and I'm uncertain as to the validity of all of their claims. Hopefully the test data, at least, is accurate and can be relied on for purchase decisions.

Unfortunately, Intel hasn't yet released a 6.0Gb/s version (beginning to wonder if they will).

At all other port counts, both LSI and Areca products are using the 2108, so the Arecas may be faster.
I'm not convinced, as they didn't show a picture of what was under the heatsinks.

Call me jaded, but the specs and design don't match up from my POV (hardware engineer).

No idea how reputable a site this is, it's rare to find RAID card reviews at the high end :(
There's a few out there, but they are hard to find.

arecaraid.cineraid.com has a forum that's been useful for performance data on the 1880 series (checked yesterday, and discovered it's down).

I'm hoping for a 24 port single card, PCI-E x16, that has the 2208 and a big fat cache :)
How long can you wait?

Not sure how long it will take to get a 24 port model out, but it's likely to be at least 13 weeks, and I'm surprised there isn't an entire line out simultaneously (usually the case). :confused:

I'd love to go SLC but we need the entire storage system (card+drives) to come in under 10k, while providing at least 1TB usable and as much speed as possible.
Quite understandable, as SLC is horrible in terms of cost/GB. :(
 
I'm hoping for a 24 port single card, PCI-E x16, that has the 2208 and a big fat cache :)

Is that really going to get you more throughput? At some point going through the SATA/SAS expander becomes a bottleneck. SATA/SAS is point-to-point at 6Gb/s, but if you are trying to hook 24 drive ports up to what is probably 4 controller ports, they can't all connect at the same time; you're going to hit a switch. That wasn't an issue when HDDs were much slower than the SAS/SATA link, but with SSDs you can't mob a single switch/controller.

It seems likely that a 24 port model will save you the cost of an external expander, but not really buy much more throughput (e.g., 4 ports on the ROC would still cap you at 3GB/s, so an 8x PCIe card would still be enough). The LSISAS2208 ROC seems to have 8 ports, but it would still need either one expander hanging off all 8 ports or two 4-port expanders.
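
Rough numbers on that uplink point (a sketch; raw 6Gb/s per lane, encoding overhead ignored):

# Rough ceiling imposed by the expander uplink, per the point above
# (raw 6 Gb/s per SAS/SATA lane; 8b/10b encoding overhead is ignored here).
def uplink_ceiling_gbs(lanes, gb_per_lane=6):
    return lanes * gb_per_lane / 8

print(uplink_ceiling_gbs(4))   # 3.0 GB/s -> below the 4 GB/s of an x8 PCIe 2.0 slot
print(uplink_ceiling_gbs(8))   # 6.0 GB/s -> now the x8 slot is the tighter limit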
 
Is that really going to get you more throughput? At some point going through the SATA/SAS expander becomes a bottleneck. SATA/SAS is point-to-point at 6Gb/s, but if you are trying to hook 24 drive ports up to what is probably 4 controller ports, they can't all connect at the same time; you're going to hit a switch. That wasn't an issue when HDDs were much slower than the SAS/SATA link, but with SSDs you can't mob a single switch/controller.

It seems likely that a 24 port model will save you the cost of an external expander, but not really buy much more throughput (e.g., 4 ports on the ROC would still cap you at 3GB/s, so an 8x PCIe card would still be enough). The LSISAS2208 ROC seems to have 8 ports, but it would still need either one expander hanging off all 8 ports or two 4-port expanders.

I don't understand your question. The 24 port cards such as the Areca have 6 SFF-8087 connectors, each of which carries 4x 6Gb channels/ports.

Therefore, if we assume each SSD can deliver 500 MB/s for easy math, the drive set is capable of pushing 12 GB/s to the card (on paper), if the card's ROC and cache can handle it.

The question then becomes, what becomes the bottleneck first? The RoC/cache configuration? Or the x8 connection?

If the card can only handle 3 GB/s then the x8 is sufficient. If the card can handle more than 4 GB/s, then the x8 is a bottleneck.

PCI-E 2.0 x8 is going to top out at 4 GB/s. An x16 card would top out around 8 GB/s, which isn't 12, but certainly better than 4.
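
Put in numbers (same round figures as above; the RoC's own ceiling is the unknown):

# Whichever of the drive set, the RoC/cache, or the slot is slowest sets
# the real ceiling. Round figures from this post; the RoC limit is unknown.
drive_set_gbs = 24 * 0.5      # 12 GB/s on paper (24 SSDs @ ~500 MB/s each)
pcie2_x8_gbs = 4.0
pcie2_x16_gbs = 8.0

print("ceiling behind an x8 slot: ", min(drive_set_gbs, pcie2_x8_gbs), "GB/s")
print("ceiling behind an x16 slot:", min(drive_set_gbs, pcie2_x16_gbs), "GB/s")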
 
I heard back from LSI that there is no ETA on a 24 port version of the 9265.

We're leaning towards the ARC-1880-ix-24 w 4GB cache for now.

Can upgrade it later when the time comes.
 
Therefore, if we assume each SSD can deliver 500 MB/s for easy math, the drive set is capable of pushing 12 GB/s to the card (on paper), if the card's ROC and cache can handle it.

The question then becomes, what becomes the bottleneck first? The RoC/cache configuration? Or the x8 connection?

If the card can only handle 3 GB/s then the x8 is sufficient. If the card can handle more than 4 GB/s, then the x8 is a bottleneck.

PCI-E 2.0 x8 is going to top out at 4 GB/s. An x16 card would top out around 8 GB/s, which isn't 12, but certainly better than 4.
What exactly are you trying to do in terms of 10G Ethernet bandwidth to start with?

I ask, as I was under the impression that you were after 6 ports @ 10Gb running full bore, which will definitely exceed what an 8x lane PCIe 2.0 card can deliver (you'd need a pair, assuming the RoC doesn't throttle at the desired level; if it does, you'd need yet another card or more).

Now assuming the above is the case (saturate 6x 10G Ethernet ports), and we're talking about a level 10 configuration, it's not much of a load to run the RAID 1's on the card, then handle the stripe via software in the OS. So it's possible a pair could get it done.

To be sure (whatever you actually need ATM), you'd need to get a single card and place it in a test bed. Then use real data to see if it throttles in the desired configuration or not, and go from there (a bit time consuming, but it will make sure you fulfill your requirements).

The other thing I'm wondering is why a DP board (you can get an SP board with a sufficient PCIe slot configuration), and whether or not you've considered running the array via the ZFS filesystem for example (or even 10 off of EXT3). It could keep you within your budget if it comes down to the wire with multiple hardware controllers (starting to think this might be the case, particularly with a DP board and processors).

I heard back from LSI that there is no ETA on a 24 port version of the 9265.

We're leaning towards the ARC-1880-ix-24 w 4GB cache for now.

Can upgrade it later when the time comes.
LSI does make a 24 port version based on the 2108 (MegaRAID SAS 24i4e).
 
What exactly are you trying to do in terms of 10G Ethernet bandwidth to start with?

I ask, as I was under the impression that you were after 6 ports @ 10Gb running full bore, which will definitely exceed what an 8x lane PCIe 2.0 card can deliver (you'd need a pair, assuming the RoC doesn't throttle at the desired level; if it does, you'd need yet another card or more).

Now assuming the above is the case (saturate 6x 10G Ethernet ports), and we're talking about a level 10 configuration, it's not much of a load to run the RAID 1's on the card, then handle the stripe via software in the OS. So it's possible a pair could get it done.

To be sure (whatever you actually need ATM), you'd need to get a single card and place it in a test bed. Then use real data to see if it throttles in the desired configuration or not, and go from there (a bit time consuming, but it will make sure you fulfill your requirements).

The other thing I'm wondering is why a DP board (you can get an SP board with a sufficient PCIe slot configuration), and whether or not you've considered running the array via the ZFS filesystem for example (or even 10 off of EXT3). It could keep you within your budget if it comes down to the wire with multiple hardware controllers (starting to think this might be the case, particularly with a DP board and processors).


LSI does make a 24 port version based on the 2108 (MegaRAID SAS 24i4e).

We are going to have around 60 clients hitting the server, each with a 1Gb link. We want to feed all of those 1Gb links with as much throughput as possible.

The more I think about it, I may consider doing a RAID 100 setup with 3 9265-8i cards. So each card would run a RAID 1 with 8 drives, then RAID 0 the 3 resulting RAID 1's. This should, in theory, get us in the neighborhood of 6 GB/s.

As for the DP setup, that is the only option for the barebones Supermicro system we're considering. I suppose I could reduce the CPU speed, but I was concerned that the decreased memory bandwidth of the lower end Xeons may complicate moving this much throughput through PCI-E.
 
We are going to have around 60 clients hitting the server, each with a 1Gb link. We want to feed all of those 1Gb links with as much throughput as possible.
OK, so you are planning to saturate the 10G ports if at all possible (desired combined throughput of 60Gb/s).

OK, that's a solid figure to work from. ;)

The more I think about it, I may consider doing a RAID 100 setup with 3 9265-8i cards. So each card would run a RAID 1 with 8 drives, then RAID 0 the 3 resulting RAID 1's. This should, in theory, get us in the neighborhood of 6 GB/s.
I hope you don't mean 8x disks in a single RAID 1 (only get the capacity of a single disk, with the data duplicated on 8x disks), and performance would only be that of a 3x disk stripe set. :eek: And that's 10, not 100 (100 = (1 + 0) + 0 of the 10's).

Now you could make 10's on the cards, then stripe those via the OS and get a 100 configuration.
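
As a rough sketch of what that 100 layout works out to (assumed round figures: 3 cards, 8x 120GB drives per card, ~500 MB/s per drive):

# Sketch of the corrected 100 layout: each card runs an 8-drive RAID 10,
# and the OS stripes the 3 card volumes together. Round figures assumed.
cards, drives_per_card = 3, 8
drive_gb, drive_mbs = 120, 500

per_card_usable_gb = (drives_per_card // 2) * drive_gb     # mirroring halves capacity
per_card_paper_mbs = (drives_per_card // 2) * drive_mbs    # a 4-drive stripe per card

print("usable capacity:", cards * per_card_usable_gb, "GB")           # 1440 GB
print("paper throughput:", cards * per_card_paper_mbs / 1000, "GB/s") # ~6 GB/s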

As for the DP setup, that is the only option for the barebones Supermicro system we're considering. I suppose I could reduce the CPU speed, but I was concerned that the decreased memory bandwidth of the lower end Xeons may complicate moving this much throughput through PCI-E.
What is the system going to be doing besides SAN operations?

I ask, as if it's a SAN only, I don't see the memory being a problem. Remember, most other boards have 6x DIMM slots per CPU, and there are 16GB DIMM's currently available (so 96GB is doable). Now I realize your concern is bandwidth rather than capacity, but 3x channels are quick. More than quick enough for a SAN, even when they're interleaved (still faster than any FSB based system). And will your host software be keeping data in the SAN's memory (I expect not)?

Here's an example of an SP board that would do what you need (no integrated GPU), but it has the slots (run 1*16x slot for the NIC, with 4*8x lane slots for a GPU and 3x RAID cards, leaving 1*4x lane slot open; all are PCIe 2.0). It also runs Xeons and ECC memory (though its marketing lingo is aimed at gamers, it's a true workstation board - I actually use one of these in my personal system).

Just trying to come up with some alternatives, as I don't know if you've the ability to get additional funds via a justified argument if needed (3x cards has me a bit concerned that you'll be over the $10k limit by the time you figure in everything else, particularly the SSD's themselves).
 
OK, so you are planning to saturate the 10G ports if at all possible (desired combined throughput of 60Gb/s).

OK, that's a solid figure to work from. ;)


I hope you don't mean 8x disks in a single RAID 1 (only get the capacity of a single disk, with the data duplicated on 8x disks), and performance would only be that of a 3x disk stripe set. :eek: And that's 10, not 100 (100 = (1 + 0) + 0 of the 10's).

Now you could make 10's on the cards, then stripe those via the OS and get a 100 configuration.


What is the system going to be doing besides SAN operations?

I ask, as if it's a SAN only, I don't see the memory being a problem. Remember, most other boards have 6x DIMM slots per CPU, and there are 16GB DIMM's currently available (so 96GB is doable). Now I realize your concern is bandwidth rather than capacity, but 3x channels are quick. More than quick enough for a SAN, even when they're interleaved (still faster than any FSB based system). And will your host software be keeping data in the SAN's memory (I expect not)?

Here's an example of an SP board that would do what you need (no integrated GPU), but it has the slots (run 1*16x slot for the NIC, with 4*8x lane slots for a GPU and 3x RAID cards, leaving 1*4x lane slot open; all are PCIe 2.0). It also runs Xeons and ECC memory (though its marketing lingo is aimed at gamers, it's a true workstation board - I actually use one of these in my personal system).

Just trying to come up with some alternatives, as I don't know if you've the ability to get additional funds via a justified argument if needed (3x cards has me a bit concerned that you'll be over the $10k limit by the time you figure in everything else, particularly the SSD's themselves).


The system is going to run SMB file sharing on Windows Server 2008.

Yes, I misspoke on the RAID level. The chassis backplane has 24 slots (not counting system drive) to play with.

Option 1 would be to just get the 24 port card, and make one big RAID 10.

Option 2 could be to get 3 of the 9265-8i's, and put 8 drives on each in a RAID 10. Each of these would then get the throughput of 4 drives. Then run a software RAID 0 of the 3 resulting volumes.

Sorry for so many questions / thoughts out loud. There is very little information out there on the web for these more rare RAID setups.
 
Do *NOT* use a RAID-0 with more than 3 drives. You are just inviting hardware failure. Even if you RAID-1 with a second set, it's still ugly.

With 24 drives, you're looking at losing one drive from each RAID-0 and you're out of commission. Use a RAID-6 with that many drives. Preferably a RAID-6 with a hot spare. And you'll *STILL* have more usable space than a RAID-10 (or RAID-0+1, depending on which way you implement it.)
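
For the capacity point, a quick sketch with 24x 120GB drives (RAID-6 figured with one hot spare, as suggested):

# Usable-capacity comparison behind the RAID-6 suggestion (24 x 120GB
# drives; RAID-6 figured with one hot spare and two parity drives' worth of overhead).
n, size_gb = 24, 120
raid10_gb = (n // 2) * size_gb           # 1440 GB usable
raid6_gb = (n - 1 - 2) * size_gb         # 1 hot spare + 2 parity -> 2520 GB usable
print(raid10_gb, raid6_gb)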
 
The system is going to run SMB file sharing on Windows Server 2008.
Ah, OK.

Do you know how well this will work as it pertains to memory bandwidth usage on the server?

I realize a DP solution is the best way to go about it, but I'm concerned about budget, particularly with such a system running 24x SSD's and 3x RAID cards.

When I run the numbers, I get this...
  • 24 * Intel 510 SSD's @ 120GB (figuring on a 10 configuration ATM, so usable capacity is 1440GB; same in a 100 configuration) = $5688, which includes rebates that are currently available
  • 3 * LSI MegaRAID SAS 9265's = $1920
  • The LSI's don't appear to include cables, so you'll need 6 internal fanouts, at $30 per = $180

A bit under $7800 USD just for cards and disks, and you've not even got the actual system or NIC yet. Hard to do the rest for the remaining ~$2200 ($2400 if the cards do actually include the cables).
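
The tally, written out (prices as quoted above, so they'll obviously drift):

# The budget math above, written out with the prices quoted in this post.
ssds   = 5688          # 24 x Intel 510 120GB, with current rebates
cards  = 3 * 640       # 3 x LSI MegaRAID SAS 9265-8i
cables = 6 * 30        # 6 internal fanout cables (if not included)

storage = ssds + cards + cables
print("storage subtotal:", storage)            # 7788
print("left from $10k:  ", 10000 - storage)    # ~2212 for board, CPUs, RAM, chassis, NIC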

Unless the stated $10k budget is just for drives and cards, in which case you're fine. More than fine actually, as it's under budget. ;) And that always looks good to the boss. :D

Yes, I misspoke on the RAID level. The chassis backplane has 24 slots (not counting system drive) to play with.
You have options, even with the limits imposed by MLC based SSD's, but not many (10 or 100).

Option 1 would be to just get the 24 port card, and make one big RAID 10.
I only see one problem... getting the NIC performance you're after.

No matter how fast the single card solution is, there's still the slot limitation imposed by a single 8 lane PCIe 2.0 slot, which will limit you to 4GB/s best case (assumes the card's performance doesn't peak out below this value).

Option 2 could be to get 3 of the 9265-8i's, and put 8 drives on each in a RAID 10. Each of these would then get the throughput of 4 drives. Then run a software RAID 0 of the 3 resulting volumes.
The idea is good: with 8x disks per card in a 10, then using the OS to create a 100 of those 3 sets, performance would be around the 6GB/s range.

I assume you've thought this through, and it's good enough per your requirements (not absolutely certain, but getting the impression it is).

If not however, you'll need to add another couple of members to each card (since you must add disks in pairs for 10), which will push the storage costs up another $1422 going off of the current pricing and rebate program. Still under $10k, but I'm figuring a worst case scenario, and assuming that budget figure is for the entire system, not just the storage solution that will be attached to the server.
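
And the extra-pair math, using the per-drive price implied by the $5688 quote above:

# Checking the "another $1422" figure: one extra mirrored pair per card,
# at the per-drive price implied by the $5688 / 24-drive quote above.
per_drive = 5688 / 24            # ~$237 per drive after rebate
extra_drives = 3 * 2             # one pair added to each of the 3 cards
print(round(extra_drives * per_drive))   # 1422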

Sorry for so many questions / thoughts out loud. There is very little information out there on the web for these more rare RAID setups.
Quite understandable, but there's a big reason why. Real data is usually needed for the specific situations (actual disk performance, card performance, ... to determine the actual throughputs possible for the intended configuration), so a test bed is usually implemented to discover what that is. Once that's done, the real world results are applied to the planning, and the correct solution is determined.

For example, you could get 8x of the intended SSD's (could do even with just 4), and a single card. Then see if the card will throttle in whatever configuration/s you're interested (10, 100 ...). Extrapolating the results will get you the solution you need to meet your requirements.

That said, the 3 card solution would almost certainly work in terms of sustaining 60Gb/s of 10G bandwidth.

Do *NOT* use a RAID-0 with more than 3 drives. You are just inviting hardware failure. Even if you RAID-1 with a second set, it's still ugly.

With 24 drives, you're looking at losing one drive from each RAID-0 and you're out of commission. Use a RAID-6 with that many drives. Preferably a RAID-6 with a hot spare. And you'll *STILL* have more usable space than a RAID-10 (or RAID-0+1, depending on which way you implement it.)
There's an issue with using parity in this instance though, which is that the disks will be MLC based SSD's, and those aren't suited to parity based levels (SLC is too expensive for the budget).

So 10 or 100 are the best choices currently. If they can swap over to SLC based disks at some future date, then moving to a parity based level would make much more sense.
 