What is the exact machine you'll be running this on?
I ask, as 6 * 10Gb ports will generate ~6GB/s sustained (assuming the worst case here), and an 8x lane PCIe 2.0 slot is good for 4GB/s. So you will need at least 2x cards running on PCIe 2.0 lanes (each slot wired @ 8x lanes). This would also allow you to reach the theoretical limit of the six 10Gb ports, which would be 7.5GB/s (1.25GB/s per port), and still leave some headroom for bursts (2x 8 lane PCIe 2.0 slots = 8GB/s).
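For reference, here's that arithmetic in one place (a sketch; the ~1GB/s-per-port "sustained" figure and 500MB/s per PCIe 2.0 lane are the usual rules of thumb, not measurements):

```python
import math

# Rough check of the numbers above: six 10GbE ports vs. PCIe 2.0 x8 slots.
PORTS, PORT_Gbps = 6, 10
line_rate_GBps = PORTS * PORT_Gbps / 8          # 7.5 GB/s at full line rate
sustained_GBps = PORTS * 1.0                    # ~6 GB/s if ~1 GB/s is usable per port

PCIE2_LANE_GBps = 0.5                           # ~500 MB/s per lane after 8b/10b encoding
x8_slot_GBps = 8 * PCIE2_LANE_GBps              # 4 GB/s per x8 slot

print(f"NIC demand (line rate) : {line_rate_GBps:.1f} GB/s")
print(f"NIC demand (sustained) : {sustained_GBps:.1f} GB/s")
print(f"One PCIe 2.0 x8 slot   : {x8_slot_GBps:.1f} GB/s")
print(f"x8 slots needed        : {math.ceil(line_rate_GBps / x8_slot_GBps)}")   # -> 2
```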
This would be a problem in an MP, unless you remove the graphics card. As it would be far easier to accomplish this with PC systems, particularly if you're going to build it, I'd recommend the PC route instead.
As for cards, I'd go with the Areca's over LSI (they're faster and have a better cost/performance ratio). And as you've used one, I expect you're familiar with it.
BTW, how many SSD's do you plan on running, and why not use RAID 5 (it can actually exceed RAID 10 for database use on recent cards)?
This or a similar board would be the way to go.
We're still getting everything planned out, but it's likely going to be based on a Supermicro board with the 5520 chipset. Most of them have built-in graphics, so we'll have:
1x PCI-E x16 Taken up by x16 NIC
1x PCI-E x16 open
1x PCI-E x8 open
1x PCI-E x8 open
Which LSI's are you talking about?
As for LSI vs Areca, I don't have a real preference. At 8 ports the LSI seems to be faster, at least on paper, as it has the newer RoC chip; at 24 ports the Areca seems to be better, with the bigger cache. The Areca cards are based on LSI chips, so it's probably the cache and firmware making the difference.
I don't think a software layer on top of a hardware one will be a problem, but I've not tried it with that many SSD's (it wasn't a problem with mechanical disks, though I stuck to 0 or 1 in software as they're not heavy on overhead; i.e. 5 on the hardware, 0 in software to make a 50 out of a couple of sets on cards that were only good for 0/1/10/5/JBOD).
We could live with NOT fully saturating that NIC for now, until a 24-port card comes out based on the newer RoC. I haven't found any reviews online of people pushing the 24-port card this hard, so I have no idea if it'll top out at its theoretical peak or lower. The main concern here is whether there is anything risky about putting a software RAID layer on top of a hardware one.
Since it seems you're interested in MLC based drives, I'd recommend sticking with 10 then, as a parity configuration would wear the drives too quickly (not unrealistic to see drives fail before the planned replacement schedule under such usage IMO).
Not necessarily set on RAID 10 vs 5 though. We're looking at either Intel 510 or OWC 6G 120GB SSDs and want ~1.2 TB usable. I was thinking of RAID 10 with a couple of hot spares.
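For what it's worth, the drive-count math for a ~1.2TB usable target with 120GB drives works out roughly as below (simple arithmetic; it assumes the full 120GB per drive is usable and a couple of hot spares either way):

```python
import math

# How many 120 GB drives a ~1.2 TB usable target needs under each level
# (raw arithmetic; formatting overhead and spare area are ignored).
TARGET_GB, DRIVE_GB, HOT_SPARES = 1200, 120, 2

data_drives = math.ceil(TARGET_GB / DRIVE_GB)        # 10 drives' worth of data
raid10_drives = 2 * data_drives                      # mirrored pairs -> 20 drives
raid5_drives = data_drives + 1                       # one drive of parity -> 11 drives

print(f"RAID 10: {raid10_drives} drives + {HOT_SPARES} spares = {raid10_drives + HOT_SPARES} bays")
print(f"RAID 5 : {raid5_drives} drives + {HOT_SPARES} spares = {raid5_drives + HOT_SPARES} bays")
```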
That's good.
The system drive is separate and not part of this consideration.
It helps. I was getting the impression you were after a high-performance relational database (which would need SSD's for their random access performance).
If it makes a difference, this server is a file server for a roughly 50 client render farm.
That particular card is definitely underpowered for SSD's (it was designed only for mechanical drives).
I'm using an ARC-1210 and its processor is too slow for my SSD's.
Each of my 4 SSD's can handle 230/75 MB/s so it should be about 920/300 MB/s in total, but I got only 455/329 MB/s.
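Putting numbers on that shortfall (straight arithmetic from the figures just quoted):

```python
# Expected vs. measured aggregate throughput for the 4-SSD ARC-1210 setup above.
drives, per_drive_read, per_drive_write = 4, 230, 75     # MB/s, as quoted
measured_read, measured_write = 455, 329                 # MB/s, as quoted

expected_read = drives * per_drive_read                  # 920 MB/s
expected_write = drives * per_drive_write                # 300 MB/s

print(f"Read : {measured_read} of {expected_read} MB/s "
      f"({measured_read / expected_read:.0%} of the drives' aggregate)")
print(f"Write: {measured_write} of {expected_write} MB/s "
      f"(above the drives' aggregate, likely the card's cache helping)")
```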
Which LSI's are you talking about?
I ask, as from what I've seen so far, the Areca's are beating LSI's products (and have been for a few years, at least for the top tier models).
BTW, the 1880 series is using a custom ASIC designed by Areca (not sure who they have fabbing them, but if I had to guess, it's probably TSMC).
Since it seems you're interested in MLC based drives, I'd recommend sticking with 10 then, as a parity configuration would wear the drives too quickly (not unrealistic to see drives fail before the planned replacement schedule under such usage IMO).
BTW, if the board has a small hardware RAID controller, it might be a good idea to use it for a RAID 1, given what you're trying to do.
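As a rough illustration of the parity-vs-MLC point above, the textbook small-write penalties look like this (it ignores controller caching, full-stripe writes, and SSD-internal write amplification, so treat it as an illustration rather than a wear prediction):

```python
# Textbook small-random-write cost per RAID level (the usual write-penalty figures).
PENALTY = {
    "RAID 10": {"reads": 0, "writes": 2},   # write both mirror copies
    "RAID 5":  {"reads": 2, "writes": 2},   # read old data + parity, write new data + parity
    "RAID 6":  {"reads": 3, "writes": 3},   # as above, with a second parity block
}

for level, c in PENALTY.items():
    print(f"{level:7s}: {c['reads'] + c['writes']} disk I/Os per small host write "
          f"({c['writes']} of them flash writes)")
```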
Odd.
The SSD Review's article is saying the 1880 series is based on the LSI2108.
This one could be interesting, but they need to get more than an 8-port version out.
The article then goes on to say that the LSI 9265-8i is the first card on the market using the new LSI2208, basically a dual-core version of the LSI2108.
I'm not convinced, as they didn't show a picture of what was under the heatsinks.
At all other port counts, both LSI and Areca products are using the 2108, so the Arecas may be faster.
There are a few out there, but they're hard to find.
No idea how reputable a site this is; it's rare to find RAID card reviews at the high end!
How long can you wait?
I'm hoping for a single 24-port card, PCI-E x16, that has the 2208 and a big fat cache!
Quite understandable, as SLC is horrible in terms of cost/GB.
I'd love to go SLC, but we need the entire storage system (card + drives) to come in under $10k, while providing at least 1TB usable and as much speed as possible.
I'm hoping for a single 24-port card, PCI-E x16, that has the 2208 and a big fat cache!
Is that really going to get you more throughput? At some point, going through the SATA/SAS expander becomes a bottleneck. SATA/SAS is point-to-point 6Gbps, but if you are trying to hook 24 drive links up to probably 4 controller ports, they can't all run at full speed at the same time. You're going to hit a switch. It wasn't an issue when HDDs were much slower than SAS/SATA, but with SSDs you can't mob a single switch/controller like that.
Seems likely that a 24-port model will save you the cost of an external expander, but won't really buy much more in throughput (e.g., 4 ports on the ROC would still cap you at ~3GB/s, so an 8x PCIe card would still be enough). The LSISAS2208 ROC seems to have 8 ports, but would still need either an expander hanging off all 8 ports or two expanders off 4 ports each to reach 24 drives.
What exactly are you trying to do in terms of 10G Ethernet bandwidth to start with?
Therefore, if we assume each SSD can deliver 500 MB/s for easy math, the drive set is capable of pushing 12 GB/s to the card (on paper), if the card's ROC and cache can handle it.
The question then is: what becomes the bottleneck first? The RoC/cache configuration, or the x8 connection?
If the card can only handle 3 GB/s then the x8 is sufficient. If the card can handle more than 4 GB/s, then the x8 is a bottleneck.
PCI-E 2.0 x8 is going to top out at 4 GB/s. An x16 card would top out around 8 GB/s, which isn't 12, but certainly better than 4.
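One way to frame that is as a chain of stages and take the minimum; here's a sketch along those lines (the 500 MB/s per SSD, the ~600 MB/s usable per 6Gb/s link, and the two ROC "guess" values are assumptions, not specs):

```python
# Where does a 24-SSD array bottleneck first? Treat the path as a chain of stages.
DRIVES, SSD_MBps = 24, 500              # 12,000 MB/s on paper from the drive set
SAS_LINK_MBps, ROC_PHYS = 600, 8        # ~usable per 6 Gb/s link; PHYs into the ROC
PCIE2 = {"x8": 4000, "x16": 8000}       # PCIe 2.0 slot ceilings, MB/s

drive_set = DRIVES * SSD_MBps
roc_uplink = ROC_PHYS * SAS_LINK_MBps   # 4,800 MB/s through the expander into the ROC

for slot, slot_cap in PCIE2.items():
    for roc_guess in (3000, 4500):      # hypothetical sustained ROC/cache limits
        stages = {
            "drive set": drive_set,
            "expander -> ROC links": roc_uplink,
            "ROC processing": roc_guess,
            f"PCIe 2.0 {slot}": slot_cap,
        }
        limiter = min(stages, key=stages.get)
        print(f"{slot:3s}, ROC ~{roc_guess} MB/s: ~{stages[limiter]} MB/s ({limiter})")
```

With a ~3GB/s ROC the slot never matters; only if the ROC sustains more than 4GB/s does the x8 connection become the limit, which is the question the test bed would answer.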
LSI does make a 24-port version based on the 2108 (MegaRAID SAS 24i4e).
I heard back from LSI that there is no ETA on a 24-port version of the 9265.
We're leaning towards the ARC-1880-ix-24 with 4GB cache for now.
We can upgrade it later when the time comes.
What exactly are you trying to do in terms of 10G Ethernet bandwidth to start with?
I ask, as I was under the impression that you were after 6 ports @ 10Gb running full bore, which will definitely exceed what an 8x lane PCIe 2.0 card can deliver (you'd need a pair, assuming the RoCs don't throttle below the desired level; otherwise you'd need yet another card or more).
Now assuming the above is the case (saturate 6x 10G Ethernet ports), and we're talking about a level 10 configuration, it's not much of a load to run the RAID 1's on the card, then handle the stripe via software in the OS. So it's possible a pair could get it done.
To be sure (whatever you actually need ATM), you'd need to get a single card and place it in a test bed. Then use real data to see if it throttles in the desired configuration or not, and go from there (a bit time consuming, but it will make sure you fulfill your requirements).
The other thing I'm wondering is why a DP board (you can get an SP board with a sufficient PCIe slot configuration), and whether or not you've considered running the array via the ZFS filesystem, for example (or even a 10 off of EXT3). It could keep you within your budget if it comes down to the wire with multiple hardware controllers (I'm starting to think this might be the case, particularly with a DP board and processors).
LSI does make a 24-port version based on the 2108 (MegaRAID SAS 24i4e).
OK, so you are planning to saturate the 10G ports if at all possible (desired combined throughput of 60Gb/s).
We are going to have around 60 clients hitting the server, each with a 1Gb link. We want to feed all of those 1Gb links with as much throughput as possible.
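Pure line-rate arithmetic on that demand (real SMB throughput per client will be lower than this):

```python
# Demand side: 60 render nodes at 1 Gb/s each vs. six 10GbE ports on the server.
CLIENTS, CLIENT_Gbps = 60, 1
NIC_PORTS, PORT_Gbps = 6, 10

client_demand_GBps = CLIENTS * CLIENT_Gbps / 8     # 7.5 GB/s if every client runs flat out
nic_ceiling_GBps = NIC_PORTS * PORT_Gbps / 8       # 7.5 GB/s of server-side line rate

print(f"Clients at line rate : {client_demand_GBps:.1f} GB/s")
print(f"Six 10GbE ports      : {nic_ceiling_GBps:.1f} GB/s")
```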
I hope you don't mean 8x disks in a single RAID 1 (you'd only get the capacity of a single disk, with the data duplicated on 8x disks), and performance would only be that of a 3x disk stripe set.
The more I think about it, I may consider doing a RAID 100 setup with 3 9265-8i cards. Each card would run a RAID 1 with 8 drives, then we'd RAID 0 the 3 resulting RAID 1's. This should, in theory, get us in the neighborhood of 6 GB/s.
What is the system going to be doing besides SAN operations?
As for the DP setup, that is the only option for the barebones Supermicro system we're considering. I suppose I could reduce the CPU speed, but I was concerned that the decreased memory bandwidth of the lower end Xeons may complicate moving this much throughput through PCI-E.
OK, so you are planning to saturate the 10G ports if at all possible (desired combined throughput of 60Gb/s).
OK, that's a solid figure to work from.
I hope you don't mean 8x disks in a single RAID 1 (you'd only get the capacity of a single disk, with the data duplicated on 8x disks), and performance would only be that of a 3x disk stripe set.
And that's a 10, not a 100 (100 = (1 + 0) + 0, i.e. a stripe of 10's).
Now you could make 10's on the cards, then stripe those via the OS and get a 100 configuration.
What is the system going to be doing besides SAN operations?
I ask, as if it's a SAN only, I don't see the memory being a problem. Remember, most other boards have 6x DIMM slots per CPU, and there are 16GB DIMM's currently available (so 96GB is doable). Now I realize your concern is bandwidth rather than capacity, but 3x channels are quick. More than quick enough for a SAN, even when they're interleaved (still faster than any FSB based system). And will your host software be keeping data in the SAN's memory (I expect not)?
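A rough sanity check on the memory-bandwidth worry (a sketch; it assumes DDR3-1333 and that the data crosses memory roughly twice on its way from the array to the NIC, both of which are assumptions rather than measurements):

```python
# Is triple-channel memory bandwidth a realistic limit for ~7.5 GB/s of file traffic?
DDR3_1333_GBps_PER_CHANNEL = 10.6       # 1333 MT/s * 8 bytes per channel
CHANNELS = 3                            # per CPU on this platform

mem_bw_GBps = DDR3_1333_GBps_PER_CHANNEL * CHANNELS      # ~32 GB/s per socket
io_GBps, passes = 7.5, 2                # saturated NICs; DMA in from RAID, DMA out to NIC

print(f"Memory bandwidth (one socket): ~{mem_bw_GBps:.0f} GB/s")
print(f"I/O traffic through memory   : ~{io_GBps * passes:.0f} GB/s "
      f"({io_GBps} GB/s x {passes} passes)")
```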
Here's an example of an SP board that would do what you need (no integrated GPU), but it has the slots (run the 1*16x slot for the NIC, use the 4*8x lane slots for a GPU and 3x RAID cards, and leave the 1*4x lane slot open; all are PCIe 2.0). It also runs Xeons and ECC memory (though its marketing lingo is aimed at gamers, it's a true workstation board - I actually use one of these in my personal system).
Just trying to come up with some alternatives, as I don't know if you have the ability to get additional funds via a justified argument if needed (3x cards has me a bit concerned that you'll be over the $10k limit by the time you figure in everything else, particularly the SSD's themselves).
Ah, OK.
The system is going to run SMB file sharing on Windows Server 2008.
You have options, even with the limits imposed by MLC based SSD's, but not many (10 or 100).
Yes, I misspoke on the RAID level. The chassis backplane has 24 slots (not counting the system drive) to play with.
I only see one problem... getting the NIC performance you're after.
Option 1 would be to just get the 24-port card and make one big RAID 10.
The idea is good, but with 8x disks per card in a 10, then using the OS to create a 100 out of those 3 sets, performance would be around the 6GB/s range.
Option 2 could be to get 3 of the 9265-8i's and put 8 drives on each in a RAID 10. Each of these would then get the throughput of 4 drives. Then run a software RAID 0 of the 3 resulting volumes.
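Roughly where that ~6GB/s figure comes from (a sketch using an assumed 500 MB/s per SSD; it also ignores whatever the 9265's own ROC actually sustains):

```python
# Option 2: three 8-drive RAID 10 sets (one per 9265-8i), striped together in the OS.
CARDS, DRIVES_PER_CARD = 3, 8
SSD_MBps = 500                          # assumed per-SSD sequential figure
PCIE2_X8_MBps = 4000                    # each card sits in its own x8 slot

per_card_array = (DRIVES_PER_CARD // 2) * SSD_MBps      # RAID 10 streams ~4 drives' worth
per_card = min(per_card_array, PCIE2_X8_MBps)           # the slot isn't the limit here
total = CARDS * per_card

print(f"Per card      : ~{per_card} MB/s")
print(f"3-card stripe : ~{total} MB/s (~{total / 1000:.0f} GB/s) before software-stripe overhead")
```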
Quite understandable, but there's a big reason why. Real data is usually needed for the specific situation (actual disk performance, card performance, ... to determine the actual throughputs possible for the intended configuration), so a test bed is usually implemented to discover what that is. Once that's done, the real world results are applied to the planning, and the correct solution is determined.
Sorry for so many questions / thoughts out loud. There is very little information out there on the web for these rarer RAID setups.
There's an issue with using parity in this instance though, and that's that the disks will be MLC based SSD's, which aren't suited to parity based levels (SLC is too expensive for the budget).
Do *NOT* use a RAID-0 with more than 3 drives. You are just inviting hardware failure. Even if you RAID-1 with a second set, it's still ugly.
With 24 drives, you're looking at losing one drive from each RAID-0 and you're out of commission. Use a RAID-6 with that many drives. Preferably a RAID-6 with a hot spare. And you'll *STILL* have more usable space than a RAID-10 (or RAID-0+1, depending on which way you implement it.)
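For reference, the capacity side of that argument with 24 bays of 120GB drives (raw arithmetic only; whether parity levels are acceptable on MLC is the separate question raised earlier):

```python
# Usable capacity from 24 bays of 120 GB SSDs under the levels being argued over.
# Ignores formatting overhead.
BAYS, DRIVE_GB = 24, 120

layouts = {
    "RAID 10 (12 mirrored pairs)": (BAYS // 2) * DRIVE_GB,
    "RAID 10 + 2 hot spares":      ((BAYS - 2) // 2) * DRIVE_GB,
    "RAID 6 (no spare)":           (BAYS - 2) * DRIVE_GB,
    "RAID 6 + 1 hot spare":        (BAYS - 1 - 2) * DRIVE_GB,
}
for name, gb in layouts.items():
    print(f"{name:28s}: {gb / 1000:.2f} TB usable")
```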