I see a bunch of things that need to be changed, but let's start simply (get the performance up first).
First off, let's start with the settings in the HDD Power Management section (bet you didn't expect to start here...).
Disable the following:
- Time To HDD Low Power Idle (minutes)
- Time To HDD Low RPM Mode (minutes)
- Time To Spin Down Idle HDD (minutes)
The reasoning behind this is that the disks may have been running at reduced RPM or spun down, which will affect your performance (i.e. if you start an AJA run while the disks are slowed or spun down, they have to get back up to speed before any data can be written, which has a negative effect on the performance data reported back by AJA).
This may not fix your problem entirely (there are other things to look at, including the stripe size and the Queue Depth setting, but let's keep those and the other settings as they are for the moment). If the above works, you'll see a performance increase.
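Just to put a rough number on why the spun-down state skews the benchmark, here's a quick back-of-the-envelope sketch (plain Python; the spin-up delay, test size, and throughput figures are assumptions for illustration, not measurements from your array):

```python
# Rough illustration only: all figures below are assumptions, not
# measurements from this particular array.
test_size_mb = 16 * 1024          # a 16GB AJA test file (assumed)
steady_state_mb_s = 500.0         # what the stripe set sustains once spinning (assumed)
spin_up_delay_s = 8.0             # time for spun-down disks to reach full RPM (assumed)

transfer_time_s = test_size_mb / steady_state_mb_s
reported_awake = test_size_mb / transfer_time_s
reported_spun_down = test_size_mb / (transfer_time_s + spin_up_delay_s)

print(f"Disks already spinning  : {reported_awake:6.1f} MB/s")
print(f"Disks spun down at start: {reported_spun_down:6.1f} MB/s")
# Same hardware, but the reported figure drops ~20% purely from the wake-up delay.
```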
___________________________________________
BTW, if you don't have a backup system in place, I can't stress enough how much you need to correct that immediately. Things go wrong with any storage configuration, including RAID implementations.
If this is the case, get an eSATA controller and a Port Multiplier based eSATA enclosure (example kit = enclosure + card), add drives (green models are great for backup), and go.
You can leave them as individual disks, or create a JBOD (concatenation = appears as a single volume to the system). Once it's up, make a backup of all of your data.
_______________________________________
Do not run any other disks (those in Pass Through) on the card while testing the RAID volume, as those disks compete for bandwidth (PCIe bandwidth only, as it seems they're physically located inside the MP from what I can tell from the screenshots).
One thing I find interesting is that the box you linked to has only one SAS cable connection for the 8 disks. I have a box by Sans Digital that has two SAS input connections for the eight disks, four on each cable, in order to match each disk to a port on the Areca card.
His enclosure contains a SAS Expander, which will allow him to daisy chain enclosures as a means of adding additional drives.
The downside, however, is that this does reduce performance a bit, particularly as the drive count gets higher (the expander switches between disks over the 4x lanes the SAS cable carries <SFF-8088>).
BTW, 4x SATA III ports have a combined bandwidth of ~2.2GB/s (between the disks and the card only), so unless there are enough disks to saturate this, it's usually not that big of a deal. In this case, the PCIe slot being used is a bigger concern from this POV, and that's currently mitigated by the fact that the set is composed of 5x mechanical disks (= currently not an issue). But it may be in the future, once enough disks are added to the set that it can generate ~1GB/s.
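If it helps to see that arithmetic laid out, here's a rough sketch (plain Python; the ~100MB/s per-disk figure is just an assumed ballpark for a mechanical drive, not a spec for your disks):

```python
# When does the single SFF-8088 cable (4x lanes, ~2.2GB/s combined) become the limit?
cable_limit_mb_s = 2200.0      # combined bandwidth noted above
per_disk_mb_s = 100.0          # assumed sustained rate for one mechanical disk

for disks in (5, 8, 16, 24):
    aggregate = disks * per_disk_mb_s
    delivered = min(aggregate, cable_limit_mb_s)
    note = "cable-limited" if aggregate > cable_limit_mb_s else "disk-limited"
    print(f"{disks:2d} disks: {aggregate:6.0f} MB/s raw -> {delivered:6.0f} MB/s ({note})")
# At 5 disks the cable has plenty of headroom; at these (assumed) rates it
# takes over 20 disks before the shared cable itself caps throughput.
```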
What's worse is that if the exact model of 1880 has more than 8 ports, then it already has a SAS Expander built onto the board (that's how they get the 16 and 24 port configurations: the LSI controller used only has 8 ports, and there's only 1x controller; the second chip is the SAS Expander, and both are located under the heatsink). I'm not all that happy that they did this without making it public (granted, the PCIe lanes will be the final limit, as you can't push more data than the system can take in the case of a 2008 using Slot 3 or 4).
It would seem that in the current configuration, you're only going to see 1000MB/sec to that box at a maximum, where the card is an x8 lane PCIe 2.0 design. You're using 25% of the card's capability. Just pointing that out. I don't know if that has any impact on the speed issue you're having.
Fortunately, with 5x mechanical disks, this isn't really an issue, as they can't generate 1GB/s anyway (even in a stripe configuration).
It would eventually be a limitation, however, if enough disks are added (this is when you'd really see the SAS Expander's influence, as 1GB/s wouldn't happen; not a huge loss, but the performance numbers would reflect it if the cache were disabled).
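As a quick illustration of the slot math (plain Python; the per-lane figures are the standard PCIe rates, and the ~100MB/s per-disk number is again an assumption):

```python
# PCIe 1.0 is ~250MB/s per lane, PCIe 2.0 is ~500MB/s per lane (roughly, usable).
slot_x4_gen1_mb_s = 4 * 250.0    # Slot 3 or 4 in a 2008 MP = ~1000MB/s
slot_x8_gen2_mb_s = 8 * 500.0    # what the x8 PCIe 2.0 card is designed for = ~4000MB/s
per_disk_mb_s = 100.0            # assumed per-disk sustained rate

print(f"x4 Gen1 slot ceiling: {slot_x4_gen1_mb_s:.0f} MB/s "
      f"({slot_x4_gen1_mb_s / slot_x8_gen2_mb_s:.0%} of the card's x8 Gen2 design)")

for disks in (5, 8, 10, 12):
    aggregate = disks * per_disk_mb_s
    status = "fits" if aggregate <= slot_x4_gen1_mb_s else "slot-limited"
    print(f"{disks:2d} disks -> ~{aggregate:.0f} MB/s needed ({status} in the x4 Gen1 slot)")
# 5 disks ask for roughly half of what the slot can pass, so the slot isn't
# the cause of the current numbers; add enough members and it will be.
```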
How long is the one SAS cable you have between the Mac Pro and the box? My line of thinking is that the cable and card arrangement could be the bottleneck.
It's not the cable length, but the fact that it's a single cable rather than multiple cables that would create a 1:1 (disk-to-port) ratio.
As already mentioned, this isn't that big a deal right now, given the member count.
But as disks are added, or if the Pass Through disks are run at the same time, it will become noticeable.
The cable is only about 2 feet long.
This isn't a problem then (that's a rather short cable, as the shortest you'll typically find is 1 meter <3.3 ft>, and they go up from there).
I wasn't aware the slots were limited to the first 2 for good performance. 250MB/sec is the maximum I can put through them?
For Slots 3 and 4 in a 2008 (3,1), yes: 250MB/s per lane, so ~1GB/s total for those x4 slots.
Only Slots 1 and 2 are PCIe 2.0 in that particular system. You could relocate one of the graphics cards if you want to test whether it's the slot, but assuming only the RAID volume is active (Pass Through disks aren't being accessed), then 5x mechanical disks won't be able to saturate a 4x lane slot @ PCIe 1.0.
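And to tie that back to the earlier point about not running the Pass Through disks during testing, here's the same sort of rough math with the slot bandwidth shared (per-disk rate is an assumption again):

```python
# What the 5-member RAID volume can get from an x4 PCIe 1.0 slot (~1000MB/s)
# while Pass Through disks on the same card are also moving data.
slot_mb_s = 1000.0
raid_members = 5
per_disk_mb_s = 100.0            # assumed sustained rate per mechanical disk
raid_demand = raid_members * per_disk_mb_s

for passthrough_traffic_mb_s in (0.0, 200.0, 600.0):
    left_for_raid = max(slot_mb_s - passthrough_traffic_mb_s, 0.0)
    raid_gets = min(raid_demand, left_for_raid)
    print(f"Pass Through traffic {passthrough_traffic_mb_s:5.0f} MB/s -> "
          f"RAID set moves ~{raid_gets:.0f} of its ~{raid_demand:.0f} MB/s")
# With the Pass Through disks idle the set fits comfortably; push enough
# simultaneous traffic over the same slot and the set gets squeezed.
```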