
pprior (original poster)
I have an areca 1880 in a Mac Pro. I am getting really bad speeds, here is my aja file:

[AJA System Test screenshot showing the RAID 6 results]


This is a 5-disk RAID 6 array using ST32000644NS drives (2TB enterprise drives). I have run a consistency check on it and all volume states are listed as normal.

Heck, a single disk on its own should be about like that.

I get 249MB/s write and 268MB/s read off my SSD boot drive.

Raid info:

Raid Subsystem Information
Controller Name ARC-1880
Firmware Version V1.49 2010-12-10
BOOT ROM Version V1.49 2010-12-10
PL Firmware Version 7.0.0.0
Serial Number xxxxx
Unit Serial #
Main Processor 800MHz PPC440
CPU ICache Size 32KBytes
CPU DCache Size 32KBytes/Write Back
System Memory 4096MB/800MHz/ECC
Current IP Address 192.168.1.100
 
Is the card in at least an x4 lane PCIe slot? Did you test the set in RAID 0 first to see how fast it runs that way? Most importantly, how is it connected to the set (SAS cables, or something slow like eSATA)?
 
I say rebuild it as a RAID 0 stripe and see what speeds you get. It should be roughly 5x the speed of a single disk.
 
The card is in the 3rd PCIe slot from the bottom (the first 2 are taken up by the ATI graphics card) in my Mac Pro (2008).

I'm running the enclosure http://www.istoragepro.com/prod.php?id=it8sae6g via a mini-SAS cable, rated for much higher speed than I'm getting.

I've been using this array for about a year and never really tested it for speed. It's about 1/2 full of data (3TB out of 6) so I'm not going to reformat it at this point! :)
 
Heck, a single disk on its own should be about like that.
Not for a mechanical disk (even 15K RPM SAS).

For that particular model, I've seen ~105-106MB/s as the average result for a single disk.

BTW, I'm not a fan of Seagate's SATA disks, including the ES.2 series since 2008 (QC has gone downhill since then; I've seen too many DOA units and premature failures <under 3 years> for my liking).

Raid info:

Raid Subsystem Information
Controller Name ARC-1880
Firmware Version V1.49 2010-12-10
BOOT ROM Version V1.49 2010-12-10
PL Firmware Version 7.0.0.0
Serial Number xxxxx
Unit Serial #
Main Processor 800MHz PPC440
CPU ICache Size 32KBytes
CPU DCache Size 32KBytes/Write Back
System Memory 4096MB/800MHz/ECC
Current IP Address 192.168.1.100
Helpful, but it's not enough to go on.

  • Please post screenshots of all of the card's settings, the array's configuration, and logs (it could be anything from the stripe size to how those disks are interacting on the card, and a lot in between).
For example, if one of those 5 disks is a Hot Spare, then the set only has 4 active members, which would reduce the performance. Combine that with a less than ideal stripe size for the intended use, and it could bring you down substantially. Lots of possibilities, and some of the suggestions may involve setting changes that don't seem logical on the surface (i.e. shut NCQ/TCQ off, and set the controller to SATA300 or SATA150).

But please post before making any changes, as it's hopefully something simple.

BTW, when you make a change on the card, you need to reboot the system in order for those changes to take effect. So if you've made any changes to the card's settings from the factory defaults, start by simply rebooting the system (this doesn't apply to array configuration settings, such as using manual vs. quick setup to get a different stripe size, for example).

I say rebuild it as a RAID 0 stripe and see what speeds you get. It should be roughly 5x the speed of a single disk.
This is one way to start. ;)

But I'd prefer to take a look at the current settings before doing anything that's going to make changes to the card/array (could cause more work if it's something simple).
 
The card is in the 3rd PCIe slot from the bottom (the first 2 are taken up by the ATI graphics card) in my Mac Pro (2008).

I'm running the enclosure http://www.istoragepro.com/prod.php?id=it8sae6g via a mini-SAS cable, rated for much higher speed than I'm getting.

I've been using this array for about a year and never really tested it for speed. It's about 1/2 full of data (3TB out of 6) so I'm not going to reformat it at this point! :)
Certainly understandable that you don't want to reformat, if that 3TB of data isn't backed up somewhere else. Nanofrog is right (as always) to look at settings prior to making changes. I assumed it was a new build with no data on it.

One thing I find interesting is that the box you linked to has only one SAS cable connection for the 8 disks. I have a box by Sans Digital that has two SAS input connections for the eight disks, four disks on each cable, in order to match each disk to a port on the Areca card.

Another thing I was considering was that the x4 lane your Areca card is on is limited. This old quote from Nanofrog:
All the slots are PCIe (PCI Express).

Slots 1 & 2 are 16x lanes, at PCIe 2.0 specification (500MB/s per lane).
Slots 3 & 4 are 4x lanes, at PCIe 1.0 specification (250MB/s per lane).

Full 2008 Specifications Page
It would seem that in the current configuration, you're only going to see 1000MB/sec to that box at a maximum, where the card is an x8 lane, PCIe 2.0 compliant design. You're using 25% of the card's capability. Just pointing that out. I don't know if that has any impact on the speed issue you're having.
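Rough numbers to put that 25% figure in perspective - a quick sketch only, using the nominal per-lane figures quoted above plus nanofrog's ~105MB/s single-disk estimate (real-world throughput lands somewhat lower after protocol overhead):

```python
# Nominal ceilings for the Mac Pro (2008) slots, the ARC-1880 itself, and
# the current 5-disk set. Purely illustrative, not measurements.
PCIE1_PER_LANE = 250   # MB/s per lane, PCIe 1.0
PCIE2_PER_LANE = 500   # MB/s per lane, PCIe 2.0

ceilings = {
    "ARC-1880 card (x8, PCIe 2.0)":         8 * PCIE2_PER_LANE,   # 4000 MB/s
    "Slot 1 or 2 (x16, PCIe 2.0)":          16 * PCIE2_PER_LANE,  # 8000 MB/s
    "Slot 3 or 4 (x4, PCIe 1.0)":           4 * PCIE1_PER_LANE,   # 1000 MB/s
    "5 disks @ ~105 MB/s each (best case)": 5 * 105,              # ~525 MB/s
}

for name, mbps in ceilings.items():
    print(f"{name}: ~{mbps} MB/s")
```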

How long is the one SAS cable you have between the Mac Pro and the box? My line of thinking is that the cable and card arrangement could be the bottleneck.
 
The cable is only about 2 feet long.

I wasn't aware the slots were limited to the first 2 for good performance. Is 250MB/s the maximum I can put through them?

Thinking more on that, even if that's true, I'm not getting even close to that on Reads
 
The cable is only about 2 feet long.

I wasn't aware the slots were limited to the first 2 for good performance. Is 250MB/s the maximum I can put through them?

Thinking more on that, even if that's true, I'm not getting even close to that on Reads
No, I think that's per lane, so an x4 lane slot would be 1000MB/second. There seems to be more to it than just the lane count.
 
...snip...
I see a bunch of things that need to be changed, but let's start simply (get the performance up first).

First off, let's start with the settings in the HDD Power Management section (bet you didn't expect to start here... ;) :D).

Disable the following:
  • Time To HDD Low Power Idle (minutes)
  • Time To HDD Low RPM Mode (minutes)
  • Time To Spin Down Idle HDD (minutes)
The reasoning behind this is that the disks may have been in a reduced-RPM or spun-down state, which will affect your performance (i.e. start an AJA run while the disks are running slow or spun down, and they have to get back up to speed before any data can be written = negative effect on the performance data reported back by AJA).

This may not fix your problem (there are other things, including the stripe size and the Queue Depth setting, but let's keep this and the other settings as they are for the moment). If the above works, you'll see a performance increase.

___________________________________________

BTW, if you don't have a backup system in place, I can't stress how much you need to correct this immediately. Things go wrong with any storage configuration, including RAID implementations.

If this is the case, get an eSATA controller and a Port Multiplier based eSATA enclosure (example kit = enclosure + card), add drives (green models are great for backup), and go.

You can leave them as individual disks, or create a JBOD (concatenation = appears as a single volume to the system). Once it's up, make a backup of all of your data.

_______________________________________

Do not run any other disks (those in Pass Through) on the card while testing the RAID volume, as those disks are competing for bandwidth (PCIe only, as it seems they're physically located internally in the MP from what I get out of the screen shots).

One thing I find interesting is that the box you linked to has only one SAS cable connection for the 8 disks. I have a box by Sans Digital that has two SAS input connections for the eight disks, four disks on each cable, in order to match each disk to a port on the Areca card.
His enclosure contains a SAS Expander, which will allow him to daisy chain enclosures as a means of adding additional drives.

The downside however, is that this does reduce performance a bit, particularly as the drive count gets higher (switches between disks over the 4x ports the SAS cable carries <SFF-8088>).

BTW, 4x SATA III ports have a combined bandwidth of ~2.2GB/s (between the disks and the card only), so unless there are enough disks to saturate this, it's usually not that big of a deal. In this case, the PCIe slot being used is a bigger concern from this POV, and that's currently mitigated by the fact the set is comprised of 5x mechanical disks (= currently not an issue). But it may be in the future, once enough disks are added to the set that it can generate ~1GB/s.
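To put rough numbers on that (a sketch only; the 20% figure is the standard 8b/10b encoding overhead for SATA/SAS links, and the ~105MB/s per-disk rate is the estimate from earlier in the thread - the ~2.2GB/s quoted above presumably accounts for additional protocol overhead):

```python
# Approximate usable bandwidth of one SFF-8088 cable: 4 lanes of 6Gb/s,
# with 8b/10b encoding taking 20% of the raw line rate.
lanes = 4
raw_gbit_per_lane = 6.0
usable_GBps_per_lane = raw_gbit_per_lane * (8 / 10) / 8      # ~0.6 GB/s per lane
cable_usable = lanes * usable_GBps_per_lane                  # ~2.4 GB/s for the whole cable

disks = 5
per_disk_GBps = 0.105                                        # ~105 MB/s sustained per disk
array_best_case = disks * per_disk_GBps                      # ~0.53 GB/s

print(f"one mini-SAS (SFF-8088) cable: ~{cable_usable:.1f} GB/s usable")
print(f"current 5-disk set, best case: ~{array_best_case:.2f} GB/s")
```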

What's worse is that if the exact model of 1880 contains more than 8 ports, then it already has a SAS Expander built onto the board (that's how they get 16 and 24 port configurations, as the LSI controllers used only have 8 ports and there's only 1x controller; the second chip is the SAS Expander, and both are located under the heatsink). I'm not all that happy that they did this without making it public (granted, the PCIe lanes will be the final limit, as you can't push more data than the system can take in the case of a 2008 when using Slot 3 or 4).

It would seem that in the current configuration, you're only going to see 1000MB/sec to that box at a maximum, where the card is an x8 lane, PCIe 2.0 compliant design. You're using 25% of the card's capability. Just pointing that out. I don't know if that has any impact on the speed issue you're having.
Fortunately, at 5x mechanical disks, this isn't really an issue as they can't generate 1GB/s anyway (even in a stripe configuration).

It would eventually be a limitation however, if enough disks are added (this is when you'd really see the SAS Expander's influence, as 1GB/s wouldn't happen; not a huge loss, but the performance numbers would reflect it if the cache was disabled).

How long is the one SAS cable you have between the Mac Pro and the box? My line of thinking is that the cable and card arrangement could be the bottleneck.
It's not the cable length, but the fact that it's a single cable, rather than multiple cables creating a 1:1 (disk to port) ratio.

As already mentioned, this isn't that big a deal right now, given the member count. But as disks are added, or the Pass Through disks run simultaneously, it will be noticeable.

The cable is only about 2 feet long.
This isn't a problem then (a rather short cable; the shortest commonly found is 1 meter <3.3 ft>, and they go up from there). :)

I wasn't aware the slots were limited to the first 2 for good performance. Is 250MB/s the maximum I can put through them?
For Slots 3 and 4 in a 2008 (3,1), Yes.

Only Slots 1 and 2 are PCIe 2.0 in that particular system. You could relocate one of the graphics cards if you want to test out to see if it's the slot, but assuming only the RAID volume is active (Pass Through disks aren't being accessed), then 5x mechanical disks won't be able to saturate a 4x lane slot @ PCIe 1.0.
 
Nano - ran the test with the power settings changed, got 212MB/s write / 159.6MB/s read. I get about 2250MB/s writes at first, until frame 500 or so, which I assume is the controller cache, then it plummets (same as seen on the graph I posted initially). Interestingly, on the graph the reads are MUCH lower, but the final speed is the same.

I back up this array via Time Machine to a 6TB RAID 5 array in my basement, but your advice on that is noted and in place.

In terms of not activating the other disks, I'm not running any other software concurrently that would be accessing them, but I've not physically turned them off either. I have 2 separate arrays in the external box (one of which is not being accessed) and the other drives are internal in the Mac Pro.
 
Nano - ran the test with the power settings changed, got 212MB/s write / 159.6MB/s read. I get about 2250MB/s writes at first, until frame 500 or so, which I assume is the controller cache, then it plummets (same as seen on the graph I posted initially). Interestingly, on the graph the reads are MUCH lower, but the final speed is the same.
If you didn't do so before testing, reboot the system so the updated Power Settings will take effect.

If that was done, then change the SATA setting to SATA 300 (try SATA 150 if that doesn't help), turn NCQ OFF/Disabled, and see what that does. If you cannot change the SATA setting, you may have to dig the drives out and force it via a jumper (that would force SATA 150 - don't panic, those disks won't saturate that, though it will be close; ~137MB/s sustained transfers are possible on SATA 1.5Gb/s). Again, reboot between each setting change to make sure it takes effect before you test.
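As a quick sanity check on the SATA 150 suggestion (a sketch; 8b/10b encoding is standard for SATA links, the ~137MB/s practical ceiling is the commonly cited figure, and ~105MB/s is the single-disk estimate from earlier in the thread):

```python
# Forcing SATA 1.5Gb/s still leaves headroom for one of these disks.
link_gbps = 1.5
after_encoding_MBps = link_gbps * 1000 * (8 / 10) / 8   # 150.0 MB/s after 8b/10b
practical_MBps = 137                                    # typical real-world ceiling
single_disk_MBps = 105                                  # ~sustained for the ST32000644NS

print(f"SATA 1.5Gb/s after encoding: ~{after_encoding_MBps:.0f} MB/s")
print(f"practical ceiling ~{practical_MBps} MB/s vs. disk ~{single_disk_MBps} MB/s")
```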

There are a few other things, particularly increasing the stripe size (32K is too small for large sequential files - you'll want to push this to the max, which is typically 128K on the newer cards, and is also the largest value I've seen in the 1880 Manual). But try the other suggestions first.
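For a rough sense of why the chunk size matters for big sequential files, here's a toy calculation (generic RAID arithmetic only, not anything specific to the Areca firmware; a 5-member RAID 6 stripe holds 3 data chunks plus 2 parity chunks):

```python
# Bigger chunks mean fewer, larger I/Os per disk for the same sequential file,
# which mechanical disks handle much better than many small scattered I/Os.
def stripe_stats(file_mb, members, chunk_kb):
    data_per_stripe_kb = (members - 2) * chunk_kb   # RAID 6: 2 chunks per stripe are parity
    full_stripes = (file_mb * 1024) // data_per_stripe_kb
    return data_per_stripe_kb, full_stripes         # one chunk per disk per full stripe

for chunk in (32, 64, 128):
    data_kb, ios = stripe_stats(file_mb=1024, members=5, chunk_kb=chunk)
    print(f"{chunk}K chunks: {data_kb}K of data per full stripe, ~{ios} I/Os per disk for a 1GB file")
```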

I back up this array via Time Machine to a 6TB RAID 5 array in my basement, but your advice on that is noted and in place.
Is that array a hardware or software implementation?

I ask because, if it's software based, it's not a good idea to run a parity level, as software cannot deal with the write hole issue (that requires hardware). The reason this is important is that it's possible you'll end up with a corrupted write, which would return corrupted data if you ever restored it to the primary array. Not a good spot to find yourself in... :(
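For anyone curious what the write hole actually looks like, here's a toy simulation (pure illustration with XOR parity, as in RAID 5; nothing here touches real disks or reflects any particular implementation):

```python
# If power drops after a data chunk is rewritten but before the matching
# parity chunk is updated, a later rebuild from that stale parity silently
# returns wrong data.
from functools import reduce

def xor_parity(chunks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data disks
parity = xor_parity(data)            # parity disk, currently consistent

data[1] = b"bbbb"                    # a new write lands on disk 1...
# ...power is lost here, so the parity update never happens.

# Later, disk 0 dies and is rebuilt from the surviving disks + stale parity:
rebuilt_disk0 = xor_parity([data[1], data[2], parity])
print(rebuilt_disk0)                 # prints b'aaaa', not b'AAAA' -> silent corruption
```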

There is a software alternative, in the form of ZFS's RAID-Z or RAID-Z2 (equivalent to levels 5 and 6 respectively, but they operate in a fashion that eliminates the write hole). It means either OpenSolaris or Linux (for official support), though there has been a way to get it working under Snow Leopard (not sure if that hack works with Lion or not).

In terms of not activating the other disks, I'm not running any other software concurrently that would be accessing them, but I've not physically turned them off either. I have 2 separate arrays in the external box (one of which is not being accessed) and the other drives are internal in the Mac Pro.
This should be fine.
 
RAID 6 is one of the slowest raid levels. that's your problem right there. it builds on raid 5, which is already slow (especially the write performance). with raid 6 you add a second parity disk, which means twice as much overhead versus raid 5.

drives are cheap. get some more and switch to raid 10.
 
Have you considered the fact that RAID 6 has crazy amounts of overhead? Disk performance takes a pretty huge hit because of all the dual parity calculations. How much of a hit is dependent on the speed of the RAID controller. I'm not completely familiar with the controller you're using though.

Not saying you don't have a problem, but I would make sure the performance you're expecting out of this configuration is justified. RAID 6 is more for failure tolerance than speed.

EDIT: Yea, what funkahdafi said. :)
 
RAID 6 is one of the slowest raid levels. that's your problem right there. it builds on raid 5, which is already slow (especially the write performance). with raid 6 you add a second parity disk, which means twice as much overhead versus raid 5.

drives are cheap. get some more and switch to raid 10.

I should be getting 2-3 times the speed I'm getting. I want data stability above all: I've already had one drive fail in this array, which had to be replaced. RAID6 is NOT that slow, see below:

http://macperformanceguide.com/HardwareRAID-performance-raid6.html
 
Have you considered the fact that RAID 6 has crazy amounts of overhead? Disk performance takes a pretty huge hit because of all the dual parity calculations. How much of a hit is dependent on the speed of the RAID controller. I'm not completely familiar with the controller you're using though.

Not saying you don't have a problem, but I would make sure the performance you're expecting out of this configuration is justified. RAID 6 is more for failure tolerance than speed.

EDIT: Yea, what funkahdafi said. :)
For perspective, I'll post my numbers. I'm on a 2009 Mac Pro that's been upgraded to 2010 firmware, with a 3.33GHz hex-core, 32GB RAM and an ATI 5870. An Areca 1880ix-12 with the standard 1GB cache resides in slot #2 (PCIe 2.0, x16 lanes) and runs two 8087-to-8088 mini-SAS cables (these) from two of the internal ports to a Sans Digital 8-disk box with two 8088 inputs, which means each cable runs four of the disks. All eight slots in the box are filled with WD2003FYYS 2TB RE4 disks in RAID 6:
[AJA System Test screenshot: 8-disk RAID 6, 16GB test, disk cache disabled]


Here is the same test ran during a full rebuild after pulling a disk:
[AJA System Test screenshot: same test during a full rebuild with one disk pulled]


This is starting to look like your speed test, which made me think initially that your system was rebuilding during the test.

The same array set up in RAID0 gets speeds over 1100MB/second read and write, if I recall correctly. Only dropping 400MB/second while gaining dual parity seems pretty good to me. Breaking it down to individual disks, I'm getting about 137.5MB/sec on each disk in RAID0, and 119MB/sec worst case per disk in RAID6... only 17.5MB/sec loss through parity.

For this reason, I have to disagree that RAID6 is slow or suffers from crazy amounts of overhead. If you read the specs on Western Digital's website for my disks, they rate the RE4 at 138MB/second sustained, which is borne out by my RAID0 test speeds of just over 1100MB/second divided by 8. If everything is set up correctly, speeds should be right about where mine are.

I'm interested in seeing the same test run with eight disks in RAID10. With half the data going to a mirror, I would expect about half of 1100MB/sec = 550MB/sec both read and write, with only 8TB of storage instead of the 12TB provided by RAID6. RAID10 only writes unique data to half the array, so 4 of 8 disks, where RAID6 writes to 6 of 8 at once (gaining 137.5MB/s x2 = 275MB/s), minus the minor parity calculation hit. Plus, with RAID6, ANY two of the eight disks can fail. With RAID10, if both disks of a mirrored pair fail, the whole array fails.
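Roughly the same arithmetic in a few lines (the 138MB/s figure is WD's quoted sustained rate for the RE4 mentioned above; everything else is simple scaling, so treat these as ceilings rather than measured results):

```python
# Rough throughput ceilings for an 8-disk set of RE4s under different RAID levels.
per_disk = 138                      # MB/s, WD's quoted sustained rate for the RE4
disks = 8

raid0  = disks * per_disk           # all 8 spindles carry data: ~1104 MB/s
raid6  = (disks - 2) * per_disk     # 6 data + 2 parity chunks per stripe: ~828 MB/s
raid10 = (disks // 2) * per_disk    # unique data goes to 4 mirrored pairs: ~552 MB/s

print(f"RAID 0  ~{raid0} MB/s")
print(f"RAID 6  ~{raid6} MB/s ceiling (before parity-calculation overhead)")
print(f"RAID 10 ~{raid10} MB/s write")
```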

Where I think RAID10 can win is speed of builds and rebuilds, but that's it.

----------

Just for fun, I ran that same test during a rebuild of the RAID6 with the disk cache enabled, since I figure that will be how it runs during real-world usage. Here is that test:
[AJA System Test screenshot: RAID 6 rebuild with the disk cache enabled]


I went ahead and started editing full HD video while the rebuild went on, and didn't notice any problems or slowdowns. Seems like RAID6 works amazingly well, both during tests and real-world scenarios.
 
RAID 6 is one of the slowest raid levels. that's your problem right there. it builds on raid 5, which is already slow (especially the write performance). with raid 6 you add a second parity disk, which means twice as much overhead versus raid 5.

drives are cheap. get some more and switch to raid 10.
RAID 5 is faster than 10 these days (hardware implementation), and a few of the more recent controllers put level 6 performance ahead as well. :eek:

As it happens, Areca makes such cards, and the 1880/1882 series does.

Have you considered the fact that RAID 6 has crazy amounts of overhead? Disk performance takes a pretty huge hit because of all the dual parity calculations. How much of a hit is dependent on the speed of the RAID controller. I'm not completely familiar with the controller you're using though.
See above. ;)

Things have definitely changed in the past 5 years... :D
 
OP: I set up a RAID 5 array with the SAS version of those drives (ST32000444SS) in an external expander box using an 1880ix card. When the array was first set up, I was pulling my hair out trying to figure out why I was getting periodic transfer speeds as low as 30MB/s per drive.

Turns out these drives, when first put into service, do an internal consistency check that takes about 24 hours. During this check they will "work" fine (i.e. no errors) but operate in a degraded state. The problem was solved by just letting them sit powered on for 24 hours. Surprisingly, even the vendor I bought the drives and expander case from was not aware of this.

Not sure if this applies to the NS version of these drives, but it is worth a try.
 