Thanks, nanofrog! I don't have Windows installed right now, but I noticed the boot time is way faster without the Apple RAID card in. I'd say it's at least two or three times faster!

I'll give it a run as soon as this RAID3 is done building... probably when the sun rises for me. :)
RAID card firmware can add anywhere from an additional 25 seconds to over a minute, depending on the card. The number of disks attached to the card can matter as well (staggered spin-up).

As it happens, Arecas are rather quick (I typically see ~25 seconds on average). :)
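To put rough numbers on that, here's a minimal sketch (the base firmware time and per-disk stagger delay are assumed values for illustration, not measurements):

```python
# Rough model of the boot delay a RAID card adds (assumed values only).
FIRMWARE_INIT_S = 25.0   # assumed base firmware POST time for a quick card
STAGGER_DELAY_S = 1.5    # assumed spin-up stagger per disk

def boot_overhead_s(disk_count: int) -> float:
    """Estimated seconds added to boot with staggered spin-up enabled."""
    return FIRMWARE_INIT_S + STAGGER_DELAY_S * disk_count

for n in (4, 8, 12):
    print(f"{n:2d} disks: ~{boot_overhead_s(n):.0f} s added to boot")
```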
 
As it happens, Arecas are rather quick (I typically see ~25 seconds on average). :)

It is about that... 25-30 seconds. I'll have to get an official time.

So, do you think the long initialization time and 4K read speeds are normal from my earlier test of RAID0? I read that this should take from 6-12 hours to complete, and I'm coming up to hour 11 right now... 26.8% complete.
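For what it's worth, a straight-line extrapolation from the progress so far (assuming the rate stays constant) puts the total well past 12 hours:

```python
# Linear ETA from initialization progress (assumes a constant rate).
elapsed_h = 11.0    # hours elapsed so far
fraction = 0.268    # 26.8% complete

total_h = elapsed_h / fraction
print(f"Projected total: {total_h:.0f} h, remaining: {total_h - elapsed_h:.0f} h")
# -> Projected total: 41 h, remaining: 30 h
```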
 
...I noticed the boot time is way faster without the Apple RAID card in. I'd say it's at least two or three times faster!
As I said, the Apple RAID Pro is s-l-o-w in every respect. :eek: :p

Well, I wonder if something is wrong. I started initializing this RAID3 array 10.5 hours ago, and it's now 26% complete.

I do have the BBU installed, and it was charging for part of that time. It now shows 100% charged.
I don't know what your card's settings are, but I suspect the Initialization setting needs to be changed.

Set it to Initialization = Foreground and see if that doesn't speed things up for you (the default setting retains processor cycles for other activity, such as using another array that's functioning properly, as would be the case in a system that's already been deployed, such as a working SAN). ;)

Also, I have the enclosure connected via the two different parts of the card... one via an internal port with the SFF-8087 to SFF-8088 cable, and the other via the external SFF-8088 port.

I have the eight disks inside sharing the load of the RAID set such that three of the disks in each half make up the set, and the fourth disk in each half is a hot spare. (Slots 1, 2, 3, 5, 6, and 7 make up the RAID3, and slots 4 & 8 are global hot spares.) I thought that might best distribute the data across the two cables / enclosures / ports on the Areca.
Just make sure the disks aren't trying to share the same ports (read the manual - if it's not covered or you're unsure, contact Areca, as there are instances where the external SFF-8088 shares the same ports as one of the internal SFF-8087 ports).

I don't think this is the case, however, given the performance you've posted on a RAID 0 set of those disks.

It's a good idea to be sure is all, particularly for future implementations on that card.

All eight disks are WD2003FYYS RE4 2TB models, bought at three different times from Amazon and Provantage. The Areca has the standard 1GB memory. The rebuild time is set to low.
This is definitely your problem then... (see above).

What has me wondering are the 4K read times from the five-disk RAID0 test above. I had set the block size to 128K, so that made me think it was due to the smaller blocks, but then why was the 4K write speed still so good?
Cache. ;)

So, do you think the long initialization time and 4K read speeds are normal from my earlier test of RAID0? I read that this should take from 6-12 hours to complete, and I'm coming up to hour 11 right now... 26.8% complete.
Initialization time is due to the current setting.

As for the write performance, that's where the 1GB cache kicks in (the system unloads data to be written into the cache, and once that transfer is complete, the performance figure is calculated = fast numbers, even though the data may not actually be on the disks yet).

If you want to see what the disks can do without the cache engaged, check the Disable Cache box in AJA. It's still fast, but you'll see the influence the cache has during writes. ;)
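Here's a toy model of why the cache inflates write numbers; every figure in it is an assumption for illustration, not your card's actual behavior:

```python
# Toy model: apparent write speed when part of the benchmark's test file
# lands in the controller's write-back cache. All numbers are assumptions.
CACHE_GB = 1.0        # controller cache size
DISK_MB_S = 600.0     # assumed sustained array write speed
BUS_MB_S = 2000.0     # assumed host-to-card transfer speed

def apparent_write_mb_s(test_gb: float) -> float:
    """Blend of bus speed (cached portion) and disk speed (overflow)."""
    cached = min(test_gb, CACHE_GB)
    overflow = test_gb - cached
    seconds = cached * 1024 / BUS_MB_S + overflow * 1024 / DISK_MB_S
    return test_gb * 1024 / seconds

for size_gb in (0.5, 4.0, 16.0):
    print(f"{size_gb:4.1f} GB test: ~{apparent_write_mb_s(size_gb):.0f} MB/s")
```

The larger the test file relative to the cache, the closer the result gets to what the disks can actually sustain - one reason a 16GB test size is the more honest choice.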
 
Ok, that makes sense.

It looks like I could change the settings from low to high on the rebuild right now while it's still initializing. If I understand correctly, setting it to high will make rebuilds (and I presume initialization) faster, but boot times slower (which I'm used to anyway) and performance slower during a rebuild, right? If that's it, then I'm fine with changing it now. In fact, I'll do that. And if it doesn't make it speed up, perhaps I just need to figure out how to stop the initialization and shut it down to try again.

Ok, I set it to high priority background, and when I started it, I set it to foreground initialization.

Slot assignments:

SLOT 01(F) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
SLOT 02(10) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
SLOT 03(11) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
SLOT 04(12) Hot Spare 2000.4GB WDC WD2003FYYS-02W0B0 [Global]
SLOT 05 N.A. N.A. N.A.
SLOT 06 N.A. N.A. N.A.
SLOT 07 N.A. N.A. N.A.
SLOT 08 N.A. N.A. N.A.
SLOT 09 N.A. N.A. N.A.
SLOT 10 N.A. N.A. N.A.
SLOT 11 N.A. N.A. N.A.
SLOT 12 N.A. N.A. N.A.
EXTP 01(A) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
EXTP 02(B) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
EXTP 03(D) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
EXTP 04(E) Hot Spare 2000.4GB WDC WD2003FYYS-02W0B0 [Global]
 
It looks like I could change the settings from low to high on the rebuild right now while it's still initializing.
Yes you can. :)

If I understand correctly, setting it to high will make rebuilds (and I presume initialization) faster, but boot times slower (which I'm used to anyway) and performance slower during a rebuild, right?
You've not quite grasped it, but that's not uncommon (takes time to figure it all out).

Initialization only affects that function, not rebuilds (Background Process deals with rebuild speeds).

Read the manual carefully, as there's a wealth of information in there - multiple times if you have to, as it will help you understand what's going on with the settings (don't just skip around, or you'll likely be left scratching your head). ;)
 
Well, I'm 2/3 done now at 66.9%. I guess it will be done in about 12 or 13 more hours. I'll do tests then, and consider if I want to delete it and set up a RAID5 to compare. Given that this will have taken about 40 hours to build a RAID3, I am not sure I'll be up for waiting that long again. :p
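If it helps the RAID3-vs-RAID5 decision: both are single-parity levels, so the usable capacity works out the same; the difference is dedicated vs. distributed parity. A quick sketch of the capacity math:

```python
# Usable capacity for single-parity RAID levels. RAID 3 and RAID 5 both
# give up one member's capacity to parity; RAID 0 gives up none.
def usable_tb(members: int, disk_tb: float, parity_disks: int) -> float:
    return (members - parity_disks) * disk_tb

print("RAID 0, 6x2TB:", usable_tb(6, 2.0, 0), "TB")  # 12.0 TB, no fault tolerance
print("RAID 3, 6x2TB:", usable_tb(6, 2.0, 1), "TB")  # 10.0 TB, survives 1 disk
print("RAID 5, 6x2TB:", usable_tb(6, 2.0, 1), "TB")  # 10.0 TB, survives 1 disk
```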
 
92.3%... almost there! Been 37 hours.

I've read the user's manual thoroughly, and took note that this is an 1880ix-12 (the 12 must mean 12 ports, not 16). It says there are "12 internal ports with [an] additional 4 external ports." Then it says, "...attaches directly to SATA/SAS midplanes with 3 SFF-8087 internal connectors or increase capacity using one additional SFF-8088 external connector."

This seems to say there are 16 ports, but I believe it's more like what you (nanofrog) mentioned... that there are 12 ports, and some of the internal ones are somehow shared with the 4 external ports. The fact that my RAID hierarchy shows half of the drives on internal slots and the other half on external slots (leaving 8 internal slots unused) makes me think they must be sharing.

Something else I find curious is that there are two "enclosures" listed, like this:
Enclosure#1 : ARECA SAS RAID AdapterV1.0
Device Usage Capacity Model
Slot#1 N.A. N.A. N.A.
Slot#2 N.A. N.A. N.A.
Slot#3 N.A. N.A. N.A.
Slot#4 N.A. N.A. N.A.
Slot#5 N.A. N.A. N.A.
Slot#6 N.A. N.A. N.A.
Slot#7 N.A. N.A. N.A.
Slot#8 N.A. N.A. N.A.
Enclosure#2 : Areca ARC-8018-.01.07.0107(C)[5001B4DB0091E03F]
Device Usage Capacity Model
SLOT 01(F) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
SLOT 02(10) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
SLOT 03(11) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
SLOT 04(12) Hot Spare 2000.4GB WDC WD2003FYYS-02W0B0 [6HDD RAID 1 ]
SLOT 05 N.A. N.A. N.A.
SLOT 06 N.A. N.A. N.A.
SLOT 07 N.A. N.A. N.A.
SLOT 08 N.A. N.A. N.A.
SLOT 09 N.A. N.A. N.A.
SLOT 10 N.A. N.A. N.A.
SLOT 11 N.A. N.A. N.A.
SLOT 12 N.A. N.A. N.A.
EXTP 01(A) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
EXTP 02(B) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
EXTP 03(D) 6HDD RAID 1 2000.4GB WDC WD2003FYYS-02W0B0
EXTP 04 N.A. N.A. N.A.

Enclosure #1 is empty, with 8 slots shown. I don't understand this.
Enclosure #2 seems like the only "real" enclosure, and makes it look like there are 16 individual slots. I'm going to email Areca now. I'll let you know what they say.

92.9%!
 
Keep in mind, that it may take them a few days to reply via email as they're based in Taipei, Taiwan. ;)

In the meantime, you can play. :D
 
That explains the interesting English! ;)

I'm so close now... 95.2%! I just sent the email. I should think they'll get it instantly, but maybe take some time to translate, respond, and translate back to English... hahaha!

I also ordered another cable to use internal ports only. If nothing else, I'll be able to hook up another 4-bay tower.
 
Actually, they do speak English (you can pick up the phone and call them :eek: :D).

If you've already dealt with them via email, their responses can be a bit cryptic and take time to figure out (I've run into this from time to time). As for the delay, that's likely due to both where the dates fall (i.e. a weekday in one country is a weekend in another) and how much of a load they've got (first come, first served).

Not a bad idea to have that extra cable on hand, but it's not a necessity (just figure out what ports are shared and don't use them 2x if you can help it).

BTW, their chips are custom and control 12x disks, so the external port would have to share drives with one of the internal ports (1x chip on the PCB under the heat sink).
 
I used the "first" plug for the internal ports... the 1-4 ones, thinking the last ones (9-12) would be more likely to be the shared ones, but I have no idea. Just gotta wait until they tell me.

Maybe I should have plugged into the "middle" one. (5-8)
 
I used the "first" plug for the internal ports... the 1-4 ones, thinking the last ones (9-12) would be more likely to be the shared ones, but I have no idea. Just gotta wait until they tell me.

Maybe I should have plugged into the "middle" one. (5-8)
You could also move the internal connector to other ports (while you wait for a response), and see what happens in the information sections in ARCHTTP (web access to card's control panel). ;)
 
But you're not suggesting I do that when my initialization is at 98.6% completion, right? :p

Can I swap the cable (I'm thinking with power off) to another internal port and not screw up the RAID? I was afraid to try that, thinking I'd have to start all over.
 
But you're not suggesting I do that when my initialization is at 98.6% completion, right? :p
I'd wait until the initialization process is complete, to be safe. ;)

But the card is actually designed to pick back up if a fault happens during this process, such as loss of power (unless you've disabled it in the settings). :eek: These sorts of features are another reason a hardware RAID controller stands out from software implementations (and you can still use the system for other tasks). :D

Can I swap the cable (I'm thinking with power off) to another internal port and not screw up the RAID?
Yes.

The MP does not have an inrush current limiter (what you need for Hot Plug support), so do make sure the power is OFF before moving the cable. :p
 
Excellent! I will try that very soon. This last one percent is going to kill me. :p 99%!
 
Chewed through all of your fingernails and pencils yet? :eek: :p
Dude... I wish I'd known this would take so long... I'd have taken a road trip or gone camping or something.

It has given me the time to read the manual twice or more, and learn a lot from you. :cool:
 
I've seen others running into this, but they've all started on the Default settings, so I don't know if this particular model is slow or not.

Generally speaking, Arecas aren't too bad for initialization times vs. other makes (most Arecas I've worked with are 12x1ML's <SATA only, released in 2006> and 1680 series models <SAS/SATA, released in 2008>, which were the top-of-the-line models at the time, as the 1880 series is now).

Capacity and member count matter too, so it's been hard to nail down. But I suspect it's the use of the Default settings that has been such a PITA...
 
You know, I was reading up on the drives I'm using (WD 2TB RE4 WD2003FYYS) and noticed the sustained data transfer rate is only 138MB/sec, whereas my stock Apple 1TB Hitachi drives are 178-ish MB/sec. That surprised me. So yeah, 2TB x 6 probably didn't help so much.
 
Were those Hitachis SAS? Particularly 15k rpm units?

I ask, as 15k units would be able to generate those kinds of throughputs (and they could be used with your card - you'd have to test them, but all SAS disks are enterprise grade, so the timings are right). If you have the exact model number of those Hitachis, you can check to see if they're on the HDD Compatibility List, which I expect they would be.

Now once it's done with the initialization, you've a lot of work to do (and learning first hand how the card will react).
  1. Performance testing.
  2. Failure testing.
Place test/dummy data on the array (after performance testing), and do the following...
  1. Start a write, and pull the power (tests out the UPS).
  2. Do this again with the UPS out of the loop (plug the system directly into the wall = tests out the BBU).
  3. Start another read and pull a disk (pull it off of the external, as it does have an inrush current limiter for Hot Plugging).
  4. Pay attention to the performance of RAID 3.
  5. Replace the disk, and watch what the card does again, including performance.
  6. Do the same with a write (yank disk as above, and again, pay attention to the performance).
It's a lot of work, but you'll thank yourself later, as you'll get a good understanding of how the card actually works, and what to expect in the event of a real failure (most data loss is the result of user error, even in fault states/degraded arrays on hardware RAID controllers).

So it really is in your best interest to figure this out now rather than later (when it's important and can either cost you your data, or a lot of additional time that could have been prevented by knowing the right methodology to begin with). ;)
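One way to keep the results straight while working through that list is a scratch log; this is a hypothetical note-taking helper (nothing Areca-specific), with made-up step names and file name:

```python
# Hypothetical helper for logging failure-test steps and observations.
import time

def log_step(logfile: str, step: str, result: str) -> None:
    """Append a timestamped test step and what was observed to a text log."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(logfile, "a") as f:
        f.write(f"{stamp}  {step}: {result}\n")

log_step("raid_tests.log", "pull power mid-write, UPS in loop", "array intact, no rebuild")
log_step("raid_tests.log", "pull disk mid-read", "degraded; note read MB/s here")
```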
 
Were those Hitachis SAS? Particularly 15k rpm units?
Hitachi model # HDE721010SLA330
Just looked it up: it says 167.6 MB/sec, converted from 1406 Mbps on a Hitachi spec sheet I found.

Just finished! Testing begins! I'm so excited.
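That conversion checks out if the spec sheet's MB are read as binary megabytes (MiB):

```python
# Check the spec-sheet conversion: 1406 Mbit/s media transfer rate.
mbps = 1406
bytes_per_s = mbps * 1e6 / 8
print(f"{bytes_per_s / 1e6:.1f} MB/s (decimal)")       # 175.8 MB/s
print(f"{bytes_per_s / 1024**2:.1f} MiB/s (binary)")   # 167.6 MiB/s
```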
 
First RAID3 test with AJA: (screenshot attached)
I chose the 16GB read/write test with a 1920x1080 8-bit video frame.
 

Attachments

  • Screen shot 2011-08-07 at 5.15.40 PM.png
Hitachi model # HDE721010SLA330
Just looked it up: it says 167.6 MB/sec, converted from 1406 Mbps on a Hitachi spec sheet I found.
Surprising for a 7200 rpm SATA, but it is a consumer grade Hitachi, so getting your data off of that was a good idea.
 
Here's Xbench 1.3:

This is all prior to swapping any cables around, of course.
 

Attachments

  • Screen shot 2011-08-07 at 5.19.28 PM.png
This is my current RAID and volume setup. I'm going to shut down and switch the internal cable to another port first, and see what changes.
 

Attachments

  • Screen shot 2011-08-07 at 5.24.40 PM.png
  • Screen shot 2011-08-07 at 5.27.13 PM.png