I have a 2009 Mac Pro with a 3.32 quad and 24GB RAM, and I just installed an ARC-1882-IX-12 RAID card... No problems with install, setup, or booting from the card.

I'm running two Crucial C300 256GB SSDs in RAID 0 for boot, OS X, programs and apps. I'm using eight Western Digital RE4 2TB drives in RAID 10 for mass storage of videos, photos, music and movies.

Here is the weird part... my read/write numbers for the RAID 10 are all over the place. When using AJA System Test with a 1GB file size, I get reads of 629 MB/s and writes of 2196 MB/s. Shouldn't those numbers be the other way around? When testing a 4GB file size I get reads of 582 MB/s and writes of 642 MB/s. Again, shouldn't it be the other way around?

When using Xbench I get these results:

Results 1375.47
System Info
Xbench Version 1.3
System Version 10.7.2 (11C74)
Physical RAM 24576 MB
Model MacPro4,1
Drive Type Areca ARC-1882-VOL#000
Disk Test 1375.47
Sequential 956.06
Uncached Write 1880.23 1154.43 MB/sec [4K blocks]
Uncached Write 1942.85 1099.26 MB/sec [256K blocks]
Uncached Read 373.61 109.34 MB/sec [4K blocks]
Uncached Read 2170.51 1090.88 MB/sec [256K blocks]
Random 2450.49
Uncached Write 2100.83 222.40 MB/sec [4K blocks]
Uncached Write 1081.18 346.13 MB/sec [256K blocks]
Uncached Read 14162.44 100.36 MB/sec [4K blocks]
Uncached Read 6218.92 1153.97 MB/sec [256K blocks]


When testing my two C300 256GB SSDs in RAID 0 with AJA System Test, I get reads of 283 MB/s and writes of 1537 MB/s at a 1GB file size. Aren't those numbers all messed up?

When using Xbench to test my SSDs in RAID 0 I get these results:

Results 1641.98
System Info
Xbench Version 1.3
System Version 10.7.2 (11C74)
Physical RAM 24576 MB
Model MacPro4,1
Drive Type Areca ARC-1882 Mac Pro
Disk Test 1641.98
Sequential 1011.40
Uncached Write 1886.69 1158.40 MB/sec [4K blocks]
Uncached Write 1967.94 1113.46 MB/sec [256K blocks]
Uncached Read 401.79 117.59 MB/sec [4K blocks]
Uncached Read 2337.19 1174.65 MB/sec [256K blocks]
Random 4360.78
Uncached Write 2161.51 228.82 MB/sec [4K blocks]
Uncached Write 4302.86 1377.50 MB/sec [256K blocks]
Uncached Read 15824.19 112.14 MB/sec [4K blocks]
Uncached Read 6288.15 1166.81 MB/sec [256K blocks]

I'm no pro when it comes to disk testing, but I just can't see how the RAID 10 writes are faster than the reads. I also don't understand how my SSD RAID 0 can have such slow reads. I have a tech support call in to Areca to see if something is wrong with my new card or to help me understand these numbers... please let me know if I'm missing something here.

Thanks in advance,
mluters
 
Those high write values are a result of the 1GB cache on the ARC-1882ix-12. If the file fits in the cache, the system "sees" the write as complete even though it technically isn't (the data is sitting in cache and still being written out to the volume).

The numbers became realistic (5xx MB/s read, 6xx MB/s write) once the file being written was larger than the cache. If you look, there's a check box in AJA that lets you disable the cache, which will show you what the disks can actually do with very large files. BTW, these values aren't bad for an 8-member level 10.
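If you want to see the cache effect for yourself outside of AJA/Xbench, here's a minimal sketch (Python, not from this thread) that times the same 1GB write with and without forcing it toward stable storage. The volume path and sizes are placeholders, and note that fsync() only flushes the OS side; a battery-backed controller cache can still acknowledge early.

```python
# Minimal sketch: time a 1 GiB write with and without flushing, to show how
# caching inflates apparent write throughput. Path and sizes are placeholders.
import os, time

TEST_FILE = "/Volumes/RAID10/cache_test.bin"   # hypothetical volume path
SIZE = 1 * 1024**3                             # 1 GiB test file
BLOCK = 1024 * 1024                            # write in 1 MiB chunks
buf = os.urandom(BLOCK)

def write_throughput(flush):
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE // BLOCK):
            f.write(buf)
        if flush:
            f.flush()
            os.fsync(f.fileno())   # push buffered data toward the disks
    return SIZE / (time.time() - start) / 1e6   # MB/s

print("no flush (cache-inflated): %.0f MB/s" % write_throughput(False))
print("with fsync (closer to real): %.0f MB/s" % write_throughput(True))
os.remove(TEST_FILE)
```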

Given that you're using that particular card, though, is there a particular reason you're not running 5 or 6?

I ask because 5 can offer better throughput as well as increased usable capacity (across the board; 10 no longer beats level 5 for relational database use with modern cards).

If you need n = 2 redundancy, that card should be able to out-perform 10 with a level 6 implementation as well (it's much closer, so test it and see for yourself). Its biggest advantage, however, is the increased usable capacity for that many members (gives you 2x HDD capacity back vs. 10).

As per the SSD performance values, I'd need to see screenshots of all of the card's settings (it could be a setting or an incompatibility between that card and those particular SSDs; for example, it could be as simple as the stripe size). That's the best I can do here ATM, as there's not enough information to go on.
 
nanofrog, thanks for your input and explanation....

I've never really considered RAID 5 or 6 because I thought I would get better overall performance from RAID 10. I believe both RAID 5 and 10 would have the same fault tolerance of one failed drive, and RAID 6 would have a fault tolerance of two failed drives.

I really like high-performance machines and want some safety factor included (I maintain a good separate backup of everything). That's why I went with a RAID 10: very good performance with some safety.

If I looked at a RAID 5 or 6... it seems like I should really go with a RAID 50 or 60, because they provide better performance than RAID 5 or 6 and allow for a hot spare.

Let me know your thoughts.

mluters
 
Your results are a lot like mine on AJA and Xbench. Test with AJA using the 16GB test file to overcome the cache to see how it will do in a sustained read/write scenario. Mine runs about 700+MB/sec read, 750+MB/sec write speeds sustained (regardless of cache on or off) on a 16GB test cycle when writing to 7 disks with #8 sitting as hot-spare in RAID3. In RAID3, I can lose one drive and still have data, and the hot-spare will automatically rebuild the failed drive while I swap in a replacement, which then becomes the new hot-spare. Backups are on separate external single disks.
 
I've never really considered RAID 5 or 6 because I thought I would get better overall performance from RAID 10. I believe both RAID 5 and 10 would have the same fault tolerance of one failed drive, and RAID 6 would have a fault tolerance of two failed drives.
The fault tolerance is as follows:
  • RAID 0 : None
  • RAID 1 : n = 1
  • RAID 5 : n = 1
  • RAID 6 : n = 2
  • RAID 10 : n = 2
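For illustration only (not part of the original reply), here's a quick sketch of how nominal usable capacity and fault tolerance shake out for an 8 x 2TB set at each level:

```python
# Illustrative only: nominal usable capacity and fault tolerance per level for
# an 8-member set of 2 TB disks (formatting overhead and hot spares ignored).
MEMBERS, DISK_TB = 8, 2

def usable_tb(level):
    return {
        "0":  MEMBERS * DISK_TB,          # pure stripe, no redundancy
        "1":  DISK_TB,                    # everything mirrored
        "5":  (MEMBERS - 1) * DISK_TB,    # one member's worth of parity
        "6":  (MEMBERS - 2) * DISK_TB,    # two members' worth of parity
        "10": (MEMBERS // 2) * DISK_TB,   # half the members hold mirrors
    }[level]

tolerance = {"0": "none", "1": "n = 1", "5": "n = 1", "6": "n = 2",
             "10": "n = 2 (as long as no mirror pair loses both disks)"}

for lvl in ("0", "1", "5", "6", "10"):
    print(f"RAID {lvl:>2}: {usable_tb(lvl):>2} TB usable, fault tolerance {tolerance[lvl]}")
```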
Now if you want n = 2 fault tolerance, then RAID 6 will be a bit faster than 10, as well as give you more usable capacity for the same member count.

But if you're at the system at all times, RAID 5 would be sufficient (you can swap out disks quickly, unlike a remote system that someone has to get to before the disk can be swapped; think satellite offices that don't have any IT staff).

BTW, you can run Hot Spares with any level other than 10 and JBOD (concatenation) on that card.

wonderspark mentioned RAID 3, which is similar to RAID 5, but there are some differences I don't like (all of the parity data is located on a single disk rather than spread across all members as with RAID 5). As a result, it's slower to complete a rebuild (or an Online Expansion or Online Migration) than RAID 5. And since unlimited time for a rebuild isn't typical, that has to be taken into consideration during the implementation design (you want it finished ASAP, both to reduce the risk of losing the array if one or more additional disks fail past the fault tolerance level, and to limit lost productivity).
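To picture the difference, here's a toy sketch (mine, not nanofrog's) of where the parity blocks land: RAID 3 pins them all to one member, while RAID 5 rotates them across the set (the exact rotation varies by controller).

```python
# Toy layout sketch: columns are member disks, rows are stripes; "P" marks the
# parity block, "D" a data block. RAID 3 dedicates one disk to parity, RAID 5
# rotates it across the members (layout details vary, this is just the idea).
MEMBERS, STRIPES = 4, 6

def parity_disk(level, stripe):
    return MEMBERS - 1 if level == 3 else MEMBERS - 1 - (stripe % MEMBERS)

for level in (3, 5):
    print(f"RAID {level}:")
    for s in range(STRIPES):
        row = ["P" if d == parity_disk(level, s) else "D" for d in range(MEMBERS)]
        print("  stripe", s, " ".join(row))
```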

If I looked at a RAID 5 or 6... it seems like I should really go with a RAID 50 or 60, because they provide better performance than RAID 5 or 6 and allow for a hot spare.
As mentioned above, you can have Hot Spares with levels 5 and 6 (anything other than 0 and JBOD <concatenation> = 1/10/3/5/6/30/50/60/51/61).

As per 50/60, I'm currently under the impression you don't need this. It also tends to be more complicated and expensive (additional disks, as the parity levels are duplicated, then striped together). Even if a card only does 0/1/10/5/6, the nested parity can be done via a software implementation such as Disk Utility (you'd only need to do this for a 51/61 implementation on your card, as it's capable of 30/50/60), which is where it can get more complicated (knowing how to deal with faults).

Hope this helps. :)
 
wonderspark mentioned RAID 3, which is similar to RAID 5, but there are some differences I don't like (all of the parity data is located on a single disk rather than spread across all members as with RAID 5). As a result, it's slower to complete a rebuild (or an Online Expansion or Online Migration) than RAID 5. And since unlimited time for a rebuild isn't typical, that has to be taken into consideration during the implementation design (you want it finished ASAP, both to reduce the risk of losing the array if one or more additional disks fail past the fault tolerance level, and to limit lost productivity).
And guess who STILL hasn't had the chance to rebuild it as RAID 5. Me! :eek:
Maybe I should just take a break and do it tonight. I ordered a W3680 to replace my W3580, which will require some downtime in a couple days anyway, so it would be cool to have a new RAID 5 ready to go as well.
 
And guess who STILL hasn't had the chance to rebuild it as RAID 5. Me! :eek:
Maybe I should just take a break and do it tonight. I ordered a W3680 to replace my W3580, which will require some downtime in a couple days anyway, so it would be cool to have a new RAID 5 ready to go as well.
At least you can compare them (force a fault, and test out the rebuild time of the RAID 3 before you change it over to a RAID 5). Then repeat that with the RAID 5, and see for yourself.

Then make up your mind as to which way you want to go (pros and cons with each, so you're the one who has to decide on which has the better balance). ;)

It will take some time, but it should be worth it for the learning experience and a solid answer as it pertains to your usage. :)
 
At least you can compare them (force a fault, and test out the rebuild time of the RAID 3 before you change it over to a RAID 5). Then repeat that with the RAID 5, and see for yourself.

Then make up your mind as to which way you want to go (pros and cons with each, so you're the one who has to decide on which has the better balance). ;)

It will take some time, but it should be worth it for the learning experience and a solid answer as it pertains to your usage. :)
So based on a build time of 40 hours (if I recall) to make the RAID3 set, do you anticipate 40 hours to rebuild? I'm only using 4.6TB out of 12TB available right now. Let's take some bets! Closest guess to actual rebuild time wins something... How about my old 16GB (4GBx4) RAM set from OWC? Hahaha!
 
So based on a build time of 40 hours (if I recall) to make the RAID3 set, do you anticipate 40 hours to rebuild? I'm only using 4.6TB out of 12TB available right now. Let's take some bets! Closest guess to actual rebuild time wins something... How about my old 16GB (4GBx4) RAM set from OWC? Hahaha!
Sadly, it seems the 1880 and 1882 series cards from Areca are slower than previous designs at initialization, Online Migration, and Online Expansion (previous designs, such as the 12x1ML and 1680 series, used Intel IOPs, which are ARM-based; the 1880/1882 use LSI silicon, which is built on PPC, and require a SAS Expander on the card for more than 8 ports). I'm not too pleased about this, as they don't make any reference to an on-board SAS Expander for those models... But the 8-port models are slower for these functions as well, so it's not all down to the SAS Expander added to the board (it's a result of the LSI chip).

So I expect your card (and any other from the 1880/82 series) to be very slow vs. what I'm familiar with in the 12x1ML and 1680 series = all bets are off. :eek: :p

Seriously though, I'd just restore the data from backups as that does seem to be quicker with your card from what I've seen both here and in other forums.
 
Now if you want n = 2 fault tolerance, then RAID 6 will be a bit faster than 10, as well as give you more usable capacity for the same member count.

I didn't know RAID 6 would be faster than RAID 10... I really want speed more than anything.

But if you're at the system at all times, RAID 5 would be sufficient (you can swap out disks quickly, unlike a remote system that someone has to get to before the disk can be swapped; think satellite offices that don't have any IT staff).

I'm not at my system at all times and I'm out of town a lot, so having a hot spare would be nice.

BTW, you can run Hot Spares with any level other than 10 and JBOD (concatenation) on that card.

I thought RAID 10s can still have hot spares?

As per 50/60, I'm currently under the impression you don't need this. It also tends to be more complicated and expensive (additional disks, as the parity levels are duplicated, then striped together). Even if a card only does 0/1/10/5/6, the nested parity can be done via a software implementation such as Disk Utility (you'd only need to do this for a 51/61 implementation on your card, as it's capable of 30/50/60), which is where it can get more complicated (knowing how to deal with faults).

nanofrog,
I really want the fastest setup possible without using SSDs for mass storage. I can accept some risk because I maintain a good separate backup schedule... so which type of RAID would you recommend for my setup with the ARC-1882ix-12? I thought my RAID 10 was going to give me the fastest overall performance with some safety.

Thanks again for your input
 
I didn't know RAID 6 would be faster than RAID 10... I really want speed more than anything.
It comes down to the member count, but at 8 members, it would. Particularly if that doesn't include a Hot Spare.
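A back-of-envelope way to see why 8 members tips the balance (illustrative only; the ~130 MB/s per-disk figure is an assumed sustained rate for an RE4, and controller caching will shift the real numbers): large transfers scale roughly with the number of members carrying unique data, which is 4 for an 8-disk RAID 10 but 6 for an 8-disk RAID 6.

```python
# Back-of-envelope only: large sequential transfers scale roughly with the
# number of members carrying unique data. Per-disk rate is an assumption, and
# some controllers can also read from both halves of each mirror, which
# narrows the read-side gap.
per_disk_mb_s = 130                    # assumed sustained rate of one WD RE4
members = 8

raid10_stripe_width = members // 2     # 4 mirror pairs carry unique data
raid6_stripe_width = members - 2       # 6 data members per stripe (2 hold parity)

print("RAID 10 large transfers ~", raid10_stripe_width * per_disk_mb_s, "MB/s")
print("RAID 6  large transfers ~", raid6_stripe_width * per_disk_mb_s, "MB/s")
```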

I'm not at my system at all times and I'm out of town a lot, so having a hot spare would be nice.
I'm still not sure of the redundancy/fault tolerance you actually need, as you can run a Hot Spare with 1/10/3/5/6/30/50/60 with the ARC-1882 series.

I thought RAID 10s can still have hot spares?
That was a typo... :eek: I meant a stripe set (RAID 0), not 10, and of course JBOD (concatenation).

You might also want to note that you cannot run both RAID and JBOD simultaneously. If you look closely at the settings, it's one or the other (for future reference).

BTW, another note: Safari has had consistent issues with changing settings on Arecas, so it's best to use Firefox (it has continued to work over multiple revisions).

nanofrog,
I really want the fastest setup possible without using SSDs for mass storage. I can accept some risk because I maintain a good separate backup schedule... so which type of RAID would you recommend for my setup with the ARC-1882ix-12? I thought my RAID 10 was going to give me the fastest overall performance with some safety.
It will depend on exactly what you're doing.

If your usage is to do with graphics/video creation (very large files), a parity based level will be a better way to go, as it offers you additional capacity for the same member count vs. 10. Performance is also improved, particularly once you're over 4 members.

Which parity level, however, I'm still a bit unsure about. Go back and examine the fault tolerance for both 5 and 6, then consider how often you'll be able to address any problems (physical access).

For example, if you're usually there, a RAID 5 + Hot Spare would probably make sense. But if you're gone say 50% of the time, a level 6 + Hot Spare would be in order.

There are other considerations as well, such as the member count (8 is about the largest member count I'd recommend with a level 5), but you can change this on the fly (aka Online Migration = add disks and change the level used without data loss). It's much slower than just initializing, so if you do this, be prepared to allow for a rather long weekend, as that particular card is slow at it. Restoring from backups is a better way of speeding things up IMO (it requires a bit more user input, but the time savings are worth it).

Hope this helps. :)
 
Parity based arrays are a no-no for SSD's according to most disk manufacturers. 0,1,10 are ok. Writing parity bits all day not good...
 
Parity based arrays are a no-no for SSD's according to most disk manufacturers. 0,1,10 are ok. Writing parity bits all day not good...
For MLC-based SSDs, this is correct. SLC is capable of taking the additional abuse introduced by the parity writes (it's specifically aimed at the enterprise market), but it's still very expensive and not feasible for most.
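For a sense of why parity levels are harder on flash, here are the textbook device-I/O counts per small host write (a sketch only; controllers with large caches can coalesce full-stripe writes and soften this considerably).

```python
# Rough small-write amplification per host write: the classic I/O counts that
# make parity levels chew through MLC endurance faster than 0/1/10.
io_per_small_write = {
    "RAID 0":    1,   # write the data block
    "RAID 1/10": 2,   # write both copies
    "RAID 5":    4,   # read data + read parity, write data + write parity
    "RAID 6":    6,   # same, but with two parity blocks to update
}
for level, ios in io_per_small_write.items():
    print(f"{level:9s}: {ios} device I/Os per 1 host write")
```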

Fortunately, both mluters (OP) and wonderspark are referring to mechanical disks for their parity arrays. :)
 
Finally decided to delete my RAID3 and run some tests.

Short history:
It took about 40 hours (maybe 38?) to initialize the RAID3. I wasn't pleased with that, and feared a rebuild would take the same crazy length of time or longer to complete. I was pleased with the speed of the array: 760MB/sec writes and 700MB/sec reads sustained with seven members+hot spare. Due to how long it took to build, and the fact that I have been too busy with critical editing jobs (that feed me!), I never tested pulling a drive to see how long a rebuild would take. I did read all 76 pages from The Areca Owner's Thread about my RAID card, rebuilds and so on.

Everything I read said that I should be able to initialize a RAID6 in about 5 hours. Nobody talked about RAID3 in there, which I found surprising based on this thread on Adobe Hardware Forums by a guy who thinks RAID3 is the best for video, and loves Areca cards. Seems like maybe RAID3 isn't the best after all, or someone would be using it besides myself and one other person that I can find.

So, I ejected my RAID, deleted it, ran some 8-member RAID0 tests, and started a RAID6 initialization.

RAID0 8x2TB AJA test: sustained speeds >1100MB/sec write, and >3700MB/sec reads (cache on) and 998MB/sec reads (cache off). Looking pretty good, I think! (Areca 1880ix-12 with standard 1GB cache on card)

It's been 42 minutes initializing the R6, and it's 17.6% complete. This is way, way faster than that R3 so far. If this pace continues, it will be a minute or two under 4 hours to complete. After it's done, I'll run more speed tests. Then I'll load some work on the volume and yank a drive to see what happens. Maybe another speed test with the set degraded, then replace the drive and see how fast it goes back to normal state.
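For reference, that under-4-hours figure is just a linear extrapolation from the elapsed time and percent complete (the sketch below assumes the initialization rate stays constant).

```python
# Simple linear extrapolation of the remaining init time from the numbers in
# the post (42 min elapsed at 17.6% complete); assumes a constant rate.
elapsed_min, pct_done = 42, 17.6
total_min = elapsed_min / (pct_done / 100)
print(f"Projected total: {total_min:.0f} min (~{total_min/60:.1f} h), "
      f"{total_min - elapsed_min:.0f} min to go")
```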

Nanofrog has been advising me to do this since I built the RAID about six months ago. I'm finally going to get it done, and it looks like I'll be yet another user convinced that RAID6 is better than RAID3. :p
 
[Screenshot: R6-16GB-disab.png (cache disabled)]

[Screenshot: R6-16GB-enab.png (cache enabled)]


Took 4hrs, 55min to complete RAID 6 initialization. I'm very happy with that!
First disk speed tests above, and I'm really happy with that as well! Faster than my RAID3 which was using 7+1 hot-spare. For some reason, I thought it would be about the same, since it's essentially using the same number of disks for data, but the hot-spare is now an additional parity disk. Very nice!

Going to load my data and go to sleep. Later in the day, I'll pull a disk and see how that goes.

I hope you don't mind my posting these results in a thread originally asking about RAID 10. I learned some things, and hopefully you all did as well. :p
 
Nobody talked about RAID3 in there, which I found surprising based on this thread on Adobe Hardware Forums by a guy who thinks RAID3 is the best for video, and loves Areca cards. Seems like maybe RAID3 isn't the best after all, or someone would be using it besides myself and one other person that I can find.
I wondered how you landed on RAID 3... (I'll have to give that link a read, but now you know I'm not as crazy as you may have thought :eek: :p).

Nanofrog has been advising me to do this since I built the RAID about six months ago. I'm finally going to get it done, and it looks like I'll be yet another user convinced that RAID6 is better than RAID3. :p
Actually, it was a RAID 5 as it has the same fault tolerance as RAID 3 (n = 1; set is degraded with a single disk failure, but no data loss). And the performance out of it would be better than both RAID 3 and RAID 6.

But given that your disks are over 1TB and you're running 8 members (8 members is the limit I tend to use for RAID 5, as the risk of an additional failure starts to get untenable once there's some age on the disks; nasty experiences with this and Seagate ES.2's from 2008 on... :mad:), a RAID 6 is a better way to go IMO, particularly if you ever add disks and use Online Migration to integrate them into the existing set.
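To put a rough number on that risk, here's a sketch with assumed values (a 1-per-1e14-bits unrecoverable read error spec, which is a typical nearline SATA rating, and 2TB members; not a measurement of these particular drives).

```python
# Rough sketch: chance of hitting at least one unrecoverable read error (URE)
# while reading every surviving member during an 8-member RAID 5 rebuild.
# URE rate and disk size are assumptions, not measured values.
import math

URE_PER_BIT = 1e-14          # ~1 error per 12.5 TB read (typical nearline spec)
SURVIVORS = 7                # 8-member RAID 5 with one disk failed
DISK_BYTES = 2e12            # 2 TB members

bits_read = SURVIVORS * DISK_BYTES * 8
expected_ures = URE_PER_BIT * bits_read
p_hit = 1 - math.exp(-expected_ures)          # Poisson approximation
print(f"P(>=1 URE during a full rebuild) ~ {p_hit:.0%}")   # roughly two in three
```

RAID 6's second parity block covers exactly this case, which is a big part of why it's the safer choice at this member count and disk size.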

If you continue to add members rather than swap them out, there would eventually be a point where the member count/total capacity would require a nested level to preserve data integrity, by reducing the chance of additional failures during a rebuild destroying the set (= all data gone). You're not there yet, so there's no need to panic until you're about to exceed 12 members.

Took 4hrs, 55min to complete RAID 6 initialization. I'm very happy with that!
I should hope so. :D

First disk speed tests above, and I'm really happy with that as well! Faster than my RAID3 which was using 7+1 hot-spare. For some reason, I thought it would be about the same, since it's essentially using the same number of disks for data, but the hot-spare is now an additional parity disk. Very nice!
Additional member parallelism kicked in under this configuration = you should be a really happy camper. :p
 
So I'm thinking I'll just yank one drive out, but then what happens if I replace that disk with all the data still on it? Does the rebuild check what's on the disk first, or just treat it like a new blank disk and rebuild it anyway? I was thinking it might be more realistic to pull it, format it, and then re-insert.
 
So I'm thinking I'll just yank one drive out, but then what happens if I replace that disk with all the data still on it? Does the rebuild check what's on the disk first, or just treat it like a new blank disk and rebuild it anyway? I was thinking it might be more realistic to pull it, format it, and then re-insert.
I've seen it go both ways (depends on the specifics, particularly the procedures used if there's nothing wrong with the disks <seen this in real-world conditions due to time-out errors>).

If the same disk that was pulled is returned to the same location/port without any other disks pulled/issues, then it should skip a rebuild and the volume restored to Normal operation (usually within 2 - 3 seconds or less).

Where I've seen it go into a full rebuild, is when things are put back together in the wrong order (i.e. didn't follow the logs in reverse).

Testing this way is still useful IMO however (or wiping the disk first), as it will give you real-world information on the actual rebuild time of your volume (takes the card, disks, and configuration into account).

For example, as it's a RAID 6, pull 2 disks (degraded at its fault tolerance limit, but the data is still there), and restore them to their proper places in reverse order. The volume should come back to Normal without a full rebuild taking place. Then do it again, but flip the disk locations and put them back in the wrong order (this should force a full rebuild). Or you can wipe them if you prefer.

Either way, you'll get to see the differences as well as get an idea of rebuild times. ;)

BTW, another thing you can do during these tests is run AJA for performance testing (see first hand what your volume will do performance-wise in a degraded state).

Good luck, and have fun. :eek: ;) :p

From my results RAID3 is much faster than RAID5.
Mind posting some data, preferably screen shots of AJA?
 
Pulled one disk for two minutes. It started beeping continuously a few seconds after I pulled it, and events showed "volume degraded," "raidset degraded" and "device removed." I reinserted it those two minutes later, and it said "device inserted," "rebuilding raidset" and "start rebuilding." 17 minutes later, it's at 3.3%, so it looks like it's doing a full rebuild.

Ran an AJA test while it's rebuilding:
[Screenshot: R6-rebuilding-1-disk.png]


It's interesting to see how fast it still is. Should try with cache on...

----------

Pretty much as I suspected! That isn't too bad. :)

After this is rebuilt, I'll do the same with two drives pulled, and then rebuilding.

[Screenshot: R6-rebuilding-1-disk-cache-on.png]
 
Pulled one disk for two minutes. It started beeping continuously a few seconds after I pulled it, and events showed "volume degraded," "raidset degraded" and "device removed." I reinserted it those two minutes later, and it said "device inserted," "rebuilding raidset" and "start rebuilding." 17 minutes later, it's at 3.3%, so it looks like it's doing a full rebuild.
This is usually the case, but I've seen Arecas skip it (12x1ML and 1680 series; I can't recall if I used a hidden restore command, though).

Just so you have it, here it is (case is important):
LeVeL2ReScUe
reboot
SIGNAT
Keep this around somewhere safe, as it has been able to save arrays that other controllers would have lost (Arecas keep a copy of the partition tables in their firmware, and this command sequence restores it to the volume, so if the data blocks are unchanged/undamaged, it can restore the set to Normal operation).

Ran an AJA test while it's rebuilding:

It's interesting to see how fast it still is. Should try with cache on

Pretty much as I suspected! That isn't too bad. :)
Not bad at all (I haven't gotten my hands on either the 1880 or 1882 series yet to test them myself).

After this is rebuilt, I'll do the same with two drives pulled, and then rebuilding.
Looking forward to the results. :)

Nice to have a willing guinea pig ... :eek: :D :p
 
Just so you have it, here it is (case is important):
LeVeL2ReScUe
reboot
SIGNAT
Keep this around somewhere safe, as it has been able to save arrays that other controllers would have lost (Arecas keep a copy of the partition tables in their firmware, and this command sequence restores it to the volume, so if the data blocks are unchanged/undamaged, it can restore the set to Normal operation).

Where would you enter that command if needed?

Rebuild just finished: 7hrs 58 min. Haha, think I'll just initialize and reload next time. Seems that foreground init is much faster than a rebuild in background, which is what I assume is the difference. My background priority was maxed at 80%, too.
 
Where would you enter that command if needed?
I can't recall for sure, but I'm assuming ARCHTTP doesn't allow command line input (haven't used a MP since 2008), so you'd need to install the CLI Utility if you haven't already.

You can get the latest copy from Areca's FTP site (this is where you'll need to go for specifics, including firmware so just look around when you need something ;)). Start with RaidCards, AP_Drivers, MacOS, Application, then select the right version of OS X, and you'll finally see a few selections, including CLI. ;)

It's really not that bad to find what you need. :)

Rebuild just finished: 7hrs 58 min. Haha, think I'll just initialize and reload next time. Seems that foreground init is much faster than a rebuild in background, which is what I assume is the difference. My background priority was maxed at 80%, too.
Actually, that's not horrible. But keep in mind that that wasn't a full volume; once you're over the 50% mark you hit the inner tracks, which slow things down (it gets slower the fuller the volume is).

Rebuilds involve more than the initialization process does, as the card has to read the parity blocks (from the "good" disks, i.e. those that were already in the set and still contain data), decode them, then write that data to the new disk(s). Online Expansion and Online Migration are even worse, as there's resizing and additional processing on top of that.
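As a concrete picture of the per-stripe work a rebuild does, here's a toy single-parity example (illustrative only; RAID 6 adds a second, Reed-Solomon style parity on top of this).

```python
# Toy single-parity reconstruction: the work a rebuild repeats for every
# stripe is "read the surviving blocks, XOR them together to recover the
# lost one, write it to the replacement disk".
import os
from functools import reduce

BLOCK = 64 * 1024
data_blocks = [os.urandom(BLOCK) for _ in range(6)]            # 6 data members
parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_blocks)

lost = data_blocks.pop(2)                                      # one member "fails"
recovered = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                   data_blocks + [parity])
assert recovered == lost                                       # stripe recovered
print("recovered", len(recovered), "bytes from parity + survivors")
```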

But rebuilds and Online Expansion/Migration do keep your existing data, and if the system cannot be taken down, they allow it to keep working in a degraded state until the process is complete and the volume is returned to Normal status.

This isn't an issue for you as a single user, so restoring from backups will usually be faster (unless your backups are run over a slow interface, such as USB 1.1 or something... :eek: :p).
 