As per the diagram, it's correct (hard to follow the wire colors as they bend around, but it looks right in terms of location as well as color). :)

What you're asking about (a backplane cable that splits into two separate connectors with 4 wires for power) does exist (here). It's not as clean IMO, but it's functional, and at least from this link, more expensive (I would have expected it to be cheaper, as they're used more often in PCs). If you look around, I suspect you'll be able to find it cheaper (just make sure there are more than 2 wires on the power connector - only get it if it has 4 power wires).

As per soldering, it's not hard, particularly soldering 2 wires together (buying a stick soldering iron, heat shrink tubing, and solder won't be that expensive; you should be able to manage it for under $20USD <example>, and it's handy for other projects). You can get a pack of heat shrink tubing from Walmart for under $5USD and stay around the $20USD mark (I assume you don't have any of the equipment or supplies, based on your aversion to soldering).
  • You don't have to solder, but it is the cleanest way to go about it (solid connections, and stays thin when bundled up together).
If you go with wire nuts or crimp connectors, stagger them if you can (you won't get "pregnant snake syndrome" when you bundle the wiring up if you do this).


This actually isn't an issue on any other system (you have access to the firmware during the boot process to change settings, such as setting the boot location), but it is on the MP, since Apple doesn't grant this form of access to its firmware settings.


I've not gone and taken a look at the OWC forums, but I'm wondering if it was for a 2009 system (where modification of the firmware would be necessary for any B1-stepped part).

It actually has been done with special equipment on a 2009 (it involves soldering equipment for SMT parts - specifically the Flash ROM that contains the firmware - and a Universal Programmer). I described the process some time back, and someone either followed it or already knew the process, and did it themselves. It works, but it's a lot of effort to do something any other system can do with a free firmware upgrade (:mad: at Apple for doing this).


Thank you! Your suggestions are awesome. I have just adjusted the stripe size to 128k, and performance with those 30MB digital camera RAW files (copying them in/out of the volume) is now better. :D As for the unexpected resetting of the Pass-Through disks, I will try again tonight outside office hours.

However, I wonder whether the hardware RAID card was a reasonable purchase, because the built-in software RAID in Snow Leopard performs even better, as measured by the AJA test software. Maybe it is because my Mac has a lot of CPU cores and RAM (I have a 12-core Westmere with 32GB RAM)? Is that why the RAID card doesn't seem to outperform the OS's software RAID 0 (the raw processing power of my Mac is already high even without a hardware RAID card)?
 
See how this works out with the Pass Through disks, and let me know if it persists (particularly if this happens without any power management features engaged, as it could be disk related - I'm assuming ATM, that all the disks are enterprise grade).

Consumer disks tend to be unstable, and this shows itself as something called drop-outs (the disk is available, then suddenly disappears).

However, I wonder a hardware RAID card is reasonable to buy after I have it. Because the build-in software RAID function in Snow Leopard is even perform better measured by AJA Testing Software. May be it is due to my Mac equips with a lot of CPU cores and RAMs? (12 cores westmere with 32GB RAM, I got). Is that the reason of why the RAID card is not seem so outperform comparing to the OS software-based RAID-0 (the original processing power of my Mac itself is too powerful even without a hardware RAID card)?
The number of cores doesn't matter. Software RAID implementations only use 1 core. But that CPU core is much faster (cache only applies to writes, not reads). The card uses a dual core ARM processor running at 800MHz (still fast). Despite all of this, the bottleneck is the actual drives, and a 2 disk stripe set is small (not much parallelism to speed things up).
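As a back-of-the-envelope check (illustrative numbers, not a measurement of your particular disks): a stripe set's sequential throughput tops out near the single-disk rate times the member count,

$$T_{stripe} \approx n \cdot T_{disk} = 2 \times 125\ \mathrm{MB/s} = 250\ \mathrm{MB/s}$$

before controller and filesystem overhead. A single fast CPU core keeps up with that easily, which is part of why the software RAID numbers look so good on your machine.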

The other thing to check is whether the card has the latest firmware, as well as the latest drivers (these can have a significant impact). BTW, when you expand the firmware (assuming it needs to be updated), you'll notice it's not one file but 4 (update all of them except BOOT.BIN if you're booting under OS X).
 

I disabled power management for the HDDs in 10.6's Preferences (no sleep), and no HDD power management item is enabled on Areca's configuration page. I shut down the box normally last night and went to bed 8 hours ago. I just woke up and turned on my Mac, and the problem remains: none of the Pass-Through disks are mounted. I checked the configuration again, and all Pass-Through disks have returned to the "Free pool" except one - the SSD! Let me explain the disk combination here:

My Mac Pro has the following disks installed, connected to an ARC-1880LP RAID card. The red disks returned to the Free pool unexpectedly and did not mount after the system was powered off for 8 hours :(:


The internal slots connect to the SFF-8087 internal SAS port on the ARC-1880LP; they are connected through Maxupgrade's Backplane Attachment Kit:

Slot 1 Pass Through - OCZ Vertex II 3.5" SSD 120GB x 1 (dedicated scratch disk for Photoshop and FCP)
Slot 2 Pass Through - Seagate ST350000644NS 2TB x 1 (Enterprise Grade HDD, supports 24 x 7, Firmware REV - SN11)
Slot 3, 4 RAID-0 - Seagate ST315000341AS 1.5TB x 2 (Desktop Grade HDD, does NOT support 24 x 7, Firmware REV - CC1H)



The ARC-1880LP external SAS port (SFF-8088) connects to Stardom's 4 x eSATA case (case model: ST5610-4S-S2):

Slot 5 Pass Through - Seagate ST350000644NS 2TB x 1 (Enterprise Grade HDD, supports 24 x 7, Firmware REV - SN11)
Slot 6 Pass Through - Seagate ST350000644NS 2TB x 1 (Enterprise Grade HDD, supports 24 x 7, Firmware REV - SN11)
Slot 7 Pass Through - Seagate ST350000644NS 2TB x 1 (Enterprise Grade HDD, supports 24 x 7, Firmware REV - SN12)
Slot 8 Pass Through - Seagate ST350000644NS 2TB x 1 (Enterprise Grade HDD, supports 24 x 7, Firmware REV - SN12)



Your guess is right! :eek: The problem only occurs on the Enterprise Grade HDDs (the 644NS). All the Desktop Grade HDDs (341AS), as well as the SSD, are working fine! What can I do about this?! Trash all the enterprise disks?? :confused: Oh NO! These enterprise disks (644NS) are expensive and very reliable; they can work 24 x 7 for 5 years, based on my past experience! I can't lose them!


I previously flashed the card with the Mac EFI BIOS, replacing the original factory one (the EFI BIOS comes on the driver CD). I will try restoring the full PC BIOS (all 4 images) later to see if the problem persists. Meanwhile, please kindly advise how to fix the "resetting" problem on the enterprise disks (644NS). Many thanks!!

NOTE: I will try forming the two 644NS disks in Slots 7 & 8 into a RAID-0 array tonight to see if they reset even as members of a RAID array. Then I guess we will know a bit more about the behavior of the controller with the 644NS enterprise disks. I will report the result tomorrow.

I hope Areca supports NOT ONLY CHEAP DISKS, but ENTERPRISE ones too!
 
First, the ST35000644NS does not exist, but the ST32000644NS does, and it was cleared by Areca with firmware rev. SN11 or higher (I presume these are the models you're actually running). ;) These sorts of details are critical, so don't think I'm being mean-spirited (wrong information tends to waste more time and generate additional aggravation when trying to figure out what's going on).

Now I see a significant issue with the information provided this time:
  • All the disks you're having an issue with are in the external enclosure.
So the cause is more likely between the external cable and enclosure, not the disks or the card.

The answers to the following questions will help significantly (must have these answers).
  • How long is the cable used to connect the card to the enclosure?
  • Can you verify the enclosure works on another card?
For the first question, the cable cannot exceed 1.0 meters (3.3ft). Any longer, and you get signal degradation that results in instability or they won't even mount (sound familiar?). If you're running a longer cable, then this is almost certainly the cause of your problem.

Testing the enclosure is a bit harder, but something else to try (more commonly done when, say, one or a few disks in an enclosure aren't working) is to pull and re-seat each disk in its bay. I doubt this will help in your case, as the problem is present on all disks in the unit. Ideally, you need to hook the enclosure to another computer (can be done with the external cable as well, to see if it's damaged internally).

What you can do with what you have:
Pull one of the enterprise Seagates, place it in an internal HDD bay, and see if it will mount (as the MP doesn't have an inrush current limiter, turn the system off first, then drop in the disk and reboot). This will tell you whether the disk is good or not (you can repeat with each drive, or toss in a pair at a time and make either a stripe set or a mirror). Up to you. But it will tell you if the disks are good or not (and I suspect they're fine).

Actually, you have this backwards. Consumer disks are where you'd expect the problems (though they're more likely to work under some levels, such as Pass Through or RAID 0 <non parity based levels>, which is why the SSD and the consumer disk stripe set are working for you).

You could have a bad batch of enterprise disks (assumes you ordered all of them from the same source), but this likely isn't the case (see above).

I don't think you need to bother with restoring the PC BIOS images, given the issues are with the external disks only.

From what you've revealed in your most recent post, that test won't change anything, as you've got a connection issue between the card and the enclosure (either the cable or the enclosure is the problem). I have seen bad enclosures, but it's usually limited to a single bay or so (a bad or bent connector inside that keeps the disk from making proper contact), and more importantly, it's rare. When an enclosure is bad, it's usually the PSU, though there is the super rare exception of a bad backplane board (the one with the SATA + power connectors in the enclosure = you can see it when the disk/tray is removed).

The cable length issue OTOH, is quite common with new users to hardware RAID systems (SATA is why the limit is 1.0 meters; SAS can go to 10 meters due to the higher signal voltages). And I do see bad cables from time to time, so that's not unheard of.

They do support enterprise disks.

That's why they publish the HDD Compatibility List for their products (saves a ton of headaches when you get a drive that won't work; I've run into this with early enterprise models as well - this is why such a resource is invaluable IMO).
 
NO NO NO man, you overlooked that Slot 2 is an Enterprise Grade 644NS (in red): a single internal enterprise Pass-Through disk installed in Slot 2 inside my Mac Pro, and it reset to the Free pool unexpectedly as well :mad:. I am sorry I gave you the wrong model number. You are right, it is the ST32000644NS! Sorry for my typo! Please re-investigate the problem, my Sherlock Holmes! :D

I do believe both the internal and external SAS cables are fine. The external one is a Highpoint SFF-8088 Mini-SAS to 4 x eSATA cable (1m, external), and the internal one comes with Maxupgrade's Backplane Attachment Kit. They are very high quality; I can feel it. Besides this, here are the profiles of the two kinds of HDDs, captured from the Areca configuration page (one working, one NOT working):


Seagate Desktop Grade HDD - ST31500341AS
This model is OK, with no problems. It doesn't reset and does a good job, but it does NOT support 24 x 7!

    341as.png



Seagate Enterprise Grade HDD - ST32000644NS
This model resets to the Free pool every time the Mac Pro is powered off overnight, no matter whether it is installed in an internal slot of the Mac Pro or in the external enclosure (Stardom ST5610-4S-S2, 4 independent eSATA ports). I have come to a conclusion: the more you pay, the more you lose :mad:. I now somewhat regret buying the hardware RAID controller. I should have trusted Apple and used their OS software RAID: it runs faster on a 12-core Mac Pro, has fewer problems, and is free.

    644ns.png
 
Maybe a bad batch of disks, as the SMART Spin-up Retries value has me concerned @ 100(97) on both of the enterprise Seagates you've posted screenshots of. Just barely in spec, and it shouldn't be anywhere near that high for new disks. BTW, I'm not a fan of Seagates these days (not since the 7200.11, and that issue also affected the enterprise models - it just wasn't as public; it's known as the "Boot of Death"). Not good these days at all IMO (QC sucks @ss). As a result, I've gone to WD for SATA disks and have had good luck (one failure since 2008, which is phenomenal).

BTW, I've run into too many issues with Highpoint's products as well (a cable should be the one thing they don't screw up, but as cheap as they are, it could be an issue). But given that one of the 644's is in an internal HDD bay, I'm thinking disks.

You could try the other disks in an internal bay (to see whether the issues persist), and even try a different bay (to make sure it's not a bad signal on the internal cable).

But screen shots of the card's settings may help as well (just in case there's a setting issue that causes some sort of conflict = instability).
 


Holmes... :( Thanks for giving me a hand with this. But obviously, it's not good enough.
We should not blame everything else (Highpoint & Seagate) for a single faulty controller in a single incident. I just got a reply from Areca support. They said it may be caused by the 644NS Enterprise Grade HDDs being "unavailable" before the controller started up, so the ARC-1880LP RAID controller removed the Pass-Through disk configuration automatically. But I also told them that the 341AS Desktop Grade HDDs don't have the problem. So I will try changing the staggered spin-up time from 0.7s to 0.4s in Areca's configuration page. I will report the result tomorrow anyway.

Note that only the 644NS is Enterprise Grade; the 341AS is NOT. And it is hard to believe that all 9 of my 644NS disks would have the same problem at once, because I purchased them in different months and years from 2009 - 2011 (firmware SN11 and SN12). Yet they all have the same problem.

And as for the SMART spin-up value you were concerned about: the green bar at the bottom of the screen capture says that "the larger the value, the better...".


For the Controller configurations you requested for your study in PM, here they are:

Areca ARC-1880LP (HDD Power Management)
hddpowermanagement.png


Areca ARC-1880LP (RAID Set Hierarchy)
raidsethierarchy.png


Areca ARC-1880LP (Raid Subsystem Information)
raidsubsysteminformatio.png
 
Please understand I do this for a living, and what I bill for this sort of work isn't cheap. So if you're not interested in the assistance, that's fine. But it's not exactly conducive to getting further assistance if you insult the person trying to help you. Especially when it's free.

That said, a bad RAID controller is extremely rare vs. disks and cables in my experience (in terms of hardware). I've never actually had a card from Areca go bad on me (or came out of the box that way), and I've used quite a few of them (thought I did once, and sent it in for repair/replacement to find out it wasn't bad at all - intermittent low voltage on the +3.3V regulator in the PSU was the cause of the issue, and took over a month before I was actually able to see it on a DMM).

As per brands, take a look at Highpoint's reputation with RAID products. It's horrible (support is worthless). Their hardware products are also uneven in terms of how well they work, as it's all done via an ODM model (multiple suppliers, so there's no commonality in functionality, QC, ...). I'd also encourage you to look at the "Boot of Death" issue that occurred with Seagate's enterprise disks I mentioned (enterprise disks aren't rebooted that often, so it didn't hit the numbers the consumer disks did, and didn't attract the public's attention in anywhere near the same manner). The QC of current disks isn't that wonderful either (the failure rate is ~31% for the 7200.12 series, and the enterprise disks are built off the same mechanics = doesn't instill much trust for me at least).

So if you'd dealt with this as often as I had (fixing client's issues with equipment they purchased on their own/designing new systems for them), you'd have an unpleasant opinion of gear that didn't work well. :rolleyes: Simply put, get burnt bad enough, and it causes one to look elsewhere for solutions.

As it happens, I've had great luck with Areca and ATTO for cards, and WD for enterprise SATA disks.

As per spin up times being a potential cause, that's why I asked for the card's settings. As I've said before, the smallest detail can have a drastic effect, even if you don't think it's important. This is why the support dept. requests as much information as they do.

Now in terms of a single controller/incident, I'd actually be surprised if the card is bad (if it were, nothing attached to it would work properly). So other possibilities need to be investigated (it may end up being just a setting, or it could be what I've mentioned already). But it takes time to figure this stuff out, especially if some of the pieces are missing. You'll need to exercise patience and put in the effort to try what's suggested (most of it is to rule things out, as there are a lot of variables with hardware RAID). Solutions don't usually come in a couple of minutes when you're dealing with a problem in RAID (more of a royal PITA, but once it's set up correctly, they tend to work without issue for years - just swap out disks and reconfigure as needed).
 
Sorry man, I really didn't mean to insult you. Your contribution is awesome and very outstanding :rolleyes:. Many thanks. I will contact official Areca support for further troubleshooting from here on, or trash the ARC-1880LP RAID controller if it can't be fixed. Please go on supporting other people! Your support effort and skill-set are highly appreciated, and you are the best I have ever seen in our world at doing this. You are a Pro, like Flynn in Tron: Legacy! :D

"The RAID, a digital frontier..... you tried to picture the RAID information moving across the computer. How do they look like? RAID chips, CPU?.... and then one day, you got in." (But I cannot get in at this time. I am just a user)

Relax, relax and relax! You are outstanding!
A thousand thanks! Sincerely! :eek:
 
I realize you're frustrated. Unfortunately, this is rather common for those who are moving from software-based RAID on the ICH (or a simple SATA controller, aka a non-RAID Host Bus Adapter) to a hardware RAID card, as there are more difficulties to deal with (far more settings, more detail in the hardware selection, specification limits such as cable lengths, ...).

As per helping, more information is needed, and is the case whether it's me, Areca, or anyone else (screen shots of every card setting you have). Easier to ask for them this way, so nothing's missed/forgotten (visual via screen shot is faster/easier than typing the information out IMO; reduces errors due to typos).

Also, are there any jumpers set on the drives?

And what browser are you using to change the settings on the card?
I ask here, as Safari is notorious for problems (changes don't actually take effect; this has particularly been noticed when attempting to flash the card, but other settings are affected as well). Which means, even if you've tried to adjust the Staggered Spinup time, it may not have taken (could still be at the factory default of 0.7 seconds). Firefox has been able to work properly on a MP.
 
Confirmed: shifting the spin-up time from 0.7s to 0.4s for the 644NS HDDs resolved the problem! I could see all the Seagate ST32000644NS Enterprise Grade HDDs this morning after the machine had been offline for 8 hours!! Thanks all for the support! :D

screenshot20110526at125.png
 
Good to hear you finally got it sorted. :)

Say Nano, can I clone the OSX HDD volume onto the blank SSD that's attached to the Areca controller in pass-through mode using any tools that come with OSX? Or must I use something like Carbon Copy Cloner, etc.?

I played around with Disk Utility; it allows me to create an image (MacintoshHD.dmg), but it won't let me restore it onto the blank 240GB SSD, saying something like it first needs to be scanned, and no file system is detected when I try to mount the 9.1GB image.
 
Give the 3rd party software a run.
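That said, if you're curious why the built-in route failed: Disk Utility (and the asr tool underneath it) won't restore an image that hasn't been scanned for restore first. A minimal sketch from Terminal (the image path and target volume name here are examples - substitute your own):

Code:
# scan the image so it's valid as a restore source
asr imagescan --source ~/Desktop/MacintoshHD.dmg
# restore onto the blank SSD (this erases the target volume)
sudo asr restore --source ~/Desktop/MacintoshHD.dmg --target /Volumes/UntitledSSD --erase

Even then, the 3rd party tools handle the bootable-clone details for you, which is why I'd still lean that way.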
 
Hi,

I recently mounted 4 x internal Seagate Enterprise Grade HDDs (Seagate ST2000NM0011 Constellation 2TB SATA 6Gb/s) in my Mac Pro, connected to an ARC-1880LP RAID card as RAID-5. They are generating a huge amount of heat inside the box, and I can feel the hot air coming out of the rear, as hot as the exhaust of a jet engine. I live in the South Asia Pacific region, and the weather is quite hot here (over 33°C every day). Is there any water cooling system that can cool down the 4 hard drives efficiently? :confused:
 
Such things do exist, though you'd have to put some time into planning it out (the best block that fits the MP's internals, where to run the lines, ...). They're not cheap either (say $70 per block <the most common price from what I'm seeing>, each good for 1 disk, then add in a pump, fittings, and hoses).

BTW, how hot are the disks currently running when under load?
I ask, as enterprise disks are designed to run 55C for sustained operation (24/7). Cooler is better of course, and I usually see low - mid 40's on avg. under load (WD RE series; ambient is lower though, 25C on avg., and a 7 degree difference can push the working temps ~+10C or so).

Some links that might be of use (US sources, but hopefully you can find them locally).
FrozenCPU
PerformancePCs
Tom's Hardware (thread from the forum)

Also, would you be willing to increase the fan speeds?
It's easy and cheap, but there's the trade-off in noise of course.
 
Thanks Nano.

Most of the ENT HDDs are at 47°C - 50°C as reported by SMART (ARC-1880LP configuration page). I also checked the websites you listed. However, none of the blocks would fit in a Mac Pro, because the gap between the HDDs is too narrow and the Mac Pro hard drives are installed "upside-down". So they can't be mounted on top of a drive, due to the special design of the Mac Pro HDD mounts.

My provisional solution is to remove the side cover of the case and blow air directly inside with a real fan (a 12-inch domestic fan). So my Mac is now actually running without a case cover ;). It is a terrifying solution, and it is dangerous, as hardware components like the RAID card, HDDs, and video card are exposed. But it works for the short term... until someone accidentally throws water into it....

I am also considering attaching an external USB fan (~3 inch diameter) to the Mac Pro front panel to increase the velocity of the air blowing in from the front. If you know of this kind of fan hardware being available, please kindly advise. :D

(Shipping from the USA is not a concern for me, as most of my Mac components - HDDs, RAID card, RAM modules, Blu-ray drive, etc. - are ordered from the OWC online shop and shipped via UPS or FedEx to my home in South Asia.) I am also a supporter of the USA.
 
The disks aren't at their operating temp limit, though I can understand the concern (you spent a small fortune and don't want to see your expensive equipment break). ;)

As per the disk water blocks, I only recalled that they existed and provided some links (sites I'm aware of for water cooling that came up in a search - I've used FrozenCPU and PerformancePCs for air cooling parts over the years and had good experiences with both).

The only real danger I can think of is that you've removed a baffle that helps direct airflow. So long as the temps of the other components (RAID processor, GPU, CPUs, ...) are within acceptable limits, there's no need to panic.

Dust will get in there anyway, so that's nothing new (really wish Apple would put dust filters on the intake fans). The occasional clean out will still be necessary (still have to do it when dust filters are present, but it's not as often).

As for the USB fan on the front panel, that's your choice. :D
:cool: Good to know shipping isn't an obstacle. :)

I wasn't sure if customs, shipping costs, and any other additional taxes made life difficult or not, and then there's the "I want it now" aspect to deal with as well... ;) I'm accustomed to international members preferring local sources for parts (VAT + shipping hurts European members for example, and for some, their orders never seem to arrive...).
 
Hi Nano,

I have a technical question about performance and how to allocate RAID drives (RAID logical volumes) to OSX Disk Utility.


For the best performance across all partitions, which one is right?

  1. 4 x 2TB HDDs form a RAID-5 disk group, split into 3 x RAID-5 logical volumes (2TB each). So partitioning is done at the HW-RAID level, and each logical volume has its own parity calculation.
  2. 4 x 2TB HDDs form a RAID-5 disk group, with all of its space (6TB) allocated to a single HW-RAID logical volume, which is then split by OSX Disk Utility into 3 x OSX-level logical partitions (2TB each). So only one parity calculation takes place for all 3 OSX-level logical partitions.


I am asking this question because option 1 shows a big performance drop on its 2nd and 3rd RAID logical volumes:

  • 1st HW RAID-5 volume = 400MB/s read/write speed;
  • 2nd HW RAID-5 volume = 300MB/s read/write speed;
  • 3rd HW RAID-5 volume = 200MB/s read/write speed.

Performance diminishes as more HW RAID volumes are allocated. I am now trying the option 2 solution to see if the partitions perform evenly when all 4 HDDs' space is allocated to a single HW RAID-5 volume and OSX formats it into multiple partitions. I am not sure whether this theory is right. :confused:
 
It's not really going to matter, because the second and third partitions live on the inner tracks of the disks = slower. It's not the parity calculations causing your performance drop on partitions 2 and 3.

Parity would kick in if enough volumes were used at once (simultaneous use), but that is by far secondary in your case. It comes down to the physics of drive construction (a single servo moves data for all partitions <affects simultaneous use>, and track length <circumference> determines how much data is moved per rotation).

The first partition starts at the outermost track and consumes the fastest 2TB worth of usable capacity of the unpartitioned volume. The second partition then picks up from there and consumes another 2TB's worth, leaving the third partition on the slowest tracks of the disks. Once you've passed the 50% mark, your throughput slows down (physics = less data per track due to the smaller circumference of those tracks).
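To put rough numbers on it (assuming approximately constant bit density across the platter): at a fixed spindle speed, the data passing under the heads per revolution scales with track circumference, so

$$T(r) \propto 2\pi r \qquad\Rightarrow\qquad \frac{T_{inner}}{T_{outer}} \approx \frac{r_{inner}}{r_{outer}} \approx 0.5$$

on a typical 3.5" platter - which lines up with the 400 to 200MB/s drop you measured between the first and third volumes.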

Solutions:
  1. You could mitigate this by adding more disks to the set (each volume will speed up due to the additional parallelism).
  2. Eliminate it altogether by using separate disks per volume (3 x separate RAID 5 sets with sufficient disk counts to meet your performance requirements).
Either of these means more money on disks, but it's the only solution if you can't live with the existing performance.

But if possible, you won't have to spend anything: combine all the data on the same volume and keep your used capacity to 50% or less of the volume's available capacity (if you can live with that).
 
Well, your advice is highly appreciated. :)
I tried the option 2 solution last night. You are right: even when the partitions are allocated by Disk Utility (OS-based partitioning) rather than by the HW RAID controller, the result remains the same. So I will keep using option 1 for all of my RAID volumes (12 x 2TB HDDs in 3 x RAID-5 disk groups, split into a total of 9 x 2TB HW RAID-5 logical volumes). Adding more disks to a disk group is not an option for me, because all of my external eSATA enclosures are 4-bay (all slots used up). :(

BTW, I purchased a set of new-generation Enterprise Grade 2TB SATA III 6Gbps HDDs (ST2000NM0011) from the USA last week. I see about a +20% performance gain compared to the old model (the 2TB SATA II 3Gbps ST32000644NS): new = 150MB/s, old = 125MB/s. The SATA III 6Gbps HDDs are not as fast as I expected...... :eek:
 
Spinning platters, so it's back to physics. To get mechanical throughputs higher, there are a couple of ways to go about it: either speed up the spindles (which is why you see 15k rpm SAS disks) or increase the platter density (ideally you'd combine both, but then you get into other issues, such as the heads not being able to read the data on dense platters if they spin past too quickly).

But at current platter densities for high-capacity SATA disks, the heads aren't sensitive enough to read them if the rotational speeds are increased, which is why they're currently limited to what you're seeing (IIRC, I've seen near the 150MB/s mark with SAS running at 15k rpm, but those platters aren't as dense - the highest-capacity 15k SAS disk is only 600GB right now; larger-capacity SAS versions run at 7200rpm as well, as they're using the same platters - really just different controller chips on the disk's PCB, while the mechanics remain the same as their SATA counterparts at this particular spindle rate).
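To see why the interface version barely matters for a mechanical disk, work out what a single revolution moves (using your own 150MB/s figure):

$$\frac{7200\ \mathrm{rpm}}{60} = 120\ \mathrm{rev/s} \qquad\Rightarrow\qquad \frac{150\ \mathrm{MB/s}}{120\ \mathrm{rev/s}} = 1.25\ \mathrm{MB\ per\ revolution}$$

So the only levers are more data per revolution (denser platters) or more revolutions per second (a faster spindle), and both run into the head-sensitivity limits above.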

Where these 15k disks can make sense, however, is in a relational database (banking, for example): massive random-access reads and writes with large capacity requirements, where SLC-based SSD's (or Flash drives at this point, actually) may be too expensive at the necessary capacity.

Thus SSD's are the only disk technology that can actually push throughputs near the limit of 6.0Gb/s interfaces ATM, and they'll likely remain the only tech that can do it (no one actually expects mechanical disks to push 6.0Gb/s, as a single drive can't even push 3.0Gb/s now <random access or sequential throughputs>).
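For reference, the usable ceilings of the links (SATA/SAS use 8b/10b encoding, so 10 bits travel on the wire for every data byte):

$$6.0\ \mathrm{Gb/s} \times \tfrac{8}{10} = 600\ \mathrm{MB/s} \qquad\qquad 3.0\ \mathrm{Gb/s} \times \tfrac{8}{10} = 300\ \mathrm{MB/s}$$

Your 150MB/s mechanical disks are sitting at exactly half the ceiling of even the older 3.0Gb/s link.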

If the disks are attached to the RAID card, they don't need to be in the same enclosure to build an array out of them. So long as the disks are available to the card and unused, they can be configured into whatever you wish, as long as there are enough members for the desired level.

BTW, eSATA?
The Areca doesn't use eSATA (connectors = MiniSAS). Or do you mean you're using enclosures with 4x eSATA slots (allows for 1:1 disk/port ratio) and use an external MiniSAS (SFF-8088) to 4x eSATA fanout cable?

I recall Highpoint sells these, and I recall you used Highpoint in the past (so I presume there's a good chance this is how you're running your disks). If you need more disks, then you will need a SAS Expander (that card can run up to 128 disks). They're not exactly cheap (not vs. a PM based enclosure), but they do exist and are available if you ever need to go this route.
 
Thanks Nano,

Do you have any preferred brands for external SAS expanders? :D My existing RAID card is the Areca ARC-1880LP 6Gbps. As an alternative option, what about adding an additional RAID card? I have some interest in the ARC-1880x RAID card (2 x SFF-8088 ports).
 
For ready made, I usually go for Sans Digital and Norco as they offer a lot of value for the money. But I can link a few...

Building is always an option as well, and it can usually be done cheaper vs. the bigger name brands such as Promise (not that hard; just need an enclosure, backplane bays, PSU, and a SAS Expander PCB).

In terms of using SAS Expanders, I tend to only use them when the disk count will exceed 24, as there are 24 port cards. Using 1:1 can be both cheaper and faster, particularly for DAS (disk counts tend not to exceed 24).

But another card is also a possibility.

One particular product can be invaluable (it's what makes a 24 port card work when you can't fit all the disks inside the system): an internal MiniSAS to external MiniSAS cable. This allows you to use internal ports with enclosures (whether they use SAS expanders or not). Just remember to keep the length to 1.0 meters with SATA disks, and do not use gender adapter brackets with SATA disks (the kind that installs in an empty PCI bracket), as they will be unstable with SATA (voltages get too low, and you'll end up with drop-outs, or the disks won't even show up).

Examples of ready-made SAS Expander Enclosures...
Sans Digital
Norco (ready-made 24 bay unit)
Promise
pc-pitstop.com
Netstore
iStoragePro (review; expensive though for an 8 bay unit @ $1595 MSRP)

Separate External SAS Expander (allows you to attach it to "dumb" non-Expander MiniSAS enclosures)
Areca ARC-8026

SAS Expander Boards (DIY route)
Chenbro 36 port Universal

When going the DIY route, I tend to go for Norco enclosures as they're an incredible value for what they are. Behind that, SuperMicro (more expensive, but highly customizable and built like tanks).
 
Give the 3rd party software a run.

A weird issue came up. Basically, I cloned OSX using Carbon Copy onto the Mercury 6G 240GB SSD that's running in pass-through mode off the Areca 1880ix-12, and it works fine; I have now removed the original 1TB HD that came with the Mac Pro. However, after installing Windows 7 Ultimate 64-bit on the 2nd SSD (same type) running off the onboard ICH/SATA2, it installed fine and I can boot into Windows 7 by holding the option key, etc. But for some odd reason, when I go to My Computer, the (C:\) drive shows an absurd 64GB of the 233GB available as used! When I browse C: and highlight all the folders etc., they take up only 11.4GB as they should, but looking at the drive properties, it states that 64GB is used! Dodgy SSD maybe?
 