I got all my stuff last week, but had no time to re-arrange all my hard drives and data since I was in the middle of a project; this week I will finally get to play with my toys! However, while going through some of my equipment I realized I have 4 1TB hard drives that I could use in the 4 extra slots in my RAID enclosure. They are not enterprise drives, though (the Samsung HD103UH drives I mentioned earlier plus another one I had somewhere else).

Is using these lower-level drives worth it? It would save me from transferring all the data off the enterprise drives that are currently in my Mac, and from copying my boot drive and replacing it. It would also be twice the size. Is it worth the time to copy everything and end up with less capacity in order to use enterprise drives?
 
Absolutely NOT, as the set is only as good as its weakest link.

They'd be fine for single disk, backup, or archival use, but not attached to a hardware RAID solution (they would introduce instability, which isn't something you'll ever want to deal with - it really is that much of a nightmare to experience).
 

Well, they would be on a separate set from my 4 new drives, but if 4 weak links are a nightmare I'll take the time to shuffle around all my data.
 
If you're running them as single disks on the controller, you may be able to get away with that (i.e. recovery timings won't be as critical for this particular usage). Might be worth a try anyway... ;)
 
Phase 1 Begins

So I have begun to set everything up.
The ARC card is installed, I have my 4 enterprise drives hooked up, and I have begun creating the RAID5.
I must say at first these instructions were confusing: I kept trying to find a way for Mac OS X to enter the McBIOS RAID manager, and installed rEFIt, and that did not work, but it seems all I needed to do was use the ArcHTTP proxy to set up everything. So here is the status as of now:

[attached screenshot]

Is this set up correctly? Or did I screw up one of these settings?

Also:

[attached screenshot]


The initialization seems to be taking an EXTREMELY long time; is this normal? I am used to software RAIDs, which are almost instantaneous in comparison. At this rate it will take several hours just to initialize, about 0.1% every 15-20 seconds. I also don't remember initialization on the Mac RAID Card taking this long.
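(For a rough sanity check on that rate: assuming the ~0.1% per 15-20 seconds pace holds for the whole pass, the arithmetic, sketched in Python, lands right around the 5-hour mark.)

    # Back-of-the-envelope initialization estimate, assuming the observed
    # pace of roughly 0.1% every 15-20 seconds holds for the whole pass.
    progress_per_step = 0.1             # percent per observed step
    seconds_per_step = (15 + 20) / 2    # midpoint of the observed range
    steps = 100 / progress_per_step     # 1000 steps to reach 100%
    print(f"~{steps * seconds_per_step / 3600:.1f} hours")   # ~4.9 hours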
 

That is normal initialization time. It took my RAID6 with 8x2TB disks just under 5 hours to complete.

Yeah, 5 hours seems about how long it's going to take. How come this takes so long, yet with other RAIDs I set up it took only several minutes?

----------

Another quick question: I purchased the CyberPower UPS. Is it safe to hook up my other surge protectors to the battery backup? I know you can overload power strips if you daisy-chain regular surge protectors together, but is the UPS more powerful?
 
Yeah, 5 hours seems about how long it's going to take. How come this takes so long, yet with other RAIDs I set up it took only several minutes?
There's more going on than in a software implementation, as the card actually has to format the set (it maps out the drives in order to create the primary data as well as the parity locations). BTW, this is a much lower level than a simple file system format (i.e. HFS+, NTFS, ...).
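To make the parity part concrete, here's a minimal toy sketch (plain XOR parity in Python, nothing Areca-specific) of what a RAID5 set computes for every stripe, which is why initialization has to touch the entire capacity:

    import functools, operator

    # Toy RAID5 stripe: 3 data blocks + 1 parity block (XOR of the data).
    data_blocks = [b"\x01\x02", b"\x10\x20", b"\x0a\x0b"]
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*data_blocks))

    # Any single lost block can be rebuilt by XOR-ing the survivors:
    survivors = data_blocks[1:] + [parity]
    rebuilt = bytes(functools.reduce(operator.xor, col) for col in zip(*survivors))
    assert rebuilt == data_blocks[0]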

And the larger the capacity, the longer it takes.

Another quick question: I purchased the CyberPower UPS. Is it safe to hook up my other surge protectors to the battery backup? I know you can overload power strips if you daisy-chain regular surge protectors together, but is the UPS more powerful?
Most UPS manufacturers state that you shouldn't use additional surge suppressors.

If you do, do not daisy-chain them, as there could be problems with insufficient current as a result (which is why they don't recommend using them).
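If you do want to sanity-check the load on the UPS itself, a quick budget like this is enough (the wattages and rating below are made-up placeholders; use the figures printed on your own gear and your specific CyberPower model's continuous watt rating):

    # Hypothetical load budget - all numbers here are placeholders, not real specs.
    ups_rated_watts = 600                 # placeholder continuous rating
    loads = {"Mac Pro": 300, "RAID enclosure": 120, "Display": 100}  # placeholders
    total = sum(loads.values())
    print(f"{total} W of {ups_rated_watts} W used ({total / ups_rated_watts:.0%})")
    # Leave plenty of headroom, and don't daisy-chain strips off the battery side.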
 
Results are in!

This thing is fast!

I am not sure whether having the system cache enabled or disabled is more accurate, but I assume with those giant read speeds, the numbers with it enabled are nonsense.

2GB

[attached screenshot]

[attached screenshot]



4GB
[attached screenshot]

[attached screenshot]


And this one caused AJA System Test to crash?
[attached screenshot]




So everything seems to be up and working. Thanks to everyone for all your help. When I switch out all my internal hard drives and pop them into the extra slots, I'll give you guys some more juicy speed tests!
 
I just see:
2GB
4GB
Are there test figures in between the spaces? I use the biggest/longest test possible with cache off so that I know what the sustained rate is. In reality the cache is helping those speeds, but if you read/write massive files like I often do in video editing, it's good to know what the worst-case speed will be.
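As a rough illustration of why the cache-off/big-file number is the one to trust, here's a minimal Python timing sketch (not AJA, just a stand-in; the volume path is hypothetical) comparing a buffered write against one forced to disk with fsync:

    import os, time

    def write_mb_per_s(path, size_mb=512, force_to_disk=False):
        """Write size_mb of data and return MB/s; optionally fsync so the
        OS write cache can't flatter the number."""
        chunk = os.urandom(1024 * 1024)
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(size_mb):
                f.write(chunk)
            if force_to_disk:
                f.flush()
                os.fsync(f.fileno())
        elapsed = time.time() - start
        os.remove(path)
        return size_mb / elapsed

    test_file = "/Volumes/RAID/aja_standin.tmp"   # hypothetical volume name
    print("cached   :", round(write_mb_per_s(test_file)), "MB/s")
    print("sustained:", round(write_mb_per_s(test_file, force_to_disk=True)), "MB/s")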

Glad you're running and happy so far!
 

They are embedded images; I can see them fine, that's weird. Try just viewing them:

https://forums.macrumors.com/attachment.php?attachmentid=321216&stc=1&d=1327354113
https://forums.macrumors.com/attachment.php?attachmentid=321217&stc=1&d=1327354113
https://forums.macrumors.com/attachment.php?attachmentid=321218&stc=1&d=1327354113
https://forums.macrumors.com/attachment.php?attachmentid=321219&stc=1&d=1327354113
https://forums.macrumors.com/attachment.php?attachmentid=321220&stc=1&d=1327354113
I tried running with 8GB:
412.8 MB/s write
383.7 MB/s read
However, the program froze after it reached the end.
 
Still nothing.
No such Attachment. In most cases this means that you followed an out-of-date link to a forum thread that has been merged into another thread or removed from the forums.

If you were following a link in search results, try another search. If you were following a link on a MacRumors news page, the out-of-date link will be removed when the page cache is next refreshed. In either case, the thread to which you were linking is no longer available.

The speeds look good/normal to me for 4 disks! Which program were you testing with? I have AJA System test v9.0.1. I know a guy who just downloaded it, and he said he has v6.0.1 for some reason.
 

It's AJA System Test version 8.0.
It works fine, but for some reason when I use larger files it hangs for a while at the end, like it's thinking about making that graph or something.

Okay, let's try this again:

2GB
[attached screenshot]

4GB
[attached screenshot]

8GB
[attached screenshot]

My old 3-drive RAID0
[attached screenshot]



Great success! I will be wiping my internal drive and my current boot drive, putting them into the 4 open slots in my enclosure and creating another RAID5 for backup, then sticking my non-enterprise drives in the internal bays and ditching the old Mac RAID Card. Just backing everything up now!
 

So here is a list of stupid things I did today.


This morning everything was working fine.
I had the main RAID5 ready with everything backed up on it, a backup of my boot disk on the FireWire drive, and one internal HDD ready to transfer my boot drive onto and use as my boot disk.

Then, since I didn't need the Apple RAID card anymore, I tried taking it out. It looked to me like the wires from the HDDs went directly into the RAID card, but I said screw it and took it out anyway. I figured the worst that could happen is that it would not recognize the hard drives, and I would just put it back and nothing would go wrong.

Well, first off, it is very apparent that however this system was set up at the factory, the Apple RAID card has a cable plugged into it that goes to the internal HDDs through some sort of SAS system. I was told I could take that out and sell it? How could I do that? It seems like it would be needlessly complicated to get rid of this card; it was a pain to remove and then insert again as it is, and I did not see anywhere on the motherboard to connect the one cable that went into the RAID card that all the hard drives ran through.

So anyway, while I was doing all this, I for some reason thought it was a good idea to start creating the second RAID5 array in my enclosure. Although I don't think doing this directly caused any problems, I think the extra strain on the system might have made things worse.

So anyway, after putting the RAID card back in and trying to boot, I ended up with numerous problems. First off, it seems that the drive I had installed internally was now dead, giving me my first run of boot problems: from not booting, to booting off the FireWire drive, then having things freeze and crash without being able to force quit anything, then having to hard shut down. Eventually I got another hard drive and put that inside, and tried running a clean install of Mac OS X on it just in case; however, the install disk said I could not install it on this computer. After troubleshooting for a while, I have managed to get the system running stable now. I have a carbon copy of my FireWire drive copying to my internal drive right now so I can boot from there. And I canceled the initialization of my second RAID5 array; I think that may have helped.

So what could have caused these problems? And is it worth scrapping this RAID card? Or should I just leave it in, run another RAID5 array with 4 1TB non-enterprise drives, and use that as a boot drive?
 
That sounds like a mess.

My Mac Pro came with the Apple RAID card, and I pulled it out and sold it. It's only useful for RAID5 internally. It does nothing for RAID 1 or 0.

I have a 2009, so no wires were connected... just the PCI slot. I really don't know much about your 2008 model as far as reattaching that SAS cable you speak of, since you didn't get the 1880ix, which offers internal SAS connections.

As for your boot problems, did you try to reboot while it was building the second RAID5? I'm not clear what you did there.
 
I had the main RAID5 ready with everything backed up on it...
I presume the backup was on a different volume, and attached to a separate controller.

If not, you'll need to rectify this in order to be sure your backup is available to you when it's needed.

Then, since I didn't need the Apple RAID card anymore, I tried taking it out. It looked to me like the wires from the HDDs went directly into the RAID card, but I said screw it and took it out anyway. I figured the worst that could happen is that it would not recognize the hard drives, and I would just put it back and nothing would go wrong.
Where was the boot volume located (what controller)?

I ask, as the HDD's are connected via cables in 2006-08 systems, and that cable is either connected to the logic board (most cases), or to the Apple RAID Pro when that was installed (either by Apple or by the user). That cable handles all 4x HDD bays, so it's all or nothing (can't have 3x on the card, and 1x on the logic board's SATA controller via the HDD bays).

So unless the boot disk was plugged into the logic board (i.e. boot volume located in the 2nd optical bay), the system couldn't find it once you moved the cable (SFF-8087 end, aka internal MiniSAS).

Well, first off, it is very apparent that however this system was set up at the factory, the Apple RAID card has a cable plugged into it that goes to the internal HDDs through some sort of SAS system.
See above as per the physical connections and their locations.

As per the drive controllers, the logic board only contains a SATA controller (part of the chipset/s, depending on the specific design; for 2006-08 systems, it's located in the Southbridge).

It's the Apple RAID Pro that contains the SAS controller, which can run both SAS and SATA disks (not just Apple's card; this is the norm for all SAS controllers = all SAS controllers can operate SATA disks, but SATA controllers cannot operate SAS disks).

I was told I could take that out and sell it?
Yes you can.

How could I do that?
You have to change the boot volume location before you can remove it, or your system will no longer be able to boot.

The easiest way to do this, is with an additional drive.
  1. Clone the boot volume to another drive (internal or external, so long as the controller it's attached to is bootable).
  2. Go into Disk Utility, and set the new clone as the boot volume.
  3. Shutdown.
  4. Install a disk in the location for the permanent boot volume (i.e. empty optical bay).
  5. Restart.
  6. Clone the new disk from the current boot volume (if it's the original OS disk, you can skip this, as it's just a physical relocation).
  7. Set the new disk (final location of the boot disk) as the boot volume.
Now you'll be able to remove the Apple RAID Pro, and do whatever you wish with it.
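Before pulling the card, it's worth double-checking where the system actually thinks the boot volume lives. A small sketch along these lines (it just shells out to the stock bless and diskutil tools; the output and device names will differ on your machine) can confirm the boot device is no longer a disk behind the Apple card:

    import subprocess

    # Ask OS X which device it is currently set to boot from...
    boot_dev = subprocess.check_output(["bless", "--getBoot"]).decode().strip()
    print("Boot device:", boot_dev)

    # ...then look up which volume that device actually is.
    print(subprocess.check_output(["diskutil", "info", boot_dev]).decode())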

BTW:
FW800 booting isn't always supported, so IMO, it's best to stick with USB for an external drive in this case (good to keep an external clone anyway, as it makes repairing/restoring boot disk faults both faster and easier). Not so much of an issue with OS X, but if you're running Windows 7, for example, FW800 is no longer supported as a boot volume. YMMV.

It seems like it would be needlessly complicated to get rid of this card; it was a pain to remove and then insert again as it is, and I did not see anywhere on the motherboard to connect the one cable that went into the RAID card that all the hard drives ran through.
Upper left corner area on the logic board.

Replacing it there will re-attach the HDD bays to the SATA controller in the MP itself.

If you want the HDD bays on the new card, you'll have to connect it to the Areca.

So anyway, while I was doing all this, I for some reason thought it was a good idea to start creating the second RAID5 array in my enclosure. Although I don't think doing this directly caused any problems, I think the extra strain on the system might have made things worse.
If I understand you correctly, the fact that you used consumer grade HDDs is what caused your system to go weird on you...

This is why it's critical that you use Enterprise grade HDDs on a hardware RAID controller. It's not an option. So don't be a fool to save a few bucks; get the right disks to start with and avoid the hell that consumer disks bring to RAID cards.

So what could have caused these problems?
You didn't change the boot location first, and it was further complicated by using consumer grade disks.

Is it worth scrapping this RAID card? Or should I just leave it in, run another RAID5 array with 4 1TB non-enterprise drives, and use that as a boot drive?
No, the card is fine.

Do not use consumer grade drives. I can't stress this enough. The recovery timings programmed into their firmware are not compatible with RAID cards. It's as simple as that, and ignoring this information causes all kinds of aggravation and lost time; and if there isn't a sufficient backup system in place, it can cost you your critical data as well.
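For anyone who wants to check this on their own drives: smartmontools can report whether a disk exposes a bounded error-recovery timeout (SCT ERC), which is the recovery-timing setting in question. A hedged sketch, assuming smartctl is installed and substituting your own device path:

    import subprocess

    # Enterprise/RAID-rated drives usually report a bounded ERC value (e.g. 7.0 s);
    # many consumer drives report it as unsupported or disabled.
    device = "/dev/disk2"   # hypothetical - substitute your actual device
    result = subprocess.run(["smartctl", "-l", "scterc", device],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)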
 
That sounds like a mess.

My Mac Pro came with the Apple RAID card, and I pulled it out and sold it. It's only useful for RAID5 internally. It does nothing for RAID 1 or 0.

I have a 2009, so no wires were connected... just the PCI slot. I really don't know much about your 2008 model as far as reattaching that SAS cable you speak of, since you didn't get the 1880ix, which offers internal SAS connections.

As for your boot problems, did you try to reboot while it was building the second RAID5? I'm not clear what you did there.

The SAS cable is so short that I would not be able to connect it to any other slot except the top one anyway.

and, yes I am pretty sure I rebooted while the RAID5 was building, because a bunch of programs froze and would not force quit.
 
I presume the backup was on a different volume, and attached to a separate controller.

If not, you'll need to rectify this in order to be sure your backup is available to you when it's needed.

I have multiple copies of everything right now. I was planning on having this second RAID5 as a backup, in the same enclosure; would that be foolish, as the two RAIDs would use the same controller?

Where was the boot volume located (what controller)?
Originally it was on the Apple RAID card. I copied that boot drive onto a FW drive, and was then planning on copying that onto a drive attached to the logic board; however, that was problematic, so it's back running off the Apple RAID.

I ask, as the HDD's are connected via cables in 2006-08 systems, and that cable is either connected to the logic board (most cases), or to the Apple RAID Pro when that was installed (either by Apple or by the user). That cable handles all 4x HDD bays, so it's all or nothing (can't have 3x on the card, and 1x on the logic board's SATA controller via the HDD bays).

So unless the boot disk was plugged into the logic board (i.e. boot volume located in the 2nd optical bay), the system couldn't find it once you moved the cable (SFF-8087 end, aka internal MiniSAS).

As far as I can access, there is only one plug I can currently reach from the hard drives, and it's SAS; getting this hooked up to the SATA controller on the logic board seems not worth my time at this point.
[attached screenshot]

The top left of the logic board is covered by this giant plastic encasing for the case fan that I was not able to remove (although I am sure it's possible), and the SAS cable is so tight I could barely move it, making removing and reattaching the Apple RAID card difficult.

BTW:
FW800 booting isn't always supported, so IMO, it's best to stick with USB for an external drive in this case (good to keep an external clone anyway, as it makes repairing/restoring boot disk faults both faster and easier). Not so much of an issue with OS X, but if you're running Windows 7 or example, FW800 is no longer supported as a boot volume. YMMV.

That's interesting; the drive is quad-interface, so I will run it through USB then. However, I have a hard time thinking the Mac will ever give a problem with FW800 until Thunderbolt is perfected.


Upper left corner area on the logic board.

Replacing it there will re-attach the HDD bays to the SATA controller in the MP itself.

I really cannot do this with the way the Mac Pro case is put together.


If you want the HDD bays on the new card, you'll have to connect it to the Areca.
I did not get the Areca card that allows this, as I was not planning on doing it. Mine just has the two external ports.

If I understand you correctly, the fact you used consumer grade HDD's is what caused your system to go weird on you....

This is why it's critical that you use Enterprise Grade HDD"s on a hardware RAID controller. It's not an option. So don't be a fool to save a few bucks; get the right disks to start with and avoid the hell that consumer disks bring to RAID cards.

I kept all my enterprise drives in my external enclosure, and I was not running RAID on my Apple RAID card, just JBOD, so I didn't think this would be an issue. I was planning on not running through my RAID card at all and using the logic board, which is why I was going to fill my internal slots with consumer-rated drives, and have one be the boot drive and the other three in a software RAID0.



You didn't change the boot location first, and it was further complicated by using consumer grade disks.
I had changed the boot location, I was running off the external.

No, the card is fine.

Do not use consumer grade drives. I can't stress this enough. The recovery timings programmed in their firmware is not compatible with RAID cards. It's as simple as that, and ignoring this information causes all kinds of aggravation and lost time. And if there isn't a sufficient backup system in place, your critical data as well.

When I said get rid of the card, I meant the Apple card; the Areca seems to be fine. The only reason I was using consumer level drives is because I was not planning on using a hardware RAID card with them. However, although it's possible to run the drives off the logic board, it does not seem to be worth my time, as I have other work I need to get to and I need my computer functional and stable. Also, currently my boot drive is consumer grade and the r/w speeds are terrible: 77MB/s read, 75MB/s write. Wth, those are FW speeds? It ran faster when I had this drive in an external eSATA enclosure; that doesn't make sense, or is that just how bad this Apple RAID card is?
Anyway:
What I am probably going to do now is just put everything back to how it was before it all started to go bad: 4 drives in the external SAS enclosure via the Areca, and 4 drives on the Apple RAID (1 boot, 3 in a RAID; maybe I'll do RAID5 instead of 0 this time), and then my external eSATA drives. I should have enough storage for now, and I will have room to buy more WD2003FYYS's when prices drop and I need more room. Also, I know I asked this before at one time: let's say in a year I buy two more 2TB drives, am I better off using the "expand raid set" function in the Areca RAID Storage Manager, or should I copy everything to another drive, then delete the 4-drive array and rebuild it as a six?
 
the SAS cable is so short that I would not be able to connect it to any other slot except the top one anyway.
Yeah, this has been a problem before as well, and Maxupgrades has an adapter for that too (here).

The number of adapters needed, and the need for more externals than I would have needed with another case (more bays), was the cost factor that played into me not keeping a 2008.

and, yes I am pretty sure I rebooted while the RAID5 was building, because a bunch of programs froze and would not force quit.
Ouch. That didn't help matters either.

On the plus side though, the card will pick back up where it left off, assuming the boot process is completed properly (initialization, recovery, online expansion, and online migration).

I have multiple copies of everything right now. I was planning on having this second RAID5 as a backup, in the same enclosure; would that be foolish, as the two RAIDs would use the same controller?
No, it's not the worst way to go in your situation, as a blown card is very rare. It always comes down to mitigating failure conditions in regard to usage requirements and costs, as there's no such thing as a fool-proof/100% covered storage system, no matter how well it is put together.

But having it on a separate controller (i.e. an eSATA + Port Multiplier based enclosure) is more cost effective. The reason is that not only are the card and enclosure cheaper, but you can use consumer grade drives, and Greens on top of that (best cost/GB ratio available). It just happens to cover you in the event of a dead RAID card. ;)

Where this differs of course, is with very critical data, such as what would require a SAN (i.e. think banking records, gov. records such as IRS data,...). These types of setups incorporate redundancy on the computers, installed cards, how the arrays are configured, networking gear, ...

Originally it was on the Apple RAID card. I copied that boot drive onto a FW drive, and was then planning on copying that onto a drive attached to the logic board; however, that was problematic, so it's back running off the Apple RAID.
There is a way to connect the OS drive to the logic board without using the MiniSAS (SFF-8087) connector. You'll physically have to install the boot drive elsewhere, and the empty ODD bay (optical disk drive) is the easiest place to do this.

As per cables, a Molex to SATA power cable will handle supplying the juice, and a standard SATA data cable is used to connect to one of the ODD_SATA ports found on the logic board (upper left IIRC, near the fan; they're a bit buried, but they're there). The only caveat that might exist is that you may not be able to boot any other OS from that port (i.e. 2008's won't boot Windows without a hack). Routing isn't exactly fun according to those that have done it, but it's possible.

It's in here somewhere, and it has pics on where the ports are located and how they routed the data cable. So it would be worth the time to search (rather old, so you'll need to be willing to put in some time to find it - could be more than an hour just to locate it).

As far as I can access...
There are things that can be done, which are listed above.

BTW, the image link isn't showing anything.

That's interesting; the drive is quad-interface, so I will run it through USB then. However, I have a hard time thinking the Mac will ever give a problem with FW800 until Thunderbolt is perfected.
No way to be sure with Apple, but in general, USB is the safer way to go when working on multiple, independent systems (i.e. on different, unconnected networks or stand-alone systems).

Cost wise, it's also inexpensive, and a good way to go for independents/SMB's (and I stress small) that don't have the networking infrastructure or require an OS image that will be used on multiple machines.

FW800 isn't the fastest thing out there, so ~80MB/s is realistic (theoretically it would be 100MB/s sustainable, but there's latency to deal with that reduces the figure in the real world).

Or did you mean this is all you're getting off of the Apple RAID Pro?

At any rate, it's possible to remove the Apple RAID Pro altogether (it really is that bad), and I'd recommend doing so to regain a PCIe slot.

For example, the following slot config is both possible and desirable if you've only a single GPU card:
  • Slot 1 = GPU
  • Slot 2 = Areca RAID Card
  • Slot 3 = eSATA card for a backup system
  • Slot 4 = empty
I kept all my enterprise drives in my external enclosure, and I was not running RAID on my Apple RAID card, just JBOD, so I didn't think this would be an issue. I was planning on not running through my RAID card at all and using the logic board, which is why I was going to fill my internal slots with consumer-rated drives, and have one be the boot drive and the other three in a software RAID0.
If consumer drives are connected to the logic board, that's fine. But they don't do well on a proper hardware RAID card such as an Areca (this is why you do not find consumer disk P/N's on the HDD Compatibility List on their support site).

You'll be able to get the HDD bay cable back on the logic board (it was designed to fit). Perhaps not the easiest task, but it's possible, and worth the effort in order to get the configuration you're after.

Most users, however, attach the internal HDD bays to the RAID card, as it's lower cost than an external solution for every single disk attached to the card (i.e. even with say a 12 port card, 4x are internal in the MP, and up to 8x in an external enclosure).

Look at it this way: an internal adapter is ~$130USD or so, while a 4 bay enclosure with an SFF-8088 port on the rear goes for $300USD or so. Even if an internal to external cable is needed ($60USD), it's still a savings of $110, which on a tight budget is particularly attractive. ;)
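The arithmetic, spelled out with the same rough figures:

    internal_route = 130 + 60   # adapter + internal-to-external cable, USD
    external_route = 300        # 4-bay SFF-8088 enclosure, USD
    print("Savings going internal:", external_route - internal_route, "USD")  # 110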

I had changed the boot location, I was running off the external.
Assuming the settings had taken properly (I'll assume for the moment this is the case, and not lost due to a weak CMOS battery in the MP), the consumer drives could easily have caused all that aggravation/mess.

When I said get rid of the card, I meant the Apple card; the Areca seems to be fine.
Ah, OK. Your post was a bit confusing (this one is too), so I'm wading through them as best as I can.

RAID is a difficult undertaking, and the details are critical. So a misunderstanding is quite an easy occurrence, unfortunately. This is why either email or phone support can be so arduous.

The only reason I was using consumer level drives is because I was not planning on using a hardware raid card with them.
This is fine.

But I'm under the impression they ended up connected to the Areca, which would easily cause the mess you encountered (seen this before with those that didn't know any better, or worse, ignored the advice given prior to their equipment purchases).

There are other things that could have caused it as well, such as a duplicated LUN, but the information you've made available lends me to think along the lines that the consumer disks somehow ended up attached to the Areca (and they'll do this even if not in a RAID set).

BTW, in the case of a RAID card, you won't be able to run JBOD and RAID simultaneously. If you dig, it's in one of the card's settings, so it's one or the other, but never both. A second card is needed for that, or you can run it from Disk Utility for a lower cost solution (which is fine, and is a nice way to save on funds too). Great for backups when a single volume is desired with less risk than RAID 0 (JBOD's risk level is that of a single disk).

However, although it's possible to run the drives off the logic board, it does not seem to be worth my time, as I have other work I need to get to and I need my computer functional and stable.
It won't take that long (an hour should be sufficient for the actual work). Figuring in research and really taking your time, you should still be able to get it all done in a day (physical installation + initialization processes + at least beginning the data restoration).

Am I better off using the "expand raid set" function in the Areca RAID Storage Manager, or should I copy everything to another drive, then delete the 4-drive array and rebuild it as a six?
I'd recommend you always make a backup prior to changing the array in any manner.

Now once that's done, it's your choice if you want to use the Online Expansion method, or do it manually (there are advantages to both).

Online Expansion can allow you to access the data while the expansion process is underway, but it's slower as the card's processing is divided between both the expansion function and providing data simultaneously (also, make sure the settings will allow this, as Foreground under Initialization could prevent the system from gaining access to the data during the Expansion process).

Manually is faster, but you won't be able to access the data until it's completed.
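For planning purposes, the usable-capacity side of that expansion is simple arithmetic (RAID5 gives you n-1 drives' worth of space); a quick sketch, assuming 2TB drives like the WD2003FYYS:

    def raid5_usable_tb(drive_count, drive_tb=2.0):
        """Usable RAID5 capacity: one drive's worth of space goes to parity."""
        return (drive_count - 1) * drive_tb

    for n in (4, 6):
        tb = raid5_usable_tb(n)
        print(f"{n} x 2TB RAID5 -> {tb:.0f} TB usable (~{tb * 1e12 / 2**40:.2f} TiB)")

The ~5.46 TiB figure for the 4-drive case is what the OS reports for the current set.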
 
Computer Upgrade

So I have been given a 'newer' computer since an employee left my company, and I was planning on ditching my current computer, that this array is set up on, for the new one.
Right now I have a Mac Pro 3,1 with OS X 10.5, and this new one is a 4,1 with 10.6; obviously I would need to transfer this RAID array into the new system, which means the Areca card as well. Is this something simple? Can I just plop the RAID card into the newer computer and all my RAID info will be intact? Or do I need to back everything up and then recreate the RAID array?
 
Make a backup just to be safe, but yes, transferring the card and drives to the new system will work (you won't lose the data unless you make a mistake that causes it to be deleted).
 

Oh good, that should save me a lot of time.

On a side note, after filling the array to about 53% (2.92TB/5.46TB), there seems to be a significant slowdown in performance:

331 MB/s write / 289 MB/s read

compared to when I first set it up:

412 MB/s write / 383 MB/s read

Is that normal?
 

Yes. Once you're past the 50% mark on the platters, you'll slow down due to running on the inner tracks.

To get performance back up to where it was (or better), you either have to delete a bunch of stuff, or increase the array's usable capacity (add drives).
 
Well, the transfer went extremely smoothly, with no trouble at all!

50%? I could have sworn I read somewhere that hard drives start slowing down around 80%, and that by 90% you're at a crawl. It's like you need double the space of what you think you need...
 
The specifics matter (single disk, array, NAS, SAN, ...), and Yes, it slows down well before the 80 - 90% mark (the instant you exceed 50% it starts to slow down).

Most may not realize it though, as the storage system is far faster than they actually need for their usage pattern, so in such cases, it takes them longer to notice once past the 50% mark of used capacity per the set/disk.

In your case, you're using the storage system's throughputs, so you've noticed much sooner.

The double the capacity comment is more true than you may realize. Unfortunately, capacity in a proper RAID configuration isn't cheap, so most don't do this initially.
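A rough geometric model of the slowdown (idealized: constant bit density, data filling from the outer tracks inward; real drives are zoned, so treat the output as illustrative only):

    from math import sqrt

    # Sustained throughput scales roughly with track radius. Filling a fraction f
    # of the capacity from the outer edge puts the heads at radius
    # r = sqrt(r_out^2 - f * (r_out^2 - r_in^2)).
    r_out, r_in = 46.0, 20.0   # mm, ballpark figures for a 3.5" platter

    def relative_speed(fill_fraction):
        r = sqrt(r_out**2 - fill_fraction * (r_out**2 - r_in**2))
        return r / r_out

    for f in (0.0, 0.53, 0.80, 0.90):
        print(f"{f:.0%} full -> ~{relative_speed(f):.0%} of outer-track speed")

At 53% full this toy model lands at roughly three quarters of the outer-track speed, which is in the same ballpark as the drop reported above.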
 