I guess, but if you're sitting on top of a pile of drives, which almost anyone who upgrades to the nMP will be, doesn't it make sense to use the ones you have and then replace them sequentially as they fail? I mean, your data is protected as it's in RAID.
If they aren't "Enterprise" or "NAS" or "RAID Edition" drives designed for a RAID array, I would never use them in a RAID-5/6/50/60 array.
Desktop Drives Drop In and Out
Desktop vs. RAID Edition drives
Using desktop drives in a RAID environment can result in drives frequently being dropped out of the RAID group while they try to recover bad sectors during read operations.
Enterprise drives use special firmware for error recovery that limits the time spent attempting to recover a bad sector, since the other drives in the array are waiting during that process.
Desktop drives are designed for scenarios where there is no data redundancy and no other drives working as a group, so they spend additional time attempting to recover the data on their own.
http://kb.promise.com/Print10224.aspx
This isn't theory.
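If you want to see what a given drive's firmware actually does, smartmontools can query the SCT Error Recovery Control timers (the time-limited error recovery the KB article is describing; WD brands it TLER). A minimal sketch in Python, assuming smartctl is installed and using an example device path:

    import subprocess

    def erc_status(device):
        # Query the drive's SCT Error Recovery Control (read/write) timers
        out = subprocess.run(["smartctl", "-l", "scterc", device],
                             capture_output=True, text=True).stdout
        if "not supported" in out.lower():
            print(f"{device}: no SCT ERC -- desktop firmware, likely to get dropped from a RAID group")
        else:
            print(f"{device}:\n{out.strip()}")

    erc_status("/dev/sda")  # example path; on a Mac it would be /dev/disk2 or similar
    # Drives that do support it can usually be capped at 7 seconds with:
    #   smartctl -l scterc,70,70 /dev/sda
    # though on many models that setting does not survive a power cycle.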
Put another way, provided your drives are working today, wouldn't you rather save the $1,000 you'd spend on 5 x 4TB Seagate drives and put that money towards a higher spec nMP?
No, I wouldn't and I don't. Why spend all that money on a rig and cheap out on the storage and risk losing your data? Maybe use them for a big backup volume, but not for data that you wouldn't want to lose.
If your data is business critical, which it probably is for the intended user of the nMP, it will make sense to replace drives *before* they fail. Downtime is lost money.
Check SMART and chuck any that are ageing.
Good advice.
What is good software to thoroughly check a drive for bad blocks? Mac or Windows software.
"Hiren" has a freeware bootable CD with a number of Linux tools on it, including partition management software and a good S.M.A.R.T. utility.
http://www.hirensbootcd.org/download/
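If you'd rather script the check than boot a CD, smartmontools runs on OS X and Windows as well as Linux, and a few lines of Python on top of smartctl will flag the attributes that matter for bad blocks. A rough sketch, assuming smartmontools is installed and with example device paths:

    import subprocess

    # SMART attributes that indicate bad or suspect sectors
    WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

    def check(device):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            fields = line.split()
            # attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
            if len(fields) >= 10 and fields[1] in WATCH and fields[9].isdigit():
                if int(fields[9]) > 0:
                    print(f"{device}: {fields[1]} = {fields[9]} -- time to retire this drive")

    check("/dev/disk2")  # Mac example; on Linux/Windows it would be /dev/sda, /dev/sdb, ...

This only reads the drive's own error counters; for an actual surface scan you still want a dedicated tool like the ones on that CD.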
The fact that your data is in RAID means there is no downtime except for that which is associated with going to buy a new drive.
Wait until you lose 10 TB of data because a second drive failed during the rebuild - which is actually fairly common because the stress of a rebuild can put a weak drive over the edge.
Saying that "RAID means there is no downtime" is a complete misconception.
There's an adage that the paranoid wear both "belts and suspenders". With disks, you need to wear belts, suspenders and underwear - so that if two of the three fail your junk isn't waving in the breeze.
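If you want rough numbers on how exposed a degraded array really is, here's a back-of-the-envelope sketch in Python. The drive count, capacity, rebuild time and MTBF are made-up examples, the failure model is a simple exponential one, and the 1-per-10^14-bits URE figure is the typical desktop-drive spec; rebuild stress and correlated failures make the real odds worse than this.

    import math

    def p_second_drive_dies(surviving_drives, rebuild_hours, mtbf_hours):
        # chance that a whole second drive fails during the rebuild window
        return 1 - math.exp(-surviving_drives * rebuild_hours / mtbf_hours)

    def p_ure_during_rebuild(surviving_drives, drive_tb, ure_per_bits=1e14):
        # chance of hitting an unrecoverable read error while reading every
        # remaining sector -- which can kill a RAID-5 rebuild outright
        bits_read = surviving_drives * drive_tb * 1e12 * 8
        return 1 - math.exp(-bits_read / ure_per_bits)

    # Example: RAID-5 of six 4 TB desktop drives, one dead, 24-hour rebuild, 3-year MTBF
    print(f"second drive dies:  {p_second_drive_dies(5, 24, 3 * 365 * 24):.1%}")  # ~0.5%
    print(f"URE during rebuild: {p_ure_during_rebuild(5, 4):.0%}")                # ~80%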
__________
I'd keep older drives if *all* of the following are true:
- S.M.A.R.T. shows no re-allocated sectors or "pre-failure" or "old age" warnings
- The drives are all "RAID Edition" drives or the equivalent
- You set them up with RAID-6 or RAID-60
- You have hot spares that will automatically rebuild the moment a drive fails
Drive failure is not a probability, it is a certainty. Sometimes you're lucky, sometimes you're not. If you don't have hot spares, the "unprotected" window is "until you notice". If you don't use RAID-6/60, the chance of losing the array to multiple failures is too high, even with hot spares.
I have a system with 180 drives. If my drives have a 3-year MTBF, that works out to roughly five drive failures per month. And they do fail at about that rate - so I order spares by the 20-pack.
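The arithmetic behind that spares order is just fleet size divided by MTBF; in Python, with my numbers as the example:

    fleet_size = 180           # drives in the system
    mtbf_months = 3 * 12       # 3-year MTBF per drive, in months
    failures_per_month = fleet_size / mtbf_months
    print(failures_per_month)  # 5.0 -- about five drives a month, hence the 20-packs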