OP, sorry to hear about your loss.
Two subsections:
1st off, HD failures:
Hard drive failures come in different guises, and the remedies are (likewise) different. Thus the first step (after noticing that something is wrong) is trying to ascertain what is wrong. Wikipedia's article is long-winded and lacks any how-to's, but may be worth a read. (
http://en.wikipedia.org/wiki/Hard_disk_drive_failure)
Basically: If your computer sees the drive (and reports its capacity correctly), and the drive is not making any frightening noises, the data might be recoverable using specialist software. If not, the problem is either mechanical (a head crash or some other mechanical failure) or the controller circuitry is shot. Unless you have a healthy disk from the same batch lying around, both cases call for a third-party data-recovery specialist. If your data is economically valuable or your business has insurance, that alternative is worth exploring.
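If you want a quick first check before reaching for recovery software, something along these lines can tell you whether the OS still sees the drive and what SMART thinks of it. This is only a rough sketch: it assumes a Mac with smartmontools installed, and the /dev/disk0 identifier is just an example.
[code]
# Rough first-pass drive health check -- assumes macOS with smartmontools
# installed; the disk identifier below is only an example.
import subprocess

DISK = "/dev/disk0"  # example identifier -- list your drives with `diskutil list`

# Does the OS see the drive at all, and does the reported size look sane?
info = subprocess.run(["diskutil", "info", DISK], capture_output=True, text=True)
print(info.stdout or "Drive not visible to the OS -- points to a mechanical/controller problem")

# Ask SMART for an overall health verdict (-H prints the health summary)
smart = subprocess.run(["smartctl", "-H", DISK], capture_output=True, text=True)
print(smart.stdout)
[/code]
If the drive shows up with the right capacity and SMART still says "PASSED", software recovery is worth a shot; anything else and I'd stop poking at it.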
2nd, backup & redundancy policies:
The thing with hard drives is that they mostly do not fail, but when they do, everybody's cursing. Back at the turn of the millennium I was working at a small IT company, and our main file server was based on a RAID 5 array of IBM GXP75s (see
http://hexus.net/tech/news/storage/209-ibm-gxp75-failures/ ). Man, was that a nightmare: we'd come into the office in the morning and every second day one of the GXPs had failed overnight, so we went through some 30 drives before moving to a totally new solution...
Anyway,
In my experience (both in offices and home offices), a three-tier solution offers quite a good balance.
1st tier: Local machine (workstations / file servers)
2nd tier: Local high-availability backup
3rd tier: Offsite backup.
Of course, if you're serious about your data you need an offsite backup. But whatever your offsite backup is, it will not be easy to access. That's where the second tier comes in: its purpose is to protect against machine disasters and drive failures as well as "dumb users". Thus it needs to be used every now and then and must be easy to access...
It also helps if the second tier is in itself both reliable and redundant.
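As an illustration, the second tier can be as simple as a nightly mirror to a local NAS or a second machine. Just a sketch, not a prescription: the paths and the "backupbox" host name below are made up, substitute your own.
[code]
# Minimal second-tier backup sketch: mirror a working directory to a
# local backup box over rsync. Paths and host name are hypothetical.
# Schedule it nightly via cron or launchd.
import subprocess, datetime

SOURCE = "/Users/me/Work/"                  # hypothetical working directory
DEST   = "backupbox:/Volumes/Backup/Work/"  # hypothetical local NAS / second Mac

# -a preserves permissions and timestamps, --delete keeps the mirror exact
result = subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST])
print(datetime.datetime.now(), "rsync exit code:", result.returncode)
[/code]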
I agree with d-m-a-x that, unless we're talking about data-center volumes, optical discs offer a viable route - they are cheap, easy to transport and store, and let you keep offsite backups in multiples (always burn two discs and store them in separate sets). The caveat is that without any "library management" (for lack of a better word), you'll be in a pinch when trying to ascertain which disc contains the most recent backups of your files.
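Even a very crude catalogue takes the sting out of that - e.g. a little script that appends each burned disc's file manifest (with modification dates) to a running text file you can grep later. A sketch only; the catalogue path and the disc mount point are just examples.
[code]
# Crude "library management" for optical backups: append a manifest of
# the freshly burned disc to a running catalogue file. The mount point
# and catalogue path are examples only.
import os, time

DISC_MOUNT = "/Volumes/BACKUP_2013_06"            # example: the disc you just burned
CATALOGUE  = os.path.expanduser("~/backup-catalogue.txt")

with open(CATALOGUE, "a") as cat:
    cat.write("=== %s (catalogued %s) ===\n" % (DISC_MOUNT, time.strftime("%Y-%m-%d")))
    for root, dirs, files in os.walk(DISC_MOUNT):
        for name in files:
            path  = os.path.join(root, name)
            mtime = time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(path)))
            cat.write("%s\t%s\n" % (mtime, path))
[/code]
Later, a plain "grep somefile ~/backup-catalogue.txt" tells you which disc has the newest copy.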
I recently posted a poll here, but did not get very many answers...
https://forums.macrumors.com/threads/1539302/
My main interest in posting the poll (which I did not want to mention there, as I was afraid it might be suggestive) is that, considering the amounts of data many of us are working with, internet-based daily backups are hitting the limits of feasibility (data throughput rates, especially on asymmetric lines).
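To put a rough number on it (the figures below are purely illustrative, not measurements):
[code]
# Back-of-the-envelope: how long does a full offsite upload take?
# The data volume and upstream speed are illustrative examples.
data_gb = 200.0        # data set to upload (GB)
upstream_mbit = 1.0    # typical ADSL upstream (Mbit/s)

seconds = data_gb * 8 * 1000 / upstream_mbit   # GB -> Gbit -> Mbit, then divide by rate
print("%.1f days" % (seconds / 86400.0))       # ~18.5 days for these numbers
[/code]
Two-plus weeks of saturating the upstream for a single full upload - daily online backups only work if the daily delta is small.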
RGDS,