No, the original question was does the OP need backup software if they use RAID 1. To lose sight of that is to lose sight of the problem set.
No, the only difference is the price of the hardware and software and the level of risk one may be willing to assume because of a lack of trained staff. In essence, a backup solution is a backup solution- take your data and put it in more than one physical place- it doesn't matter whether you're selling planters or plutonium. In fact, you'll find that when IP is your core business, the criticality of safe data actually increases versus when hard goods are your core business.
No, the challenge is balancing resources against capability- with scant resources, capabilities are lesser, and finding that balance is a measure of risk acceptance.
I know plenty of folks who back up without content management as part of the backup solution- in point of fact, content management gets increasingly expensive to do as part of a backup process, and it brings additional complexity that may not matter overall if your storage structure already carries your content management stream (for instance, project names/dates are in your filesystem hierarchy). Also, if you use a CMS that simply references files in their native locations with their original names, then your backup solution need have no notion of any of it- you can simply restore your library, or a subdirectory of it, and the CMS will still reference the same location.
Your idea of content management may be "simple access to an archive," but to many people it means a great many more things, such as revision control, licensing, formatting, channelized sales streams, metadata control, workflow automation, access control, edition tracking...
If you wait to "finish" a project before you back up, then your data is at risk in the intervening period. For many content creators, this is an unacceptable risk. I'd argue that any studio workflow should start with a backup of any new files before post-processing. I've had customers lose "backup CDs" within an hour of getting them.
My own workflow doesn't have me reusing a memory card until the images on it are in at least two locations or three different devices in one location. For some that's overkill, but I have yet to lose an image since I implemented it.
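For what it's worth, that rule is easy to automate. Here's a minimal sketch of the check- the mount point, backup roots, and use of SHA-256 checksums are illustrative assumptions, not details from my actual setup:

```python
#!/usr/bin/env python3
"""Sketch of a "two copies before the card gets reused" check.

All paths below are hypothetical examples.
"""
import hashlib
from pathlib import Path

CARD = Path("/Volumes/EOS_CARD")              # hypothetical card mount point
BACKUPS = [Path("/Volumes/Primary/photos"),   # hypothetical backup roots
           Path("/Volumes/Secondary/photos")]

def sha256(path: Path) -> str:
    """Hash a file in chunks so large RAW files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copies_of(image: Path) -> int:
    """Count how many backup roots hold a byte-identical copy of this image."""
    digest = sha256(image)
    found = 0
    for root in BACKUPS:
        for candidate in root.rglob(image.name):
            if sha256(candidate) == digest:
                found += 1
                break  # one match per backup root is enough
    return found

def main() -> None:
    unsafe = [img for img in CARD.rglob("*")
              if img.is_file() and copies_of(img) < 2]
    if unsafe:
        print("Do NOT reformat the card; missing copies for:")
        for img in unsafe:
            print(" ", img)
    else:
        print("Every image exists in at least two backup locations.")

if __name__ == "__main__":
    main()
```

It only confirms that identical copies exist somewhere under each backup root; it says nothing about whether those roots are in different physical locations, which is the part that actually matters.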
There is a difference between an archive and a backup. If your data all lives in one place, you are taking a risk that's insurmountable if it materializes.
I'll happily disagree- but if you were one of the four small businesses who had RAID failures this year, you'd be one of the ones buying an actual backup system no matter what your size.
No, in that case it's not used for redundancy- you could use any disk system (or flash, or RAM for that matter); just because it goes on a RAID doesn't mean the Redundant part of RAID gets used, because the purpose is just to cache. Frankly, few of the large commercial and government clients I deal with do that anymore, as Fibre Channel from the backup system to a disk unit works just fine without slowing anything down- I'm installing one in a large government lab next month, and nobody wanted to go from the storage device to a secondary array and then off to tape with modern devices.
In any case, reading the data takes just about as long per element whether you're writing the results to tape or disk- and depending on your OS, filesystem, and data layout, going to a slower, less-buffered medium can actually give you a performance advantage: you'll have fewer track misses from the backup monopolizing the heads on the source drive when you're doing a lot of read seeks for small blocks of data. Non-server OS X/HFS+ is particularly bad for this in my experience- copying off to a RAID tends to bring things to a standstill because the copy gets all the attention.
You are assuming that all hard drive failures are hardware failures- that's not always true; filesystem failures still happen, even in today's world of journaled filesystems. In the case of a hardware failure that's not just the controller card, OnTrack tends to come in at several thousand dollars per drive for a clean-room recovery- and that's filesystem-accessible data only, not a forensically sound image from which to recover things like deleted or scrambled images. Obviously, this can be a cost covered by insurance- but it's not cheap at all, and you can actually end up with gigabytes less recovered than you started with (it's happened in three of my cases so far, and we generally don't have to deal with damaged hardware in discovery motions).
More importantly, a mirror is a single point of recovery (so long as it's a straight mirror, not a striped one). That generally means there's more chance of recovering more data after a failure, more easily and cheaply than trying to rebuild a striped RAID set (which often requires the same hardware, making it especially expensive in the event of a burst pipe or other "killed the controller too" types of events).
Actually, we would call off-site storage a best common practice in terms of data availability, business continuity, and lowest risk. Like most business decisions, you should really ask whether you can afford to deviate from a BCP, rather than the reverse.
Not to belabor a point, but your history is incorrect- RAID stands for Redundant Array of *Inexpensive* Disks. Disks in a RAID array were cheap compared to the cost of storage on mainframes and minicomputers of the day. Having bought disks from the old 2314 packs through to 3390s on mainframes, I can remember the ROI calculations being so far on the RAID side it wasn't funny- that's how Netware made it into the data center: buying the server, OS, network hardware, and RAID didn't come close to the expense of "real" disk on a mainframe or minicomputer.
While today's disks are better protected from shock, those days are hardly long gone. Laptops are subject to far more shocks than the computers of the past, and not all laptops have great drive protection (and folks like photographers who tend to do their own laptop hard drive upgrades rarely look for the expensive shock-protected drives).
I had one forensic case last year where drive damage was a factor and one side had to settle instead of fight because of it (laptop "accidentally dropped off a table" when running.)
Actually, it needn't be- you can pull a drive off a hot-swappable mirror set, put in a new drive, and let the mirror rebuild- no archive necessary. You're still at the mercy of a backup medium that's significantly more fragile than tape, but with fewer issues than, say, DVDs or even CDs (I wish I had a dollar for every Sharpie I've taken from a user[1]).
Actually, HFS+ is good up to 85% full (and yes, 5% of a terabyte is significant enough to mention). Ext3 is good to 90%, and UFS is also good to about 90%- but I'd caution anyone thinking of UFS not to use Apple's implementation, as it's not as robust as, say, the FreeBSD one, and a bad fsck can leave you having to reformat the partition to use it again.
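If you want to catch that kind of fill-level trouble before a backup run rather than after, a quick sanity check is easy to script. This is just a rough sketch using the thresholds above- the path argument and the threshold table are illustrative assumptions:

```python
#!/usr/bin/env python3
"""Pre-backup free-space check against rough per-filesystem fill limits."""
import shutil
import sys

# Fill levels beyond which the filesystems discussed above start to struggle.
MAX_FILL = {"hfs+": 0.85, "ext3": 0.90, "ufs": 0.90}

def check(path: str, fstype: str) -> bool:
    """Report how full the volume at `path` is and compare to its limit."""
    usage = shutil.disk_usage(path)
    fill = usage.used / usage.total
    limit = MAX_FILL.get(fstype.lower(), 0.85)  # be conservative if unknown
    print(f"{path}: {fill:.0%} full (limit for {fstype}: {limit:.0%})")
    return fill < limit

if __name__ == "__main__":
    # e.g. ./fill_check.py /Volumes/Backup hfs+
    ok = check(sys.argv[1], sys.argv[2])
    sys.exit(0 if ok else 1)
```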
Easy means just that- it speaks to operational simplicity, not to functionality or risk. But the term "need" is a poor choice, as backups and disaster recovery are really about risk avoidance, not necessity. You can run without backups just fine until you have a disaster- if you don't have an incident, then your "need" is zero until the point of loss, unless you have a regulatory or contractual obligation for one (I spent a number of years doing actual risk assessment and have one patent in risk measurement). At the point of loss, you have a need for the data, but having a backup may not even satisfy that need.
Also, what starts out as an "easy" solution may not stay one as the amount of data increases. For instance, pulling one drive of a mirror set and taking it off-site is "easy," whereas pulling 20 becomes more difficult. And as we move to solid-state devices, the point at which recovery is "easy" may depend a lot on what you've done in the interim to "update" your backups- for instance, if you're an old "sell 'em prints" style wedding photographer, a 20th anniversary may find you unable to mate an old SATA drive with Internet3-available remote storage.
It's ultimately all about risk- and I've seen enough RAID failures to consider it an unacceptable risk in terms of redundancy for my own data. Lots of people play the odds with security and redundancy- just because some folks win doesn't mean everyone will.
[1] Sharpie ink is acidic; those CDs you wrote on with one are likely to be unreadable within four to five years (two is the soonest I've seen degradation bad enough to make one unreadable), and the *top* of the CD/DVD is the fragile lacquer part- the bottom is nice hard plastic that can take a lot more abuse.
I've seen enough "archive to CD/DVD" stuff in design/graphics/publishing houses to know that they never revisit their archival media until it's too late to recover from bad media.