I've never had an issue. Conditioning is a bit of a pain in the ass, but whatever; you can force-enable the cache, and my system is on a UPS, so it doesn't entirely bother me. I've had to condition RAID batteries on other servers before to properly calibrate them (most notably, HP's P800 controllers were notorious for reporting bad batteries after a year or two when in fact they were fine). The only difference is that it seems to be mandatory on the Apple RAID card.
Conditioning is a PITA, as scheduling it usually isn't an option for most independent users (they may not know to do it, or how, even if they're familiar with the performance issues it can cause). Granted, the performance side can be solved to some extent by manually activating the cache, but there's still the lack of protection while the battery is conditioning (if power goes out, the battery may not hold enough charge to retain the cached data until power is restored).
Now there are ways to mitigate these issues, such as using a sufficient UPS, and in enterprise environments, the UPS system is also backed up by generators. Unfortunately, the generator part is well out of the financial means of most independents and SMBs in my experience. So YMMV depending on the specifics of your backup power system.
As a result, 3rd party controllers don't include backup batteries, and only offer them as options if they're offered at all (the expectation being use in an enterprise environment with a well-designed backup power system). Not that big a deal for independents and SMBs IMHO either, so long as they implement a decent UPS system (sans generators, due to budgets).
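For anyone sizing that UPS, a rough back-of-the-envelope runtime estimate is easy to do. This is just a sketch with made-up figures (the battery capacity, inverter efficiency, and load numbers are assumptions, not specs for any particular unit); check your own UPS's Wh rating and your system's measured draw.

```python
# Rough UPS runtime estimate; illustrative figures only.

def ups_runtime_minutes(battery_wh, load_w, inverter_efficiency=0.9):
    """Approximate runtime in minutes for a given steady load."""
    usable_wh = battery_wh * inverter_efficiency  # losses in the inverter
    return usable_wh / load_w * 60

# Example: ~160 Wh of battery feeding a system drawing ~350 W under load.
print(f"{ups_runtime_minutes(160, 350):.0f} min")  # ~25 min
```

That's plenty of margin to flush caches and shut down cleanly, which is really all you need if generators aren't in the budget.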
Even with the card battery installed, it's no guarantee, as the data waiting to be written to disk can exceed the cache capacity. Under such conditions, card battery or not, data integrity relies on the rest of the backup power system, assuming the system isn't shut down cleanly and the application can't log where it left off (i.e. automatically resume operations from that point when power is restored).
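To put some made-up numbers on that: say the card has 512 MB of cache and the application still has 2 GB in flight when power drops. The battery only covers what's in cache; the rest is riding on the UPS. A trivial sketch (both sizes are assumptions for illustration):

```python
# Illustrative only: how much in-flight data a battery-backed cache covers.
CACHE_BYTES = 512 * 1024**2   # assumed 512 MB card cache
pending = 2 * 1024**3         # assumed 2 GB still waiting to be written

protected = min(pending, CACHE_BYTES)
print(f"covered by card battery: {protected / 1024**2:.0f} MB")              # 512 MB
print(f"riding on the UPS:       {(pending - protected) / 1024**2:.0f} MB")  # 1536 MB
```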
You still need custom drivers to run those cards and the configuration utilities aren't included with OS X (obviously).
I don't see this as a big deal though, and it usually doesn't require Herculean efforts to perform (Linux can be another matter).
Areca's solution of including an Ethernet port on the card just so you can configure it without software is very bizarre and totally overkill to say the least (seriously, a RAID card running a web server?).
This is a requirement of a lot of enterprise users, as they don't have IT staff available for each and every server's physical location, so they require remote access.
Even a JBOD disk won't be usable without the card?
As it's currently formatted, that is correct. The system wouldn't be able to read it.
Same thing happens when the disk's configuration is changed, even if it's not moved to a different controller (i.e. break up a JBOD set and use each member as an individual drive).
There are software packages that can recover the disk's data (essentially removing the JBOD formatting) and make it available. In such cases, the disk shouldn't be moved to a different controller before this is carried out.
Areca may have just copied it over, or figured that not having to design RAID manager software, and the user not having to install it on the host computer, would be a selling feature.
Better chance of getting it to work under multiple OS environments too (Windows, OS X, multiple Linux distros).
If one is not using RAID 0 but just a single boot drive (it would be a JBOD drive) and perhaps a RAID 1, is there any performance overhead incurred for the boot drive?
If you mean on the MP itself, then No. If you mean on the card, Yes, but it's very small (single-digit %, typically no more than 3% on the slowest of controllers; 1% is typical of most these days).
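Put in concrete numbers, that overhead is barely visible (the 250 MB/s figure below is an assumption, not a measurement of any particular drive):

```python
# Quick arithmetic on those overhead figures; illustrative throughput only.
native_mbs = 250                   # assumed native throughput of the drive
for overhead in (0.01, 0.03):      # 1% typical, ~3% on the slowest cards
    print(f"{overhead:.0%} overhead -> {native_mbs * (1 - overhead):.1f} MB/s")
# 1% overhead -> 247.5 MB/s
# 3% overhead -> 242.5 MB/s
```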
BTW, JBOD (Just a Bunch Of Disks) is usually read as multiple members concatenated/spanned into a single volume, not a single disk. To me, including single-disk operation under that label is a mistake, particularly with RAID controllers, as you are able to run both single disks and RAID arrays simultaneously. That is not the case with JBOD and RAID: between those two it's one or the other, never both on the same controller; 2 separate controllers would be required, due to how the card operates under these different conditions <only one mode can be loaded into the card's controller chip at a time>.
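If it helps, here's a conceptual sketch of why a spanned set is tied to its controller: the card is the only thing that knows where each logical block physically lives. The function and disk sizes below are made up for illustration, not how any particular card's firmware does it.

```python
def locate(lba, member_sizes):
    """Map a volume-level LBA to (member_index, lba_on_member)."""
    for i, size in enumerate(member_sizes):
        if lba < size:
            return i, lba
        lba -= size  # skip past this member and keep walking the span
    raise ValueError("LBA beyond end of spanned volume")

members = [1000, 2000, 1500]   # three disks, sizes in blocks (made up)
print(locate(500, members))    # (0, 500)  -> first disk
print(locate(2500, members))   # (1, 1500) -> second disk
print(locate(3200, members))   # (2, 200)  -> third disk
```

Move a member over to a plain SATA port and the OS just sees a chunk of a volume with no map, which is exactly what the recovery packages mentioned above have to reconstruct.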
Would it be faster on the standard SATA bus (ditching, of course, the card)?
No, the load on either the card or the drive controller would be about the same, and on a faster PCIe-based card it's even less than on the built-in controller.
Could this equation be affected in any way by the native speed of the drive? (I have an Intel 520 Cherryville SSD on order for my boot drive). I want to run with as much speed as can reasonably be achieved. Perhaps I should just get an OWC Accelsior, don't know yet.
Native speed of the drive is the ultimate dictator of throughput.
Where a separate card can seem faster is when the data to be written is passed to the card's cache (if so equipped): the system "sees" the write as completed even if it hasn't actually been entirely written to the disk yet. The card takes over once the data is stored in its cache.
But in regard to how much of the controller's time will be spent processing it, No (it sits idle between read/write cycles, assuming no other disks are being accessed during those periods).
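If the write-back behavior above sounds abstract, here's a toy illustration: a plain Python queue and thread standing in for the card's DRAM cache and firmware (purely conceptual, not any vendor's actual design). The host's write returns as soon as the data lands in cache; the physical flush happens later.

```python
import queue, threading, time

cache = queue.Queue(maxsize=64)    # stands in for the card's DRAM cache

def flusher():
    while True:
        block = cache.get()        # card drains the cache in the background
        time.sleep(0.01)           # simulated slow physical disk write
        cache.task_done()

threading.Thread(target=flusher, daemon=True).start()

def write(block):
    cache.put(block)               # returns once cached: host "sees" it done

start = time.time()
for b in range(64):
    write(b)                       # near-instant from the host's view
print(f"host-visible write time: {time.time() - start:.3f}s")
cache.join()                       # the real flush takes ~0.64s here
print(f"actually on disk after:  {time.time() - start:.3f}s")
```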
If I set up RAID with the intention of having the machine double as a safe backup repository for a small Windows based POS system as well as a photography database, would there be problems due to lack of Windows drivers?
If you mean to have Windows access it directly, Yes. Windows wouldn't be able to access the card at all.
Now if Windows were running on a separate system, and you're accessing the stored data over a network, that's possible.
Regarding operation, if it's used while conditioning, does this drastically increase the conditioning time and/or reduce speed? If interrupted, does it go back to square one, or is the resumed conditioning cycle any shorter? Have you needed to replace the battery, and if so, how often?
If the card is being used while conditioning, and you haven't manually forced the cache to be active under that condition, it will slow the card's operation (the cache is shut off during a battery condition by default, so you lose the performance benefit of the cache during the procedure).
If interrupted, it should resume. Though IIRC, some have reported issues with this, particularly on the earlier generation of Apple RAID Pro cards (not as many newer ones out there).
As for replacing the battery, Yes, they must be replaced at some point, and unfortunately, from the reports on these cards (including at least one I recall), even the new ones have already needed this. Entire cards have had to be replaced because of battery problems (conditioning never stopped, the battery was replaced, and the card was still found to be defective <performance, stability, or both>).