Hang on... from everything I've read and have been told, it's not good to use an SSD as a scratch disk, due to the frequent writes...
I've kept my scratch disk off my SSD for just that reason...
It depends on the specifics, as the endurance figures provided by drive vendors aren't based on real-world conditions (particularly how much unused capacity remains; wear on the remaining cells increases as the drive fills with non-temporary data).
That said, there's an easy solution if you do a lot of scratch writes and your current SSD is near full (ideally, you want to keep ~20% unused, including any "reserve"/provisioning capacity not available to the user): just get a small, cheap SSD solely for scratch, such as the 30GB or 40GB 3Gb/s models from OWC (both are under $100).
If you don't have an available SATA port, there are inexpensive SATA cards that can be used, such as the Highpoint RR620/622 (internal and external ports, respectively).
That may be the case; I've never heard of it, but that's not to say it isn't true.
If an SSD isn't optimal, then it'll have to be SAS drives or Western Digital VelociRaptors in a RAID.
Most users won't see this. Most failures under consumer or workstation use are due to defective units (e.g. the controller dying), not cells worn out from hitting their write-cycle limit.
Enterprise use OTOH... (think a SAN backing a relational database). The huge lifespan figures quoted by vendors can drop to 3 - 5 years under such conditions, and that's with SLC-based drives.
Two things to consider: most of the info people spread around comes from 1st and 2nd generation SSDs, and even for those "early" SSDs the write problem has been blown way out of proportion. Add to that the fact that recent SSDs handle this "issue" a lot more intelligently, by leveling the wear across the entire drive. I can't find the page, but the guys at Anandtech posted something about the real-world durability of SSDs; the final word was essentially: don't worry about it.
Consumer users don't really need to worry, so long as they don't fill the drive completely (keep ~20% unused, measured against the total capacity, i.e. user-accessible space plus any provisioning).
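To put a rough number on that "don't worry about it" conclusion, here's a back-of-envelope estimate in Python. Every figure in it (capacity, rated P/E cycles, write amplification, daily scratch volume) is an illustrative assumption rather than a spec for any particular drive; the point is only that with wear leveling spreading writes across all cells, even heavy scratch use takes a long time to burn through the rated write cycles.

# Back-of-envelope SSD endurance estimate; all values below are assumptions.
capacity_gb = 120              # user-visible capacity
pe_cycles = 5000               # rated program/erase cycles per cell (MLC-class assumption)
write_amplification = 1.5      # controller write overhead; rises as free space shrinks
scratch_gb_per_day = 50        # heavy scratch-disk workload

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_host_writes_gb / scratch_gb_per_day / 365.0
print("~%.0f years at %d GB/day of scratch writes" % (years, scratch_gb_per_day))
# -> ~22 years under these assumptions

Under those assumptions the drive outlives any realistic upgrade cycle; the main thing that eats into the estimate is write amplification, which climbs as the drive fills up, and that's where the "keep ~20% unused" advice comes from.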