That's not what I see in that thread. A number of people have experienced instability and/or an inability to boot from an ssd in Mac Pro systems after the recent 10.6.2 update. Reverting to 10.6.1 makes the machines boot normally again. It's not definitive proof of anything, but there is some anecdotal evidence of a problem with 10.6.2, ssd drives and Mac Pro systems. That is good information to have if you are thinking of spending $500 on an SSD and opening up your brand new $2000 iMac for a potentially warranty-voiding mod.
I see someone who has problems with his RAID because the disk boots fine when it is not on the non-Apple RAID controller (he uses some Areca controller), someone else who reports his ssd in RAID0 is a bit flaky, and a thread starter with some disk problems that are unrelated to the ssd (as a lot of people in that thread already mention). As far as I can tell, no one has solved the problem by reverting to 10.6.1. The person with the Areca RAID controller destroyed his array, created a new one and restored his backup.
It is known that RAID is problematic with ssd's: no RAID controller passes TRIM commands through to the ssd's, and some controllers simply have difficulty with such fast disks; the ssd's are, in effect, too fast for them. In other words, this has nothing to do with the 10.6.2 update but with the RAID controllers and the chipsets on those controllers. You'll have to look for a proper RAID controller that is known to work well with ssd's. There are quite a few discussions about which RAID controllers work well with ssd's on the OCZ forums, and on a lot of other forums if you look for them.
Obviously that kind of information is of no concern to the thread starter, as the iMac cannot house a RAID card.
Of course I know this, the OP mentioned Intel X25-M G2 specifically and the linked thread deals primarily with the same drive.
Linking to one thread and saying the drive has a lot of problems does not compute, it's just stupid. If there are a lot of problems, then provide some more information about what kind of problems, and not just one thread where the problems are caused by a lot of things other than the ssd (although there is one important piece of info in that thread: be wary of ssd's on RAID controllers). There were some problems with the mentioned Intel ssd due to a firmware update, but that update has been pulled, so no problems any more. That gives you another good piece of information: wait for other people to flash their drives first if the drive means a lot to you (i.e. you depend on it).
Well, the OCZ drives have an advantage in sequential reads/writes, but the Intel X25-M has a sizeable advantage in random reads/writes, which I would judge to be the more important performance metric, but YMMV.
There is a difference in that area, but it's quite small; it is not really something you would notice. This makes the Intel and the OCZ Vertex compete heavily. Intel tops out at 160 GB whereas OCZ goes up to 250 GB. If disk space means a lot to you, go for something like the OCZ Vertex.
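To make the sequential-vs-random distinction concrete, here is a minimal Python sketch (the file name and sizes are made up for illustration) that times 4 KiB reads in order versus in shuffled order. Because the scratch file was just written, both runs will mostly hit the OS page cache; a real benchmark would flush or bypass the cache, so treat this as the shape of the test, not as real drive numbers.

```python
import os
import random
import time

FILE = "scratch.bin"          # hypothetical scratch file for this demo
SIZE = 16 * 1024 * 1024       # 16 MiB test file
BLOCK = 4096                  # 4 KiB blocks, the usual random-I/O unit

# Create a scratch file filled with random data.
with open(FILE, "wb") as f:
    f.write(os.urandom(SIZE))

def timed_reads(offsets):
    """Read one BLOCK at each offset and return elapsed seconds."""
    start = time.perf_counter()
    with open(FILE, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

n_blocks = SIZE // BLOCK
sequential = [i * BLOCK for i in range(n_blocks)]
scattered = random.sample(sequential, n_blocks)  # same blocks, shuffled order

t_seq = timed_reads(sequential)
t_rand = timed_reads(scattered)
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s")
```

On a spinning disk the shuffled run is dramatically slower because of seek time; on an ssd the gap is far smaller, which is exactly why random-I/O numbers are where ssd's shine.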
For instance, the file system is implemented as a closed source 'translation layer' that does not even allow for a file system check because it presents single files as a block device to the OS over SATA (for an explanation see http://www.anandtech.com/storage/showdoc.aspx?i=3631&p=3).
The link you give does not say that files are represented as a block device; it is about something else: it tells you that ssd's use different sizes for their internal structures than most OS's and RAID controllers use by default. You can change those defaults so everything aligns properly. The misalignment can decrease performance, especially when using a RAID controller. The Anandtech article describes how the ssd stores data in its NAND chips. NAND chips are different from normal hard disk drives, which is why there are some differences with the OS regarding alignment. The article is describing how an ssd stores its data compared to how a normal hard disk does. If you have performance problems or want to do some heavy tweaking, this becomes interesting and valuable information. From a user's point of view it is far too technical and unnecessary. This kind of information would be great in a thread discussing ssd technology, but not in this one, because it goes far beyond the questions of the thread starter (velociraptor or ssd? which ssd?).
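The alignment point can be made concrete: an ssd performs best when a partition's starting offset is a multiple of the NAND page size (and ideally the erase-block size). Old DOS-style partitioning started at sector 63, which is neither. A small Python sketch, using typical but drive-specific sizes (the exact page and erase-block sizes are assumptions, check your drive's datasheet):

```python
SECTOR = 512                  # bytes per LBA sector on most drives
NAND_PAGE = 4096              # typical NAND page size (drive-specific)
ERASE_BLOCK = 128 * 1024      # typical erase-block size (drive-specific)

def alignment_report(start_lba):
    """Report whether a partition starting at this LBA is aligned."""
    offset = start_lba * SECTOR
    return {
        "offset_bytes": offset,
        "page_aligned": offset % NAND_PAGE == 0,
        "erase_block_aligned": offset % ERASE_BLOCK == 0,
    }

# Old DOS-style partitioning started at sector 63: misaligned.
print(alignment_report(63))

# Modern tools start partitions at sector 2048 (a 1 MiB offset),
# which is aligned for both the page and the erase block.
print(alignment_report(2048))
```

A misaligned partition forces the drive to touch two NAND pages for many single-page writes, which is where the RAID and benchmark performance complaints tend to come from.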
Btw: disregard that lwn.net article as it is written from a GPL point of view. The funny thing is that the GPL is one of the most restrictive licenses out there, yet GPL fanboys (those guys that only [want to] use things that are "free" and open source) will defend it heavily. The following sentence gives you an idea of what I mean: "Do you want to trust your data to a closed source file system implementation which you can't debug, can't improve and — most scarily — can't even fsck when it goes wrong, because you don't have direct access to the underlying medium?". As you can tell from the Anandtech link, that article is also very wrong: it is not the black box design that corrupted the writer's system (actually, ext3/ext4/etc. are quite good at doing that themselves), and it is not because of the black box design that he can't recover it. His corruption can have a lot of different causes, and recovery can be attempted with a lot of different tools. Why he can't use the recovery tools he knows is quite simple; just read the Anandtech article and it becomes obvious.

The tools in Linux you can use for file recovery at a very low level only support hard disks, and ssd's are completely different. Tools like bonnie++ have difficulty benchmarking the disk as they only know hard disks, not ssd's. The problem will disappear when ssd's become more mainstream and the tools are updated to support them.