One reason for not detecting it would be that it actually isn't a problem. To them. It's an unsafe assumption to trust people who have reported no issues just because 'they seem technically competent'. The laws of physics alone dictate that they might not notice a degradation in performance if they only use the equipment for basic tasks, and many of these so-called 'technically competent' users have admitted they only use their computers for basic tasks.
I was kind of a half-decent Linux user until I realised their love for the kitchen sink was "'till death do us part". Picking a kernel from one distribution, stripping it down and compiling it for another distribution, and so on.
I'm no longer competent, as it's been ages since I've been at that stuff, but the SSDs I used kind of appreciated swap, and since my setups were built for minimal resource usage and maximum application performance, I packed the machines with RAM and moved the swap into RAM. Worked wonderfully. Blazingly fast, back in the day.
Now, a few of the SSD manufacturers tended to report a slightly lower storage capacity than the actual one, so when capacity was lost to heavy reads and writes, the SSD still stayed within the promised spec. These SSDs were not soldered and had their own controller in the package, mind you. There was all this stuff with block sizes as well.
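To give a rough sense of that spare margin, here's a back-of-the-envelope calculation. The figures are purely made up for illustration, not taken from any specific drive:

```python
# Toy calculation of SSD over-provisioning, with hypothetical figures.
# A drive might carry more raw NAND than it advertises; the hidden margin
# absorbs worn-out blocks so the advertised capacity keeps its promise.

RAW_NAND_BYTES = 256 * 2**30      # 256 GiB of physical flash (hypothetical)
ADVERTISED_BYTES = 240 * 10**9    # 240 GB promised on the label (hypothetical)

spare_bytes = RAW_NAND_BYTES - ADVERTISED_BYTES
overprovisioning = spare_bytes / ADVERTISED_BYTES

print(f"Spare area: {spare_bytes / 10**9:.1f} GB "
      f"({overprovisioning:.1%} over-provisioning)")
```

With those made-up numbers the drive keeps roughly 35 GB (about 14.5%) in reserve for wear.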
...but I'd argue that you don't lose performance due to read/write wear and tear; what you lose is capacity. The controller tends to have the big picture of where the lost capacity sits, and makes sure there are no attempts to read from or write to damaged/unusable spots. Thus no performance should be lost.
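The idea, as I remember it, looks roughly like the sketch below. This is a toy model only, nowhere near a real flash translation layer; the class and method names are mine, invented for illustration:

```python
# Toy model of bad-block remapping (not a real FTL, just the principle):
# logical block numbers stay stable while damaged physical blocks are
# swapped out for spares, so the host never touches a bad spot.

class ToyController:
    def __init__(self, advertised_blocks: int, spare_blocks: int):
        # Identity mapping to start with: logical block i -> physical block i.
        self.mapping = {lb: lb for lb in range(advertised_blocks)}
        # Spare physical blocks sit beyond the advertised range.
        self.spares = list(range(advertised_blocks,
                                 advertised_blocks + spare_blocks))

    def mark_bad(self, physical_block: int) -> None:
        """Retire a worn-out physical block and remap its logical block to a spare."""
        for logical, physical in self.mapping.items():
            if physical == physical_block:
                self.mapping[logical] = self.spares.pop()
                return

    def resolve(self, logical_block: int) -> int:
        """Translate a logical block number to the physical block actually used."""
        return self.mapping[logical_block]


ctrl = ToyController(advertised_blocks=8, spare_blocks=2)
ctrl.mark_bad(3)        # physical block 3 wears out
print(ctrl.resolve(3))  # logical block 3 now points at a spare (prints 9 here)
```

Once the spares run out, that's when usable capacity actually starts to shrink.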
Kindly note that the above is taken directly from RAM and not the SSD. Freely from my memory, that is.