After more testing I've determined the problem is not the drives, it's the Mercury Rack Pro. I knew the enclosure was older, but I didn't think it was SATA II, and nowhere on the OWC site did it mention that. I guess its "USB 3.0 capable of 500MB/s" is purely theoretical, because with the drives interfacing at SATA II it's limited to under 300MB/s. Oddly enough, just 2 of the WD Black drives in RAID 0 inside my classic Mac Pro (also SATA II) are giving me just under 500MB/s read and write. That leads me to believe the OWC enclosure is sharing the entire SATA bus between all 4 drives. I'm saturating that SATA bandwidth with a single SATA III drive's worth of throughput, which is why I can't get anything over 250MB/s read and 150MB/s write, even with 4 drives in RAID 0.

I will most certainly be returning this enclosure, and I'm pretty much finished with OWC at this point unless I need RAM. I just feel cheated, because I spoke with someone from OWC prior to purchasing and explained my specific needs. I pretty much wasted time and money, and now I'm back to square one. I'm going to check out some of the 4- and 5-bay enclosures from HighPoint with USB 3.1 Gen 2. There just aren't a lot of affordable external RAID options right now that aren't using Thunderbolt or SAS, let alone ones with hardware RAID support that are rack mountable. If I have to, I could always remove an M.2 and purchase a mini-SAS to M.2 adapter, which would plug into my Amfletec quad carrier board and feed the SAS cable out of my computer. That would at the very least open my options up a little more.
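Rough numbers, if my shared-bus theory is right. All the speeds in this little sketch are ballpark assumptions on my part, not measured specs:

```python
# Back-of-envelope check on the shared-SATA-bus theory.
# All figures below are rough, illustrative assumptions, not specs.

DRIVE_SEQ_MBPS = 240       # ballpark sequential rate of one WD Black
SATA_II_CAP_MBPS = 300     # SATA II ceiling (less in practice, after overhead)

def raid0_throughput(n_drives, per_drive, shared_bus_cap=None):
    """Ideal RAID 0 scales linearly; a shared bus caps the whole array."""
    ideal = n_drives * per_drive
    return min(ideal, shared_bus_cap) if shared_bus_cap else ideal

# Mac Pro: one SATA II channel per drive, so only the per-drive link matters.
per_port = raid0_throughput(2, min(DRIVE_SEQ_MBPS, SATA_II_CAP_MBPS))
print(per_port)   # 480 -> matches the ~500 MB/s I see inside the Mac Pro

# Enclosure: if all 4 bays hang off one SATA II uplink (port-multiplier
# style), the array can never beat that single shared link.
shared = raid0_throughput(4, DRIVE_SEQ_MBPS, shared_bus_cap=SATA_II_CAP_MBPS)
print(shared)     # 300 -> consistent with the ~250 MB/s ceiling I'm seeing
```

The math lines up with what I'm measuring, which is why I'm blaming the enclosure and not the drives.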
That's a tough one to call. eSATA has plenty of headroom, so that seems like an especially unlikely culprit.

RAID controller performance varies widely, and it typically goes up with price. Traditionally, with RAID 5, the more drives, the better the performance. Back in the day, especially for high-end server database work (constant small reads and writes), one would see specs requiring 6 spindles (drives) minimum, with 8 or more recommended for best performance.
Based on that, since RAID 5 starts at 3 drives, adding the 4th may not affect performance much, though I would have expected a small bump. RAID 1 has very little overhead compared to RAID 5, so going from 2 drives in RAID 1 to 3-4 drives in RAID 5, I would not expect a big change either.
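For a sense of what scaling *should* look like, here's a very rough idealized model (sequential only; it ignores controller overhead and RAID 5's read-modify-write penalty on small writes). Its main use is spotting when something else, like the controller or bus, is the real bottleneck:

```python
# Very rough model of ideal sequential throughput by RAID level, assuming
# n identical drives and no controller or bus bottleneck. Real numbers
# vary a lot with the controller, stripe size, and workload.

def seq_throughput(level, n, per_drive_mbps):
    if level == 0:                        # striping: every spindle carries data
        return n * per_drive_mbps
    if level == 1:                        # mirror: writes go to every copy
        return per_drive_mbps             # write side; reads can be faster
    if level == 5:                        # one spindle's worth goes to parity
        return (n - 1) * per_drive_mbps
    raise ValueError("unsupported level")

for level, n in [(1, 2), (5, 3), (5, 4), (0, 4)]:
    print(f"RAID {level} x{n}: ~{seq_throughput(level, n, 150)} MB/s")
# RAID 1 x2: ~150 | RAID 5 x3: ~300 | RAID 5 x4: ~450 | RAID 0 x4: ~600
```

If your measured numbers sit far below this kind of scaling, the drives aren't the limit.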
But again, tough to call. Removing bottlenecks is key, and they're not always obvious. The drives are usually the best-understood part of the chain, with the RAID controller and buses less so.
If you want to test and see the full throughput potential of the setup as-is (RAID controller, bus, drives, file sizes, etc.), you could go RAID 0. Keep in mind there is NO REDUNDANCY: if you lose one drive, you lose all your data. Still, it might be good to see the performance compared to both RAID 1 and RAID 5; a quick way to measure is sketched below.
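A minimal sequential-throughput check, assuming the array is mounted at the (hypothetical) path below. A real tool like fio or Blackmagic Disk Speed Test is more trustworthy; this is just a quick sanity check:

```python
# Quick-and-dirty sequential write/read timing. TEST_PATH is a placeholder
# for wherever your array is mounted. Note the read pass can be inflated by
# the OS cache; use a file larger than your RAM for honest read numbers.
import os
import time

TEST_PATH = "/Volumes/RAID0/throughput.bin"   # assumption: your array's mount
CHUNK = 8 * 1024 * 1024                        # 8 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024                 # 4 GiB total, to dilute caching

buf = os.urandom(CHUNK)
start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                       # force data out to the drives
write_mbps = TOTAL / (time.time() - start) / 1e6

start = time.time()
with open(TEST_PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_mbps = TOTAL / (time.time() - start) / 1e6

os.remove(TEST_PATH)
print(f"write ~{write_mbps:.0f} MB/s, read ~{read_mbps:.0f} MB/s")
```

Run it against the same array configured as RAID 0, RAID 1, and RAID 5 and the differences between levels (and any hard ceiling from the bus) should show up directly.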
The handy comparison chart for SoftRAID on this page gives some idea of the typical differences.
Historically, when drives were much slower, it was fairly common to see RAID 0 arrays used for speed (video rendering, Photoshop scratch space, etc.), but only as working space, with finished work regularly copied off to a safer, redundant array.