When testing export speeds from heavier codecs like RED RAW 8K or BRAW 12K to ProRes 4444 XQ, the disk write speeds can get very high.
Sometimes the write speed will be 800-900 MB/s during export. If I choose a different disk, it can be cut in half, which slows down the export accordingly.
The disks that cause this can even be a Sonnet PCIe x4 card with four 2TB Samsung EVO Plus drives in RAID 0. That set is very fast in benchmarks, so what could explain the slower disk speeds? A 2x SSD RAID 0 on a Sonnet PCIe card also demonstrates this behavior at times.
What are your CPU core and main system RAM utilization rates?
In another thread, you mention that your GPU VRAM is completely full. If this is transcoding data, then there is probably some set of cores assigned the "high priority" task of copying data in and out of the GPUs, along with anything else that is assigned to CPU cores.
The Sonnet card is a software RAID card (or, perhaps more accurately, a no-RAID card: the RAID is done somewhere else, by the host CPU). Doing RAID 100% in software is relatively low cost, but it isn't free. You have to have "spare" core allocation time at "high priority" for it to get all the normal disk controller work done virtually.
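To make the "RAID done on the host CPU" point concrete, here is a minimal sketch of the striping work a software RAID 0 layer performs on every write. The function name and the 128 KB chunk size are illustrative assumptions, not Apple's or Sonnet's actual implementation:

```python
# Rough sketch of software RAID 0 striping: the host CPU (not the
# card) splits each logical write into chunk-sized pieces and hands
# them to the member disks in round-robin order. CHUNK_SIZE and
# stripe_write are hypothetical names for illustration only.

CHUNK_SIZE = 128 * 1024  # assumed stripe chunk size; real sets vary

def stripe_write(buffer: bytes, num_disks: int):
    """Split one logical write into per-disk chunk lists (RAID 0)."""
    per_disk = [[] for _ in range(num_disks)]
    for i in range(0, len(buffer), CHUNK_SIZE):
        chunk = buffer[i:i + CHUNK_SIZE]        # CPU-side slice/copy
        disk = (i // CHUNK_SIZE) % num_disks    # round-robin placement
        per_disk[disk].append(chunk)
    return per_disk

# A 1 MB write across 4 disks: every byte passes through this
# splitting loop on a CPU core before any disk ever sees it.
queues = stripe_write(b"\x00" * (1024 * 1024), 4)
print([sum(len(c) for c in q) for q in queues])
```

The loop itself is trivial, but at 800-900 MB/s it runs constantly, and it has to win CPU time against the encoder and the GPU copy threads.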
If you move some of the workload to a non-software-RAID storage device and the bandwidth goes up, that can be an indicator that the software controller was "overloaded" (given the resources it was allocated).
If the software RAID controller is trying to do predictive caching, then there is a RAM read/write cost too. Heavy reads and writes could also thrash the cache if the read policy tries to read ahead while the write policy assumes you'll want the written data back soon (and so retains a copy). Since this is a software RAID controller, it may not be "smart" enough to recognize that these are two extremely heavyweight sequential read/write streams and switch policies. Being "smart" takes more CPU cycles and more RAM.
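A toy model of that thrash, under loudly stated assumptions (a tiny shared LRU cache, a read-ahead of 4 blocks, and a write policy that retains every written block; no real controller is this simple):

```python
from collections import OrderedDict

# Hypothetical illustration: a shared fixed-size cache where a
# read-ahead policy and a write-retain policy compete. With two
# heavy sequential streams, each stream keeps evicting the other's
# entries, so the cache does work without helping either stream.

CACHE_BLOCKS = 8  # deliberately tiny so the thrash is visible

class ToyCache:
    def __init__(self):
        self.lru = OrderedDict()
        self.evictions = 0

    def touch(self, block):
        if block in self.lru:
            self.lru.move_to_end(block)   # refresh recency
        else:
            if len(self.lru) >= CACHE_BLOCKS:
                self.lru.popitem(last=False)  # evict least recent
                self.evictions += 1
            self.lru[block] = True

cache = ToyCache()
for n in range(100):
    for ahead in range(4):            # read policy: read ahead 4 blocks
        cache.touch(("read", n + ahead))
    cache.touch(("write", n))         # write policy: retain written block

print(cache.evictions)
```

Nearly every new block pushes something else out, and none of the retained write copies are ever read back, which is the "not smart enough to switch policies" case described above.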
Not sure if the above is clear, but when exporting 8K or 12K, disk bottlenecks can sometimes occur even below the rated speed of the disks in question, so I'm trying to figure out whether it is a PCIe issue.
Example:
Export 12K BRAW to the 4x RAID 0 NVMe, and get around 400-500 MB/s write.
Export the same to the 2x RAID 0 SSD, and get around 800-900 MB/s write, even though those drives are slower on paper.
The other thread indicated that this wasn't a lone write job: the source and the destination were on the same RAID set.
If the software RAID pulls the whole stripe width on a read, there is a tradeoff for doing that. Similarly if it tries to buffer a whole stripe width to write. If you are concurrently reading and writing from the same RAID set, that could make a difference. "Wider" has traditionally been primarily about getting around rotational latency: the wider your RAID set is, the more rotational "drama" your software RAID may presume it is dealing with. If this is Apple's ancient software RAID implementation (hidden under a Sonnet facade), then that could be even more true.
4 Reads, 4 Writes, 4 Reads, 4 Writes
versus
2 Reads, 2 Writes, 2 Reads, 2 Writes
versus probably your assumption of
R, W, R, W, R, W, R, W, R
For a 2-stripe set it isn't trying to do as much "chunking" of accesses, which is just fine for SSDs because they don't have a head-seek problem.
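The three orderings above can be sketched with a small generator. This is only a model of the conjectured batching behavior, not the actual driver's scheduling; the function name is made up:

```python
# Hypothetical sketch: a stripe-width batching policy groups 'width'
# reads into a burst, then flushes the matching 'width' writes,
# instead of strictly alternating single reads and writes.

def batched_order(n_ops: int, width: int):
    """Order n_ops read/write pairs in width-sized bursts (RAID 0 model)."""
    order = []
    for start in range(0, n_ops, width):
        burst = min(width, n_ops - start)
        order += ["R"] * burst   # read burst: pull a whole stripe width
        order += ["W"] * burst   # then the matching write burst
    return order

print(batched_order(8, 4))  # the 4-wide pattern: 4 reads, 4 writes, ...
print(batched_order(8, 2))  # the 2-wide pattern
print(batched_order(8, 1))  # strict R, W, R, W alternation
```

With a wider burst, the export has to buffer more data between bursts, and a concurrent read stream on the same set waits longer for its turn.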
If that is the case, then at lower read rates the two streams stay out of each other's way more; similarly if the computational overhead on writes kept their data flow rate down. If you crank the read rate up and the write rate is about equally matched, then the contention may be higher. The rule of thumb is that when you have two high device data flow rates, it is better to put them onto different logical drives. The point of RAID 0 isn't primarily to have a bigger "dump it all here" bucket.