
rondocap

macrumors 6502a
Original poster
Jun 18, 2011
542
341
So when testing export speeds from heavier codecs like RED RAW 8K or BRAW 12K to ProRes 4444 XQ, the disk write speeds can also be very fast.

Sometimes it will be 800-900 MB/s write during export. If I choose a different disk, that can be cut in half, which slows down the export.

The disks that show this can even include a Sonnet PCIe x4 card with 4x 2TB Samsung EVO Plus drives in RAID 0 - very fast in benchmarks - so what would explain the slower disk speeds? A 2x SSD RAID 0 on a Sonnet PCIe card also demonstrates this behavior at times.

PCIe bandwidth issue? Pool A is at 100%, Pool B at 25%.

Not sure if the above is clear - but when exporting 8K or 12K, disk bottlenecks can sometimes occur even below the rated speed of the disks in question - so I'm trying to figure out if it is a PCIe issue.
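For what it's worth, a rough back-of-envelope on the PCIe side (assuming a worst-case PCIe 2.0 x4 link; a PCIe 3.0 x4 link is roughly double) suggests the slot itself has headroom over the write speeds I'm seeing:

```python
# Back-of-envelope only. PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding,
# i.e. roughly 500 MB/s of raw bandwidth per lane before protocol overhead.
lanes = 4
per_lane_mb_s = 500        # PCIe 2.0, pre-overhead
efficiency = 0.8           # assumed packet/protocol overhead

slot_mb_s = lanes * per_lane_mb_s * efficiency
print(f"Usable x4 PCIe 2.0 bandwidth: ~{slot_mb_s:.0f} MB/s")
# ~1600 MB/s - above the 400-900 MB/s writes seen here, but reads and
# writes to the same card do have to share those lanes.
```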

Example:

Export 12K BRAW to the 4x RAID 0 NVMe and get around 400-500 MB/s write.
Export the same to the 2x RAID 0 SSD and get around 800-900 MB/s write, even though these are slower on paper.
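
If it helps, this is the kind of quick sequential write check I can run against each RAID volume to compare raw disk speed with what I see during an export (the mount point below is a placeholder):

```python
# Minimal sequential write test - point TEST_FILE at the volume under test.
import os, time

TEST_FILE = "/Volumes/RAID_4xNVMe/write_test.bin"   # placeholder path
BLOCK = 8 * 1024 * 1024                             # 8 MiB per write
TOTAL = 8 * 1024 * 1024 * 1024                      # 8 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())        # make sure the data actually hit the disk
elapsed = time.time() - start
print(f"{TOTAL / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TEST_FILE)
```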
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
rondocap said:
So when testing export speeds from heavier codecs like RED RAW 8K or BRAW 12K to ProRes 4444 XQ, the disk write speeds can also be very fast.

Sometimes it will be 800-900 MB/s write during export. If I choose a different disk, that can be cut in half, which slows down the export.

The disks that show this can even include a Sonnet PCIe x4 card with 4x 2TB Samsung EVO Plus drives in RAID 0 - very fast in benchmarks - so what would explain the slower disk speeds? A 2x SSD RAID 0 on a Sonnet PCIe card also demonstrates this behavior at times.

What are your main CPU core and main system RAM utilization rates?

In another thread, you mention that your GPU VRAM is completely full. If this is transcoding data, then there is probably a set of cores assigned the 'high priority' task of copying data in/out of the GPUs, along with anything else that is assigned to CPU cores.

The Sonnet card is a software RAID card (or, perhaps more accurately, a no-RAID card; the RAID is done by something else). Doing RAID 100% in software is relatively low cost, but it isn't free. You have to have "spare" core allocation time at "high priority" for it to get all the normal disk controller work done virtually.

If you move some of the workload to a non-software-RAID storage device and the bandwidth goes up, that can be an indicator that the software controller was "overloaded" (given the resources it was allocated).

If the software RAID controller is trying to do predictive caching, then there is a RAM read/write cost too. Heavy reads and writes could also thrash the cache a bit if the read policy is trying to read ahead and the write policy thinks you'll want that written data back soon (so it retains a copy). Since this is a software RAID controller, it may not be trying to be "smart" enough to recognize that these are two extremely heavyweight sequential read/write streams and switch policies. Being "smart" takes up more CPU cycles and more RAM.
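
To make the "not free" part concrete, here is a toy user-space sketch of what a RAID 0 layer has to do on every write (just an illustration of the principle, not how Apple's or Sonnet's software RAID is actually implemented): chunk the stream and dispatch each chunk to a member device, all of it passing through CPU cores and system RAM.

```python
# Toy RAID 0 striping illustration - member paths and chunk size are made up.
import os

STRIPE_CHUNK = 128 * 1024      # hypothetical 128 KiB stripe chunk

def raid0_write(member_paths, data: bytes):
    """Round-robin the data across the member 'devices' (plain files here)."""
    members = [open(p, "wb") for p in member_paths]
    try:
        for i in range(0, len(data), STRIPE_CHUNK):
            chunk = data[i:i + STRIPE_CHUNK]                        # CPU copies the slice
            members[(i // STRIPE_CHUNK) % len(members)].write(chunk)
    finally:
        for m in members:
            m.close()

raid0_write(["/tmp/member0.bin", "/tmp/member1.bin"], os.urandom(8 * STRIPE_CHUNK))
```

Every chunk of every read and write goes through that kind of dispatch loop, and it competes for the same cores the export is already using.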



rondocap said:
Not sure if the above is clear - but when exporting 8K or 12K, disk bottlenecks can sometimes occur even below the rated speed of the disks in question - so I'm trying to figure out if it is a PCIe issue.

Example:

Export 12K BRAW to the 4x RAID 0 NVMe and get around 400-500 MB/s write.
Export the same to the 2x RAID 0 SSD and get around 800-900 MB/s write, even though these are slower on paper.

The other thread indicated that this wasn't a lone write job - the source and the destination were on the same RAID set.

If the software RAID is pulling the whole stripe width on a read, there is a tradeoff for doing that. Similarly if it is trying to buffer a whole stripe width to write. If you are concurrently reading from and writing to the same RAID set, that could make a difference. Going 'wider' has traditionally been primarily about getting around rotational latency; the 'wider' your RAID set is, the more rotational "drama" your software RAID may presume it is dealing with. If this is Apple's ancient software RAID implementation (hidden under a Sonnet facade), then that could be even more true.

4 Reads, 4 Writes, 4 Reads, 4 Writes

versus

2 Reads, 2 Writes, 2 Reads, 2 Writes

versus, probably, your assumption of

R, W, R, W, R, W, R, W, R

For a 2-drive stripe it isn't trying to do "chunking" of accesses, which is just fine for SSDs because they don't have a head seek problem.


If that is the case, then at lower read rates the reads stay out of the way more. Similarly if the computational overhead on writes kept their data flow rate down too. If you crank the read rate up and the write rate is about equally matched, then perhaps the contention is higher. The "rule of thumb" is that when you have two high device data flow rates, it is better to put them onto different logical drives. The point of RAID 0 isn't primarily to have a bigger "dump it all here" bucket.
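
If you want to test that rule of thumb directly, here is a rough sketch (placeholder paths; you would need a large source file on each volume first) that times a concurrent read+write on one volume against the same job split across two volumes:

```python
# Rough concurrent read/write comparison - adjust paths and sizes to taste.
import os, time, threading

BLOCK = 8 * 1024 * 1024                  # 8 MiB per operation
TOTAL = 4 * 1024 * 1024 * 1024           # 4 GiB written by the writer

def writer(path):
    buf = os.urandom(BLOCK)
    with open(path, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())

def reader(path):
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass

def run(read_path, write_path):
    t0 = time.time()
    threads = [threading.Thread(target=reader, args=(read_path,)),
               threading.Thread(target=writer, args=(write_path,))]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"read {read_path} + write {write_path}: {time.time() - t0:.1f} s")

# Source and destination on the same RAID set...
run("/Volumes/RAID_4xNVMe/source.bin", "/Volumes/RAID_4xNVMe/export.bin")
# ...versus reading from one set and writing to the other.
run("/Volumes/RAID_4xNVMe/source.bin", "/Volumes/RAID_2xSSD/export.bin")
```

If the second run is clearly faster, that points at the shared RAID set (and its software controller) rather than the slot bandwidth.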
 

rondocap

macrumors 6502a
Original poster
Jun 18, 2011
542
341
Thank you for the detailed response - there definitely is some sort of bottleneck happening. RAM usage is about half, and CPU usage can get pretty high even on 28 cores.

I think you are right - it's likely down to the software RAID on the Sonnet. I may split the drives up and see how they perform.
 

h9826790

macrumors P6
Apr 3, 2014
16,656
8,587
Hong Kong
If you export to the project drive, then your cMP needs to read the timeline from the project drive and write the video back onto the same project drive at the same time. The bandwidth is shared between reads and writes, which may explain why the performance is roughly cut in half.

But when you read the timeline from the project drive and export (write) the video to another drive, then the export speed is much less likely to be limited by the storage speed.
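
As a toy calculation with made-up numbers:

```python
# Assumed numbers, not measurements: a volume that sustains ~1700 MB/s of
# combined traffic, with the export reading the source at roughly the same
# rate it writes the output.
combined_mb_s = 1700
write_share = combined_mb_s / 2        # reads take the other half
print(f"~{write_share:.0f} MB/s left for the export write")   # ~850 MB/s
```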
 

kennyman

macrumors 6502
May 4, 2011
279
38
US
From my own experience working with RED RAW 8K, I would say that you should not read and write from the same pool of drives. You should have two separate Sonnet cards, or, for example, keep the RED RAW 8K on an external drive (in a Thunderbolt drive bay) and then transcode to the Sonnet drives. There will always be a bottleneck if you read and write from the same RAID array.

Another thing: we never use Samsung EVO SSDs. I was told once by our tech specialist that EVO drives have a cache, and once that is full, you get slow read and write speeds. We get consistent read and write speeds with other SSDs (Samsung Pro, Intel, etc.). You can read more here.
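
If you want to see whether the cache is what's biting on a given drive, a rough probe (placeholder path) is to keep writing and watch the per-GiB throughput; on cache-limited drives the first few GiB are fast and then the rate drops:

```python
# Sustained-write probe - prints throughput for each 1 GiB slice written.
import os, time

TARGET = "/Volumes/RAID_2xEVO/cache_probe.bin"   # placeholder path
BLOCK = 8 * 1024 * 1024
SLICE = 1024 * 1024 * 1024                       # report every 1 GiB
SLICES = 32                                      # 32 GiB total - adjust to taste

buf = os.urandom(BLOCK)
with open(TARGET, "wb") as f:
    for s in range(SLICES):
        t0 = time.time()
        for _ in range(SLICE // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())
        print(f"GiB {s + 1}: {SLICE / (time.time() - t0) / 1e6:.0f} MB/s")
os.remove(TARGET)
```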

I do have 2 EVO SSDs in RAID 0 at home, but then again, I do 1080p H.264 editing at home and don't require maximum throughput and bandwidth. You would definitely have a speed problem writing and reading from a RAID 0 of EVO drives.
 

Morgonaut

macrumors member
Apr 5, 2020
73
39
Any super-specced computer can have bad performance when people have a bad workflow and wrongly chosen or configured hardware. In most cases people don't need a beefier computer to get better performance; they need a balanced system with no bottlenecks and a good workflow.
 