Hence why he's redeveloping his workflow, in my opinion. If it's impossible to achieve real-time scrubbing on a brand-new system, why wouldn't he then look internally to see what he already has that could accomplish the job at a lower cost? My perspective is a business analyst's, and it's purely opinion. I follow the math, not the emotion tied to how much someone has invested in their system, especially since Microsoft has at no point in history made the claims of support Apple once did. Correct me if I'm wrong, but 15 GB/s (dual-tray RAM copy speed) is faster than 18 Gb/s (the quoted VRAM figure); watch the units, since 15 GB/s works out to 120 Gb/s. A problem may come into play, though, when you start wasting bandwidth on unnecessary displays and then compound rendering on top of that.
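Just to keep the GB/s-vs-Gb/s units straight, here's a quick sanity check in Python (the 15 GB/s and 18 Gb/s numbers are the figures quoted above, not measurements of any specific hardware):

```python
# Unit sanity check: GB/s (gigabytes) vs Gb/s (gigabits).
# Figures are the ones quoted in the thread, not measured values.
ram_copy_GBps = 15    # dual-tray RAM copy speed, in GB/s
vram_quoted_Gbps = 18 # quoted VRAM figure, in Gb/s

ram_copy_Gbps = ram_copy_GBps * 8  # 1 byte = 8 bits
print(f"RAM copy: {ram_copy_Gbps} Gb/s vs quoted VRAM figure: {vram_quoted_Gbps} Gb/s")
# RAM copy: 120 Gb/s vs quoted VRAM figure: 18 Gb/s
# Once the units match, 15 GB/s is roughly 6.7x the 18 Gb/s figure.
```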
https://www.pugetsystems.com/labs/articles/Titan-X-Performance-PCI-E-3-0-x8-vs-x16-851/
NVMe adapters in a PCIe x4 slot can achieve 2000 MB/s as a scratch disk; the bottleneck I see is at the drives, not the CPU or RAM. I didn't say I'm not familiar with the video compression/editing process, just that it's not my area of specialty: 4K 8-bit MP4/MOV is the most complex format I've been willing to deal with due to my GPU/camera restrictions/background. Drives have always been the weakest link in the armor since floppy disks, followed by the GPU. The compression rate of the footage factors in as well, but from what I understand in the article, editing wasn't the end goal; debayering the RAW footage into a format usable by a standard NLE seems to be his path. Scrubbing speeds wouldn't come into play until the next step of his workflow.
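To put the drive-as-bottleneck claim in perspective, here's a rough sketch of how many real-time streams a 2000 MB/s scratch disk could feed. The 2000 MB/s figure is from above; the footage bitrates are ballpark assumptions for illustration, not numbers from the article:

```python
# Rough check of when a ~2000 MB/s NVMe scratch disk becomes the bottleneck.
# Bitrates below are assumed ballparks for illustration only.
drive_MBps = 2000  # PCIe x4 NVMe scratch disk, figure quoted above

footage = {
    "4K 8-bit H.264 MP4 (~100 Mb/s)": 100 / 8,  # convert Mb/s to MB/s
    "4K ProRes 422 HQ (~118 MB/s, assumed)": 118,
    "8K RAW (~900 MB/s, assumed)": 900,
}

for name, rate_MBps in footage.items():
    streams = drive_MBps / rate_MBps
    print(f"{name}: ~{streams:.0f} real-time stream(s) from the drive")
# Compressed 4K barely touches the drive; uncompressed/RAW is where
# the scratch disk starts to dictate what you can scrub.
```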
We all tend to operate on single-machine workflows (or maybe two), whereas dynamic businesses can have multiple machines dedicated to a single task. In a multi-layered workflow, a project may pass through multiple OSes several times, touching anywhere from one to 200 different machines before the deliverable is ready.
I don't think a GPU alone will do it, just that it would be more cost-effective to use your existing machines if the tech isn't there yet to achieve a future goal, and I do agree great 8K performance is a long way off. I am hoping we all get the windfall from it, though. The GPU does a little more than that, but I think we're in enough agreement not to get too technical over hyperbole.
The reality is that if people were more focused on the work they were doing on their systems, and the ease with which they are doing it compared to a decade ago, they'd see modern machines are only now catching up to the performance bottleneck those machines hit years ago with the 4K/8K transition and the huge data flows required to handle the media. The laws of thermodynamics can only be pushed so far before something breaks, and that's why QPI-era systems are still viable. Continuous workloads on single-channel 4000 MHz memory are more likely to lead to thermal bottlenecks or even hardware failure, while a triple-channel 1333 MHz setup delivers the same effective bandwidth (3 x 1333 ≈ 4000).
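Here's the math that comparison rests on, as a quick sketch. The DDR generations and module counts are illustrative assumptions; the formula (channels x transfer rate x 8 bytes per 64-bit transfer) is the standard one:

```python
# Effective memory bandwidth = channels x transfer rate x 8 bytes/transfer.
# DDR marketing "MHz" is really MT/s; a 64-bit channel moves 8 bytes per transfer.
def bandwidth_GBps(channels: int, transfers_per_sec_M: int) -> float:
    return channels * transfers_per_sec_M * 8 / 1000  # GB/s

single_4000 = bandwidth_GBps(1, 4000)  # e.g. single-channel DDR4-4000 (assumed)
triple_1333 = bandwidth_GBps(3, 1333)  # e.g. triple-channel DDR3-1333 (assumed)

print(f"1 x 4000: {single_4000:.0f} GB/s")  # 32 GB/s
print(f"3 x 1333: {triple_1333:.0f} GB/s")  # ~32 GB/s
# Same effective bandwidth, but the triple-channel kit clocks each module
# at a third of the speed, so much less heat per stick under sustained load.
```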
It looks like pre-renders, pre-caching, and proxy handling take place on the GPU; see the screenshot from the Nvidia website: