It's a physical layer. Current 4K monitors divide the screen into four independent images which are synchronized; it is not a single image stream. Multiple channels (which is what you are describing) can form a single stream. That is how HDMI works. 4K monitors use four independent streams brought together into a single final image by processing at the monitor end.
So lack of synchronization can be a problem, but that is not latency. What I'm pointing out is that the current delivery systems already chop data into subsets and deliver them without latency problems. 4K doesn't particularly change any of that, nor does it require removing that synchronizing aspect from the solution.
Right now there is more "glue" required in the targeted displays, in that they have to be able to walk and chew gum at the same time to get all four subsets up and synchronized... but even if the abstraction eventually becomes a bit more uniform above the abstraction line, the implementations below it are just as likely to be chopped up into multiple subsets as they are now, simply because that is more effective to implement: synchronization isn't all that hard and latency isn't an issue.
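For what it's worth, here's a rough sketch of what that "chopped into subsets" arrangement looks like for the four-tile case described above: the frame gets carved into quarter-size tiles, and the monitor's job is just to paste them back in at the right offsets, in step with each other. This is purely illustrative Python under an assumed 3840x2160 / 2x2 tiling, not any particular monitor's or driver's implementation.

```python
# Illustrative only: split one 4K frame into four quadrant "subsets" and
# show the monitor-side reassembly. Real tiled displays do this in hardware
# and negotiate the tiling over the link; the geometry here is assumed.

WIDTH, HEIGHT = 3840, 2160
TILE_W, TILE_H = WIDTH // 2, HEIGHT // 2   # four 1920x1080 subsets

def split_into_tiles(frame):
    """frame: HEIGHT rows of WIDTH pixels. Returns [(x_off, y_off, tile_rows), ...]."""
    tiles = []
    for ty in (0, TILE_H):
        for tx in (0, TILE_W):
            tile = [row[tx:tx + TILE_W] for row in frame[ty:ty + TILE_H]]
            tiles.append((tx, ty, tile))
    return tiles

def reassemble(tiles):
    """The monitor-side job: paste every tile back at its offset, synchronized."""
    frame = [[None] * WIDTH for _ in range(HEIGHT)]
    for tx, ty, tile in tiles:
        for y, row in enumerate(tile):
            frame[ty + y][tx:tx + TILE_W] = row
    return frame
```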
The latency has nothing to do with parallel data; it has to do with the amount of data.
Latency is measured in time (seconds, microseconds, nanoseconds). The amount of data is measured in bytes (MB, GB, TB). The units don't match; latency isn't about bytes.
What you may be confusing it with is bandwidth (MB/s, GB/s) or cycle/refresh rate (Hz, GHz). Those aren't time/seconds either.
Latency is the gap between the request and the time data actually starts coming through. You can decrease the wait by sending back multiple streams of answers: the first data of each of those "substreams" does arrive quicker than it would if everything were serialized into one stream.
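A toy calculation makes the units point concrete. Every number below is made up purely for illustration; the only real content is the formula: delivery time = fixed startup latency + bytes / bandwidth. Splitting the frame across substreams shrinks the bytes/bandwidth term for each region, even though the fixed startup term itself doesn't change.

```python
# Toy model (illustrative numbers only):
#   delivery time = fixed startup latency + bytes / bandwidth
# Latency is in seconds, data in bytes, bandwidth in bytes per second --
# three different units for three different things.

LATENCY_S   = 100e-6            # 100 microseconds to get the link going (made up)
BANDWIDTH   = 2e9               # 2 GB/s per stream (made up)
FRAME_BYTES = 3840 * 2160 * 3   # one 24-bit 4K frame, ~24.9 MB

def delivery_time(nbytes, latency=LATENCY_S, bw=BANDWIDTH):
    return latency + nbytes / bw

# One serialized stream: the last region of the frame finishes only after
# the whole frame has gone through.
whole_frame = delivery_time(FRAME_BYTES)

# Four parallel substreams (one per quadrant): each quadrant is complete
# after latency + (FRAME_BYTES / 4) / bw -- sooner per region, even though
# the startup latency term did not shrink at all.
quadrant = delivery_time(FRAME_BYTES / 4)

print(f"whole frame: {whole_frame*1e3:.2f} ms, per quadrant: {quadrant*1e3:.2f} ms")
```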
If you are on about some 4K displays being stuck at 30Hz refresh rates, that is a bandwidth/refresh issue, not latency. And frankly, yes, more channels are part of the typical solution for that. DisplayPort v1.2 with 4 lanes can do 60Hz 4K now, while HDMI is held back from 60Hz 4K in part because it has only 3 data channels. Sure, you can crank up the bandwidth of the 3 to match what the 4 can do now... but both of them are in the more-than-one-channel zone. So the notion that the data being broken up into subsets is some root-cause latency issue doesn't make any sense.
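Back-of-the-envelope numbers for that, using the published lane/channel rates for DisplayPort 1.2 (HBR2) and HDMI 1.4 and ignoring blanking intervals and protocol overhead, so treat them as rough:

```python
# Rough link budget for 4K, ignoring blanking intervals.
# Lane/channel rates are the published DisplayPort 1.2 (HBR2) and HDMI 1.4
# figures; the 0.8 factor approximates the 8b/10b-style encoding cost.

def payload_gbps(width, height, bits_per_pixel, refresh_hz):
    """Raw pixel data rate in Gbit/s (no blanking, no protocol overhead)."""
    return width * height * bits_per_pixel * refresh_hz / 1e9

uhd_60 = payload_gbps(3840, 2160, 24, 60)   # ~11.9 Gbit/s
uhd_30 = payload_gbps(3840, 2160, 24, 30)   # ~6.0 Gbit/s

dp12_usable   = 4 * 5.4 * 0.8               # 4 lanes x 5.4 Gbit/s -> ~17.3 Gbit/s usable
hdmi14_usable = 3 * 3.4 * 0.8               # 3 channels x ~3.4 Gbit/s -> ~8.2 Gbit/s usable

print(f"4K60 needs ~{uhd_60:.1f} Gbit/s, 4K30 needs ~{uhd_30:.1f} Gbit/s")
print(f"DP 1.2 usable ~{dp12_usable:.1f} Gbit/s, HDMI 1.4 usable ~{hdmi14_usable:.1f} Gbit/s")
# DP 1.2's four lanes clear the 4K60 bar; HDMI 1.4's three channels only clear 4K30.
```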
Thunderbolt didn't become a "standard" right away.
It had a specification and a compliance test process to navigate. Broad acceptance right away? No. Something to be implemented and complied with? Yes.
It was effectively proprietary as only Apple devices had it at first.
It is proprietary because Intel is the sole implementer and specifier. Apple doesn't particularly have much to do with that characterization. One adopter/customer doesn't make something proprietary; that says a lot more about the design cycles and decision-making processes of the adopters/customers than anything about the standard.
I've seen lots of hand-waving about how Apple "made" everyone else wait. There is little hard evidence that is true.
Apple's old monitor connector was proprietary as well, and years ago it required adapters that didn't always work if you wanted to use third-party displays. It wasn't until they were forced to finally adhere to VESA standards that those connections went away.
There wasn't anything in particular about VESA electrical/transmission standards that Apple was stepping on. It was the bundling of non-VESA data/functions into a single connector that was different. This time they didn't do that unilaterally: Intel did the implementation, and picked an existing socket implementation to derive an alternative electrical signalling protocol from.
Then there's Apple's iPad/iPod docking connector and the current "Lightning" connector,
And this is drifting far away from the new Mac Pro at this point. The iPod, at 70% (formerly even higher than that) of the "MP3 player" market, is a de facto industry standard. x86 isn't an open standard either. Dominate the market and this isn't even brought up as an issue much anymore.