Yep. Agree with all of that. I guess we won't know unless Apple ever acknowledges it, and it looks like they won't.
One possibility is that the GPU can do 2 of 10 at a rate of 3 each but not 1 of 20 at a rate of 6, where 1 and 2 are stream counts, 10 and 20 are pixel counts, and 3 and 6 are data rates (substitute numbers that make sense). The GPU in that case is limited by the max data rate for a single stream. It's the same reason we have CPUs with 20 cores at 3 GHz instead of 10 cores at 6 GHz.
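To make that concrete with made-up numbers (active pixels only, no blanking or DSC, and a hypothetical 6K-class mode, not a claim about Apple's actual limits), the back-of-the-envelope math I mean is something like:

```python
# Per-stream pixel rate for one big framebuffer vs. two half-width tiles.
# The 6K-class mode is illustrative only; blanking and DSC are ignored.

def pixel_rate_mpps(width, height, refresh_hz):
    """Active-pixel rate in megapixels per second for one stream."""
    return width * height * refresh_hz / 1e6

full = pixel_rate_mpps(6016, 3384, 60)       # one stream carrying the whole frame
tile = pixel_rate_mpps(6016 // 2, 3384, 60)  # each of two half-width tiles

print(f"single stream : {full:7.1f} Mpix/s")   # ~1221.5
print(f"per tile (x2) : {tile:7.1f} Mpix/s")   # ~610.7
# If the per-stream limit falls between those two figures, the GPU could do
# "2 at the lower rate" but not "1 at the higher rate".
```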
In an Intel GPU, they have to use multiple CRTCs (pipes?) to handle a large resolution.
Remember in the olden days when 4K displays first came out? GPUs were doing two 1920x2160 tiles instead of one 3840x2160 tile (using MST over a single DisplayPort connection).
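Running the same rough active-pixel math for that old 4K case (again ignoring blanking):

```python
# Old MST-tiled 4K case, active pixels only:
print(3840 * 2160 * 60 / 1e6)  # ~497.7 Mpix/s as a single 3840x2160 stream
print(1920 * 2160 * 60 / 1e6)  # ~248.8 Mpix/s for each 1920x2160 tile
```

As far as I know the bottleneck there was in the display's timing controllers rather than the GPU, which is why each half got its own MST stream at half the rate.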
The tiled display examples I gave (Intel with the Dell 8K, or the old 4K displays) have the limitation in the display, but you can imagine a similar limitation occurring in the GPU. It would be useful to find a document or video example where the limitation is in the GPU alone. Actually, the video talks about single-tile displays that require multiple pipes, and it mentions the DSC case (which uses multiple slices?) and clock limits, etc.
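For the multiple-pipes case, I imagine the driver logic looks roughly like this sketch (the per-pipe clock ceiling and blanking overhead are made-up placeholders, not real Intel or Apple numbers):

```python
# Rough sketch: if a single-tile mode's pixel clock exceeds what one display
# pipe can scan out, split the work across two (or more) joined pipes.

PER_PIPE_MAX_MHZ = 1200.0  # hypothetical per-pipe pixel clock ceiling

def pipes_needed(width, height, refresh_hz, blanking_overhead=1.15):
    """Rough count of display pipes needed for one single-tile mode."""
    pixel_clock_mhz = width * height * refresh_hz * blanking_overhead / 1e6
    pipes = 1
    while pixel_clock_mhz / pipes > PER_PIPE_MAX_MHZ:
        pipes += 1
    return pipes

print(pipes_needed(3840, 2160, 60))  # 4K60 fits in one pipe with this made-up limit
print(pipes_needed(7680, 4320, 60))  # 8K60 would need two pipes joined
```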
I don't believe the limit is a 6K max framebuffer; I'm just saying that there could be reasons for a limit.
Let's hope that the M2 resolves it and then I'll just sell the M1 and be done with it.