
The XDR looks amazing when running by itself, but whenever I plug in the LG 5K, the screen gets fuzzy text... Apple says that the new Mac Pro with 580X can run two 5K or two XDR's...?
 
Are you plugging one monitor into the top TB3 ports and one into the back of the Mac Pro? You can't connect both to adjacent TB3 ports.

That did it! Where did you read that or was I just supposed to know that...?

Now, how can I get all three of my monitors to look amazing? 5700X?

You're the best by the way... thanks for such a quick response... this thing had me fooled!
 
Looks like you can do two 5K's and two 4K's... can I do one 6K and two 4K's if the two 5K's don't work? I'm trying to rock a three monitor setup somehow without having to upgrade my graphics card...
 
Connect the two 4K displays to the HDMI 2.0 outputs of the 580X. That should work, but no one has tried it. AMD graphics cards usually support 6 displays. The LG UltraFine 5K counts as two. The XDR counts as two displays when connected to the 580X or Vega II or Vega II Duo cards (but no one has posted the AGDCDiagnose/DisplayDiagnose outputs to prove that yet). That leaves room for two 4K displays.

However, the number of allowed connected displays may be reduced from 6 to a lower number if the total number of pixels is large enough. For example, the W5700X supports Display Stream Compression (DSC), so an XDR counts as a single display, but you can only connect 3 of them to the W5700X according to the specs. (I would like to see someone try to connect more, but who has 4 XDR displays?)

There are 5K displays (not the LG UltraFine 5K) that can be connected at 5K (8 bpc) and count as a single display. But the specs don't say how many of those can be connected - maybe 3 (like the LG UltraFine 5K), or it could be 6.
 
You have the option of picking up a cheap PC GPU to run extra displays if needed; an RX 570 is a cheap option.
 
Neither the LG 5K nor the XDR will work with a PC GPU in this case.
The LG could work at 4K and the XDR could work at 4K or 5K (not sure) using a Moshi USB-C to DisplayPort bidirectional cable.
A PC GPU that supports DSC could allow 6K with the Moshi cable.
In both of those cases, you'll be missing all the USB features of the displays (brightness control, camera, audio, USB hub, etc.).
Some GPUs with a USB-C port may add the missing USB functionality (Nvidia RTX in Windows). The W5700 has a USB-C port but you may need to wait for a later macOS version for it to work completely (I know some people are testing it).
 
The LG UltraFine 5K counts as two. The XDR counts as two displays when connected to the 580X or Vega II or Vega II Duo cards (but no one has posted the AGDCDiagnose/DisplayDiagnose outputs to prove that yet). That leaves room for two 4K displays.

The XDR at 60Hz, 10bpc needs just about 44 Gbps for the signal. DisplayPort 1.3/1.4 caps out at just over 32 Gbps. It needs two DisplayPort 1.4 connections to drive. It's why Macs that only have DisplayPort 1.2 aren't supported (including the 2018 Mac mini).
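Back-of-the-envelope (ignoring blanking, so the numbers are approximate rather than Apple's official figures):

```python
# Rough sketch: why the XDR can't ride a single DisplayPort 1.4 link uncompressed.
# Blanking intervals are ignored, so the payload figure is slightly optimistic.

H, V, HZ, BPC = 6016, 3384, 60, 10                  # XDR native timing, 10 bits per channel
payload_gbps = H * V * HZ * BPC * 3 / 1e9           # ~36.6 Gbps of raw pixel data
hbr3_effective = 4 * 8.1 * 8 / 10                   # 4 lanes x 8.1 Gbps, minus 8b/10b: ~25.9 Gbps

print(f"6K@60 10 bpc payload is about {payload_gbps:.1f} Gbps")
print(f"one HBR3 link carries about {hbr3_effective:.1f} Gbps -> two links needed without DSC")
```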

However, the number of allowed connected displays may be reduced from 6 to a lower number if the total number of pixels is large enough. For example, the W5700X supports Display Stream Compression (DSC), so an XDR counts as a single display, but you can only connect 3 of them to the W5700X according to the specs. (I would like to see someone try to connect more, but who has 4 XDR displays?)

DSC also needs to be supported on the display, and I'd be very surprised if the XDR supports it. As a content creation display, lossy compression isn't exactly desirable.

Without DSC on the XDR, the 3 display limit makes sense on the W5700X.
 
DSC also needs to be supported on the display, and I'd be very surprised if the XDR supports it. As a content creation display, lossy compression isn't exactly desirable.

It does. The ports on it actually run at USB 2 speeds unless the GPU supports DSC.

DSC is advertised as visually lossless, which so far seems to be true. It just needs to be lossless to your eye, it's not a signal used for mastering material or anything.
 
It does. The ports on it actually run at USB 2 speeds unless the GPU supports DSC.

DSC is advertised as visually lossless, which so far seems to be true. It just needs to be lossless to your eye, it's not a signal used for mastering material or anything.

Interesting, Apple buried that footnote quite well.

The main problem in general with real-time compression is that you have a hard cap on your bitrates and latency. This limits what you can do with the compression itself, especially since DSC's target was being able to output things like 4K from lower-powered devices using fewer lanes (even though it also enables 8K displays).

If I had to guess, Apple's probably trying to use the 16bpp mode, keeping the compression ratio under 2:1, and they probably figured that's fine for the 16" MBP? And it may be in most cases, but the irony here is that folks working with video are the most likely ones to notice DSC artifacting. Which is something that has been brought up in implementer talks. Since you can't hook this up to a Mac Pro and work with it with DSC enabled (for the moment anyhow), I'd be curious if it is enabled for the W5700X and how it impacts things... or if the tech specs are right and Apple decided to disable DSC on the W5700X to avoid folks getting artifacting on the card?
 
Since you can't hook this up to a Mac Pro and work with it with DSC enabled (for the moment anyhow), I'd be curious if it is enabled for the W5700X and how it impacts things... or if the tech specs are right and Apple decided to disable DSC on the W5700X to avoid folks getting artifacting on the card?
You can get DSC to an XDR from a 5700 XT using a Moshi USB-C to DisplayPort cable. One person posted AGDCDiagnose output for that. I think someone tested the W5700's USB-C port but not with an XDR yet, plus the card may cause crashes (maybe we need to wait for 10.15.3 for full support?).

With DSC, only HBR2 link rate is needed (17.28 Gbps) for 6K 12 bpc (1286.01 MHz -> 61.73 Gbps), which is a compression ratio of 3.57:1, which seems too high? I only saw the 12 bpc option in a screenshot from Windows using an Nvidia RTX card. There's a document showing how 8 bpc supports a 3:1 compression ratio and 10 bpc supports 3.75:1, so I guess it's ok.
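Laying the division out (the quoted 61.73 Gbps figure implies 48 bits per pixel; straight 12 bpc RGB would be 36 bpp, which is part of why the ratio looks odd to me):

```python
# Reproducing the compression-ratio arithmetic above. The 61.73 Gbps figure
# implies 48 bits/pixel (1286.01 MHz x 48); plain 12 bpc RGB would be 36 bits/pixel,
# so both ratios are shown for comparison.

pixel_clock_hz = 1286.01e6
hbr2_effective = 17.28                                     # Gbps, 4 x 5.4 Gbps lanes after 8b/10b

ratio_48bpp = pixel_clock_hz * 48 / 1e9 / hbr2_effective   # ~3.57:1, as quoted
ratio_36bpp = pixel_clock_hz * 36 / 1e9 / hbr2_effective   # ~2.68:1 for straight 12 bpc RGB

print(f"48 bpp -> {ratio_48bpp:.2f}:1, 36 bpp -> {ratio_36bpp:.2f}:1")
```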

The XDR at 60Hz, 10bpc needs just about 44 Gbps for the signal. DisplayPort 1.3/1.4 caps out at just over 32 Gbps. It needs two DisplayPort 1.4 connections to drive. It's why Macs that only have DisplayPort 1.2 aren't supported (including the 2018 Mac mini).
The mtdd override files for the XDR show a tiled mode of 3006x3384 60Hz running at 649 MHz. That's 19.5 Gbps for 10 bpc, or 38.9 Gbps total (two tiles), which nearly fills the Thunderbolt 3 bandwidth (40 Gbps), but it's not as bad as that: DisplayPort stuffing symbols are not included in Thunderbolt. The Thunderbolt controller recreates the stuffing symbols when it converts the Thunderbolt DisplayPort packets back into DisplayPort.
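Quick check of the tile math (timings from the mtdd override as quoted, so illustrative only):

```python
# Checking the tile numbers above: two 3006x3384@60 tiles at the 649 MHz pixel
# clock from the mtdd override, 10 bits per channel.

tile_pixel_clock_hz = 649e6
per_tile_gbps = tile_pixel_clock_hz * 10 * 3 / 1e9   # ~19.5 Gbps per tile
both_tiles_gbps = 2 * per_tile_gbps                  # ~38.9 Gbps of pixel data

print(f"{both_tiles_gbps:.1f} Gbps of pixel data over a 40 Gbps Thunderbolt 3 link")
# Stuffing symbols aren't tunneled, so this pixel payload (plus packet overhead)
# is what actually crosses TB3, not the full 2 x 25.92 Gbps HBR3 raw rate.
```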

I would like someone to post an AGDCDiagnose output for this 6K tiled mode. Then we can know for sure if the GPU is sending two HBR3 signals (required for 6K, non DSC) to the Titan Ridge Thunderbolt controller (which usually cannot support two HBR3 signals, except Apple Macs are doing it? and they made a firmware update for the Blackmagic eGPU to do it too?).
 
Overrides are stored at /System/Library/Displays/Contents/Resources/Overrides/
Overrides for Apple displays are in DisplayVendorID-610.
The Apple Pro Display XDR overrides have DisplayProductID
  • ae21
  • ae22 (mtdd includes 5K tiled mode; 12 bpc)
  • ae23 (same DisplayProductID from ae22 mtdd overlay)
  • ae2d
  • ae2e no DSC (mtdd includes 5K and 6K tiled modes, 10 bpc)
  • ae2f (same DisplayProductID from ae2e mtdd overlay)
I have not seen an occurrence of ae21, ae2d, or ae2e.

ae22 was 5K only from the Blackmagic eGPU (dual HBR2 connection) before the firmware update. It's also for 6K HBR2 DSC from 5700 XT.

ae2e is interesting because it has a new flag DisableDSC set to 1.
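If anyone wants to poke at these files on their own machine, here's a rough Python sketch (path as given above; the key names vary between override files, so it just dumps whatever keys each plist contains):

```python
#!/usr/bin/env python3
# Rough sketch: list the Apple (DisplayVendorID-610) override plists and their keys.
# Key names vary between files, so this just prints whatever each plist contains.
import plistlib
from pathlib import Path

overrides = Path("/System/Library/Displays/Contents/Resources/Overrides/DisplayVendorID-610")

for f in sorted(overrides.iterdir()):
    if not f.is_file():
        continue
    try:
        with f.open("rb") as fh:
            data = plistlib.load(fh)
    except Exception:
        continue                        # skip anything that isn't a plist
    print(f"{f.name}: {sorted(data.keys())}")
```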
 
ae2e is interesting because it has a new flag DisableDSC set to 1.

Not surprising, TBH. No compression needed.

With DSC, only HBR2 link rate is needed (17.28 Gbps) for 6K 12 bpc (1286.01 MHz -> 61.73 Gbps), which is a compression ratio of 3.57:1, which seems too high? I only saw the 12 bpc option in a screenshot from Windows using an Nvidia RTX card. There's a document showing how 8 bpc supports a 3:1 compression ratio and 10 bpc supports 3.75:1, so I guess it's ok.

3.75 is if you are compressing 30 bpp to 8 bpp. Don’t confuse bpc and bpp in these charts. They are very similar. :)

DSC will let you use bpp values other than 8 to achieve different levels of compression. So it’s more a question of which bpp modes Apple is using with something like the 16” MBP.

I would like someone to post an AGDCDiagnose output for this 6K tiled mode. Then we can know for sure if the GPU is sending two HBR3 signals (required for 6K, non DSC) to the Titan Ridge Thunderbolt controller (which usually cannot support two HBR3 signals, except Apple Macs are doing it? and they made a firmware update for the Blackmagic eGPU to do it too?).

Titan Ridge can indeed take two HBR3 (DP1.3/1.4) inputs. Just look at the retail Titan Ridge card from Gigabyte with its two DP 1.4 inputs as an example.

The issue is more that two HBR3 streams can exceed the TB3 bandwidth. But as you demonstrated, the tile mode is just under that limit. Probably quite intentional on the part of Apple.

What is surprising to me though, is that the MBP only supports two of these, despite having 4 DP connections, and using DSC as pointed out earlier. It should be able to drive 4 if it is using one of the DSC modes. Weird. I wonder if Titan Ridge can accept 2 HBR3 streams but can’t output them on separate ports without exceeding internal limits? Maybe that’s the gotcha?

In which case, that also explains the 3 XDR limit on the W5700X, since it relies on TB3 controllers for output.
 
Not surprising, TBH. No compression needed.
The surprising part is that this is a new property that has never existed before.

3.75 is if you are compressing 30 bpp to 8 bpp. Don’t confuse bpc and bpp in these charts. They are very similar. :)
I think what I said is not wrong. 8bpc=24bpp compresses to 8bpp = 3:1. 10bpc=30bpp compresses to 8bpp = 3.75:1.
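Spelled out, since the per-channel vs per-pixel units are exactly what trips people up (throwaway helper, just showing the arithmetic):

```python
# The ratio arithmetic above, spelled out: the source depth is per channel (bpc),
# the DSC target is per pixel (bpp).
def dsc_ratio(source_bpc: int, target_bpp: int) -> float:
    source_bpp = source_bpc * 3          # RGB: three channels per pixel
    return source_bpp / target_bpp

print(dsc_ratio(8, 8))    # 24 bpp -> 8 bpp  = 3.0:1
print(dsc_ratio(10, 8))   # 30 bpp -> 8 bpp  = 3.75:1
print(dsc_ratio(10, 12))  # 30 bpp -> 12 bpp = 2.5:1
```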

DSC will let you use bpp values other than 8 to achieve different levels of compression. So it’s more a question of which bpp modes Apple is using with something like the 16” MBP.
16" MBP supports DSC, so you can get up to 12 bpc.

Titan Ridge can indeed take two HBR3 (DP1.3/1.4) inputs. Just look at the retail Titan Ridge card from Gigabyte with its two DP 1.4 inputs as an example.
Actually, it seems that only the Apple Titan Ridge controllers and the Titan Ridge controller of the Blackmagic eGPU can take two HBR3 inputs (anyone have an AGDCDiagnose output to prove this?). I tried connecting two HBR3 displays to a GC-TITAN RIDGE, and only one connects at HBR3 speed; the other drops to HBR speed.

The issue is more that two HBR3 streams can exceed the TB3 bandwidth.
You mean two HBR3 streams exceed the TB3 bandwidth. But Thunderbolt does not transmit stuffing symbols, so the full 25.92 Gbps of each HBR3 stream is not sent. Only something between 36.64 and 38.9 Gbps is sent over Thunderbolt (6K 60Hz 10bpc).

What is surprising to me though, is that the MBP only supports two of these, despite having 4 DP connections, and using DSC as pointed out earlier. It should be able to drive 4 if it is using one of the DSC modes. Weird. I wonder if Titan Ridge can accept 2 HBR3 streams but can’t output them on separate ports without exceeding internal limits? Maybe that’s the gotcha? In which case, that also explains the 3 XDR limit on the W5700X, since it relies on TB3 controllers for output.
I think the gotcha is that either the GPU or the macOS driver has a limit. 6K is a lot of pixels - more than dual 4K, which is more than 5K. While there are enough outputs from the GPU to support four displays, the GPU cannot support four 6K displays (or the macOS AMD driver has an artificial limit).

The support document https://support.apple.com/en-ca/HT210754 does not say the 6K displays need to be connected to different sides of the MacBook Pro (16 inch, 2019). That makes sense since each display only requires one DisplayPort signal (because DSC is used). It does say to connect them to different buses for best performance if possible but doesn't say it's necessary.

One strange thing in that support document:
It says it supports:
  • 5K, 5K
  • 5K, 4K, 4K, 4K
However, it doesn't say it can support:
  • 5K, 5K, 4K
even though that option has fewer pixels than 5K, 4K, 4K, 4K. (I think both of those options assume the 5K display is not like the LG UltraFine 5K, which is dual-link SST and therefore requires both DisplayPort signals from a Thunderbolt controller/bus/side.)
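The pixel counts back that up (assuming 5K = 5120x2880 and 4K UHD = 3840x2160):

```python
# Total pixel counts for the combinations discussed above, assuming
# 5K = 5120x2880 and 4K UHD = 3840x2160.
K5 = 5120 * 2880
K4 = 3840 * 2160

combos = {
    "5K + 5K":                    2 * K5,
    "5K + 4K + 4K + 4K":          K5 + 3 * K4,
    "5K + 5K + 4K (not listed)":  2 * K5 + K4,
}
for name, px in combos.items():
    print(f"{name}: {px / 1e6:.1f} megapixels")
```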

A GPU that probably also has pixel limits is the Gen 11 iGPU in Intel's new 10th-gen Core CPUs (Ice Lake). It's the first Intel iGPU to support DisplayPort 1.4 and DSC. I can't find information on what the limits are though.
 
The surprising part is that this is a new property that has never existed before.

Doesn't really surprise me at all. Seems like DSC support at all is relatively new to macOS, and only enabled for Navi, yes? So I'd have expected the property to show up in 10.15.2 with the Navi-dependent DSC support.

I think what I said is not wrong. 8bpc=24bpp compresses to 8bpp = 3:1. 10bpc=30bpp compresses to 8bpp = 3.75:1.

16" MBP supports DSC, so you can get up to 12 bpc.

The remarks on compression ratios aren't what I was commenting about.

And are you talking 12 bpc or bpp in relation to the 16" MBP? That's what I mean by be careful. macOS doesn't use 12 bpc for frame buffers, so 12 bpc doesn't mean anything useful here. Out of curiosity, I've been reading the XDR thread more for details, and I did notice your post talking about "12 bpc" in relation to the XDR hooked up to a 16" MBP, and reading through the log, I do think it was reporting 12 bpp for the link rate. Since units are not provided, I'd expect it to be in the context of the mode it's operating in, since it is more an engineering debugging tool (I've written my fair share like these). Especially since the numbers are coming from the link data, rather than frame buffers. So the 16" MBP is at least operating in a compression mode of 2.5:1, which is much better than the 3.75:1 number you are repeating.

And I just noticed that AGDCDiagnose even spits stuff out wrong: "Link Bits 30 bpc"

Whoops. That should be 30 bpp. Units are fun.

EDIT: I will say that I will find some use of the tools you've mentioned though. I wonder if it would give me any useful information about some link stability issues I have with my setup depending on how I connect up a DisplayPort switch to an eGPU (requires both to reproduce, and only on certain ports of the switch, even more fun).

Actually, it seems that only the Apple Titan Ridge controllers and the Titan Ridge controller of the Blackmagic eGPU can take two HBR3 inputs (anyone have an AGDCDiagnose output to prove this?). I tried connecting two HBR3 displays to a GC-TITAN RIDGE, and only one connects at HBR3 speed; the other drops to HBR speed.

You mean two HBR3 streams exceed the TB3 bandwidth. But Thunderbolt does not transmit stuffing symbols, so the full 25.92 Gbps of each HBR3 stream is not sent. Only something between 36.64 and 38.9 Gbps is sent over Thunderbolt (6K 60Hz 10bpc).

Which is weird, since Apple's using the same chips. The only real answer there is Apple might have a firmware fix that the Gigabyte card doesn't have for the JHL7540 (wouldn't surprise me). The chip itself is obviously capable of it if Apple is doing it. It's not like Apple has custom TB3 controllers here.

I think the gotcha is that either the GPU or the macOS driver has a limit. 6K is a lot of pixels - more than dual 4K, which is more than 5K. While there are enough outputs from the GPU to support four displays, the GPU cannot support four 6K displays (or the macOS AMD driver has an artificial limit).

The support document https://support.apple.com/en-ca/HT210754 does not say the 6K displays need to be connected to different sides of the MacBook Pro (16 inch, 2019). That makes sense since each display only requires one DisplayPort signal (because DSC is used). It does say to connect them to different buses for best performance if possible but doesn't say it's necessary.

It's possible that there's a limit as you say. I haven't worked closely with GPUs in years, so it is possible we're seeing limits to the line buffers that put an upper limit on how big the screens can be in total as well. But assuming that's true, there's not much point in enabling DSC for the XDR in that configuration either, other than to support all three displays being hooked up to the GPU's TB ports. If you have 3 XDRs, you already have enough bandwidth to feed them an uncompressed signal.

But really, it makes a lot of sense to keep DSC disabled when using a W5700X and a 6K XDR. Whichever of us is right about why the limit exists. Or even a compromise of only using DSC if it detects it needs to do so in order to support the displays as they are connected to the system. That would certainly be "user friendly" if not intuitive.

At this point though, even Dell has shrugged at DSC for their 8K display aimed at content creation, opting instead for dual link. Makes me wonder how long before we start seeing DSC-capable 8K displays, and how things shake out in terms of favoring DSC vs dual-link for consumer vs production outside the Apple ecosystem. Since so far it seems like this stuff is still very much in its infancy as an actual shipping technology, and Apple may be one of the first actually using DSC.

One strange thing in that support document:
It says it supports:
  • 5K, 5K
  • 5K, 4K, 4K, 4K
However, it doesn't say it can support:
  • 5K, 5K, 4K
even though that option has fewer pixels than 5K, 4K, 4K, 4K. (I think both of those options assume the 5K display is not like the LG UltraFine 5K, which is dual-link SST and therefore requires both DisplayPort signals from a Thunderbolt controller/bus/side.)

I'm not too surprised by an oversight like this. It took something like the Planar IX2790 for a 27" 5K panel to get matched up to a DisplayPort 1.4 connection anyways. It's borderline grey market kit in the US as it is on top of that.
 
I’m pretty sure DSC with the XDR is already working with the Navi MacBook Pros.

DSC is also quite complicated with many different modes. Some are more lossy than others. DSC is not as basic as just dropping the bytes per pixel.
 
Doesn't really surprise me at all. Seems like DSC support at all is relatively new to macOS, and only enabled for Navi, yes? So I'd have expected the property to show up in 10.15.2 with the Navi-dependent DSC support.
Right. AMD 5300M, 5500M, 5700 XT, W5700X, W5700.


And are you talking 12 bpc or bpp in relation to the 16" MBP? That's what I mean by be careful. macOS doesn't use 12 bpc for frame buffers, so 12 bpc doesn't mean anything useful here. Out of curiosity, I've been reading the XDR thread more for details, and I did notice your post talking about "12 bpc" in relation to the XDR hooked up to a 16" MBP, and reading through the log, I do think it was reporting 12 bpp for the link rate. Since units are not provided, I'd expect it to be in the context of the mode it's operating in, since it is more an engineering debugging tool (I've written my fair share like these). Especially since the numbers are coming from the link data, rather than frame buffers. So the 16" MBP is at least operating in a compression mode of 2.5:1, which is much better than the 3.75:1 number you are repeating.
I'm pretty sure it's 12 bpc. It's in the EDID of the overlay in the ae22 mtdd override file (the ae2e mtdd file has an EDID overlay with 10 bpc). 12 bpc is also in the AGDCDiagnose and DisplayDiagnose outputs.
#11
In the two DisplayDiagnose outputs that I've seen, it's called "BITDEPTH". In the case where HBR2 DSC is used, BITDEPTH is 12. In the case where dual HBR3 is used, BITDEPTH is 10. Maybe the DSC compression algorithm produces a better result with 12 bpc input.

As for frame buffer, I think you're right. I don't think I've seen a 36 bit frame buffer. Only "30-Bit Color (ARGB2101010)" and "24-Bit Color (ARGB8888)".
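For what it's worth, the declared bit depth is easy to pull straight out of raw EDID bytes once you've extracted the overlay's EDID - a minimal sketch (my own throwaway helper), assuming an EDID 1.4-style digital video input byte (byte 20, bits 6..4):

```python
# Minimal sketch: read the declared color bit depth from a raw EDID base block.
# EDID 1.4 digital displays encode it in byte 20, bits 6..4.
def edid_bit_depth(edid: bytes) -> str:
    depths = {0b001: "6 bpc", 0b010: "8 bpc", 0b011: "10 bpc",
              0b100: "12 bpc", 0b101: "14 bpc", 0b110: "16 bpc"}
    video_input = edid[20]
    if not video_input & 0x80:
        return "analog input (no bit depth field)"
    return depths.get((video_input >> 4) & 0x07, "undefined/reserved")
```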

And I just noticed that AGDCDiagnose even spits stuff out wrong: "Link Bits 30 bpc"

Whoops. That should be 30 bpp. Units are fun.
I didn't notice that before.

EDIT: I will say that I will find some use of the tools you've mentioned though. I wonder if it would give me any useful information about some link stability issues I have with my setup depending on how I connect up a DisplayPort switch to an eGPU (requires both to reproduce, and only on certain ports of the switch, even more fun).
Maybe the GPU isn't using a strong enough voltage for the switch. I'm not sure how any diagnostic output is going to help with a bad connection. What is the model of the DisplayPort switch? Are you using it with HBR2 or HBR3?

Which is weird, since Apple's using the same chips. The only real answer there is Apple might have a firmware fix that the Gigabyte card doesn't have for the JHL7540 (wouldn't surprise me). The chip itself is obviously capable of it if Apple is doing it. It's not like Apple has custom TB3 controllers here.
Yes, I think Apple is using different firmware than the default that we get with the GC-TITAN RIDGE or with PCs with Titan Ridge controllers (or new Intel CPUs with built-in Thunderbolt controllers, though those support DSC so it's not a problem there).

It's possible that there's a limit as you say. I haven't worked closely with GPUs in years, so it is possible we're seeing limits to the line buffers that put an upper limit on how big the screens can be in total as well. But assuming that's true, there's not much point in enabling DSC for the XDR in that configuration either, other than to support all three displays being hooked up to the GPU's TB ports. If you have 3 XDRs, you already have enough bandwidth to feed them an uncompressed signal.

But really, it makes a lot of sense to keep DSC disabled when using a W5700X and a 6K XDR. Whichever of us is right about why the limit exists. Or even a compromise of only using DSC if it detects it needs to do so in order to support the displays as they are connected to the system. That would certainly be "user friendly" if not intuitive.
I think maintaining and using a single frame buffer is more efficient and performant. What might be interesting is a frame rate comparison for a game showing the difference between a dual link or stream and single link or stream display at 4K (old 4K MST displays), 5K, and 6K (the tests should avoid scaling and pixel format differences).

The W5700X will probably use DSC like the rest of the Navi cards.

At this point though, even Dell has shrugged at DSC for their 8K display aimed at content creation, opting instead for dual link. Makes me wonder how long before we start seeing DSC-capable 8K displays, and how things shake out in terms of favoring DSC vs dual-link for consumer vs production outside the Apple ecosystem. Since so far it seems like this stuff is still very much in its infancy as an actual shipping technology, and Apple may be one of the first actually using DSC.
Even though the Dell UP3218K (2017) was released after DisplayPort 1.4 (2016), GPUs didn't support DSC until later (Navi/RTX? nothing previously?). DisplayPort 1.4a with DSC 1.2a was released in 2018, but I don't think more than DSC 1.2 would be required for an 8K display with the Dell UP3218K specs (not HDR). I think the Dell works with DisplayPort 1.3 also at 8K.

I'm not too surprised by an oversight like this. It took something like the Planar IX2790 for a 27" 5K panel to get matched up to a DisplayPort 1.4 connection anyways. It's borderline grey market kit in the US as it is on top of that.
Yup, not many 5K single-link displays out there (it's not DSC, so it's only 8 bpc, because DisplayPort 1.4 only has 50% more bandwidth than DisplayPort 1.2 and therefore only 75% of the bandwidth of dual DisplayPort 1.2, which allows 10 bpc on the LG UltraFine 5K).
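The bandwidth arithmetic behind that parenthetical:

```python
# Effective (post-8b/10b) link bandwidth behind the parenthetical above, plus the
# 5K@60 payload at 8 vs 10 bpc.
hbr2 = 4 * 5.4 * 0.8    # one DP 1.2 link: 17.28 Gbps (the LG UltraFine 5K uses two)
hbr3 = 4 * 8.1 * 0.8    # one DP 1.4 link: 25.92 Gbps

print(f"HBR3 / HBR2      = {hbr3 / hbr2:.2f}x")        # 1.50x -> 50% more
print(f"HBR3 / dual HBR2 = {hbr3 / (2 * hbr2):.2f}x")  # 0.75x -> 75% of dual-link

px = 5120 * 2880 * 60                                   # 5K@60 pixels per second
print(f"5K@60  8 bpc is about {px * 24 / 1e9:.1f} Gbps (fits one HBR3 link)")
print(f"5K@60 10 bpc is about {px * 30 / 1e9:.1f} Gbps (needs dual HBR2)")
```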
 
I'm pretty sure it's 12 bpc. It's in the EDID of the overlay in the ae22 mtdd override file (the ae2e mtdd file has an EDID overlay with 10 bpc). 12 bpc is also in the AGDCDiagnose and DisplayDiagnose outputs.
#11

That post doesn't really lend credibility to your claim. It does clarify that the display supports 12 bpc so that the GPU can operate with a 12 bpc framebuffer, but that's its own thing.

If you have DSC enabled, the link is operating in effectively a "single channel" mode. Pixels are a single compressed value (if even that), not a triplet like you have in non-DSC mode. So "bpc" doesn't even really mean anything when DSC is enabled.

It makes more sense to report the compressed bitdepth of the DSC stream for debugging purposes, since that combined with information about the GPU framebuffer lets you calculate the compression level, and detect issues where perhaps you are using the wrong compression ratio or target bitdepth. If you report bpc, and report nothing about the DSC bitdepth mode you are in, there are scenarios you literally cannot debug in the field when customers hit problems and you need these tools to report the state of the link.

And there's zero benefit to using 12 bpc prior to compression if you are using 10 bpc for the framebuffer.

In the two DisplayDiagnose outputs that I've seen, it's called "BITDEPTH". In the case where HBR2 DSC is used, BITDEPTH is 12. In the case where dual HBR3 is used, BITDEPTH is 10. Maybe the DSC compression algorithm produces a better result with 12 bpc input.

That's my point though: BITDEPTH is vague and doesn't specify units. So be careful about attributing units to a unitless number. Because both channels and pixels have a bitdepth, but you use different units for each. And with DSC in play, the context/meaning of bit depth is different as I've pointed out above.

But this is what I mean by it being easy to get confused, because DSC 1.1 supports 8-12 bpc as inputs to the encoder, but also supports 6-12 bpp as outputs from the encoder. Very similar numbers, for two different things if you don't know exactly what the unit/context is, and have to assume.

As for frame buffer, I think you're right. I don't think I've seen a 36 bit frame buffer. Only "30-Bit Color (ARGB2101010)" and "24-Bit Color (ARGB8888)".

Yup. But that's why the 12 bpc makes no sense. There's no reason to pad the input to the encoder, or report on the padded input when talking about the DisplayPort link itself. The more interesting number is the output of the encoder, and what's going over the wire itself.

Maybe the GPU isn't using a strong enough voltage for the switch. I'm not sure how any diagnostic output is going to help with a bad connection. What is the model of the DisplayPort switch? Are you using it with HBR2 or HBR3?

I don't really want to go into my problem on this thread, but it's very specific. So far every GPU I've used in my PC (including the affected GPU) works fine when hooked up internally. The iGPUs and dGPUs of multiple laptops (work and personal) have worked fine. It's only when I have an eGPU hooked up to Port 1 that things go south. On Port 2, with the right cable, it's stable. But I've used both 1.2 and 1.4 sources with this switch without issues.

It's more that the tools can tell me if there is something going on that's interesting, or if it is just a signal degradation issue. It's just weird that it only happens while the GPU is in the eGPU chassis, and I've had connectivity issues over Thunderbolt 3 to the point that I have had to swap out for an active cable.

Honestly, eGPUs themselves are a mixed bag because the Thunderbolt port adds another failure point when something goes wrong.

I think maintaining and using a single frame buffer is more efficient and performant. What might be interesting is a frame rate comparison for a game showing the difference between a dual link or stream and single link or stream display at 4K (old 4K MST displays), 5K, and 6K (the tests should avoid scaling and pixel format differences).

It probably is, the catch is that you do pay a latency penalty for DSC, which depends partly on how you slice the screen up. It's not pure overhead in the sense that it will impact how many frames you can spit out (just the input lag), but Apple hasn't really been optimizing for raw FPS, historically. They seem more interested in trying to maintain fidelity (i.e. 10 bpc, 220 px/inch).

The W5700X will probably use DSC like the rest of the Navi cards.

This is where I fundamentally disagree. At least by default, I'd be surprised if it does. Turning it on doesn't enable anything interesting on the Mac Pro:

  • Doesn't seem to allow a fourth XDR to compete with the Vega II Duo.
  • USB 3.1 to the XDR's ports isn't as interesting on the Mac Pro as on the MacBook Pros. Enabling one-wire connections to the XDR makes a lot of sense for the laptop market, and something Apple has pursued and pushed for.
It's giving up fidelity in exchange for conveniences that the W5700X won't even really take advantage of, or that Apple historically hasn't cared about.

Even though the Dell UP3218K (2017) was released after DisplayPort 1.4 (2016), GPUs didn't support DSC until later (Navi/RTX? nothing previously?). DisplayPort 1.4a with DSC 1.2a was released in 2018, but I don't think more than DSC 1.2 would be required for an 8K display with the Dell UP3218K specs (not HDR). I think the Dell works with DisplayPort 1.3 also at 8K.

DSC 1.1 would have worked, TBH. 1.2 reads a lot like addressing issues with HDR content where you've got 10-12 bpc and it's being used for video content, making the compression less effective due to the higher variety/randomness in the stream content compared to say, your average desktop with a lot of flat colors.

As I said, this stuff is so much in the infancy of the tech, that I'm interested to see how things play out in the longer term. These early models aren't a great indicator one way or another. But honestly, I'm not convinced that DSC will or should be the "default" mode for a display used on the production side of things. But I could turn out to be wrong. The market does what the market does.

But heck, bring it on for consumption displays. I'd prefer DSC to chroma subsampling at high framerates for sure. And to the interpolation artifacts TVs have.
 