Not really a fan of the increased PPI in the new MacBooks; I’d rather have the old 220 PPI, which doesn’t make the UI elements too small.
You can still select the “everything looks bigger” scaling option, which would give you the same sizes as the older model, with the same visual fidelity.
To start, what's the general statement of how this works? Is it that if the user-selected ("looks like") resolution is a x b, the OS will render it at 2a x 2b, and then downsample to the next lower integer fraction of the monitor's native resolution? E.g., using your example, how would that work if the user selected the default "looks like" of 1440 x 900?
The OS renders everything at 2x2 (so at 4x super-resolution). The rendered image is then resampled to the native resolution (most likely using a linear filter). This technique is also known as supersampling anti-aliasing (SSAA).
In the case of 1440x900, the OS renders at 2880x1800. Then this image is displayed at the native resolution of the display. On an older MBP no resampling is needed, as the panel resolution matches the image. On the new MBP it is resampled to match the 3456x2234 panel. No information is lost, as we are upsampling onto a very dense raster.
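If it helps to see the arithmetic, here is a minimal sketch in Swift of the resampling just described; the type and function names are made up for illustration (nothing here is an actual macOS API), and the panel resolutions are the ones mentioned in this thread:

```swift
// Minimal sketch of the resampling arithmetic described above.
// Names are made up for illustration; this is not a macOS API.
struct Resolution {
    let width: Int
    let height: Int
}

// Fixed 2x2 supersampling: every point is backed by four pixels.
func backingBuffer(for looksLike: Resolution) -> Resolution {
    Resolution(width: looksLike.width * 2, height: looksLike.height * 2)
}

// Horizontal ratio applied when the backing buffer is resampled to the panel.
// 1.0 means a pixel-exact 1:1 mapping; anything else means resampling.
func presentationScale(backing: Resolution, panel: Resolution) -> Double {
    Double(panel.width) / Double(backing.width)
}

let looksLike = Resolution(width: 1440, height: 900)
let backing   = backingBuffer(for: looksLike)          // 2880 x 1800

let oldPanel = Resolution(width: 2880, height: 1800)   // older 220 ppi MBP
let newPanel = Resolution(width: 3456, height: 2234)   // new 254 ppi MBP

print(presentationScale(backing: backing, panel: oldPanel)) // 1.0 -> no resampling needed
print(presentationScale(backing: backing, panel: newPanel)) // 1.2 -> upsampled onto the denser grid
```

The only thing that changes between the two panels is that final resampling factor.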
More to the point, how would the process you described be fundamentally different for a screen that's 218 ppi vs. one that is 254 ppi? It seems the process would be the same either way, i.e., that neither of these pixel densities is "privileged" by the OS. Yet
@Krevnik, whose post you agreed with (you gave it a like), seems to be saying that one of them is:
It’s not different at all. The PPI literally doesn’t matter.
"Apple’s been using a scaled resolution on their MBPs by default for years now. Using a 110 pts/inch display (@2x is 220px/inch) but rendering at 127 pts/inch."
I.e., Krevnik seems to be saying there is something about 127 pts/inch that is privileged based on how macOS does its rendering, which would in turn explain why Apple went with 2 x 127 ppi = 254 ppi on their new MBPs (which was my original question). But I'm not seeing this from the explanations you folks have been trying to give me, so clearly there's a big missing piece I'm not getting. That's why I asked Krevnik, and I'll ask you the same thing: could you refer me to a detailed, well-written technical paper on this that lays it all out?
It’s simple, really. Apple used the 1440x900 resolution for a long while on their large laptop line, and the HiDPI version of that is 2880x1800. But at some point, with the world moving towards Full HD on compact laptops, that resolution felt dated, so they moved the default to 1680x1050. That involved a slight loss of visual fidelity, since the supersampled 1680x1050 didn’t map exactly to the hardware resolution. Now, with the M1 models, they have increased the PPI to restore that fidelity. That’s it.
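To put numbers on it (the 1728x1117 default of the new 16-inch model is just half of its 3456x2234 panel; the helper name here is mine):

```swift
// Width of the 2x backing buffer divided by the panel width:
// 1.0 means a clean 1:1 mapping, anything else means fractional resampling.
func downscaleFactor(looksLikeWidth: Int, panelWidth: Int) -> Double {
    Double(looksLikeWidth * 2) / Double(panelWidth)
}

// Old 2880x1800 panel:
print(downscaleFactor(looksLikeWidth: 1440, panelWidth: 2880)) // 1.0    -> pixel-exact
print(downscaleFactor(looksLikeWidth: 1680, panelWidth: 2880)) // ~1.167 -> 3360 backing px squeezed into 2880

// New 3456x2234 panel:
print(downscaleFactor(looksLikeWidth: 1728, panelWidth: 3456)) // 1.0    -> pixel-exact again
```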
For a technical paper you can refer to Apple documentation here:
https://developer.apple.com/library...d.html#//apple_ref/doc/uid/TP40012302-CH4-SW1
OK, I think I have it: there are TWO STEPS here.
For the first step, the Mac needs to convert its internal bitmap (127 pts/in) to the native resolution of the display. If the display is 254 ppi, then that's ideal, because after doing integer upsampling to 254 ppi, no non-integer downsampling is needed (the latter would, by contrast, be needed with, say, a 218 ppi display). Question: How much effect does this have on quality?
Since I don't know the proper term of art, let's call the output from step one the "external bitmap", which is now at the display's native ppi.
For the second step, the Mac needs to take the external bitmap and apply it to the display. If the user wishes the actual sizing to correspond to some integer fraction of the resolution of the display, that's ideal, because no non-integer scaling is needed.
In summary, there are two potential sources of artifacts:
Type 1: Those incurred in going from the 127 pts/in internal bitmap to a display whose native resolution is not an integer multiple of this.
Type 2: Those incurred in displaying the output of step 1 at a non-integer fraction of the native resolution of the display ("non-integer scaling").
Thus (the little sketch after this list just restates the same thing in code):
- 254 ppi display used at integer scaling: no Type 1 or Type 2 artifacts.
- 254 ppi display used at non-integer scaling: Type 2 artifacts only.
- 220 ppi display used at integer scaling: Type 1 artifacts only.
- 220 ppi display used at non-integer scaling: both Type 1 and Type 2 artifacts.
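Here's that same table as a tiny Swift check, just to restate my own bookkeeping (the names and the whole Type 1/Type 2 framing are mine, not anything macOS actually exposes):

```swift
// My Type 1 / Type 2 bookkeeping from the list above, as a toy check.
// Nothing here is a macOS API; 127 pts/in is the rendering density
// Krevnik mentioned, 254 and 220 are the display densities discussed.
struct Setup {
    let displayPPI: Int       // e.g. 254 or 220
    let integerScaling: Bool  // true when the "looks like" size is an
                              // integer fraction of the panel resolution
}

func artifacts(for s: Setup, internalPPI: Int = 127) -> [String] {
    var result: [String] = []
    if s.displayPPI % internalPPI != 0 {
        result.append("Type 1")   // panel density is not a multiple of 127
    }
    if !s.integerScaling {
        result.append("Type 2")   // fractional resample at presentation time
    }
    return result.isEmpty ? ["none"] : result
}

print(artifacts(for: Setup(displayPPI: 254, integerScaling: true)))   // ["none"]
print(artifacts(for: Setup(displayPPI: 254, integerScaling: false)))  // ["Type 2"]
print(artifacts(for: Setup(displayPPI: 220, integerScaling: true)))   // ["Type 1"]
print(artifacts(for: Setup(displayPPI: 220, integerScaling: false)))  // ["Type 1", "Type 2"]
```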
But this still leaves the question: why 127 pts/in in the first place? Given that their early LCD monitors were typically ~100-110 ppi, wouldn't one of those densities have made more sense (so they could avoid the Type 1 artifacts for such displays)?
I think you are massively overcomplicating all this.
It’s all very simple, really. First, forget about native resolution; that’s a red herring. The most important thing is that you have three kinds of “pixels” in play (a short sketch after the list shows how they fit together):
- logical pixels, called points (the pixels the OS and your UI work with)
- rendered pixels, called the backing pixels (the pixels the system actually renders to)
- hardware pixels, the “real” pixels, the ones the panel can physically display
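As a rough sketch (illustrative numbers and made-up names, not an actual API):

```swift
// The three kinds of "pixels" from the list above, as a toy model.
struct Size { let w: Double; let h: Double }

let points   = Size(w: 1440, h: 900)        // logical pixels (points): what the OS and your UI work with
let scale    = 2.0                          // fixed Retina backing factor
let backing  = Size(w: points.w * scale,
                    h: points.h * scale)    // backing pixels: 2880x1800, what actually gets rendered
let hardware = Size(w: 3456, h: 2234)       // hardware pixels: what the panel can physically light up

// Apps only ever deal with `points` and `backing`; the backing buffer is
// resampled to `hardware` at presentation time, outside the app's control.
print(backing, hardware)
```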
In the good old times, points and the backing buffer were the same thing. You set the system to a 1440x900 resolution, you got a 1440x900 framebuffer, you rendered to it, and you sent that off to the display. If the framebuffer you sent didn’t match the hardware resolution, well, you would probably get a blurry picture. But the most important thing is that neither the system nor the apps knew or cared about the hardware resolution; for them there was only one resolution to work with, and that was 1440x900.
Retina displays change the equation by differentiating between the logical pixels and the backing pixels. You still get 1440x900 logical pixels (points), but each point is now backed by 2x2 pixels in your framebuffer (the backing buffer). So you render everything at a supersampled resolution and then send that supersampled framebuffer to the display. And just like before, you have to interpolate so that it fits the hardware resolution. Only now you have many more hardware pixels and can get away with more things without visibly degrading image quality. But again, neither the system nor the app cares about the “native resolution”. They only care about points and pixels. That’s it.
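If you want to see the point/backing split on your own machine, a quick check with the real AppKit API is enough (run it in a GUI session; NSScreen.main can be nil otherwise):

```swift
import AppKit

// Inspect the point -> backing-pixel relationship of the current display.
if let screen = NSScreen.main {
    let pointSize = screen.frame.size        // logical size, in points
    let scale = screen.backingScaleFactor    // 2.0 on Retina panels
    print("Points:  \(Int(pointSize.width)) x \(Int(pointSize.height))")
    print("Backing: \(Int(pointSize.width * scale)) x \(Int(pointSize.height * scale))")
}
```

Neither of those numbers is defined by the panel’s native resolution, which is exactly the point.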
The beauty of this system is that it’s very simple and provides the best possible image quality. Loss of visual fidelity can only occur at the presentation stage - when the backing buffer is scaled to the hardware display - but since you are rendering at a supersampled resolution you always get the best possible rendering. And the software only ever has to deal with basic SSAA - the fact that a point is backed by 2x2 pixels - which makes everything simpler. Fractional scaling, native resolution - you don’t have to care about these things because they happen implicitly at presentation time.
Contrast this with the method used by other systems, where the backing buffer matches the hardware resolution and you instead need to render things at a fractional scale directly. So instead of a fixed 2x2 backing factor that can be hard-coded, you need to tweak your rendering to match the chosen scaling factor. At first it sounds like a less wasteful system (fewer pixels to process compared to Apple’s supersample-everything approach), but processing more pixels is extremely cheap on modern hardware, while fractional rendering can be algorithmically challenging, and all that burden is now on the developer. Apple’s approach trades some extra (negligible) work for the best possible image quality while keeping the developer’s life as simple as possible.
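To make the hairline problem concrete (numbers purely illustrative):

```swift
// A 1 pt hairline under a fixed 2x backing factor vs. direct fractional scaling.
let hairlineInPoints = 1.0

// Fixed 2x backing: always exactly 2 backing pixels; any fractional
// adjustment happens later, at presentation time, outside the app.
let fixedBacking = hairlineInPoints * 2.0             // 2.0 px

// Fractional scaling rendered straight into the hardware buffer: the app
// or toolkit has to decide how to snap the non-integer result itself.
let fractional = hairlineInPoints * 1.5               // 1.5 px
let snappedThin  = fractional.rounded(.down)          // 1.0 px -- thinner than intended
let snappedThick = fractional.rounded(.up)            // 2.0 px -- thicker than intended

print(fixedBacking, fractional, snappedThin, snappedThick)
```

With the fixed 2x factor the app never has to make that call; whatever compromise there is gets hidden in the final resample.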