I was not stressing the percentage/constant difference, but if I'm not terribly wrong, DPI-aware (important!) apps on Windows can adjust their font sizes and graphics to the actual scale factor, so there is no need for interpolation: the output is always native and rendered with scaling in mind.
This is a more difficult but technically superior solution compared to Apple's "we make the hardware, so let's just render at 200%, and if you need anything else, well, take this interpolated picture that looks good enough" approach.
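For what it's worth, the Windows model boils down to the app asking the OS for the current DPI and doing its own layout in physical pixels. A minimal sketch of just that arithmetic (not the actual Win32 API; on Windows, 96 DPI corresponds to 100% scaling and the value would come from something like GetDpiForWindow):

```python
# Illustrative only: how a DPI-aware app scales its own layout.
# 'dpi' is supplied by the OS; 96 dpi == 100%, 144 == 150%, 192 == 200%.

def to_physical(logical_px: float, dpi: int) -> int:
    """Convert a logical (96-dpi) length to whole physical pixels."""
    return round(logical_px * dpi / 96)

# A 200-logical-pixel button and a 12-logical-pixel font are simply laid out
# and rasterized at their native physical sizes for each scale factor.
for dpi in (96, 144, 192):
    print(dpi, to_physical(200, dpi), to_physical(12, dpi))
```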
It's not that simple, though.
The interpolation always happens, one way or another. You still have to map the mathematical concept (e.g. a line or curve) onto the pixel grid. The main difference is that Apple's solution simply tells you that every logical point is backed by exactly 2x2 physical pixels (which is a lie, of course), while with a flexible backing factor you can have situations where one point is backed by 1.5x1.5 pixels.

The question is then how your app deals with it. Ideally, when drawing a single-point line (and wanting to be correct), the app should use blending (interpolation) on pixels that only partially cover the line. In real life, only a few bother, though; most of the time a cheap algorithm is used which accepts or rejects a pixel based on some coverage threshold. The result? A crisp-looking line which is also wrong.

In fact, Apple's solution can lead to the more correct result under these circumstances. The app can draw the line into the 2x2 backing buffer using a coverage threshold, and the window manager will then interpolate back to the 1.5x scaling factor. This gives you a much better treatment of all those partially covered pixels than what a naive drawing algorithm could do when drawing directly to a native-resolution target. What Apple is doing here is essentially super-sampling AA.

I would disagree with calling this a technically inferior solution. In fact, from a technical standpoint, it's a much more involved and sophisticated system. After all, the OS needs to track and interpolate the dirty rects in a fashion that will not introduce any image artefacts, which is very difficult to pull off properly and much more than the "give the app a buffer and let it do whatever it wants" approach that Windows uses.
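To make the super-sampling point concrete, here is a toy sketch (purely illustrative arithmetic, not how the actual rasterizer or compositor works): a 1-point-thick line rasterized with a 50% coverage threshold directly into a 1.5x pixel grid, versus the same threshold rasterization into a 2x backing buffer that is then box-filtered down to 1.5x.

```python
# 1-D cross-section through a 1-point-thick horizontal line, comparing
#  (a) thresholded rasterization straight into a 1.5x pixel grid with
#  (b) thresholded rasterization into a 2x backing buffer, box-filtered
#      down to 1.5x (roughly what a compositor's downscale does).
# Toy numbers, purely illustrative.

def overlap(a0, a1, b0, b1):
    """Length of the intersection of intervals [a0, a1) and [b0, b1)."""
    return max(0.0, min(a1, b1) - max(a0, b0))

LINE = (3.3, 4.3)   # the line occupies logical y in [3.3, 4.3)
N_LOGICAL = 8       # logical points in the cross-section

def threshold_raster(scale):
    """Pixel is fully on if it is covered by >= 50% of its height, else off."""
    px_h = 1.0 / scale
    return [1.0 if overlap(j * px_h, (j + 1) * px_h, *LINE) / px_h >= 0.5 else 0.0
            for j in range(int(N_LOGICAL * scale))]

def box_downscale(src, src_scale, dst_scale):
    """Average the source buffer over each destination pixel's footprint."""
    src_h, dst_h = 1.0 / src_scale, 1.0 / dst_scale
    out = []
    for j in range(int(N_LOGICAL * dst_scale)):
        y0, y1 = j * dst_h, (j + 1) * dst_h
        acc = sum(v * overlap(k * src_h, (k + 1) * src_h, y0, y1)
                  for k, v in enumerate(src))
        out.append(acc / dst_h)
    return out

direct = threshold_raster(1.5)                            # crisp, but the line ends up too thin
via_2x = box_downscale(threshold_raster(2.0), 2.0, 1.5)   # softer, but the right amount of ink
print(direct)
print([round(v, 2) for v in via_2x])
```

With these numbers the direct pass lights up a single physical pixel (about 2/3 of a point) for a line that should cover 1.5 pixels, while the supersampled path spreads the correct total coverage over two pixels at the cost of some softness.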
The only real issue I can see with Apple's solution is when you really need pixel precision, which you of course don't get with OS X HiDPI, because you are drawing to backing pixels that end up at unpredictable positions in the final pixel grid. But with modern high-resolution screens, I very much doubt there are many valid reasons to want pixel precision.
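To illustrate the "unpredictable position" part: when a 2x backing buffer ends up on a display that is effectively scaled to 1.5x, each backing pixel is 0.75 physical pixels wide, so its edges drift through the physical grid and only coincide with a pixel boundary every fourth backing pixel. A quick sketch of that arithmetic (illustrative only):

```python
# Edges of 2x backing pixels mapped onto a 1.5x physical grid:
# one backing pixel = 1.5 / 2 = 0.75 physical pixels.
for k in range(6):
    left = k * 1.5 / 2   # physical x of backing pixel k's left edge
    print(f"backing px {k}: left edge at physical x = {left:.2f}")
```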
Now, the reason Windows is commonly known to have 'crisper' graphics is that it abuses pixel-perfect rendering. This means distorting the underlying mathematical representation of the image for the sake of better alignment with the final pixel grid. There is a lot written on the topic of OS X vs. Windows font rendering. In the end, I guess it's a matter of personal aesthetic preference. As for me, I prefer to see things as they were actually intended by the artist, and not how they happen to look when mapped to the closest pixel.
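To be clear about what I mean by "pixel-perfect": in its crudest form, grid-fitting nudges coordinates so that edges land exactly on pixel boundaries, trading geometric accuracy for crispness. A toy sketch (the numbers and the snapping rule are made up for illustration; real hinting is far more elaborate):

```python
# Crudest possible grid-fitting: snap both edges of a (hypothetical) glyph
# stem to the nearest physical pixel boundary. The edges become crisp, but
# the stem's position and width are no longer what the designer specified.

def snap(x, scale):
    """Snap a logical coordinate to the nearest physical pixel boundary."""
    return round(x * scale) / scale

scale = 1.5
stem_left, stem_width = 10.37, 1.2          # made-up stem geometry, in points
left = snap(stem_left, scale)
right = snap(stem_left + stem_width, scale)
print(f"exact:   [{stem_left:.2f}, {stem_left + stem_width:.2f}]  width {stem_width:.2f} pt")
print(f"snapped: [{left:.2f}, {right:.2f}]  width {right - left:.2f} pt")
```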