...(pseudo-intellectual babble)...
You're projecting, Bubba. Please refresh my memory on WHO jumped into the fray babbling utter nonsense about "Student's t-tests", "statistically significant differences" and "variance" -- of a SINGLE data point, no less.
When you find yourself in a hole of your own making -- STOP DIGGING!
Are you seriously arguing that a single point measurement is somehow going to accurately assess the monitor?
Yet another lame straw man. The weasel-worded phrase "assess a monitor" is nothing but a transparently dishonest attempt to muddy the waters and divert attention from your astoundingly idiotic "statistical" pronouncements.
The subject was not "assessment" (whatever that might mean) -- it was a discussion of LUMINANCE UNIFORMITY, which (unfortunately for you) is an easily-understood, precisely-defined, and unambiguously quantifiable property of LCD displays.
Widely-accepted international standards define and specify luminance uniformity as a simple ratio of maximum to minimum brightness. No statistics involved; just divide the brightest by the darkest. Any errors in locating the absolute brightest/darkest areas of the screen simply produce a slightly optimistic result. No big deal. No one cares if it's off by a few percent.
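Since you apparently need it spelled out, here is the entire "analysis" as a few lines of Python. The numbers are purely illustrative -- they are NOT Mr. Sushi's readings:

# Luminance uniformity, per the usual display standards: brightest / darkest.
brightest = 210.0   # hypothetical maximum luminance of the screen (any linear unit)
darkest = 150.0     # hypothetical minimum luminance, same unit

ratio = brightest / darkest          # 1.40 -- the bright spot is ~40% brighter
print(f"uniformity ratio: {ratio:.2f}")

No statistics required. One division.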
And why exactly do you think that a repeat measurement will give the exact same result??
As I already explained, there is nothing to repeat -- and no reason to repeat it. The data-set under discussion (a more-than-sufficient 10 million+ discrete RGB brightness measurements) was acquired by Mr. Sushi's superb Canon IXY DIGITAL 700 camera -- in a matter of 20 milliseconds -- and promptly recorded on his SD flash card. Measurement complete! Everything after the shutter-click is data analysis -- NOT measurement.
Are you seriously suggesting that replicate high-quality digital photographs of the same display -- taken back-to-back, on the same day -- would yield substantially different results? Remember, Bubba, we're NOT talkin' about ppm precision hair-splitting; we're talkin' about macroscopic brightness differences -- on the order of tens (or hundreds!) of percent.
Please tune in to Sesame Street for a refresher course on "big" and "little."
Oh wait a sec... you're not so stupid that you're measuring the screen with the DigitalColor Meter utility, ARE YOU?? Haha! Nice Experimental Design, GENIUS! Yeah, that will tell you exactly what the computer has mapped to that area of the screen, but it will not assess the actual output of the screen. Get a monitor-calibrating device, and do repeat measurements.
What a maroon! I wouldn't have thought it possible that anyone capable of operating a keyboard could be so invincibly dense. The "actual output" of the iMac screen was ALREADY measured once -- by Mr. Sushi's camera. There is nothing more to measure. The data is now written in silicon; no matter how many times you "sample" it, it remains unchanged.
Anyone who follows your sage technical advice will be measuring the "actual output" of Mr. Sushi's display -- MULTIPLIED BY -- the "actual output" of the second display. ...but why stop at only two?
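In case the arithmetic is too much for you, here's a toy illustration. The percentages are hypothetical, pulled out of thin air purely to make the point:

# Hypothetical relative luminance of the SAME corner region on two displays.
sushi_corner = 0.80    # Mr. Sushi's iMac: corner 20% dimmer than center (made up)
viewer_corner = 0.90   # second display used to view the photo: 10% dimmer (made up)

# What a colorimeter pointed at the re-displayed photo actually reports:
reading = sushi_corner * viewer_corner
print(reading)   # ~0.72 -- both displays' non-uniformities, hopelessly conflated

Whose "actual output" is that, exactly?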
The ideal method of analyzing the 10 million+ RGB measurements acquired by Mr. Sushi's camera would be to programmatically extract and process the binary JPG data -- directly from the camera's SD flash card.
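Something along these lines -- a rough sketch only, assuming Pillow and NumPy happen to be installed, with an obviously made-up file path, and a common luma approximation of my own choosing:

# Read the JPG straight off the SD card and crunch the RGB samples.
import numpy as np
from PIL import Image

# Assumes an RGB JPEG whose frame is filled by the screen (crop first otherwise).
img = np.asarray(Image.open("/Volumes/SD_CARD/DCIM/IMG_0001.JPG"), dtype=float)

# Rough per-pixel brightness (Rec. 601 luma weights). Camera gamma is NOT undone,
# so the ratio is in encoded units -- still fine for telling "big" from "little."
luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

# Average over small patches so one hot or dead pixel can't dominate the result.
h, w = luma.shape
p = 64
patches = luma[: h // p * p, : w // p * p].reshape(h // p, p, w // p, p).mean(axis=(1, 3))

print("uniformity ratio:", patches.max() / patches.min())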
The next best approach is an interactive graphical examination/analysis of the JPG's data bits -- exactly as they are "mapped" by the computer into an area of video memory. I freely admit that DigitalColorMeter.app has its limitations. OTOH, it's universally available, and good enough for detecting gross non-uniformities and distinguishing between "big" and "little."
And FWIW, the correct statistical test is a 2-tailed Student's t-test.
Oooh! Now we have a TWO tailed test ... for the same ol' ONE data point!
...stupidity has no asymptote!
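And for the record, here's what Python's own standard library thinks of the "variance" of one data point (any single number will do):

# Sample variance of a single value is undefined -- the library says so itself.
import statistics

try:
    statistics.variance([42.0])   # one lonely "data point"
except statistics.StatisticsError as err:
    print(err)   # "variance requires at least two data points"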
LK