Thanks, Phrasikleia-- a very thorough answer to many questions. One more... (actually...several)
When you say to take an 18 MP file and "normalize" it to 12 MP in order to compare with a file from a 12 MP sensor, are you saying that each camera, respectively, would produce an 18 MP file and a 12 MP file (RAW?). When I check my RAW files, the sizes vary from 10.1 MB to 12.4 MB as .NEF files using 12-bit conversion, and well over 15 MB using 14-bit conversion. I also have the option of several types of RAW compression, including lossless and uncompressed, all of which give me different file sizes. So the question is: how do you normalize, say, a file from your 18 MP sensor camera and one from my 12.4 MP sensor camera? I know you don't crop, because the sensors are very similar in size, although the Nikon crop sensor is slightly larger than the Canon. So what's the formula? Is it a ratio of one sensor's megapixel count to the other's? Given the differences in file size, depending on the particular image, 12- vs. 14-bit RAW, and compressed vs. uncompressed, wouldn't that produce an inconsistent result for normalization?

Then there's the matter of sensitivity: if the lower-megapixel camera had a larger dynamic range, and thus more shadow and highlight detail in a given image than the higher-megapixel camera, it would actually record detail never seen in the higher-resolution camera. Nothing will bring that detail out of a sensor that can't capture it, regardless of how many pixels are there.
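If I'm reading you right, the "formula" would have nothing to do with file size in megabytes at all; it would just be a resampling of pixel dimensions, with the linear scale factor being the square root of the megapixel ratio. Something like this sketch, maybe (the 5184 x 3456 dimensions are just illustrative, not any particular camera's specs):

```python
import math

# Illustrative pixel dimensions for an ~18 MP image, and a 12.4 MP target.
src_w, src_h = 5184, 3456            # ~17.9 million pixels
target_mp = 12.4e6

# Normalizing resizes pixel *dimensions*, not file bytes. Since pixel
# count scales with area, the linear scale factor is the square root
# of the megapixel ratio.
scale = math.sqrt(target_mp / (src_w * src_h))
new_w, new_h = round(src_w * scale), round(src_h * scale)

print(scale, new_w, new_h)   # -> roughly 0.832, giving 4313 x 2875
```

That would make bit depth and RAW compression irrelevant to the normalization itself, since they change how the pixels are stored, not how many there are. Am I on the right track?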
I understand in principle what you're saying: that the original 18 MP sensor image, if "normalized" to match the 12.4 MP sensor's output by discarding some data, would still contain more detail. But that's where I get lost. The way my non-digital mind works is this: you have a real-life scene before you (unlimited pixel density there, if you can think of it like that). The two cameras record the scene on their respective sensors. The one with more pixels resolves more detail from the scene at 100%. Now, to normalize that sensor's image down to an equivalent 12.4 MP file, something has to go: some megapixels. Wouldn't that be as if the 12.4 MP sensor had recorded whatever was on the 18 MP sensor, losing some of its detail in the process? You're tossing something, however you look at it. Both cameras record from the maximum-resolution image, real life. By the time you normalize the higher-density sensor to match the less dense one, aren't they capable of exactly the same detail?
I know I'm missing something... and my brain needs a good night's sleep...