maximile said:
I don't really understand that. If that's the case, then how come HDR photos can be generated from a single RAW capture (according to Photomatix's page, anyway)? That suggests to me that the camera's sensors can capture plenty of dynamic range. Am I misunderstanding something?
Ok, I'll try to explain this as simply as I can.
Dynamic range is the difference between the brightest light that a sensor can record without clipping and the dimmest light it can record without clipping.
Imaging sensors are based on the photoelectric effect. To register dim light, a minimum number of photons must strike a pixel before a measurable current is induced. Because every electrical circuit has some noise (tiny fluctuations in current), a careless design would interpret those stray currents as a low-light signal. For the sensor to be sensitive to low light without mistaking noise for signal, it must have an extremely low "noise floor." In industrial applications this is often achieved by cooling the sensor. That's not practical in consumer gear, so the designer implements a clipping rule that says, "if the current induced in a particular pixel is less than x milliamps, consider that pixel off." Because x is a hard cutoff, we say that the sensor "clips" any signal below x.
The same applies to the brightest light. As the number of photons striking a pixel increases, a stronger photoelectric current is induced. But if you keep increasing the intensity, the pixel eventually becomes "saturated". Every sensor has its own saturation current; let's say it is y milliamps. Any pixel that produces more than y milliamps is recorded at a fixed maximum intensity. In other words, any light intense enough to produce more than y milliamps is "clipped", or truncated, to that fixed upper value.
Thus, the sensor cannot see light below x milliamps or above y milliamps. The ratio between x and y (often quoted in stops or decibels) is the dynamic range of the sensor.
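If it helps to see the idea in (toy) code, here's a minimal sketch in Python, with x and y set to made-up values purely for illustration:

```python
import numpy as np

# Toy model of the clipping described above. The values of x and y are
# made up purely for illustration; real sensors are specified differently.
x = 0.05   # hypothetical noise floor: anything below this is treated as "off"
y = 12.8   # hypothetical saturation level: anything above this is truncated

def record(pixel_currents):
    """Simulate what the sensor reports for a set of induced currents."""
    currents = np.asarray(pixel_currents, dtype=float)
    out = np.clip(currents, 0.0, y)       # truncate anything above saturation
    out[currents < x] = 0.0               # anything below the floor reads as black
    return out

scene = [0.01, 0.05, 1.0, 12.8, 50.0]     # currents induced by parts of a scene
print(record(scene))                      # [0.  0.05  1.  12.8  12.8]

# Dynamic range is the ratio y/x, often quoted in stops (powers of two):
print(np.log2(y / x), "stops")            # 8.0 stops for these made-up numbers
```

Anything darker than the floor or brighter than the ceiling comes out identical in the recording, no matter how different it was in the scene.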
Film has a lower x and a higher y than digital sensors.
The human eye has an even lower x and an even higher y than film.
HDR processing does not expand the dynamic range that was captured. Instead, it produces an image in which no part looks underexposed or overexposed. In effect, HDR balances the exposure, and that gives a result more in line with what the eye sees, because the eye does not overexpose or underexpose parts of the same scene to the degree that digital sensors do. That is also what is happening when Photomatix generates an "HDR" image from a single RAW file: the software tone-maps the range the sensor already recorded, it does not add new range. HDR brings photography one step closer to the capabilities of the human eye, but it cannot recover clipped shadows or blown highlights.
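As a toy illustration (this is just a generic global tone curve, not Photomatix's actual algorithm), here's what "balancing the exposure" of a single capture looks like in code, and why it can't bring back clipped values:

```python
import numpy as np

# "linear" holds normalised sensor values, where 0.0 is a clipped shadow
# and 1.0 is a blown highlight.
linear = np.array([0.0, 0.02, 0.18, 0.5, 1.0])

def tone_map(v):
    """Reinhard-style curve: lifts shadows, compresses highlights."""
    curve = v / (1.0 + v)
    return curve / (1.0 / (1.0 + 1.0))    # renormalise so 1.0 still maps to 1.0

print(tone_map(linear))
# -> roughly [0.0, 0.04, 0.31, 0.67, 1.0]
# The 0.02 and 0.18 values come up noticeably (shadow detail becomes visible),
# but 0.0 stays 0.0 and 1.0 stays 1.0: whatever was clipped at capture is gone.
```

The curve redistributes the tones the sensor recorded; it has nothing to work with where the sensor recorded pure black or pure white.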
What I am looking for is a new generation of sensors that can see more at night and more in the noontime sun. Sensors like that will produce fantastic shadow detail as well as fantastic highlight detail, and we'll be another step closer to matching our own eyes.