I think my takeaways from this discussion, combined with my own observations, are the following:
(1) Smartphone image quality is still measurably lower than that of most larger sensors, though in certain situations computational/AI methods help fill in some of the gaps, either during capture or in post-processing. Sensor quality is also improving generationally each year, eating into 1-inch and perhaps slightly larger sensors for some, but not all, situations.
(2) For a lot of people, smartphone image quality and point-and-shoot capabilities are adequate for their needs (social media, etc.).
(3) It’s possible for good photographers to get nice images from smartphones in certain genres and certain situations.
(4) Computational photography will continue to fill some of the quality gap for small sensors in certain situations, getting better each year. Examples are computational depth of field and Portrait Lighting. To me, these still don’t look nearly as good as optical approaches, but they may work well for certain needs.
(5) Computational photography in its various guises will continue to augment larger-sensor photography too, from (seemingly) everyone’s favorite Eye AF and other AF modes, to post-processing such as the Topaz software, all the way to the frame averaging in the Phase One backs, if $60k cameras are your thing (see the quick sketch after this list).
(6) The larger sensors also continue to improve, whether in pixel count, price, off-loading technology, how well they handle color and noise, and many other things.
(7) Video, if you’re into that, is apparently getting much better on the larger sensors, and the phone sensors are doing well in certain situations too. I really can’t speak to video; this is just my understanding.
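As an aside on (5): the appeal of frame averaging is easy to show with a toy simulation. The Python sketch below is not any vendor's actual pipeline, just a minimal illustration (with made-up numbers) of why averaging N aligned frames reduces random noise by roughly a factor of sqrt(N).

```python
# Toy illustration of frame averaging (not any camera maker's real pipeline):
# averaging N aligned exposures of a static scene cuts random noise ~sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" scene: a flat mid-gray patch, values in [0, 1].
scene = np.full((256, 256), 0.5)

def capture(scene, noise_sigma=0.05):
    """Simulate one noisy exposure of a perfectly aligned, static scene."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

n_frames = 16
frames = [capture(scene) for _ in range(n_frames)]
averaged = np.mean(frames, axis=0)

print("single-frame noise std: %.4f" % np.std(frames[0] - scene))
print("averaged noise std:     %.4f" % np.std(averaged - scene))
# Expect roughly 0.05 / sqrt(16) = 0.0125 for the averaged result.
```

Real implementations obviously have to handle alignment, motion, and clipping, which is where most of the engineering goes; the sketch only shows the statistical payoff.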
Did I miss anything?