Me too! I'd like to know if a video that is pinched in to zoom is technically an enhanced image.
But can it distort the image in such a way as to make it appear different from the original? Technically an enhanced, "new" image? We're talking about zooming into a video for the purpose of highlighting something that could end up putting someone in jail for the rest of their life. It may seem like a trivial matter, but perhaps an important one if you were in such a position and a prosecutor was trying to persuade a jury by changing the viewpoint of the original video.
I'm not saying that it does make a difference, I'd just like to know for sure either way. I'm open to either conclusion.
I don't know how many different ways I can explain the same concept to you.
Yes, again: it alters the image from the original in order to resize it. And yes, it's a "new" image. The resulting image won't even necessarily have the same color attributes after resizing.
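To make that concrete, here's a toy sketch in plain Python (my own illustration, nothing to do with the actual video being discussed): it upscales a tiny 2x2 grayscale "image" with bilinear interpolation, the same basic idea display scalers use, and the output contains pixel values that never existed in the original.

def bilinear_upscale(src, new_w, new_h):
    # src is a list of rows of grayscale values (0-255)
    src_h, src_w = len(src), len(src[0])
    out = []
    for j in range(new_h):
        y = j * (src_h - 1) / (new_h - 1)           # map back into the source grid
        y0, y1 = int(y), min(int(y) + 1, src_h - 1)
        fy = y - y0
        row = []
        for i in range(new_w):
            x = i * (src_w - 1) / (new_w - 1)
            x0, x1 = int(x), min(int(x) + 1, src_w - 1)
            fx = x - x0
            # weighted average of the four surrounding source pixels
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            row.append(round(top * (1 - fy) + bot * fy))
        out.append(row)
    return out

original = [[0, 255],
            [255, 0]]       # only pure black and pure white exist here

for row in bilinear_upscale(original, 5, 5):
    print(row)              # greys like 64, 128, 191 appear out of nowhere

Every value other than 0 and 255 in that output was invented by the math, which is exactly the sense in which a resized image is a "new" image.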
It's not a trivial matter. One cannot simply "enlarge" images and be certain they still have the original image, because there is no "original" image. The entire thing is a construct based on 0s and 1s. Even the "original" image is being displayed via algorithms. This is not a photograph, which actually could be enlarged and relied upon up until one exceeded the resolution of the capture.
I'm so confused how this is news to any members of a tech site. Where have you been all your life? When something is remastered, they go back to the original analog sources, not to some existing HD source. If you try to digitize an old VHS or cassette tape, you're going to find it won't, and can't, be better than the original without that interpolation, which is what every HD display device does to non-HD content. You witnessed this in your own life when we switched from interlaced to progressive displays and from CRTs to LCDs.
I don't remember which iPad they were discussing but let's assume it's a 2019 10.2" iPad with 2160x1620 resolution. That's nearly 3.5 million pixels. If you take a picture of your face and display it on that iPad, your face will use all of those pixels. If you then enlarge your face to show only your eye, your eye will have to take up all of those pixels (unless you do a 1:1 crop, which wouldn't result in a larger picture so not relevant). How do you think all of those pixels are filled and what do you think they are filled with?
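For a sense of scale, here's the back-of-the-envelope arithmetic. The 2160x1620 panel comes from the post above; the 200x150 "eye" patch is just a number I made up for illustration.

display_w, display_h = 2160, 1620            # 2019 10.2" iPad panel
display_pixels = display_w * display_h
print(display_pixels)                        # 3,499,200 -- the ~3.5 million

# say the eye you zoomed in on covers only a 200x150 patch of the actual frame
source_pixels = 200 * 150
print(source_pixels)                         # 30,000 real pixels

# everything beyond those 30,000 has to be invented by the scaling algorithm
print(round(display_pixels / source_pixels)) # ~117 displayed pixels per real one

In other words, at that zoom level well over a hundred pixels on screen get painted for every pixel of real data, and the painter is the interpolation algorithm.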
You can experiment with this on your own devices. Take a photo and keep enlarging it until it becomes blocky and incomprehensible. It will not just keep zooming in on a clear picture, like what would happen if you, as someone else mentioned, looked at a photograph with a magnifying glass. If you enlarge a photo taken with a crappy lens, you'll just end up with a blurry photo. If you enlarge one that was taken with a lens with high enough resolution, you'll be able to see fine details that the naked eye couldn't see before it was enlarged...but the data needs to be there.
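If you have Python and the Pillow library installed, here's a quick way to run that experiment. The filename and the 1/16th crop size are placeholders I picked, and it assumes Pillow 9.1 or newer for the Resampling constants.

from PIL import Image

img = Image.open("photo.jpg")                # any photo you have handy
w, h = img.size

# crop a small patch from the middle (1/16th of the width and height),
# then blow it back up to full size -- a crude "digital zoom"
patch = img.crop((w * 15 // 32, h * 15 // 32, w * 17 // 32, h * 17 // 32))

# no new detail can appear: NEAREST gives you blocks, BILINEAR gives you blur
patch.resize((w, h), Image.Resampling.NEAREST).save("zoomed_blocky.jpg")
patch.resize((w, h), Image.Resampling.BILINEAR).save("zoomed_blurry.jpg")

Compare the two outputs side by side; neither contains anything the original patch didn't.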
The only way around this would be to use RAW. The image you see on your display is already manipulated. You don't need an expert telling you all of these facts because Apple boasts about it every product release--their algorithms (and it's not an "Apple" thing) are part of why they claim we can get pictures that rival SLRs. The lenses on your device are arguably part of the technology. Make no mistake, though: the algorithms that make sense of the data those lenses capture are a huge part of the solution to displaying wonderful pictures on a digital device.