
agamoto

macrumors newbie
Nov 11, 2021
23
14
The problem with your examples is that you're comparing resampled still frames from videos. Resampling the images, or grabbing frame captures from the same video converted to different formats, will always produce a different hash: pixels get dropped, pixels get created. So I don't see how comparing file hashes is a meaningful comparison. That holds even if you don't use any sort of interpolation; you're going to see very slight pixel-level changes in every frame you look at, frame to frame.
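Just to illustrate the hash point, here's a rough Python sketch (Pillow + hashlib; the filenames are only placeholders, not anyone's actual exports). A byte-level hash changes on any re-encode, which is why it tells you almost nothing by itself; even a hash of the decoded pixels will differ once any resampling has occurred:

```python
import hashlib
from PIL import Image

def file_hash(path):
    """SHA-256 of the raw file bytes; changes with any re-encode or container swap."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def pixel_hash(path):
    """SHA-256 of the decoded RGB pixel data; ignores container and metadata."""
    img = Image.open(path).convert("RGB")
    return hashlib.sha256(img.tobytes()).hexdigest()

# Hypothetical grabs of the "same" frame exported two different ways
a, b = "frame_original.png", "frame_reencoded.png"
print(file_hash(a) == file_hash(b))    # almost certainly False
print(pixel_hash(a) == pixel_hash(b))  # also False if any resampling occurred
```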

I know there are forensics tools that can do image comparison, but are you familiar with Python at all? You could go a step further and use this guy's extremely simple script for visually pinpointing all of the differences between two static images.
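Something along these lines is all it takes (a minimal Pillow sketch, not that exact script; the filenames are placeholders):

```python
from PIL import Image, ImageChops

def diff_images(path_a, path_b, out_path="diff.png"):
    """Save a difference map and return the bounding box of every changed pixel."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    if a.size != b.size:
        raise ValueError("Images must have identical dimensions to compare")
    diff = ImageChops.difference(a, b)   # black wherever the pixels match exactly
    diff.save(out_path)
    return diff.getbbox()                # None means the two images are identical

print(diff_images("frame_a.png", "frame_b.png"))
```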


Still, even if you compared a thousand frames, it doesn't answer the question about pinch-to-zoom, upscaling/downscaling, and whether an iPad is capable of that level of motion interpolation in real time. Everything I've learned in 30 years tells me it's not possible with the current iPad processor, so it's doing something else entirely.

You could create an experiment where you have at least two iPads, one of them a control, put them both under a microscope, and devise a method to play a video while continuously performing mechanically duplicated pinch gestures, recording the pixel changes in the microscope view frame by frame, and then compare the results. Either that or Apple can just tell everyone what's going on under the hood.

I don't think an iPad has the processing power to do real-time motion interpolation in video WHILE zooming in and out.
 

Forensic_Dude

macrumors newbie
Nov 11, 2021
9
14
The problem with your examples is that you're comparing resampled still frames from videos......
Tracking like a VCR, including the hashes, sizes, etc., is for demonstration purposes (the differences are totally expected). Even though it's resampled, some people don't understand that, or how, it has been changed, so you need additional visuals to drive the point home. The color isolation was done in ImageJ (from NIH/LOCI), and a multi-spectrum isolation would be a more accurate way to show significant changes (like a halo effect that's not in an original frame). It gets even more complicated when you add in ProRes + R3D.
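For anyone who wants to try the isolation idea without ImageJ, here's a rough Python/NumPy sketch of the same concept (the filenames and the gain value are just placeholders, not my actual workflow). It amplifies the per-channel deltas so a faint halo that isn't in the original frame becomes visible:

```python
import numpy as np
from PIL import Image

def channel_deltas(path_a, path_b, gain=8):
    """Write an amplified per-channel difference map for R, G and B."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    delta = np.abs(a - b)                              # 0 wherever the frames match
    boosted = np.clip(delta * gain, 0, 255).astype(np.uint8)
    for i, name in enumerate("RGB"):
        Image.fromarray(boosted[..., i]).save(f"delta_{name}.png")

channel_deltas("keyframe_original.png", "keyframe_zoomed.png")
```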

I use Python regularly. I've also used those pricey comparison tools, but I prefer the standard processing tools with scripts and plugins as needed for each job. I also use golang for some SOIC work occasionally when dealing with EPROMs. I had a beast of a problem a few years ago with a self-encrypting drive until I read a white paper on them and was able to identify the issue I was running into. It's a great nerd read and I highly recommend it. https://eprint.iacr.org/2015/1002.pdf

But as far as the iPad goes: I didn't tear apart the codec, but I was able to request and obtain, for forensic purposes, some heavily detailed documents covering zooming during playback. I can't post the documents, but I can give you an overview, which I've done in a chart. And in reference to my original post, I was correct. I've annotated the different header/footer markers for frames in multiple video types. When my wife gives up her iPad (after she deletes everything, I'm sure), I'll be able to run some tests. I like your test idea though; it's about how we perceive things.
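I can't reproduce the documents, but for anyone curious what I mean by annotating markers, here's a generic Python sketch of the idea (it walks the top-level ISO/MP4 container boxes, not Apple's internal per-frame markers, and the filename is a placeholder):

```python
import struct

def list_mp4_boxes(path):
    """List the top-level boxes ('ftyp', 'moov', 'mdat', ...) in an MP4/MOV file."""
    boxes = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            boxes.append((box_type.decode("ascii", "replace"), size))
            if size == 1:                       # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:                     # box runs to the end of the file
                break
            else:
                f.seek(size - 8, 1)
    return boxes

print(list_mp4_boxes("clip.mp4"))
```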
 

Attachments

  • iPad_Enhance.png
    5.9 MB · Views: 155

BR3W

macrumors 6502
Sep 22, 2010
343
61
That said, @Forensic_Dude, would you please answer this claim directly:

I don't think an iPad has the processing power to do real-time motion interpolation in video WHILE zooming in and out.

This claim, which I view as objectively false, is predicated upon the earlier claim that such chips are prohibitively "expensive." Is that claim true?

From my perspective, these chips (and capabilities) are in devices that range in cost from ten dollars to tens or even hundreds of thousands of dollars. Some of the devices in my home theater are purported to have really good versions of these image "enhancements," but that by no means implies my cheapest devices don't have them at all.

There was an earlier claim such chips aren't used in phones at all. Is that true?

That's difficult to square with my lived reality, wherein I can purchase almost any TV from any retail electronics store for a fraction of the cost of my current iPhone. The only TVs that cost more than a modern iPad are the most esoteric of technologies. It's already been mentioned that the TV used in the courtroom was most likely a $149.99 Walmart special (and I absolutely agree), and that budget display interpolates video. Of course, the argument that consumer devices don't interpolate video has always struck me as disingenuous. I explicitly questioned the "discussion" in my first post days ago; it's only a few comments from the first for anyone's review.

It's always seemed disingenuous to me for anyone to argue that interpolation doesn't exist. It took a few days for this discussion to make the minimal progress of establishing that interpolation of video even occurs. Now that we're here, the argument has shifted to the claim that iPhones and iPads don't have these expensive chips and therefore lack the hardware capability to do this interpolation in real time.

Would you please address these two claims:
1. Are these chips prohibitively expensive and, therefore, not used in iPhones and/or iPads?
and
2. Does an iPhone and/or iPad have the hardware capability to interpolate videos and/or static images in real-time?
 

Forensic_Dude

macrumors newbie
Nov 11, 2021
9
14
Would you please address these two claims:
1. Are these chips prohibitively expensive and, therefore, not used in iPhones and/or iPads?
and
2. Does an iPhone and/or iPad have the hardware capability to interpolate videos and/or static images in real-time?

1. I can't speak to the price, for the following reason: it depends on the hardware it's on and would be chip-by-chip dependent, as would its efficiency. If you're a modern Apple or Android owner, you already have them, and they may even be integrated into the processor for efficiency, the same way M1 and Intel Core processors have integrated GPUs. It cuts down on lag if the decoder is sitting right there and not on a separate chip.

If you refer to the diagram I made, depending on the configuration there is either one codec chip running, among other things, two software-based decoder algorithms; two codec chips running, among other things, independent decoders; or two software-based decoders integrated on another chip and run, among other things, as a virtual integrated circuit.

(attached diagram: iPad_Enhance.png)
2.

In the diagram you'll see it's using algorithmic predictive analysis (guessing) or AI learning to determine where you'll look and what you're focused on, and calculating what's next based on minor finger movement. To your question: it does do interpolation in real time on zoom, because each frame is a new frame that never existed before (and is not forensically sound), using a decoded, locked keyframe from HDSA 1 (but not rendered) as a reference in RAM. I didn't include it because the diagram was already getting bananas, but it would be part of the change detection comparing against an original keyframe in RAM.

But a beginning and ending keyframe is a locked frame from different points in the video, and a static image is the keyframe in its own instance. That entire sequence is happening multiple times a second; how many times, I don't know. I also don't know how many actions (if-then-else-goto...) are occurring, and we'd never be able to look at the source code because of all the encryption. But we could drop the codec into a decompiler, which I'll do eventually.
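To make the "frames that never existed" point concrete, here's a toy Python sketch (this is not what the iPad actually does internally; real interpolators use motion estimation, and the filenames are placeholders). It synthesizes an in-between frame from two keyframes, and that new frame has no counterpart in the original recording:

```python
import numpy as np
from PIL import Image

def blend_keyframes(path_a, path_b, t=0.5):
    """Create a synthetic in-between frame as a weighted blend of two keyframes."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    mid = ((1 - t) * a + t * b).round().astype(np.uint8)  # t=0 -> first, t=1 -> second
    return Image.fromarray(mid)

blend_keyframes("keyframe_0.png", "keyframe_1.png").save("synthetic_frame.png")
```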

It's pretty common now for interpolation to be done at some level in real time. You just have to configure your settings correctly, if the software you're using lets you adjust them. If you have access to academic databases, I'd recommend https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4916476/ from the medical field. I can't remember which test it was, but two years ago a doctor was getting a real-time 3D motion rendering of my heart on a tablet during a nuclear stress test. It was crazy and somewhat frightening watching my heart beat in real time. But he wasn't filming my real heart.........

Additionally, to drive the point home, SwiftUI has interpolation functions. Two for reference are "func interpolation(_ interpolation: Image.Interpolation) -> Image" and "enum Interpolation". Bottom line: it's making things that weren't there originally. Now on to my wife's iPad......... Yipeeee! For the iPad process I'll see if I can capture the video in RAM while manipulating zoom and carve the frames out.
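You don't need SwiftUI to see it, either. A quick Pillow sketch (the filename is a placeholder) shows that zooming past the source resolution means the resampler invents every extra pixel, and different resampling modes invent different ones:

```python
from PIL import Image

def zoomed_crops(path, factor=3):
    """Upscale the same image with two resampling modes and report the pixel counts."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    img.resize((w * factor, h * factor), Image.NEAREST).save("zoom_nearest.png")
    img.resize((w * factor, h * factor), Image.BICUBIC).save("zoom_bicubic.png")
    print(f"{w * h} source pixels -> {w * factor * h * factor} displayed pixels")

zoomed_crops("frame_crop.png")
```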
 

PawPawTaylor

macrumors newbie
Nov 19, 2021
1
0
I don't blame the defense for trying every objection they can, but come on, this is pretty much common sense like the prosecution said. It's not enhancing the video, just zooming into it. Enlarging the pixels
You are actually incorrect. The screen's pixel size is set. The pixels of the screen on your device do not enlarge and shrink as you zoom in and out. The software adds pixels and takes them away as you zoom in and out.
 