So true. I haven't said the opposite, either.
You kinda do, by suggesting that 120Hz demands more resources, even in the quote below. It doesn't work that way.
So how much more does 120Hz take from the computing power? If you know the math, why don't you enlighten us and give the exact numbers?
There aren't exact numbers here. It's a bit like asking "what is the 0-60 time of a car?" It depends on a lot of factors and is specific to the car. What you need in order to hit 120fps varies with the app, and with what task you are doing at the time. But what 120Hz screens do not ever do is require you to render at 120fps. They just allow you to render at up to 120fps. Much like the 60Hz screens the iPad has always had don't mean the iPad is always rendering at 60fps.
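To put rough numbers on "up to": the only thing the refresh rate fixes is the per-frame time budget. This is just arithmetic on the refresh rate, nothing device-specific, and missing the budget doesn't break anything; the frame simply appears a tick later.

```python
# A display's refresh rate sets a ceiling on frames shown, not a floor
# the app must hit. The per-frame budget is just 1/Hz.
def frame_budget_ms(refresh_hz: int) -> float:
    return 1000.0 / refresh_hz

print(round(frame_budget_ms(60), 2))   # 16.67 ms to produce a frame at 60Hz
print(round(frame_budget_ms(120), 2))  # 8.33 ms at 120Hz
```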
It all really comes down to a couple key factors though:
1) How much drawing does the CPU need to actually do, per frame?
2) How much overhead is there currently in the GPU to handle the higher fill rate?
The second question is murkier, but in general it should be safe to assume the fill rate is there. There are variables that can't be worked out precisely, since we don't have the source code, but considering it's been possible to hit 60fps with GPUs less than half as powerful as the A10X, GPU fill rate isn't going to be your problem.
So the first question is the more interesting one. There are three scenarios here that demonstrate what will actually happen.
One extreme is certain simple user interactions. Say, dragging an item in a table to a new position, without any scrolling happening. Because all the rows are already drawn, you are just repositioning the existing layers the GPU needs to composite. Here, hitting the higher frame rate is super cheap, since you are drawing 0 pixels. The GPU grunts a bit more, but we've already demonstrated that the GPU has plenty of headroom.
The other extreme would be something like a 120fps animated GIF. You have to render each frame on the CPU (no hardware assist here), and you are potentially redrawing the entire screen. At 60fps, the CPU just drops every other frame and draws 60 frames every second. At 120fps, the CPU is going to try to draw 120 frames every second, doubling the amount of CPU you need to accomplish the task. Cases like this are why we use hardware decoders for video: this sort of work gets expensive quickly.
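The GIF case boils down to one line of arithmetic: CPU-side drawing work scales with the number of frames actually drawn, which is the source frame rate capped by the display.

```python
# Software-decoded animation (e.g. a 120fps GIF): every frame shown has
# to be drawn by the CPU, so the work per second scales with how many
# frames the display lets through.
def frames_drawn_per_second(source_fps: int, display_hz: int) -> int:
    return min(source_fps, display_hz)

print(frames_drawn_per_second(120, 60))   # 60  -- every other frame dropped
print(frames_drawn_per_second(120, 120))  # 120 -- double the CPU drawing work
```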
A common real-world scenario, though, is scrolling. This one is interesting because you can measure scrolling in terms of "pixels per second". The faster you scroll, the more expensive it can be. A slow scroll produces very little new drawing each frame, while in a super-fast scroll you can wind up redrawing the entire screen every frame. What's interesting here is that more frames don't, by themselves, incur more drawing. Instead, you split the same drawing into smaller pieces (each frame moves half the distance, in half the time). Where this gets interesting is during those super-fast scrolls.
If I can scroll at N pixels per second on a 60Hz display before it starts "skipping" lines of pixels between frames, then a 120Hz display can reach 2N pixels per second before the same "skipping" occurs. The catch is that perfectly drawing the faster scroll does take more CPU. Any scroll under N pixels per second will use the same CPU power on both the old and new displays; any scroll between N and 2N pixels per second on the new iPad will require more CPU at 120fps. If the CPU can't keep up, you will start to see things like checkerboarding again, or slower frame rates.
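The N vs. 2N relationship falls out of how far the content jumps between consecutive frames. A sketch, where `MAX_STEP` is a purely hypothetical tolerance for how big a per-frame jump still reads as smooth motion:

```python
MAX_STEP = 16  # hypothetical max pixels of jump per frame, for illustration only

def pixels_per_frame(scroll_px_per_sec: float, display_hz: int) -> float:
    # A scroll at v px/s advances v / Hz pixels between consecutive frames.
    return scroll_px_per_sec / display_hz

def max_smooth_scroll(display_hz: int, step: int = MAX_STEP) -> int:
    # Fastest scroll whose per-frame jump stays within the tolerance.
    return step * display_hz

print(max_smooth_scroll(60))   # the "N" in the text (960 px/s with this step)
print(max_smooth_scroll(120))  # 2N: doubling Hz doubles the smooth-scroll ceiling
```

Note that below N, the total pixels drawn per second are the same at both refresh rates (twice as many frames, each covering half the distance), which is why slow scrolls cost the same CPU either way.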
Now, if you want to talk worst case, that's a good place to start. Take an app that maxes out the A9X CPU during certain tasks, is single-threaded, but can hit 60fps during those tasks. That app will still render at ~80fps or better on the A10X, since the new screen allows it to. The CPU may still be maxed, but you are getting something for your trouble. Apps that can't hit 60fps (and they exist today) will still be closer to 60fps on the A10X than on the A9X. There is effectively nothing to lose, and everything to gain, by going to 120Hz. Worst case, an app will "only" have a ~30% higher frame rate instead of 100%.
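The ~80fps figure above is just the assumed single-core uplift applied to a CPU-bound app; the ~30% number is a rough assumption for illustration, not a measured benchmark.

```python
# Back-of-the-envelope for the worst case: a single-threaded app that was
# CPU-bound at exactly 60fps on the A9X.
a9x_fps = 60.0
single_core_uplift = 1.30  # assumed A9X -> A10X improvement, illustrative only

# The display caps the result at 120fps; the CPU caps it well below that here.
a10x_fps = min(a9x_fps * single_core_uplift, 120.0)
print(round(a10x_fps))  # ~78fps: "only" ~30% better, not the full 100%
```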
The short answer, though: the A10X paired with the 120Hz display will produce better latency and smoother animations than the A9X paired with a 60Hz display. Period. It doesn't matter what app; they will all benefit from it. The only variable is how much.
Edit: Punctuation.