TIME SINCE LAST FULL CHARGE
New insight about Apple's Battery Algorithm
In iOS 10.2 Dev/Public Beta 1, the TIME SINCE LAST FULL CHARGE field populates at 90%. Is the remaining 10% simply not required, or is it reserved charge? If it is not required, why does the battery still accept charge up to 100%? If it is reserved capacity (perhaps for development purposes, to test some new possibility that has no clear precedent yet), then what is it for? It is not user-accessible in an emergency; instead, iOS still relies on the 20% and 10% low-battery pop-up alerts.
This may be a new pre-emptive measure by Apple in response to Samsung's Note 7 battery fire incidents. To the best of my knowledge, Samsung pushes the hardware to its maximum and relies on incremental software optimisations. Apple, in general, advertises the hardware capacity as it is, whatever it may be, but its software may drive only about 90% of it at maximum. This might also be a way to protect against manufacturing costs for replacements/RMAs.
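The hypothesis above can be sketched in code. This is purely illustrative, not Apple's actual implementation: it assumes a controller that resets the "time since last full charge" timer once the charge level reaches a 90% threshold, while the low-battery alerts still fire at 20% and 10%. All names and thresholds here are inferred from the observations in this post.

```python
# Hypothetical model of the observed behavior: "full charge" is declared
# at a 90% threshold, while low-battery alerts remain at 20% and 10%.
# These thresholds are assumptions based on the post, not Apple's code.

FULL_CHARGE_THRESHOLD = 0.90
LOW_BATTERY_ALERTS = (0.20, 0.10)

def battery_events(level):
    """Return the events a battery controller might emit at a charge level.

    `level` is the state of charge as a fraction between 0.0 and 1.0.
    """
    events = []
    # Declaring "full" at 90% would explain why TIME SINCE LAST FULL
    # CHARGE populates before the battery actually reaches 100%.
    if level >= FULL_CHARGE_THRESHOLD:
        events.append("reset_time_since_last_full_charge")
    # Low-battery pop-up alerts still key off the nominal 20%/10% marks.
    for alert in LOW_BATTERY_ALERTS:
        if abs(level - alert) < 1e-9:
            events.append("low_battery_alert_%d" % int(alert * 100))
    return events
```

For example, `battery_events(0.90)` would already reset the full-charge timer, which matches the observation that the field populates at 90% even though the cell keeps charging to 100%.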
Benchmarking serves mainly as a psychological comfort for users, and many factors come into play. In real-world use, results vary from one device to the next; the question is how much difference is acceptable, and whether any figure should be treated as a unified absolute.
P.S.: iOS 4 through the current public release, iOS 10.1.1, is not the concern here, although the same algorithm is present there too, in contrast to contemporary rival flagships.