Looking for intelligent discussion here, not a fanboy fest either way.
I've been playing a lot with both my iPhone 4S and my Galaxy Nexus over the last month or so. I worry I might wear my SIM out from all the swapping, lol. Today I got to thinking about something and was hoping some people with programming experience could help me out.
iPhone 4S - Apple A5 based on dual-core Cortex-A9 clocked at 800MHz / PowerVR SGX543MP2 GPU
Galaxy Nexus - TI OMAP 4460 based on dual-core Cortex-A9 clocked at 1.2GHz / PowerVR SGX540 GPU
I'm not so up to snuff on mobile GPUs, but they at least seem to be from the same PowerVR SGX family (would appreciate input on this). For the CPUs, both appear to be dual-core Cortex-A9 designs, but the GNex is clocked 50% higher at stock (1.2GHz vs 800MHz).
For informational purposes, I've actually rooted my GNex and I'm running franco.Kernel with the CPU overclocked to 1.5GHz and the GPU overclocked to 384MHz (307MHz stock). Not sure on iPhone GPU clock speed.
Here's the meat of my question. I know Apple reaps the benefit of being able to design the hardware and software specifically for each other, which lets them virtually eliminate waste. The Galaxy Nexus is about as close as we can get to a Google design with the same two-pronged approach.
As a result, my iPhone 4S feels smoother and faster than my overclocked and optimized GNex on ICS 4.0.4. But is that all there is to it? Is Apple's advantage in designing hardware and software hand in hand really enough to overcome a 700MHz clock-speed deficit in day-to-day user experience?
If you guys can recommend some good cross-platform benchmarks that could be used to compare the two beyond just the "feel" of speed, I will gladly run them and post the results.
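To give an idea of the kind of raw number I'm after, here's a rough single-threaded C micro-benchmark I sketched out. It's purely my own toy example (nothing official), and it only times a simple floating-point loop using gettimeofday(), which both platforms provide, so in principle it could be built with the Android NDK for the GNex and with Xcode for the 4S:

/* Hypothetical CPU micro-benchmark sketch (my own toy example, not a
 * recognized benchmark). Times a simple floating-point loop using
 * gettimeofday(), which both Android (bionic) and iOS (BSD layer) provide. */
#include <stdio.h>
#include <sys/time.h>

static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);        /* wall-clock time in seconds + microseconds */
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    const long iterations = 200 * 1000 * 1000;
    volatile double acc = 0.0;      /* volatile keeps the compiler from deleting the loop */
    long i;
    double start = now_seconds();

    for (i = 0; i < iterations; i++) {
        acc += (double)i * 0.5;     /* trivial floating-point multiply-add workload */
    }

    double elapsed = now_seconds() - start;
    printf("%ld iterations in %.3f s (%.1f Mops/s), acc=%f\n",
           iterations, elapsed, iterations / elapsed / 1e6, (double)acc);
    return 0;
}

Obviously a single FP loop only measures one narrow slice of CPU throughput and says nothing about GPU performance, Dalvik vs. native code, or UI smoothness, so I'd still rather use a proper cross-platform suite if anyone knows a good one.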
Just looking for some good discussion on how much of a benefit it really is to control the whole potato (Apple) vs Google's approach with Android.