View attachment 2268832
I would be seriously concerned about Apple Silicon's GPU performance, since the M2 Ultra is barely close to an RTX 3060 Ti according to Cinebench R24. I really think that each chip needs more GPU cores, because the power consumption is really low. At this point, Apple Silicon's GPU performance sucks, for sure.

GPUs (especially nVidia's GPUs) can have wildly disparate performance depending on exactly WHAT you want to do. This gets even worse when the crowd that cares about tribalism gets involved and starts comparing very different elements of the line.

If you want massive AI training, nVidia has no competition right now; nothing else is scaled up to the level of their Tensor Cores and nVLink. But those same AI training cards have no Ray Tracing and have stripped out almost all of their graphics support (a small fragment remains, presumably for compatibility, but it is likely to be removed soon).

If you want Ray Tracing support (for whatever reason), then of course until the A17 Apple wasn't even pretending to compete in that space.

If you want HPC (i.e. GPGPU compute) support, do you care about FP64? For whatever reason, Apple has so far shown no interest in that, not even token, nVidia-consumer-style "FP64 at 1/16 the performance of FP32" support.

What Apple HAS concentrated on is:
- all-round support. Good (not superb) at graphics, including tracking all the latest features. Good (superb when you consider energy) support for FP32 compute, BUT missing the technical details that allow fancy new algorithms to work on nVidia. Good (again, considering energy) support for AI, but spread over the SoC, not just in the GPU.

- tight integration with the CPU/system. nVidia is working on this with Grace, but they are behind, and of course targeting a very different (much higher power, much more expensive) space.

- a nicer (but always feature-lagging relative to nVidia) API.


Basically Apple has something that's good for "all-round" use cases, but they are NOT chasing the extreme gamer market (a good business decision IMHO), and they are not chasing the academic market (a bad business decision IMHO).
They are advancing their GPU rapidly (not just in performance but in features), BUT nVidia are not Intel or ARM; nVidia are running just as fast as Apple, adding serious, important, heavyweight features every two years. Apple is always about four years behind nVidia in terms of FUNCTIONALITY, and that is not improving.

(a) I don't think you can criticize Apple – they ARE running as fast as the best; they just started from a position far behind...

(b) "Performance" is less interesting (and ultimately less important) than functionality. Apple will always beat nVidia's lowest end (06 and 116 class cards) and will not match their highest end (100 and 102 cards) unless they start shipping data center type machines. But functionality defines what system STEM folks and leading edge companies and researchers are forced to use.
nVidia has radically redefined the idea of the baseline GPU over the past two generations,
- first by redefining the meaning of a warp (not a single PC over multiple lanes that fakes branches via predication, but 32 PCs all running independently, with aggregation HW to run them coherently, and thus optimally, WHEN possible). This allows for dramatically different sorts of algorithms, for example having each lane walk and MODIFY a linked list or similar pointer data structure with per-list locks rather than a single global lock, because synchronization primitives called at the per-lane level are no longer fragile and liable to livelock the entire GPU (see the sketch after this list).
- second by redefining the size of the locality unit with Threadblock Clusters in Hopper, which take serious advantage of the fact that modern GPUs have so many cores, allowing the local storage on each core to co-ordinate with other cores.
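To make the first of those two points concrete, here is a minimal CUDA sketch; it is my own illustration, not anything from nVidia's documentation, and the names (acquire, release, bump_histogram) are made up. Individual lanes take per-bucket spin locks, and lanes in the same warp may contend for the same lock. On a pre-Volta "one PC per warp" design the spinning lanes can starve the lane that holds the lock and livelock the warp; with independent thread scheduling (Volta and later) forward progress is guaranteed, so fine-grained per-lane locking like this becomes a usable building block.

```
// Illustrative sketch only. Per-bucket spin locks acquired by individual
// lanes; this relies on Volta-and-later independent thread scheduling. On a
// "single PC per warp" GPU, the spinning lanes could starve the lane that
// holds the lock and livelock the whole warp.

__device__ void acquire(int* lock) {
    while (atomicCAS(lock, 0, 1) != 0) { /* spin */ }
    __threadfence();                 // order the critical section after the acquire
}

__device__ void release(int* lock) {
    __threadfence();                 // publish writes before dropping the lock
    atomicExch(lock, 0);
}

// Each lane hashes its (non-negative) key to a bucket and updates it under
// that bucket's lock. Lanes in the SAME warp can hit the same bucket; that
// contention is exactly the case that used to be fragile.
// locks[] is assumed to be zero-initialized by the host before launch.
__global__ void bump_histogram(const int* keys, int n,
                               int* counts, int* locks, int nbuckets) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int b = keys[i] % nbuckets;
    acquire(&locks[b]);
    counts[b] += 1;                  // stand-in for a linked-list insert, etc.
    release(&locks[b]);
}
```

The same kernel written with one coarse lock over the whole table would serialize everything; the per-bucket version only serializes the lanes that actually collide, which is the kind of algorithm the old warp model made impractical.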

These two features will define how future GPUs are used; and by the time Apple (and everyone else, but they are running even slower...) catch up nV will have amazed us yet again.

Apple ARE very good at the baseline GPU of ten years ago; don't mistake that! At very low energy they run the GPU very efficiently in terms of the packing and scheduling at every level. That's why they do so well on GB6 Metal or GFXBench. And both of those match their main Apple device use cases. BUT they're very good as a GPU of ten years ago. nVidia has raised (and keeps raising) the bar, and, if one is being honest, in terms of that raised bar Apple do lag four years, and that's not changing.

Most of this has no relevance to the tribal wars most of you care about. Apple does every bit as well as you'd expect in that arena, given the energy budget.
But it DOES have relevance in terms of where this stuff matters, the choices made by leading edge STEM GPU users/researchers...
 
@koyoot I thought the 15 Pro was supposed to have worse battery life than the 14 Pro or 13 Pro? This result casts more doubt on the methodology of the test linked on the Anandtech forums.
Rushing to judgement about battery life based on one video/benchmark is as dumb as rushing to judgement based on the first GB score.
In both cases the first few numbers you see are determined by some reviewer who cares more about FIRST!!!! than about accuracy, and they represent battery life/performance while the iPhone is simultaneously doing all its post-install background work.
 
That is indeed correct. It's a 9-wide decode, which gave them a 3.5% IPC increase.

From a technology standpoint, it appears this is not an issue with the process, but more with skills and the talent drain that affected Apple after the M1 release.

The A17 Pro is a very disappointing product.
You could fairly rapidly write/execute a benchmark that tests that "Decode" is 9-wide.
You could not perform decent tests to establish the OTHER claims that are made in that diagram in one day. I even put "Decode" in quotes because there may well be details as to exactly how wide decode is, depending on many things (wider if executing out of a loop buffer? wider with fusion?)

My guess is that Decode is wider (9 or 10) but Rename is not. A dumb microbenchmark that does not understand the issues would not pick that up.
The rest of your jeremiad about the 3.5% IPC increase, disappointment, Apple skills, etc., well, that's not technical so I won't touch it.
 
When Apple was on top of the world with the M1, they got all the praise and accolades. When they are in a slump for several years in a row, it's only fair to give them an equal amount of criticism. Ever since the A14, Apple has basically gone for the low-hanging fruit and increased clock speeds to increase performance. It's obvious that they are grasping at straws.
There are two classes of people on this forum, the engineers and the tribal warriors.
Please take your tribal warrior nonsense somewhere else and leave us engineers to discuss reality.
 
It's way too early for this kind of pessimism. The crucial detail will be the clocks. If the A17 runs at 3.6 GHz to achieve this 10% uptick, then it's indeed not a good sign.
It is even worse. Not sure if you have already seen it at this point, but the A17 Pro runs at 3.77 GHz…
 
You don't get both power efficiency and performance gains at the same time. Want more of one and you get less of the other, and vice versa.
Well, you can get both power efficiency and performance gains with a die shrink. But you can't get the maximum theoretically possible gains of both at the same time. One has to give way to the other.

Nonetheless the iPhone 15 Pro is rated at the same battery life even though the battery capacity grew to 4000 mAh (+380). So where is all this power going? The only explanation is that the twice-as-fast Neural Engine is also utilized much more by the OS. And with a new USB-C controller and a ray-tracing engine, entirely new energy consumers were added to the equation.

So because of new and better features, the CPU performance gains are limited to 10%. In practice, the 20% faster GPU and 4× faster ray tracing will make a bigger difference to users.
 
A16 gets 2640
A17 Pro gets 2925

That's not 16%.
I trusted the aggregate score reported for the iPhone 14 pro at 2522.
But that's indeed less than the typical score. Do they use averages? If so, that's dumb. They should use medians as any sensible person would do...
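For reference, working from those numbers: 2925 / 2640 ≈ 1.108, i.e. roughly an 11% uplift, whereas 2925 / 2522 ≈ 1.16, which is where the 16% figure comes from.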
 
[Screenshot: Tom's Guide battery test results]
According to Tom's Guide, the battery life of the 15 Pro is longer than the 14 Pro's.
 
I know, I know! But ... you falsely assume that, just because the A17 is 3nm and a new design, it must realize the maximum possible 18% speed increase from the die shrink plus some extra speed increase from design changes. You do not take into account that new GPU features and power reduction are also valuable design targets.

Only an idiot would choose CPU speed increase above all else. Apple has always prided itself on leading the competition in performance per watt, which can even mean lowering raw performance a little bit while lowering energy consumption by a lot. That's exactly what they did with the introduction of the so-called high-efficiency cores. And every time Apple introduces a new media engine (for example, the ray-tracing engine), they take a little bit of chip area away from the CPU cores, so that a highly specific task (like ray tracing) can run much faster at much higher energy efficiency. The goal is always to build a system with a good balance between all demands, never one that excels at one metric only.

So when TSMC predicts that the 3nm transition will bring up to an 18% CPU speed increase, you should expect less, not more. Not because Apple engineers are too stupid to achieve the full potential of this technology, but because they have better things to do than winning a meaningless benchmark race.
@deconstruct60 has already provided a pretty good followup explanation of how to think about this better.

I'll just add that I never said anything about expecting 18% from N3 - that's too high, even going by TSMC, and I never expected Apple to target iso-power for the A17. I did expect somewhat better performance iso-clock, but there are lots of possible reasons, some of them even very good reasons, why that might not be the case. I've already mentioned them previously so I won't repeat that here.

In any case, the GPU/NPU whatever are a red herring when talking about CPU performance. If the GPU isn't basically asleep while your test is running, you're not smart enough to be writing a benchmark anyway.

And finally, you can't "take a little bit of chip area away from the CPU cores". They're the size that they are, and you can't just "borrow" a little of their area for something else. You can have fewer cores in your design, or you can choose to shrink them (smaller cache, fewer execution resources, etc. - Apple is definitely not doing that in their P cores), but that's it. And Apple isn't doing any of that. For years they've been unwilling to compromise, which is why the chips kept growing (at least in transistor count, actual area is process-dependent).

The "benchmark race" is meaningless on phones these days because it's a foregone conclusion that Apple will destroy everything else, and because it's impossible to find a broad use case where more CPU is critical. Neither of those things is true on Macs (we are after all in the M3 thread here), despite the increased importance of GPUs/NPUs/etc. over time.
 
I trusted the aggregate score reported for the iPhone 14 pro at 2522.
But that's indeed less than the typical score. Do they use averages? If so, that's dumb. They should use medians as any sensible person would do...
Indeed, 2522 is the official Geekbench score. I’m not sure why any other would be chosen.
 
View attachment 2269666
According to Tom's Guide, the battery life of the 15 Pro is longer than the 14 Pro's.

Tom's is useless because they copied last year's numbers for the iPhone 14/Pro and pasted them into this year's table. Web surfing tests can't be relied on because, over a year:

  • Web page contents change
  • Ads change
  • Browsers change
  • iOS changes
  • 5G cellular coverage changes
 