So because Apple didn't talk about it, are we to believe that the A14 doesn't have hardware acceleration for RT? Do we think they would talk about RT if the hardware could accelerate it?
OP knows more about this than me, but I think this possibly explains the discrepancy. Apple has stated that they don't use a single benchmarking suite (or program) for their performance claims but run their own (opaque) real-world tests. If the GPUs are moving to really wide SIMDs, this would have an immediate impact on compute but a much smaller impact on existing graphics, which are optimized for narrower SIMD designs, not to mention deferred rendering if they are cross-platform apps.

The Metal score is impressive. It's 137% higher than the A12 and 72% higher than the A13 according to Geekbench results. That's a lot more than the 30% higher GPU performance Apple stated at WWDC.
A12 5307, A13 7308, A14 12571
Could that mean the A14X and A14Z will also be much faster?
A12X 10860, A14X 25725 (extrapolated)
A12Z 11665, A14Z 27632 (extrapolated)
The A12 with 4 GPU cores scores 5307; the A12Z with 8 GPU cores scores 11665, so 4 extra cores mean a 120% performance increase. By the same scaling, an A14Z-derived Mac SoC with 24 GPU cores could score 87876 in Metal. That's between a Radeon Pro W5700XT and a Radeon Pro Vega II.
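The arithmetic behind those figures can be reconstructed as below. The extrapolation rule is my reading of the post (apply the A12-to-A12Z core-scaling ratio to the A14, then extend the implied per-core slope linearly to 24 cores); the A14X/A14Z/24-core numbers are speculative, as the thread itself notes.

```python
# Toy reproduction of the thread's Geekbench 5 Metal extrapolation.
# The A12/A12Z/A13/A14 scores are the real results quoted above; the
# 24-core figure is a speculative estimate, not a measured benchmark.

a12, a12z = 5307, 11665      # 4-core and 8-core A12-generation GPUs
a13, a14 = 7308, 12571

# Uplift of the A14 over previous generations
print(f"A14 vs A12: +{(a14 / a12 - 1) * 100:.0f}%")   # ~137%
print(f"A14 vs A13: +{(a14 / a13 - 1) * 100:.0f}%")   # ~72%

# Assume the A14 family scales with core count the same way the A12
# family did, then extend that per-core slope linearly to 24 cores.
a14z = a14 * (a12z / a12)            # 8-core estimate, ~27632
slope = (a14z - a14) / 4             # score gained per extra core
a14_24core = a14 + slope * (24 - 4)  # hypothetical 24-core Mac SoC
print(f"24-core estimate: {a14_24core:.0f}")  # close to the 87876 quoted above
```

Linear scaling with core count is optimistic (real GPUs lose some efficiency as they grow), so treat the 24-core number as an upper bound on this line of reasoning.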
Who knows how Apple does stuff internally... I guess it's a remote possibility. I hope, though, that the Geekbench devs have more sense than to use Metal Performance Shaders for the compute benchmark.
Apple has also talked about new "ML accelerators" in GPUs, and I've wondered if those are related to wider SIMDs. This speculation seems very plausible to me!
Interestingly, I just reran the compute tests on my 2020 11” iPad Pro and iPhone 11 Pro and received much improved results. It looks like either the OS improved the results or Geekbench changed the compute tests.

Well, obviously I have extrapolated the scores for the A14X and A14Z, since those chips don't exist yet. The Metal score for the A14 was reported here at MacRumors on several occasions. Here is the link: https://browser.geekbench.com/v5/compute/1581541
As for the A12, A12X, A12Z, and A13, all the scores can be found at https://browser.geekbench.com/ios-benchmarks, so you were looking at the wrong chart. You should look at the iOS Benchmarks. The A12X with 10860 is the 3rd-gen iPad Pro on that chart.
Some Metal scores for iPhone 12:
While not as high as the iPad scores, they are still much higher than the 30% uplift claimed by Apple (the A12 yields scores near 5200).
And the CPU scores are about the same as those we saw for the iPad.
Another very high Metal score for "iPad 13,1": https://browser.geekbench.com/v5/compute/1641158
This suggests that the iPad results are legit. I'm not sure why the Metal results are lower for iPhones. Thermal throttling?
Looking at the detailed results, it appears that many tasks yielded very similar scores between the iPhone and iPad (again confirming that iPad results are legit, unless someone wasted their time forging certain sub-results while correctly guessing others), while the iPad was much faster at certain tasks.
> I thought iPads always had twice the memory bandwidth of iPhones because the memory controller is 128-bit instead of 64-bit.

Wow, those benchmarks are certainly popping up now! Thanks for pointing this out!
Now, I am completely shooting in the dark here, but maybe the iPad has more memory bandwidth (e.g., quad-channel vs. dual-channel on the iPhone)? Looking at the SFFT benchmarks, for example: they should be fairly light on the compute side, so they are likely memory-bandwidth limited, and the iPad scores almost exactly twice as much as the iPhone (consistent with 2x as many memory channels). But really, it's a mystery until someone actually has a device in their hands and looks at it in more detail.
Anyway, at least we know for sure that these results are not a fluke. The big disparity in Metal scores does suggest that there is something more beyond what Apple told us. They did mention "new memory compression" for the A14 GPU on Tuesday, but I am kind of doubtful that it could deliver such big improvements in compute.
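The channel-count hypothesis above can be sanity-checked with back-of-the-envelope arithmetic. The LPDDR4X-4266 transfer rate and the 64-bit vs. 128-bit bus widths below are assumptions for illustration; Apple does not publicly confirm the exact memory configurations.

```python
# Peak memory bandwidth from bus width and transfer rate.
# The 4266 MT/s LPDDR4X figure and the bus widths are assumptions;
# actual iPhone/iPad memory configurations are not publicly documented.

def bandwidth_gb_s(bus_width_bits: int, transfers_per_s: float) -> float:
    """Peak bandwidth = bus width in bytes * transfers per second."""
    return bus_width_bits / 8 * transfers_per_s / 1e9

mt_s = 4266e6                       # LPDDR4X-4266: 4266 million transfers/s
phone = bandwidth_gb_s(64, mt_s)    # hypothetical 64-bit (dual-channel) bus
tablet = bandwidth_gb_s(128, mt_s)  # hypothetical 128-bit (quad-channel) bus

print(f"64-bit bus:  {phone:.1f} GB/s")   # ~34.1 GB/s
print(f"128-bit bus: {tablet:.1f} GB/s")  # ~68.3 GB/s
print(f"ratio: {tablet / phone:.1f}x")    # 2.0x, matching the SFFT gap
```

A clean 2x bandwidth ratio is exactly what a bandwidth-bound workload like SFFT would show, which is why the "twice as many channels" reading fits the results.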
> I thought iPads always had twice the memory bandwidth of iPhones because the memory controller is 128-bit instead of 64-bit.
> There is a dedicated ML coprocessor (the Neural Engine) and there are matrix multiplication accelerators on the CPU cores (most likely for smaller jobs or ones that need more flexibility). I don’t remember Apple mentioning any ML accelerators on the GPU. You can obviously do ML via GPU compute, it’s just not the most power-efficient way.

You are correct. They called the matrix multiplication accelerators "ML accelerators" in some marketing literature (or perhaps someone in the press quoted them wrong). That threw me off.
> Anyway, at least we know for sure that these results are not a fluke. The big disparity in Metal scores does suggest that there is something more beyond what Apple told us. They did mention "new memory compression" for the A14 GPU on Tuesday, but I am kind of doubtful that it could deliver such big improvements in compute.

In the end, could the improvements result mostly from higher bandwidth and better compression? It seems Apple uses LPDDR5 now.
This would be super cool. And super expensive.

I missed it when it dropped, but a couple of days ago Imagination publicized some info about their upcoming B-Series GPU technology, which would seem to be of interest here.
IMG B-Series GPU - Imagination (www.imgtec.com): "IMG B-Series is a range of GPU IP that takes all the advances of IMG A-Series, the fastest GPU IP ever created, and adds even more performance."
Imagination Announces B-Series GPU IP: Scaling up with Multi-GPU (www.anandtech.com)
Note: despite what you may have heard, Apple didn't make a permanent break with Imagination. Some time after the very public spat a few years ago, both companies quietly removed all the PR bluster about it from their websites, which seemed to indicate they'd come to a new agreement. Sometime later, they made it public; Apple now has something akin to an ARM architectural license which allows Apple to design its own implementations of Imagination's GPU technologies. So, things derived from IMG B-Series could well be showing up in Apple Silicon, as an "Apple" GPU.
Of particular interest, IMO: B Series is targeted at desktop, and supports multi-GPU scaling. It seems obvious in retrospect, but TBDR is actually a much better fit for multi-chip GPU designs than immediate mode GPUs. You're already splitting the scene into a bunch of tiles; so what if the physical hardware rasterizing the tiles is split across multiple chips? There's no demand for low latency communication between the tile engines and the geometry processor, so as long as you have enough geometry processor to keep all the tile engines fed, it should be easy to scale up. There aren't any problematic hacks required, unlike multi-GPU with immediate mode engines.
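The "tiles are already independent work units" argument can be illustrated with a toy sketch. This is not Apple's or Imagination's actual scheme, just a minimal illustration of why a tiled renderer partitions naturally across GPUs:

```python
# Toy illustration (not any vendor's real scheme): in a tile-based
# deferred renderer the screen is already cut into independent tiles,
# so spreading them across N GPUs is a simple partitioning step with
# no fine-grained synchronization needed between GPUs per tile.

def assign_tiles(width: int, height: int, tile: int, n_gpus: int):
    """Round-robin screen tiles across GPUs; each tile renders independently."""
    tiles = [(x, y) for y in range(0, height, tile)
                    for x in range(0, width, tile)]
    buckets = [[] for _ in range(n_gpus)]
    for i, t in enumerate(tiles):
        buckets[i % n_gpus].append(t)  # tile (x, y) goes to GPU i % n_gpus
    return buckets

# 1920x1080 screen, 32px tiles, 4 GPUs (e.g. two MPX modules, two GPUs each)
work = assign_tiles(1920, 1080, 32, 4)
print([len(b) for b in work])  # the 60*34 = 2040 tiles split evenly, 510 each
```

The interesting part is what the sketch leaves out: the geometry/binning stage that decides which triangles touch which tiles still has to feed every GPU, which is why the post notes you only need enough geometry throughput to keep the tile engines fed.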
This answers one of the big questions that's been kicking around my head since the announcement: How would Apple build an Apple Silicon Mac Pro given the absolutist statements they've made about only using Apple TBDR GPUs? The IMG-derived GPUs in A series chips have always had a single full GPU integrated into one chip with everything else. While it's been quite powerful for what it is, it wasn't obvious how that could in any way scale to a system like the Mac Pro, where one of the headline features is two 500W MPX module sockets, each capable of supporting two GPUs.
Now we have a clue about how Apple Silicon might be able to scale out like that, and it may be able to do things that were impossible with AMD GPUs before (all four GPUs contributing to rendering the same scene, without ugly hacks like AFR).
> I missed it when it dropped, but a couple days ago Imagination publicized some info about their upcoming B-Series GPU technology which would seem to be of interest here.

I’ve been checking their website fairly regularly as I consider and speculate, and that appeared on the same day as the Apple announcements. It may have been posted before the event, but I didn’t check before the event.
Do we think parts of the GPU are clocked higher on the iPad due to better thermals?
> More memory bandwidth is what I was thinking. But the iPhones have more memory, so how is it that the iPad has more bandwidth?

That’s possible, but it won’t explain the results we are seeing. It’s not like the iPad is 30% faster in every benchmark; there are just a few where it’s twice as fast. Not something higher clocks would do.
Currently I have two hypotheses. One: the iPad has more memory channels and some compute tasks scale very well with that. Two: it’s a bug in Geekbench.
More memory bandwidth is what I was thinking. But the iPhones have more memory, so how is it that the iPad has more bandwidth.
Bandwidth is a function of how fast the memory is and how many bits of it you can read at once. That’s different from the amount of memory.