
Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
The Metal benchmark in Geekbench scores compute performance, not graphics performance. But even in the same test, M1 gets about 21000 (7-core Air gets about 19000) and the Radeon Pro 5600M gets about 42000 on average. So the 5600M is actually faster than you may think.

Apple of course isn't just aiming to "match" the performance of the 5600M. Apple has never introduced a machine that merely matches the GPU performance of the previous generation. So they "need" at least 3-4x the M1's GPU performance to be able to claim something like a 1.5-2x improvement over the last generation. That's why I'm guessing Apple will need more than 16 GPU cores here. It's also worth noting that it's not as easy as just throwing more cores at the problem. At some point, memory bandwidth becomes a bottleneck as well, and Apple will need memory that's faster than LPDDR4X. So of course performance per watt will take a nosedive.
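To put rough numbers on that bandwidth point: the M1's commonly cited LPDDR4X figure is about 68 GB/s, and if GPU bandwidth demand scaled roughly linearly with core count (a simplifying assumption, not a measurement), a hypothetical 4x-larger GPU would want something like:

```python
# Back-of-envelope check on the memory-bandwidth ceiling.
# The M1 figure is the commonly cited one; the scaling factor is hypothetical.
m1_bandwidth_gbs = 68.25   # LPDDR4X-4266 on a 128-bit bus
gpu_scale = 4              # hypothetical 4x GPU-core scaling

needed = m1_bandwidth_gbs * gpu_scale
print(f"~{needed:.0f} GB/s needed")  # ~273 GB/s, well past LPDDR4X
```

That lands in HBM2/GDDR6 territory, which is consistent with the idea that faster memory (and more power) comes along for the ride.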

Also, depending on your use case, an M1X chip with a much more powerful GPU may not really last all that long. For instance, if I'm gaming, my M1 MacBook Pro 13" barely lasts about 6 hours with Hitman/Tomb Raider. It does of course reach 15+ hours if I'm just browsing the internet. With a GPU that's 3-4x faster and more CPU cores, I wouldn't be surprised if the 16" MacBook with M1X can blow through the battery in 3 - 4 hours (implying a max power consumption of 30 - 35W here, which is... "generous" considering the M1 is about 15W in the worst case).
All good points!

I agree that a new MBP16 will need to improve upon the current top-end 5600M - I'm not sure whether a 2x performance increase will be achievable, but let's see.

I would hope that a new MBP14 would be able to get close to the current 5600M - which would be a significant improvement over the current M1 Macs.

I do wonder how Apple plans to scale out GPU performance on Apple Silicon. As you say, you can't keep adding cores without other changes such as more memory bandwidth. We may see the GPU moved onto a separate die (on the same SoC package) to improve yields and offer more flexible CPU/GPU combinations.

All of these changes will increase power consumption - whether Apple can maintain the performance per watt remains to be seen, but battery life is likely to take a hit compared to the M1 Macs. There's no such thing as a free lunch.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Does anyone know whether the current GPU benchmarks are skewed towards IMR vs TBDR GPUs?

If the benchmarks are skewed, it probably doesn't make much sense to rely on those numbers to predict future Apple GPU strategy.

In any case, I would think that Apple would have had all their CPU, GPU and other cores figured out well before they decided to switch to their own silicon.
 

Jpoon

macrumors 6502a
Feb 26, 2008
553
38
I think Intel is going to be in the stack at least as long as the 7,1 Mac Pro life cycle, so like 3-5 years?
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
What do you mean by that? And which benchmarks are you talking about?
I'm referring to the Geekbench Compute benchmark, and basically just about any GPU benchmark that's used to compare GPU performance between the Intel and M1 Macs.

With the IMR GPUs used in Intel Macs, any data used by the GPU has to be copied over PCIe. Apple's GPUs basically get the data free of charge in this regard, all else being equal. Would that mean Apple's GPUs don't need as much grunt as the current crop of GPUs to achieve the same performance level?

I have to admit I'm not knowledgeable in this area, so I'm curious to know whether the existing benchmarks used to compare the two Mac architectures are relevant.
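For a sense of scale on that PCIe copy cost - the numbers here are ballpark assumptions, not measurements:

```python
# Rough sketch of the copy an IMR dGPU pays before a kernel can even start.
pcie3_x16_gbs = 16.0   # ~16 GB/s practical peak for PCIe 3.0 x16
buffer_gb = 1.0        # hypothetical 1 GB working set

copy_ms = buffer_gb / pcie3_x16_gbs * 1000
print(f"Host-to-device copy of {buffer_gb:.0f} GB: ~{copy_ms:.1f} ms")
```

A unified-memory GPU skips that copy entirely; whether those ~60 ms matter depends on how long the kernel itself runs.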
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I'm referring to the Geekbench Compute benchmark, and basically just about any GPU benchmark that's used to compare GPU performance between the Intel and M1 Macs.

With the IMR GPUs used in Intel Macs, any data used by the GPU has to be copied over PCIe. Apple's GPUs basically get the data free of charge in this regard, all else being equal. Would that mean Apple's GPUs don't need as much grunt as the current crop of GPUs to achieve the same performance level?

TBDR only benefits rasterization performance in presence of complex pixel shading, which basically limits it to games and maybe some odd professional software that uses game-like rendering techniques.

As to Geekbench, that's an interesting question. It depends on what Geekbench actually measures. M1 has very low CPU/GPU communication latency (as you mention, they don't need to transfer any data between physical memory pools), but it lacks memory bandwidth compared to some dGPUs with similar theoretical compute performance.
In tasks where the working data can be effectively preloaded and reused frequently, a dGPU with fast dedicated memory will likely outperform the M1. If the tasks are relatively small and have to be streamed to/from the GPU continuously, the M1 will probably have an advantage. This is one of the reasons why the M1 does so well in video editing but doesn't quite reach its peers in some synthetic benchmarks such as LuxMark.

Regarding Geekbench specifically, I am wondering whether it measures only the shader execution time, or also the data setup time. If it's only the former (as I would suspect), then this would probably penalize a GPU like the M1 that has lower memory bandwidth but also lower setup latency. Another question is the complexity of the benchmark itself. The M1 has quite a lot of compute power for a GPU its size, so it will do exceptionally well in tasks where compute work outweighs the data transfers. Geekbench uses fairly simple benchmark kernels that are not very demanding; the Geekbench compute scores I have looked at seem to correlate primarily with memory bandwidth.
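A toy model of that measurement question - every timing below is invented to illustrate how the choice of measurement window can flip a ranking, not a real GPU number:

```python
def score(work, kernel_s, setup_s, include_setup):
    """Throughput-style score; optionally charge the data-setup time."""
    return work / (kernel_s + (setup_s if include_setup else 0.0))

WORK = 1000.0
# Hypothetical: the dGPU has the faster kernel but pays a PCIe copy;
# a unified-memory GPU has a slower kernel but near-zero setup.
dgpu_kernel_only = score(WORK, kernel_s=1.0, setup_s=0.5, include_setup=False)
umem_kernel_only = score(WORK, kernel_s=1.2, setup_s=0.05, include_setup=False)
print(dgpu_kernel_only > umem_kernel_only)  # kernel-only timing favors the dGPU

dgpu_end_to_end = score(WORK, kernel_s=1.0, setup_s=0.5, include_setup=True)
umem_end_to_end = score(WORK, kernel_s=1.2, setup_s=0.05, include_setup=True)
print(dgpu_end_to_end > umem_end_to_end)    # end-to-end timing flips the verdict
```

Same hardware, same workload - only the stopwatch placement changes, which is why it matters what Geekbench actually times.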

I have to admit I'm not knowledgeable in this area, so I'm curious to know whether the existing benchmarks used to compare the two Mac architectures are relevant.

Benchmarks are almost always relevant as long as you know what they are measuring. Otherwise, how would you compare performance in the first place?
 