
leman

macrumors Core
Oct 14, 2008
19,521
19,675
Well, yeah. I'm just saying... the M1 is an impressive feat of engineering. The CPU is nothing to sneeze at. But in terms of GPU performance, it's still behind what AMD and nVidia are able to achieve.

Not so sure about that. Andrei Frumusanu measured M1 GPU power consumption at 10 watts max. In Rise of the Tomb Raider (1080p, Very High, FXAA), their M1 machine gets 40 fps. A gaming laptop with a GTX 1650 gets 60 fps. Note that this is Rosetta 2 vs. Windows DX 12. If you look at performance per watt, it's 4 fps/watt for Apple and 1.2 fps/watt for Nvidia. Even if we assume that the 1650 is a scrap GPU and not really power-efficient, the faster, specially binned 1660 Ti Max-Q in the Surface Book 3 doesn't do much better here (82 fps / 60 watts = 1.4 fps/watt). And even if we look at the pinnacle of Nvidia power efficiency (the 1650 Ti Max-Q at 35W, assuming it has the same performance as the 1650), we get 1.7 fps/watt.

You are right, of course, that in order to scale that up Apple would need faster RAM, and that would definitely have a negative effect on the battery. But then again, the faster M variants are supposed to go into larger machines with larger batteries. Running GPU-intensive workloads, the RAM seems to draw less than one watt... Apple could double or even triple the memory bandwidth and still limit the memory-related power increase to 3-5 watts at most.
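
Here's the back-of-the-envelope math as a quick Python sketch, if anyone wants to check it. The wattages are the measured peak for the M1 and rated TDPs for the Nvidia parts (the ~50W for the laptop 1650 is just what the 1.2 fps/watt figure implies), so treat the ratios as rough estimates:

Code:
# Rough perf-per-watt comparison, Rise of the Tomb Raider (1080p, Very High, FXAA).
# fps figures are the ones quoted above; watt figures are peak measured power (M1)
# or rated/implied TDP (Nvidia), so the ratios are ballpark only.
results = {
    "Apple M1 (via Rosetta 2)":           (40, 10),
    "GTX 1650 laptop":                    (60, 50),  # 50W implied by the 1.2 fps/W figure
    "GTX 1660 Ti Max-Q (Surface Book 3)": (82, 60),
    "GTX 1650 Ti Max-Q (assumed = 1650)": (60, 35),
}

for gpu, (fps, watts) in results.items():
    print(f"{gpu}: {fps / watts:.1f} fps/watt")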
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
They have only released the entry-level models, which all had only Intel's integrated graphics. All higher-end models will have AMD graphics cards, like the Intel models had, with the automatic graphics switching system.

Apple was very clear that there won't be any third-party GPUs on Apple Silicon Macs. They have their own custom GPUs with custom features.
 

wyatterp

macrumors member
Nov 11, 2020
88
85
Not so sure about that. Andrei Frumusanu measured M1 GPU power consumption at 10 watts max. In Rise of the Tomb Raider (1080p, Very High, FXAA), their M1 machine gets 40 fps. A gaming laptop with a GTX 1650 gets 60 fps. Note that this is Rosetta 2 vs. Windows DX 12. If you look at performance per watt, it's 4 fps/watt for Apple and 1.2 fps/watt for Nvidia. Even if we assume that the 1650 is a scrap GPU and not really power-efficient, the faster, specially binned 1660 Ti Max-Q in the Surface Book 3 doesn't do much better here (82 fps / 60 watts = 1.4 fps/watt). And even if we look at the pinnacle of Nvidia power efficiency (the 1650 Ti Max-Q at 35W, assuming it has the same performance as the 1650), we get 1.7 fps/watt.

You are right, of course, that in order to scale that up Apple would need faster RAM, and that would definitely have a negative effect on the battery. But then again, the faster M variants are supposed to go into larger machines with larger batteries. Running GPU-intensive workloads, the RAM seems to draw less than one watt... Apple could double or even triple the memory bandwidth and still limit the memory-related power increase to 3-5 watts at most.
Amazing to consider what's coming if they are off to this great a start. I'm a big Nvidia fan, but Nvidia may need to be careful they don't pull an Intel in the coming years. I think this is why Nvidia wants to really improve on ARM - their Tegra chip was ahead of its time, but they can't sit still and focus on power-gobbling, ginormous chips as they have with their RTX 3XXX-series GPUs. They need entry-level, efficient GPUs too.
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
Not so sure about that. Andrei Frumusanu measured M1 GPU power consumption at 10 watts max. In Rise of the Tomb Raider (1080p, Very High, FXAA), their M1 machine gets 40 fps. A gaming laptop with a GTX 1650 gets 60 fps. Note that this is Rosetta 2 vs. Windows DX 12. If you look at performance per watt, it's 4 fps/watt for Apple and 1.2 fps/watt for Nvidia. Even if we assume that the 1650 is a scrap GPU and not really power-efficient, the faster, specially binned 1660 Ti Max-Q in the Surface Book 3 doesn't do much better here (82 fps / 60 watts = 1.4 fps/watt). And even if we look at the pinnacle of Nvidia power efficiency (the 1650 Ti Max-Q at 35W, assuming it has the same performance as the 1650), we get 1.7 fps/watt.

You are right, of course, that in order to scale that up Apple would need faster RAM, and that would definitely have a negative effect on the battery. But then again, the faster M variants are supposed to go into larger machines with larger batteries. Running GPU-intensive workloads, the RAM seems to draw less than one watt... Apple could double or even triple the memory bandwidth and still limit the memory-related power increase to 3-5 watts at most.

I'm not sure what numbers you are seeing. Here:

According to that, the 1650 actually gets 80 fps at about the same settings as what we're seeing with the M1.

So its efficiency is actually higher than you're quoting. But ignoring that, the 1650 is still roughly double the performance of the GPU in the M1, at least as far as Rise of the Tomb Raider is concerned. The game is optimized for Metal, so we can't point to Rosetta 2 overhead here. Rosetta 2 may be holding the CPU back, but not the GPU in this case.

And we both know power consumption doesn't scale linearly, so in order to increase performance by 2x, Apple may need to raise the next chip's GPU power consumption to at least double. So... 25W is my guess. Plus whatever increase in memory power consumption.

And suddenly, that next chip doesn't seem so impressive anymore. Sure, it's still more power efficient than nVidia's most efficient chip, but not by as significant a margin as some may think. And this is considering Apple is on 5nm compared to 12nm for the GTX 16 series.

Imagine the kind of gain nVidia will have going from 12nm to 5nm.
 

tdar

macrumors 68020
Jun 23, 2003
2,102
2,522
Johns Creek Ga.
Amazing to consider what's coming if they are off to this great a start. I'm a big Nvidia fan, but Nvidia may need to be careful they don't pull an Intel in the coming years. I think this is why Nvidia wants to really improve on ARM - their Tegra chip was ahead of its time, but they can't sit still and focus on power-gobbling, ginormous chips as they have with their RTX 3XXX-series GPUs. They need entry-level, efficient GPUs too.
Also, I expect that Nvidia will use their Arm ownership to build server chips and do on servers what Apple is going to do with AS. The future is Arm.
 

tdar

macrumors 68020
Jun 23, 2003
2,102
2,522
Johns Creek Ga.
I'm not sure what numbers you are seeing. Here:

According to that, the 1650 actually gets 80 fps at about the same settings as what we're seeing with the M1.

So its efficiency is actually higher than you're quoting. But ignoring that, the 1650 is still roughly double the performance of the GPU in the M1, at least as far as Rise of the Tomb Raider is concerned. The game is optimized for Metal, so we can't point to Rosetta 2 overhead here. Rosetta 2 may be holding the CPU back, but not the GPU in this case.

And we both know power consumption doesn't scale linearly, so in order to increase performance by 2x, Apple may need to raise the next chip's GPU power consumption to at least double. So... 25W is my guess. Plus whatever increase in memory power consumption.

And suddenly, that next chip doesn't seem so impressive anymore. Sure, it's still more power efficient than nVidia's most efficient chip, but not by as significant a margin as some may think. And this is considering Apple is on 5nm compared to 12nm for the GTX 16 series.

Imagine the kind of gain nVidia will have going from 12nm to 5nm.
That's great and all, but irrelevant. There will be no Nvidia GPUs in AS Macs, just as there have not been in recent Macs. In the case of AS, there will be no AMD GPUs either.
You take the system as a whole, the way Apple decides to package it, or you don't.
I feel certain that most people will take it.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
that next chip doesn't seem so impressive anymore. Sure, it's still more power efficient than nVidia's most efficient chip, but not by as significant a margin as some may think.
So you think the M1 drawing around 15W in total for CPU and GPU isn't significant vs. a 1650 setup where the dGPU plus CPU draws more than 35-40W? And all of this in a machine that runs at around 37°C vs. 45°C? OK
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
The numbers are from Anandtech, who tested the M1 and the GeForce in controlled conditions. Using numbers from another source brings in uncontrolled factors, like the precise game settings used.

Both Anandtech and that source already stated their settings. I'm comparing High settings @ 1080p for both. But even setting my numbers aside, the crux of the problem is still that the GTX 1650 is 1.5x faster. It's also a 12nm chip. The M1 is far more power efficient, but it's a 5nm chip. Almost 1.5 nodes ahead.

That's great and all, but irrelevant. There will be no Nvidia GPUs in AS Macs, just as there have not been in recent Macs. In the case of AS, there will be no AMD GPUs either.
You take the system as a whole, the way Apple decides to package it, or you don't.
I feel certain that most people will take it.

Just for the record... I have an M1 MacBook Pro and I love it.

But the thing is... we do have to have this comparison against nVidia and AMD. GPU performance isn't a big concern with the M1, but it'll be a big concern for the upcoming 16" MacBook Pro, the iMac, and the Mac Pro.

So you think the M1 drawing around 15W in total for CPU and GPU isn't significant vs. a 1650 setup where the dGPU plus CPU draws more than 35-40W?

Sure, M1 is impressive. I have one. I know.

But the chip in the 16" MacBook Pro will need to step up its game significantly compared to the GPU of the M1.

It's impressive now, but if this is the most efficient chip that Apple can push out, that means the next chip will be less efficient, and less impressive. I hope I'm wrong. I really do want a 16" MacBook Pro that can last 20 hours and has graphics that beat the 5500M in my current 16". But I also realize that it's not realistic to expect Apple to be able to scale the M1 linearly without any drawback whatsoever.

The reason the current 16" sucks is because of Intel's CPU more so than the GPU. The Core i9 can draw 120W when Turbo Boosting. It's hugely inefficient.

From Anandtech's figures, M1's CPU is drawing about 10W at most. So Apple can still very easily pair M1X with an AMD GPU for their desktop line and just call it a day. It remains to be seen if that's what they'll do, or if they have a plan to somehow usurp the 5600M with another integrated GPU, then go for gold and give the 3080 and 6800 XT a good run for their money.
 

MK500

macrumors 6502
Aug 28, 2009
434
550
I know I'm totally "feeding the troll" here, but why the heck is this false clickbait thread still ranking so high on the forums? Maybe we should start a new discussion on the M1 GPU in a different thread.

tl;dr This thread is likely confusing some newbies.
 

Sanpete

macrumors 68040
Nov 17, 2016
3,695
1,665
Utah
But the chip in the 16" MacBook Pro will need to step up its game significantly compared to the GPU of the M1.
Of course. As they always do for the more expensive, larger, more power-hungry machines.
It's impressive now, but if this is the most efficient chip that Apple can push out, that means the next chip will be less efficient, and less impressive.
"If." Why should it be? I don't get all the angst about this.
 

Homy

macrumors 68030
Jan 14, 2006
2,506
2,459
Sweden
It's impressive for an integrated GPU, but in absolute terms, the results are not so good. Look here: https://forums.macrumors.com/thread...gb-ram-screens-and-settings-included.2269262/

For example, Dota 2 (a very old game) has performance issues, and StarCraft 2 requires low-quality shaders. With Intel Macs, you could plug in an external GPU to get up to 85% of native performance, but now you're stuck.

Then don't play demanding games like Deus Ex: Mankind Divided and Borderlands 3 on an entry-level laptop/iGPU with a 10W TDP. Not even laptops with Intel Xe G7 graphics or an AMD APU, like the Ryzen 5 3400G with Radeon RX Vega 11 at 65W, can do better than 20-25 fps in Shadow of the Tomb Raider on LOW settings.

In that video he ran the game on HIGHEST settings and got around 20-25 fps, and the game isn't even optimized, running through Rosetta. If that doesn't impress you, you have unrealistic expectations.
 
  • Like
Reactions: MEJHarrison

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
"If." Why should it be? I don't get all the angst about this.

So the inverse question is: why shouldn't it be?

Even based on Apple's own graphs, they are showing that past the M1, power consumption will shoot up exponentially while performance may not rise as much.

[Image: Apple's M1 CPU performance-vs-power chart]


Why is it so hard for you and many others to accept that the M1X may actually not be as power efficient as the M1?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
And we both know power consumption doesn't scale linearly, so in order to increase performance by 2x, Apple may need to raise the next chip's GPU power consumption to at least double. So... 25W is my guess. Plus whatever increase in memory power consumption.

And suddenly, that next chip doesn't seem so impressive anymore. Sure, it's still more power efficient than nVidia's most efficient chip, but not by as significant a margin as some may think. And this is considering Apple is on 5nm compared to 12nm for the GTX 16 series.

Imagine the kind of gain nVidia will have going from 12nm to 5nm.

GPU performance basically scales linearly with the number of cores. So does power. I am not suggesting that they increase the clocks. I am suggesting that they make a bigger cluster. The M1 GPU has only 1024 shader cores, running at a low clock.

Another thing you are forgetting is that Apple uses TBDR. That alone gives them a 2x advantage in rendering. Apple will still have no incentive to go with a third-party GPU because their own one will always have two decisive advantages: TBDR and unified memory.
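
Rough numbers, assuming the commonly reported ~1.28 GHz clock and ~10W GPU power for the M1, and perfectly linear scaling with core count (an idealization, of course):

Code:
# Naive FP32 throughput model: FLOPS = ALUs * 2 (one FMA per ALU per clock) * clock.
# Assumes the reported M1 figures (1024 ALUs, ~1.28 GHz, ~10 W GPU power) and
# perfectly linear scaling of both throughput and power with core count at fixed clock.
M1_ALUS = 1024
M1_CLOCK_GHZ = 1.278
M1_GPU_WATTS = 10

def tflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz / 1000

for scale in (1, 2, 3):
    print(f"{scale}x M1 GPU cluster: ~{tflops(scale * M1_ALUS, M1_CLOCK_GHZ):.1f} TFLOPS, "
          f"~{scale * M1_GPU_WATTS} W")

Double the cluster at the same clock and you are already past 5 TFLOPS of raw FP32 at roughly 20W.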
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
GPU performance basically scales linearly with the number of cores. So does power. I am not suggesting that they increase the clocks. I am suggesting that they make a bigger cluster. The M1 GPU has only 1024 shader cores, running at a low clock.

Another thing you are forgetting is that Apple uses TBDR. That alone gives them a 2x advantage in rendering. Apple will still have no incentive to go with a third-party GPU because their own one will always have two decisive advantages: TBDR and unified memory.
1.2GHz is a low GPU clock?
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
GPU performance basically scales linearly with the number of cores. So does power. I am not suggesting that they increase the clocks. I am suggesting that they make a bigger cluster. The M1 GPU has only 1024 shader cores, running at a low clock.

Another thing you are forgetting is that Apple uses TBDR. That alone gives them a 2x advantage in rendering. Apple will still have no incentive to go with a third-party GPU because their own one will always have two decisive advantages: TBDR and unified memory.

Yeah, but even scaling linearly means that a GPU that's 2x faster than the M1 will need 20W of power in the current configuration (same clocks, etc.). Plus the 3-5W increase in memory power consumption if we want to make use of HBM2, and we're looking at around 25W for the GPU. So it's in the ballpark of what I suggested.

So this mythical GPU will be 25W, compared to nVidia's most efficient GPU at 35W. 5nm vs 12nm.
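
Putting my guess into numbers (purely speculative; it assumes perfect 2x scaling of the 40 fps Tomb Raider result and the power figures discussed above):

Code:
# Speculative comparison of a hypothetical 2x-M1 GPU (~25 W including extra memory power)
# against the 35 W 1650 Ti Max-Q, using the Rise of the Tomb Raider numbers from earlier.
hypothetical_fps, hypothetical_watts = 2 * 40, 25   # assumes perfect 2x scaling of the M1 result
nvidia_fps, nvidia_watts = 60, 35                   # 1650 Ti Max-Q, assumed equal to the 1650

print(f"Hypothetical 2x M1 GPU: {hypothetical_fps / hypothetical_watts:.1f} fps/watt")
print(f"GTX 1650 Ti Max-Q:      {nvidia_fps / nvidia_watts:.1f} fps/watt")

Still ahead on paper, but with a 5nm vs 12nm node gap behind it, that's the kind of margin a process shrink could eat into.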
 

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
And we both know power consumption doesn't scale linearly, so in order to increase performance by 2x, Apple may need to raise the next chip's GPU power consumption to at least double. So... 25W is my guess. Plus whatever increase in memory power consumption.
It does if you just add more cores, which is what Apple is probably going to do.

From Anandtech's figures, M1's CPU is drawing about 10W at most. So Apple can still very easily pair M1X with an AMD GPU for their desktop line and just call it a day. It remains to be seen if that's what they'll do, or if they have a plan to somehow usurp the 5600M with another integrated GPU, then go for gold and give the 3080 and 6800 XT a good run for their money.
Nah they stated very clearly that the SoC is all theirs. No AMD nor NVIDIA.

So the inverse question is: why shouldn't it be?

Even based on Apple's own graphs, they are showing that past the M1, power consumption will shoot up exponentially while performance may not rise as much.

Apple_m1-chip-cpu-power-chart_11102020_big.jpg.large_2x.jpg


Why is it so hard for you and many others to accept that the M1X may actually not be as power efficient as the M1?
That's the CPU performance of the same chip at different power draws. They raise the power to raise the clock, hence the quadratic (not exponential) scaling: performance is ~linear with frequency, but the power needed to sustain higher frequencies grows roughly quadratically, because voltage has to rise along with the clock.

If they make a new SoC (M1X) they're not going to just raise the clock and call it a day; they'll probably add more cores, more bandwidth... And multicore or GPU performance does scale almost linearly with the number of cores. The M1X does not have to be significantly less efficient than the M1, and I bet it won't be.
 
  • Like
Reactions: Sanpete and leman

leman

macrumors Core
Oct 14, 2008
19,521
19,675
1.2GHz is a low GPU clock?

Well, Nvidia's mobile 1650 runs at ~1.5 GHz and the 35W 1650 Max-Q runs at 1.2 GHz (same as Apple). So yeah, 1.2 GHz is quite low for a modern GPU.
So this mythical GPU will be 25W, compared to nVidia's most efficient GPU at 35W. 5nm vs 12nm.

Except this mythical GPU will have over 5 TFLOPS of processing power, 2-3 times more than Nvidia's 35W part.

The point is, in the end everyone cooks with water. Apple has a big trick up its sleeve with TBDR, which allows it to be 20-50% more efficient than traditional IMR GPUs in rasterization. In compute, though, it kind of boils down to how many ALUs you have and what clocks you run. Right now, Apple seems to have a decent advantage in compute per watt (probably mostly due to the smaller process) and a very good advantage in rendering (mostly because of TBDR).

In the end, assuming that all GPU makers are at the same process node and use similar tricks, their performance will be comparable. We've had this dance for a while — major GPU vendors were always able to match each other, give or take. And everything else being comparable, Apple will win on the points that are unique to its architecture — TBDR and unified memory. Even if a 30 watt AMD/Nvidia GPU had the same sustained performance as a 30W Apple GPU — why would Apple choose third party if it means abandoning the extreme advantages of unified memory and compromising their development ecosystem?
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
The M1X does not have to be significantly less efficient than the M1, and I bet it won't be.

Well, performance scaling may be linear, but power consumption scaling isn't. That's the realistic scenario. We're already seeing it with the MacBook Air losing only 20% of performance for roughly 33% less power. Let's just agree to disagree here.

In the end, assuming that all GPU makers are at the same process node and use similar tricks, their performance will be comparable. We've had this dance for a while — major GPU vendors were always able to match each other, give or take. And everything else being comparable, Apple will win on the points that are unique to its architecture — TBDR and unified memory. Even if a 30 watt AMD/Nvidia GPU had the same sustained performance as a 30W Apple GPU — why would Apple choose third party if it means abandoning the extreme advantages of unified memory and compromising their development ecosystem?

I'm not suggesting that Apple go with a third party. I'm simply pointing out that Apple isn't really that far ahead of the competition, contrary to what most people have been singing. The reality is that Apple has now caught up to, and probably exceeded, nVidia and AMD at the lower end. That's great.

It remains to be seen how they'll compete at the midrange and the upper end. I don't expect the fight to be easy, but that's what you and many others are hopeful for.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
Well, Nvidia's mobile 1650 runs at ~1.5 GHz and the 35W 1650 Max-Q runs at 1.2 GHz (same as Apple). So yeah, 1.2 GHz is quite low for a modern GPU.


Except this mythical GPU will have over 5 TFLOPS of processing power, 2-3 times more than Nvidia's 35W part.

The point is, in the end everyone cooks with water. Apple has a big trick up its sleeve with TBDR, which allows it to be 20-50% more efficient than traditional IMR GPUs in rasterization. In compute, though, it kind of boils down to how many ALUs you have and what clocks you run. Right now, Apple seems to have a decent advantage in compute per watt (probably mostly due to the smaller process) and a very good advantage in rendering (mostly because of TBDR).

In the end, assuming that all GPU makers are at the same process node and use similar tricks, their performance will be comparable. We've had this dance for a while — major GPU vendors were always able to match each other, give or take. And everything else being comparable, Apple will win on the points that are unique to its architecture — TBDR and unified memory. Even if a 30 watt AMD/Nvidia GPU had the same sustained performance as a 30W Apple GPU — why would Apple choose third party if it means abandoning the extreme advantages of unified memory and compromising their development ecosystem?
Eh, you are quoting boost clocks, which are (supposedly) heavily TDP-constrained. I wouldn't expect the GPU to sit at those frequencies for long (hence why game clocks are lower). Plus, you have to give Nvidia some leeway considering they're two process nodes behind in the chips chosen for comparison. They could likely reduce power just by moving to a newer process node.
 

Sanpete

macrumors 68040
Nov 17, 2016
3,695
1,665
Utah
So the inverse question is: why shouldn't it be?

Even based on Apple's own graphs, they are showing that past the M1, power consumption will shoot up exponentially while performance may not rise as much.

[Image: Apple's M1 CPU performance-vs-power chart]


Why is it so hard for you and many others to accept that the M1X may actually not be as power efficient as the M1?
No, they aren't showing that past the M1 power consumption will shoot up exponentially. They're showing that if you push the M1 itself past a certain point you get diminishing returns. That doesn't mean you can't scale up the same kinds of components that are in the M1 and maintain similar efficiency.

What's hard to understand is why you think this is any different than with past chips.
 

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
Well, performance scaling may be linear, but power consumption scaling isn't. That's the realistic scenario. We're already seeing it with the MacBook Air losing only 20% of performance for roughly 33% less power. Let's just agree to disagree here.
Have you read my post at all? Performance increases linearly with frequency, while power increases quadratically with frequency. So if you double the frequency, you get 2x the performance but 4x the power consumption. But there are other ways to increase performance that don't require changing the frequency.

For example both (multicore) performance and power increase linearly with the number of cores. If you have 2x cores, the power needed will increase to 2x, NOT 4x.

The MacBook Air has a 20% perf. loss using 33% less power because it modulates performance with frequency, not number of cores. When designing a new chip, you can add more cores to increase performance instead of raising the frequency.
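
A toy model of the two strategies, using the simplified relations above (performance scales with cores times frequency, power with cores times frequency squared; real chips also have static leakage and voltage floors, so this is only illustrative):

Code:
# Toy model: performance ~ cores * frequency, power ~ cores * frequency^2
# (the simplified quadratic relation described above).
# Ignores static leakage, voltage floors, memory power, etc.
def perf(cores: int, freq: float) -> float:
    return cores * freq

def power(cores: int, freq: float) -> float:
    return cores * freq ** 2

base = (4, 1.0)  # arbitrary baseline: 4 cores at a normalized frequency of 1.0
scenarios = {
    "baseline":      (4, 1.0),
    "2x frequency":  (4, 2.0),   # 2x perf, 4x power
    "2x core count": (8, 1.0),   # 2x perf, 2x power
}

for name, (cores, freq) in scenarios.items():
    print(f"{name}: perf {perf(cores, freq) / perf(*base):.0f}x, "
          f"power {power(cores, freq) / power(*base):.0f}x")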
 
  • Like
Reactions: BigSplash

zakarhino

Contributor
Sep 13, 2014
2,611
6,963
Apple was very clear that there won't be any third-party GPUs on Apple Silicon Macs. They have their own custom GPUs with custom features.

That's disappointing. Regardless of what "magic" Apple does to its SoCs, it's highly unlikely that they'll beat a dedicated desktop GPU for certain applications.
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
No, they aren't showing that past the M1 power consumption will shoot up exponentially. They're showing that if you push the M1 itself past a certain point you get diminishing returns. That doesn't mean you can't scale up the same kinds of components that are in the M1 and maintain similar efficiency.

What's hard to understand is why you think this is any different than with past chips.

So at this point, we're both just speculating on what can be. I guess I'm a bit more pessimistic than you, but anyways, we'll see when it comes.

Have you read my post at all? Performance increases linearly with frequency, while power increases quadratically with frequency. So if you double the frequency, you get 2x the performance but 4x the power consumption. But there are other ways to increase performance that don't require changing the frequency.

For example both (multicore) performance and power increase linearly with the number of cores. If you have 2x cores, the power needed will increase to 2x, NOT 4x.

The MacBook Air has a 20% perf. loss using 33% less power because it modulates performance with frequency, not number of cores. When designing a new chip, you can add more cores to increase performance instead of raising the frequency.

Well, we also have the iPad Air with the A14 as a case study. It runs at lower frequencies, sure. But it doesn't have half the core count. And it's 5W instead of 10-15W.
 

lJoSquaredl

macrumors 6502a
Mar 26, 2012
522
227
Even if they hit 70% of the performance of those cards, I think it's fine. And graphics performance will only get better in future chips each year.

70% while having better battery life and possibly low to no heat or fan noise? Yeah sign me up for that any day lol
 
  • Like
Reactions: ikenstein