
el-John-o

macrumors 68000
Nov 29, 2010
Not doubling but quadrupling. And it’s not a random guess but an extrapolation based on existing specs and benchmarks.

Not so long ago the idea of an iPhone CPU outperforming a desktop one was laughable to many. And look where we are now.
Yeah, but that didn't happen overnight.

Trust me; I'd LOVE to see that kind of performance from an M1. But I don't think it's a given.

If quadrupling the number of 'cores' does indeed quadruple the performance (and the leaks are true), then that would give it... *drumroll please*, mid-tier desktop GPU performance. Which would be ASTONISHING, and damn near physics-breaking like only Apple can do.

Blowing past them, though? I'd love to see it, but I don't think that's what is being suggested just yet. If you take the M1's current benchmark performance, framerates in games, etc., and just multiply all of those numbers by 4, you're basically right at the numbers of an RTX 3060, a $300 mid-range GPU.
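To make that multiply-by-4 arithmetic explicit, here's a rough Python sketch. The baseline figures are placeholders I'm assuming for illustration (the TFLOPS number is only approximate, the frame rate is made up), and it assumes perfectly linear scaling, which is exactly the open question:

```python
# Back-of-the-envelope x4 extrapolation. Baseline figures are illustrative
# placeholders, not measured benchmark results.
M1_GPU_CORES = 8
RUMORED_GPU_CORES = 32          # per the leaked configurations discussed above

baseline = {
    "fp32_tflops": 2.6,         # roughly the published M1 GPU compute figure
    "game_fps_1080p": 30.0,     # hypothetical frame rate in some title
}

scale = RUMORED_GPU_CORES / M1_GPU_CORES   # = 4.0, assumes perfect linear scaling

projected = {name: round(value * scale, 1) for name, value in baseline.items()}
print(projected)   # {'fp32_tflops': 10.4, 'game_fps_1080p': 120.0}
```

Whether real workloads scale that cleanly is the whole question, since clocks, memory bandwidth and thermals all move too.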

One bit of confusion, and something that annoys me so much about modern GPU manufacturers, is that they use the exact same marketing names for products that aren't at all equivalent. For example, a mobile RTX 3060 has half the RAM and is about 50% slower than a desktop RTX 3060, but both will simply be called an "RTX 3060". They're not the same GPU, they're not the same anything. They're two completely different products with the same name.

There's no doubt in my mind the M1 will absolutely obliterate integrated GPUs (it already does), and continue to trade blows with the fastest dedicated laptop GPUs. But what I really want to see is at least mid-tier desktop GPU performance: the kind of performance that starts enabling real high-end workloads and decent gaming. I'm not sure if we'll ever see that from M1, though; I wouldn't be surprised if future Mx chips are still paired with GPUs from Nvidia on the very high end.
 

leman

macrumors Core
Oct 14, 2008
Yeah, but that didn't happen overnight.

No, but the technology is there today. The question is simply how far Apple wants to go with it. There is no reason in principle why they wouldn't be able to make a GPU with x cores; the only limitation is their business plan.

If quadrupling the number of 'cores' does indeed quadruple the performance (and the leaks are true), then that would give it... *drumroll please*, mid-tier desktop GPU performance.

The leaks sound realistic, and yes, GPUs scale almost linearly with the number of processing units. It's easy to see if you compare the performance and core counts of desktop GPUs (while normalizing for clocks, of course).
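A minimal sketch of that normalization, with made-up core counts and clocks purely to show the arithmetic (real GPUs deviate once bandwidth and other limits kick in):

```python
# "Performance scales ~linearly with processing units, normalized for clocks."
# Core counts and clocks below are made up for illustration.
def throughput_proxy(cores: int, clock_ghz: float) -> float:
    """Idealized proxy: cores x clock. Ignores bandwidth, caches, drivers."""
    return cores * clock_ghz

gpu_small = throughput_proxy(cores=2048, clock_ghz=1.8)
gpu_big = throughput_proxy(cores=4096, clock_ghz=1.8)

print(gpu_big / gpu_small)   # 2.0 -- twice the units at the same clock, twice the proxy
```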

Which would be ASTONISHING, and damn near physics-breaking like only Apple can do.

Just a direct consequence of using smart technology instead of relying on ever-increasing power consumption like some others do. See my next point. Process advantage helps too, though.

One bit of confusion, and something that annoys me so much about modern GPU manufacturers, is that they use the exact same marketing names for products that aren't at all equivalent. For example, a mobile RTX 3060 has half the RAM and is about 50% slower than a desktop RTX 3060, but both will simply be called an "RTX 3060". They're not the same GPU, they're not the same anything. They're two completely different products with the same name.

Because that's just how Nvidia does business these days. They overclock the hell out of the GPU and the VRAM, increase the power consumption by 50% over the last generation, and declare, "look, we now have a 50% faster GPU!" This power-consumption inflation has to stop. It is absolutely ridiculous that 200 watts for a GPU is considered "mid-range" today; just a few years ago that was reserved for the biggest and hottest GPUs out there. What is Nvidia going to do next, release a 500W GPU and call it the new "mid-range"? Intel is doing it too, by the way: they just raised their mobile CPU TDP to 65W (!!). A couple of years ago that was considered desktop territory. This is lazy, it creates unreasonable expectations, and it is definitely not how innovation is supposed to work.

And that, by the way, is one reason why I am so excited about Apple's new hardware. Because they are currently the only company that gets it. Because they haven't lost their sense of scale and are not joining the "performance at any cost and let the world burn" bandwagon.

There's no doubt in my mind the M1 will absolutely obliterate integrated GPUs (it already does), and continue to trade blows with the fastest dedicated laptop GPUs. But what I really want to see is at least mid-tier desktop GPU performance: the kind of performance that starts enabling real high-end workloads and decent gaming. I'm not sure if we'll ever see that from M1, though;

As I wrote before, it solely depends on what kind of product Apple wants to offer. They currently own the most energy-efficient GPU IP out there, and if they wanted, they could probably sell dGPUs that outperform an RTX 3090 at half the power consumption. But I doubt that they are interested in doing this.

They will likely target products that are "good enough" and ship them in form factors that would be impossible with the usual hardware. Yes, I don't expect Apple prosumer GPUs to be faster than RTX 3060 variants... but they would ship in thin and light laptops with incredible battery life and all the other bells and whistles.
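To show why "slower than an RTX 3060 variant" can still be the better laptop product, here's a toy performance-per-watt comparison; the scores and wattages are placeholders I'm assuming, only to illustrate how the metric flips the ranking:

```python
# Performance-per-watt comparison. All values are hypothetical placeholders;
# the point is the metric, not the specific numbers.
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

big_dgpu = perf_per_watt(score=100.0, watts=350.0)   # "fast but hot" class
apple_gpu = perf_per_watt(score=40.0, watts=15.0)    # "slower but frugal" class

print(f"{big_dgpu:.2f} vs {apple_gpu:.2f} points per watt")   # ~0.29 vs ~2.67
# The slower part wins decisively on efficiency, which is what matters for a
# thin-and-light laptop with long battery life.
```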

I wouldn't be surprised if future Mx chips are still paired with GPUs from Nvidia on the very high end.

Why would Apple use less efficient GPU IP, and why would they sabotage the GPU programming ecosystem that they took so much care to build up? Apple's vision is true heterogeneous computing in a personal device, something that has so far been reserved for supercomputers. A third-party dGPU is a death sentence for that vision.
 

senttoschool

macrumors 68030
Nov 2, 2017
As I wrote before, it solely depends on what kind of product Apple wants to offer. They currently own the most energy-efficient GPU IP out there, and if they wanted, they could probably sell dGPUs that outperform an RTX 3090 at half the power consumption. But I doubt that they are interested in doing this.
Apple CPUs & GPUs seem to be drastically better than AMD, Intel, and Nvidia offerings in terms of efficiency.

And with computing being offloaded to the cloud via services such as Stadia, Xbox Cloud Gaming, etc., you have to believe that Apple will also eventually move a lot of the computing traditionally done locally to the cloud. For example, ray-traced gaming on Apple devices could be done on Apple's cloud with a subscription.

I'm just wondering how much longer Apple can sell better chips in local hardware before cloud computing becomes more important.
 

quarkysg

macrumors 65816
Oct 12, 2019
As I wrote before, it solely depends on what kind of product Apple wants to offer. They currently own the most energy-efficient GPU IP out there, and if they wanted, they could probably sell dGPUs that outperform an RTX 3090 at half the power consumption. But I doubt that they are interested in doing this.
Hmm ... maybe Apple is planning to release an MPX dGPU for their current Mac Pro? Not out of the realm of possibility, I think ... heh heh.
 

leman

macrumors Core
Oct 14, 2008
Hmm ... maybe Apple is planning to release an MPX dGPU for their current Mac Pro? Not out of the realm of possibility, I think ... heh heh.

Why would they invest that much R&D into a niche feature on an obsolete platform? A big part of Apple Silicon's value is the programming model, and putting it on a separate board would break some of the guarantees, making software development even messier.
 

diamond.g

macrumors G4
Mar 20, 2007
The leaks sound realistic, and yes, GPUs scale almost linearly with the number of processing units. It's easy to see if you compare the performance and core counts of desktop GPUs (while normalizing for clocks, of course).

Because that's just how Nvidia does business these days. They overclock the hell out of the GPU and the VRAM, increase the power consumption by 50% over the last generation, and declare, "look, we now have a 50% faster GPU!" This power-consumption inflation has to stop. It is absolutely ridiculous that 200 watts for a GPU is considered "mid-range" today; just a few years ago that was reserved for the biggest and hottest GPUs out there. What is Nvidia going to do next, release a 500W GPU and call it the new "mid-range"? Intel is doing it too, by the way: they just raised their mobile CPU TDP to 65W (!!). A couple of years ago that was considered desktop territory. This is lazy, it creates unreasonable expectations, and it is definitely not how innovation is supposed to work.

And that, by the way, is one reason why I am so excited about Apple's new hardware. Because they are currently the only company that gets it. Because they haven't lost their sense of scale and are not joining the "performance at any cost and let the world burn" bandwagon.
So the 6900/6800 is a good example of why such scaling can also be meh. At least for gaming, the performance improvement isn't really that great going from 36 WGPs to 40.

I also take some issue with the blasting of how the product stacks have shifted, with respect to the GTX 10 series -> 20 series -> 30 series. The 3060 Ti is genuinely as fast as a 2080 Super (which was $700 MSRP), and they did it with 50 watts less TDP as well. Ignoring the silicon lottery for a moment, you can get even lower power figures by tuning the clocks and voltage of these cards, as the reference designs tend to leave a lot on the table. That is why all the AIB 3060 (non-Ti) models boost higher than the reference clocks would suggest.
 

leman

macrumors Core
Oct 14, 2008
So the 6900/6800 is a good example of why such scaling can also be meh. At least for gaming, the performance improvement isn't really that great going from 36 WGPs to 40.

You always get into diminishing returns with performance scaling at the high end, because a lot of other bottlenecks start playing a role. For example, the RX 6900 XT has about 30% more compute power on paper, but the memory interface is the same. Also, with that much processing power, it gets increasingly difficult to use it all efficiently, as any little delay (data synchronization, texture access, batch setup overhead) can end up wasting a lot of time in which the GPU units sit unused.

At the lower end of the performance spectrum it's much easier to reason about scaling, since it's almost always the raw GPU performance (compute + memory fetch) that is the biggest bottleneck. And it's also easier to address these bottlenecks (it's much simpler to increase memory bandwidth from 100GB/s to 200GB/s than from 1TB/s to 1.2TB/s).
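That is essentially a roofline argument. Here's a hedged sketch with illustrative numbers showing how two parts with the same memory interface can land at the same attainable throughput even though one has ~30% more compute on paper:

```python
# Roofline-style estimate: attainable throughput is capped by either raw compute
# or by memory bandwidth x arithmetic intensity. Numbers are illustrative only.
def attainable_tflops(peak_tflops: float, bandwidth_gbs: float,
                      flops_per_byte: float) -> float:
    bandwidth_limit = bandwidth_gbs * flops_per_byte / 1000.0   # GB/s -> TFLOP/s
    return min(peak_tflops, bandwidth_limit)

# Same memory interface, ~30% more compute on paper:
smaller = attainable_tflops(peak_tflops=20.0, bandwidth_gbs=512.0, flops_per_byte=20.0)
bigger = attainable_tflops(peak_tflops=26.0, bandwidth_gbs=512.0, flops_per_byte=20.0)

print(smaller, bigger)   # 10.24 10.24 -- both hit the same bandwidth ceiling
```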

I also take some issue with the blasting of how the product stacks have shifted, with respect to the GTX 10 series -> 20 series -> 30 series. The 3060 Ti is genuinely as fast as a 2080 Super (which was $700 MSRP), and they did it with 50 watts less TDP as well. Ignoring the silicon lottery for a moment, you can get even lower power figures by tuning the clocks and voltage of these cards, as the reference designs tend to leave a lot on the table. That is why all the AIB 3060 (non-Ti) models boost higher than the reference clocks would suggest.

Oh, I am not saying that Ampere did not bring any architectural improvements, it's just that these improvements are much smaller than is commonly assumed. Nvidia used a bit of sleight of hand here: what they essentially did was introduce a new performance/power bracket and then rename the GPU models to make it seem like there is a huge generational leap. But when you look at it more closely, the 3060 basically replaces the 2070, the 3070 replaces the 2080, and so on, all with very similar power consumption. The 3080 and up are new GPU classes with a big jump in power consumption.
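To spell out the "renamed brackets" point, here is the same argument as a tiny sketch using approximate reference TDPs (ballpark figures from memory, so treat them as rough):

```python
# Approximate reference board power, in watts (rounded, ballpark values).
power_brackets_watts = {
    "RTX 2070": 175, "RTX 3060": 170,    # roughly the same bracket
    "RTX 2080": 215, "RTX 3070": 220,    # roughly the same bracket
    "RTX 3080": 320, "RTX 3090": 350,    # new, higher brackets
}

for old, new in [("RTX 2070", "RTX 3060"), ("RTX 2080", "RTX 3070")]:
    print(f"{new} sits in the old {old} power bracket "
          f"({power_brackets_watts[new]}W vs {power_brackets_watts[old]}W)")
```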

One can of course argue that it's better bang for the buck for the consumer (and I don't disagree, if only one could actually buy any of these GPUs at their MSRP), but at the same time it's neither sustainable nor scalable. Building these ridiculously oversized GPUs, where 200W and up is considered "entry-level" to the enthusiast world, only creates weird expectations and ultimately makes things worse for everyone. I'd rather companies worked on making GPUs more power efficient and GPU algorithms smarter, so that we can actually raise the bar for graphics for everyone, rather than having a small group of elitists showing off their 500W+ GPU setups while everyone else feels inadequate.
 

diamond.g

macrumors G4
Mar 20, 2007
You always get into diminishing returns with performance scaling at the high end, because a lot of other bottlenecks start playing a role. For example, the RX 6900 XT has about 30% more compute power on paper, but the memory interface is the same. Also, with that much processing power, it gets increasingly difficult to use it all efficiently, as any little delay (data synchronization, texture access, batch setup overhead) can end up wasting a lot of time in which the GPU units sit unused.

At the lower end of the performance spectrum it's much easier to reason about scaling, since it's almost always the raw GPU performance (compute + memory fetch) that is the biggest bottleneck. And it's also easier to address these bottlenecks (it's much simpler to increase memory bandwidth from 100GB/s to 200GB/s than from 1TB/s to 1.2TB/s).



Oh, I am not saying that Ampere did not bring any architectural improvements, it's just that these improvements are much smaller than is commonly assumed. Nvidia used a bit of sleight of hand here: what they essentially did was introduce a new performance/power bracket and then rename the GPU models to make it seem like there is a huge generational leap. But when you look at it more closely, the 3060 basically replaces the 2070, the 3070 replaces the 2080, and so on, all with very similar power consumption. The 3080 and up are new GPU classes with a big jump in power consumption.

One can of course argue that it's better bang for the buck for the consumer (and I don't disagree, if only one could actually buy any of these GPUs at their MSRP), but at the same time it's neither sustainable nor scalable. Building these ridiculously oversized GPUs, where 200W and up is considered "entry-level" to the enthusiast world, only creates weird expectations and ultimately makes things worse for everyone. I'd rather companies worked on making GPUs more power efficient and GPU algorithms smarter, so that we can actually raise the bar for graphics for everyone, rather than having a small group of elitists showing off their 500W+ GPU setups while everyone else feels inadequate.
Elitism is what the PCMR is all about; as a Mac user you should know this.

These past two years have been odd on the gaming GPU side. You are arguably better off getting a laptop than trying to build a system, due to availability (well, really pricing). Conversely, power usage only really matters in the mobile market. Apple is in an interesting position: for all the power and efficiency they have with Apple Silicon graphics, they won't even be in a position to move the GPU market in one direction or another, by virtue of never selling their GPUs to anyone other than Mac users. And with the dearth of options to actually take advantage of any rasterization performance improvements (read: games), I sadly don't see that changing anytime soon, if ever.
 

quarkysg

macrumors 65816
Oct 12, 2019
Why would they invest that much R&D into a niche feature on an obsolete platform? A big part of Apple Silicon's value is the programming model, and putting it on a separate board would break some of the guarantees, making software development even messier.
Yeah, I know it's a stretch, but I can't help thinking that Apple wouldn't have engineered the 2019 Mac Pro chassis with MPX slots just as a one-off. The Apple Silicon transition strategy would have been done and dusted by the time the 2019 Mac Pro chassis was being designed. Regardless of how monstrous the Apple Silicon Mac Pro's GPU may be, it will still be limited. So pros requiring more graphics power than what is available in the Apple Silicon Mac Pro would not be able to extend it the way they can with the current 2019 Mac Pro.

I think the Metal API has largely abstracted the programming model from the underlying hardware? I concluded that because, at the moment, the same app running on Apple Silicon or an Intel CPU uses the same Metal APIs for GPU resources.

It should be Apple's engineers who have to worry about writing the proper drivers. I have to admit I'm not familiar with Apple's APIs, though.
 

Danny82

macrumors member
Jul 1, 2020
SVE/SVE2, hardware ray tracing
Hi leman, I always value your opinion. Just a quick question based on ifs and assumptions, since you gave the reply I was most looking out for... but of course everyone's opinions are just as important :)

Scenario:
If at WWDC there is a hardware launch based on the rumors and it is for the MacBook... we do not know what it is, but let's say it is the 14" and 16" MacBook Pro with, let's call it, an M1X that uses the same chip architecture as the M1, with 8+2 CPU cores and 16 or 32 GPU cores based on the rumors... and the rumors point towards a year-end MacBook Air, and let's just say it comes with a so-called M2 chip with 4+4 CPU cores on the ARMv9 architecture (SVE/SVE2) and a rumored 9 or 10 GPU cores based on whatever GPU architecture (I believe it is Imagination) with hardware ray tracing...

Would it be a better choice to wait for the MacBook Air? Setting aside the scenario of waiting for next year's M2X...

Would love to hear your thoughts on the pros and cons... it's pretty understandable that the M1X is going to be more powerful than the M2, but the value of SVE/SVE2 and hardware ray tracing shouldn't be underestimated...

About myself: I am a casual gamer but lean more towards video editing... but I would of course love to have a good machine :)
 

leman

macrumors Core
Oct 14, 2008
Hi leman, I always value your opinion. Just a quick question based on ifs and assumptions, since you gave the reply I was most looking out for... but of course everyone's opinions are just as important :)

Scenario:
If at WWDC there is a hardware launch based on the rumors and it is for the MacBook... we do not know what it is, but let's say it is the 14" and 16" MacBook Pro with, let's call it, an M1X that uses the same chip architecture as the M1, with 8+2 CPU cores and 16 or 32 GPU cores based on the rumors... and the rumors point towards a year-end MacBook Air, and let's just say it comes with a so-called M2 chip with 4+4 CPU cores on the ARMv9 architecture (SVE/SVE2) and a rumored 9 or 10 GPU cores based on whatever GPU architecture (I believe it is Imagination) with hardware ray tracing...

Would it be a better choice to wait for the MacBook Air? Setting aside the scenario of waiting for next year's M2X...

Would love to hear your thoughts on the pros and cons... it's pretty understandable that the M1X is going to be more powerful than the M2, but the value of SVE/SVE2 and hardware ray tracing shouldn't be underestimated...

About myself: I am a casual gamer but lean more towards video editing... but I would of course love to have a good machine :)

I'd say it kind of depends on whether you want a MacBook Air or a 14"/16" MacBook Pro? Regardless of how likely (or unlikely) the scenario you describe is, these are still going to be very different machines, with different performance profiles.
 