
Mayo86

macrumors regular
Original poster
Nov 21, 2016
105
304
Canada
Not certain if this has been discussed, but reading the developer material, it seems fairly evident that the graphics in Apple Silicon-based Macs will be entirely distinct from those in Intel-based Macs.

Apple themselves are touting “Apple family GPUs” versus Intel based Mac GPU families.

“How you port a Metal app to a Mac with Apple silicon depends on whether your project supports Apple family GPUs. If your project supports iOS or tvOS, it also supports Apple family GPUs. If your codebase is macOS-only, you may find situations where Apple family GPUs behave differently from the GPUs in Intel-based Macs. ...”

“...Previously, Apple GPUs and Mac GPUs belonged to distinct families, and each GPU only supported one family, so unless you designed a cross-platform app, you only checked for members of a single family. The GPU in a Mac with Apple silicon is a member of both GPU families, and supports both Mac family 2 and Apple family feature sets. Now, to support both Apple silicon and Intel-based Mac computers, test for both families in your app.”

(Source: https://developer.apple.com/documentation/metal/porting_your_metal_code_to_apple_silicon)
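For context, here is a minimal Swift sketch of what that dual family check could look like (my own illustration, not Apple's sample code; the .apple5 threshold is an arbitrary example, pick whatever family your feature set actually needs):

import Metal

// On an Apple Silicon Mac the GPU reports membership in both an Apple family
// and Mac family 2; an Intel-based Mac GPU only reports the Mac family.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let supportsAppleFamily = device.supportsFamily(.apple5)   // illustrative Apple-family level
let supportsMacFamily2  = device.supportsFamily(.mac2)     // Intel-based Mac feature set

switch (supportsAppleFamily, supportsMacFamily2) {
case (true, true):
    print("Apple Silicon Mac GPU: both Apple-family and Mac-family paths are available")
case (false, true):
    print("Intel-based Mac GPU: stick to the Mac-family path")
default:
    print("Older or unexpected device: fall back to per-feature queries")
}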


And in a separate section it further claims that:


“Architectural differences between arm64 and x86_64 mean that techniques that work well on one system might not work well on the other. For example:

  • Don’t assume a discrete GPU means better performance. The integrated GPU in Apple processors is optimized for high performance graphics tasks.”
(Source: https://developer.apple.com/documentation/xcode/porting_your_macos_apps_to_apple_silicon)

While this doesn’t answer whether all Apple Silicon (ARM) based Macs will have solely integrated GPUs, the wording does suggest that even though the Apple family GPUs are integrated into the chip, they may perform as well as or better than the previous discrete chips.

I thought it was interesting and figured it was a good read. Thoughts/speculations/conjectures?
 

peanutbridges

macrumors newbie
Jul 6, 2020
5
3
What else would Apple marketers say? That no serious professional uses integrated and that's why they offer a 32GB HBM2 Radeon Pro in their Mac Pro?

Integrated graphics works for the majority of people, and with a bit of a bump it will cover those on the fence who understand the tradeoff between performance, heat, and noise. But the bigger concern isn't that; it's how you replace a 250W GPU that has a whole lot more compute power. It's easy to displace the low-power mobile AMD chip in laptops, but much harder to compete against a separate dedicated card. Apple isn't immune to heat, voltage, and scaling issues. It's chip design; they still use silicon, and they will run into the same problems as everyone else.

No one buys the argument that a PS5 is dethroning Nvidia's flagship. With a locked console that's largely controlled inside and out, a lot can be tweaked and accelerated. In a majority of future tasks, expect onboard accelerators. Nvidia does it with DLSS 2.0 running on their Tensor cores, upscaling to 4K with fairly imperceptible differences; Apple already offers the Afterburner card, so you can see how dedicated hardware can compensate for fewer cores or weaker GPUs. No doubt the A-chips already have such accelerators in use, so I'm wary of benchmarks because the results are hardly transparent.
 

Erehy Dobon

Suspended
Feb 16, 2018
2,161
2,017
No service
Not certain if this has been discussed, but reading the developer material, it seems fairly evident that the graphics in Apple Silicon-based Macs will be entirely distinct from those in Intel-based Macs.
...
I thought it was interesting and figured it was a good read. Thoughts/speculations/conjectures?
During the WWDC keynote, Apple was VERY careful not to elaborate on the GPU architecture. They didn't refer to the CPU architecture as Arm either. Everything was blandly tossed under the overarching umbrella of "Apple Silicon".

The Tomb Raider demo. The Maya demo. Apple was completely silent about AMD. They just said everything was running great on Apple Silicon. Intel Integrated Graphics aren't Apple Silicon. Radeon chips aren't Apple Silicon either.

This led me to believe that Apple Silicon Macs will debut with their own proprietary graphics architecture.

It is notable that the Developer Transition Kit hardware (basically an iPad Pro in a Mac mini enclosure) does not have a discrete AMD graphics subsystem.

My guess is that when Apple kicked Intel CPUs to the curb, they also gave notice to AMD graphics.
 

UltimateSyn

macrumors 601
Mar 3, 2008
4,967
9,205
Massachusetts
It is very easy to see that on the first page of this sub-forum there are multiple threads already discussing the upcoming Apple Silicon GPUs. We have too many overlapping threads on this topic. Here are some more quotes relevant to your thoughts:

"And to know if a GPU needs to be treated as integrated or discrete, use the isLowPower API. Note that for Apple GPUs isLowPower returns False, which means that you should treat these GPUs in a similar way as discrete GPUs. This is because the performance characteristics of Apple GPUs are in line with discrete ones, not the integrated ones. Despite the property name though, Apple GPUs are also way, way more power-effficient than both integrated and discrete GPUs."

"Intel-based Macs contain a multi-core CPU and many have a discrete GPU ... Machines with a discrete GPU have separate memory for the CPU and GPU. Now, the new Apple Silicon Macs combine all these components into a single system on a chip, or SoC. Building everything into one chip gives the system a unified memory architecture. This means that the CPU and GPU are working over the same memory."
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
There are discussions on this very topic in a number of threads:



All in all, Apple-designed TBDR GPUs are coming to the desktop, and I wouldn't be surprised if non-Apple GPUs are dropped in the not-too-distant future. I have little doubt that Apple GPUs will have excellent performance in their laptops and mid-range desktops. The only big unknown is pro-level desktops such as the Mac Pro; I just don't see Apple replacing the hardware there any time soon.
 

ksec

macrumors 68020
Dec 23, 2015
2,295
2,662
The only big unknown is pro-level desktops such as the Mac Pro; I just don't see Apple replacing the hardware there any time soon.

Yes, I have yet to see a detailed analysis of what happens to pro-level hardware, that is, the 300W TDP CPU and 400W TDP Duo GPU config.

Apart from simply attacking it with brute force, I don't see how they could achieve economy of scale at a big die size without also using it in their own datacenters (which has its own set of issues and constraints).
 

dugbug

macrumors 68000
Aug 23, 2008
1,929
2,147
Somewhere in Florida
I would add that reading the tea leaves in Big Sur and the 2020 WWDC really only gives us a one-year plan. They may well just initially plan integrated GPU offerings for their first year of the transition and then roll out Apple GPU cards / discrete options in year two.
 
  • Like
Reactions: macsplusmacs

leman

macrumors Core
Oct 14, 2008
19,516
19,664
I would add that reading the tea leaves in Big Sur and the 2020 WWDC really only gives us a one-year plan. They may well just initially plan integrated GPU offerings for their first year of the transition and then roll out Apple GPU cards / discrete options in year two.

This is very much true, but then Apple is potentially setting themselves up for a disaster. Because this year they tell everyone to use their fancy tile shaders to take advantage of the TBDR GPUs on ARM Macs, and then next year they would be saying "oh, sorry, now we also have non-TBDR GPUs, so rewrite your rendering code back!". It would have been much simpler for them to just say that ARM Macs might use different GPU programming models. But they were quite adamant that ARM Macs use Apple TBDR GPUs.

Don't get me wrong, I totally get your point and I have been wondering about the same thing. It is really vague, and I would have appreciated it if Apple had been a bit clearer about their roadmap.
 
  • Like
Reactions: theorist9

dugbug

macrumors 68000
Aug 23, 2008
1,929
2,147
Somewhere in Florida
This is very much true, but then Apple is potentially setting themselves up for a disaster. Because this year they tell everyone to use their fancy tile shaders to take advantage of the TBDR GPUs on ARM Macs, and then next year they would be saying "oh, sorry, now we also have non-TBDR GPUs, so rewrite your rendering code back!". It would have been much simpler for them to just say that ARM Macs might use different GPU programming models. But they were quite adamant that ARM Macs use Apple TBDR GPUs.

Don't get me wrong, I totally get your point and I have been wondering about the same thing. It is really vague, and I would have appreciated it if Apple had been a bit clearer about their roadmap.

Good points. Maybe they would consider their own discrete TBDR GPUs, though. Having SoCs for every consumer/prosumer variation would be impractical unless they were bin-based (i.e. one defective core, like the A12X vs. the A12Z).

-d
 

MandiMac

macrumors 65816
Feb 25, 2012
1,433
883
"Despite the property name though, Apple GPUs are also way, way more power-effficient than both integrated and discrete GPUs."
So much this. We conveniently forget that the current A12Z and A13 chips have great graphics onboard, and still sipping around 5-8 watts. Even if one'd quadruple the TDP to 40 watts, that's no comparison to external graphics cards...
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
This is very much true, but then Apple is potentially setting themselves up for a disaster. Because this year they tell everyone to use their fancy tile shaders to take advantage of the TBDR GPUs on ARM Macs, and then next year they would be saying "oh, sorry, now we also have non-TBDR GPUs, so rewrite your rendering code back!". It would have been much simpler for them to just say that ARM Macs might use different GPU programming models. But they were quite adamant that ARM Macs use Apple TBDR GPUs.

Don't get me wrong, I totally get your point and I have been wondering about the same thing. It is really vague, and I would have appreciated it if Apple had been a bit clearer about their roadmap.

They also mentioned at WWDC that you should always query what features a GPU supports and not make any assumptions. So they were specifically not saying that anyone should write render code hard-coded to Apple's GPUs on ARM Macs.

Their code samples continued to deal with AMD-style GPUs on ARM Macs. That doesn't mean we'll see that hardware ship, but they specifically advised against hard-coded render pipelines.
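For illustration, that "query, don't assume" advice might look like this in practice (a hedged sketch of my own; treating .apple4 as the tile-shader threshold is my assumption, not something Apple stated):

import Metal

// Branch on a capability check instead of hard-coding a TBDR-only renderer for ARM Macs.
func selectRenderPath(for device: MTLDevice) -> String {
    if device.supportsFamily(.apple4) {
        // TBDR hardware: tile shaders and memoryless render targets are on the table
        return "tile-based path"
    } else {
        // Immediate-mode hardware (e.g. current AMD/Intel GPUs): classic multi-pass rendering
        return "traditional path"
    }
}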
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,604
8,624
but then Apple is potentially setting themselves up for a disaster. Because this year they tell everyone to use their fancy tile shaders to take advantage of the TBDR GPUs on ARM Macs, and then next year they would be saying "oh, sorry, now we also have non-TBDR GPUs, so rewrite your rendering code back!".
I agree. For the changes Apple is pushing developers to implement, especially with the way TBDR and unified memory fundamentally change Mac expectations, it would be odd to switch models again next year. I think there's an underlying assumption that there's some "upper limit" to Apple's GPUs. This is primarily because everyone making GPUs today benefits from customers expecting a clear tiered structure in which integrated graphics ALWAYS top out at some level well below the highest-performing options.

Could AMD, Intel, or NVIDIA produce better-performing integrated chips? Maybe, but it would cut into the market of expensive high-end options they've built for themselves. Apple's only "customer" for GPUs is themselves, and that "customer" is only looking for low-power, high-performance options, so there's no need to hold back on features and performance just to carve out ever higher-priced niches. I think once we see the performance of Apple's family 1, family 2, etc. Mac GPUs, there will be a much clearer understanding of what can be expected from them.
 

the8thark

macrumors 601
Apr 18, 2011
4,628
1,735
I think the eventual end game Apple is working towards (i.e. decades from now) is to do away with the concept of CPU and GPU. Just one SoC that does everything. What we know as the "integrated" solution, I feel, is just a transitional step. Sure, in the end game you could call it integrated, but it'd be nothing like what we call integrated today.

This fits in with Apple's business model very well. Apple doesn't have to sell as many CPUs as Intel or as many GPUs as AMD/NVIDIA to stay in the black. Apple only sells the whole package (i.e. your Mac or iOS device). For Intel and AMD/NVIDIA, the only way they stay relevant is to keep people wanting their parts of the package (i.e. your computer). When people stop wanting dedicated GPUs, AMD/NVIDIA become less relevant in the market. When people stop wanting high-power, high-wattage CPUs, Intel becomes less relevant in the market.

Intel and AMD/NVIDIA, without a huge shift, can only keep making better and better versions of what they already do. Eventually the physical limits of science will cap how good these CPUs and GPUs can be. Are they both willing to invest the billions into the next big (and potentially very different) thing?
Apple clearly are willing to take this risk.

My guess is that when Apple kicked Intel CPUs to the curb, they also gave notice to AMD graphics.
I happen to agree with this, and it's my guess also. My reasoning is as I said above: I really believe Apple is working towards a world where there is no CPU and no GPU. It's all just one Apple SoC.
 
  • Like
Reactions: Unregistered 4U

leman

macrumors Core
Oct 14, 2008
19,516
19,664
I think the eventual end game Apple is working towards (i.e. decades from now) is to do away with the concept of CPU and GPU. Just one SoC that does everything. What we know as the "integrated" solution, I feel, is just a transitional step. Sure, in the end game you could call it integrated, but it'd be nothing like what we call integrated today.

Over the last decade, we saw CPUs and GPUs occupy two distinct specialized niches. CPUs excel at non-linear data access and complex logic, while GPUs became general-purpose parallel computation machines. And I believe this dichotomy is here to stay. I don't think one can make a single device that is good at both; you need functional units that target a particular use case. Intel tried to blur the lines with Larrabee, but was ultimately unsuccessful.

Furthermore, graphical applications will require a certain amount of fixed-function hardware for the foreseeable future. Geometry clipping (and now ray tracing) is simply much more efficient when done in dedicated hardware rather than occupying the general-purpose compute units. In fact, we have recently seen a renaissance of specialized coprocessors. It is more efficient to have a special unit that can do half-precision matrix multiplication and nothing else than to use the parallel GPU hardware for that.

To sum it up, in a way what you say has already happened. CPU and GPU are simply two different types of devices, optimized to do two different types of things (both of them useful and necessary). The fusion of CPU and GPU into one chip (SoC) happened years ago for many manufacturers. But if you mean that CPUs and GPUs will disappear as device classes, probably not.
 
  • Like
Reactions: the8thark

ksec

macrumors 68020
Dec 23, 2015
2,295
2,662
Yes, I have yet to see a detailed analysis of what happens to pro-level hardware, that is, the 300W TDP CPU and 400W TDP Duo GPU config.

Apart from simply attacking it with brute force, I don't see how they could achieve economy of scale at a big die size without also using it in their own datacenters (which has its own set of issues and constraints).

Replying to myself.

One of the things I overlooked and assumed wrongly was the required usage, or the intention of pushing high-clock-speed single-core performance: that Apple will *need* to use a 7nm / 5nm HP node for its Mac Pro and iMac, and the need to push 4GHz+. How is Apple going to recoup its development cost on a small-volume desktop CPU chip on an HP node? It was the economic model that didn't fit and puzzled me.

But now it just occurred to me: what if Apple decides single-thread performance isn't as important? You end up with a 64-core CPU on an LP node, running with SVE at 3.2GHz and 2.5W per core. Once you include quad-channel memory and PCI Express, this is easily running at 240W TDP.
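Back-of-the-envelope, those assumed numbers work out like this (every figure here is my own assumption, not a published spec):

// Swift sketch of the power budget above
let coreCount = 64.0
let wattsPerCore = 2.5                          // assumed LP-node power per core at 3.2GHz
let corePower = coreCount * wattsPerCore        // 160 W for the CPU cores
let packageBudget = 240.0                       // the suggested total TDP
let uncoreBudget = packageBudget - corePower    // ~80 W left for memory channels, PCIe, etc.
print("cores: \(corePower) W, uncore budget: \(uncoreBudget) W")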

The same goes for the GPU as well: Apple will be trading die area for energy efficiency. And everything will be done on an LP node. No new development tools, design rules, or additional masks. If there is a product with higher TDP needs, go parallel and add more cores, whether that is CPU, GPU, or NPU.

I think this hypothesis fixes my previous model, where making a Mac CPU on an HP node doesn't offer much ROI.
 

MandiMac

macrumors 65816
Feb 25, 2012
1,433
883
But now it just occurred to me: what if Apple decides single-thread performance isn't as important? You end up with a 64-core CPU on an LP node, running with SVE at 3.2GHz and 2.5W per core. Once you include quad-channel memory and PCI Express, this is easily running at 240W TDP.
You're onto something I thought about myself as well. Of course ST performance is important, but they have complete control over macOS and how it handles multi-threading. They are but one optimization away, and if they (big if there) at some point get 4-way multithreading per core at the hardware level (AMD was rumored to have a breakthrough there), suddenly it's all about core count. And if you have, say, 64 cores in a device, they don't have to chase the GHz because the power is already there.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
But now it just occurred to me: what if Apple decides single-thread performance isn't as important? You end up with a 64-core CPU on an LP node, running with SVE at 3.2GHz and 2.5W per core. Once you include quad-channel memory and PCI Express, this is easily running at 240W TDP.

Apple already has the best IPC in the CPU segment. I doubt they will need to do anything special to excel in this department; neither AMD nor Intel will be able to catch up with them any time soon.

As to the rest... you seem to be assuming that making large chips on an LP node is cheaper. Is this really the case? The big question is the yield. I suppose they can always go the chiplet route to address those issues.
 

ksec

macrumors 68020
Dec 23, 2015
2,295
2,662
Apple already has the best IPC in the CPU segment. I doubt they will need to do anything special to excel in this department; neither AMD nor Intel will be able to catch up with them any time soon.

As to the rest... you seem to be assuming that making large chips on an LP node is cheaper. Is this really the case? The big question is the yield. I suppose they can always go the chiplet route to address those issues.

They have the best IPC, but not the best single-thread performance. It is still hard to beat a 5GHz x86 CPU, but I am not sure whether that really matters at the current level of single-thread performance, or whether Apple could design / test / validate just one core that goes up to 4GHz+.

Yield is not as big of a problem for Apple, and die cost doesn't matter as much since they don't sell it to anyone else. And the chiplet being cheaper and more cost-effective for AMD has been blown way out of proportion. Chiplets allow you to "design" and reuse 1 or 2 SKUs across multiple markets; the beauty is in the "design" cost reduction, less so in the die cost and yield. Using the same LP node greatly simplifies the design process and cost. Plus a 5nm 64-core Apple CPU is still less than 200mm2, hardly a "large die".

If they were to do chiplets, which are not as energy efficient, they might as well do it on an HP node. Apple would also need expertise in HyperTransport or some other form of external interconnect, doable, but more complex. Reusing LP to save cost, all while using a less energy-efficient chiplet design, is sort of counterintuitive in my opinion.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
I think the eventual end game Apple is working towards (i.e. decades from now) is to do away with the concept of CPU and GPU. Just one SoC that does everything. What we know as the "integrated" solution, I feel, is just a transitional step. Sure, in the end game you could call it integrated, but it'd be nothing like what we call integrated today.

This fits in with Apple's business model very well. Apple doesn't have to sell as many CPUs as Intel or as many GPUs as AMD/NVIDIA to stay in the black. Apple only sells the whole package (i.e. your Mac or iOS device). For Intel and AMD/NVIDIA, the only way they stay relevant is to keep people wanting their parts of the package (i.e. your computer). When people stop wanting dedicated GPUs, AMD/NVIDIA become less relevant in the market. When people stop wanting high-power, high-wattage CPUs, Intel becomes less relevant in the market.

Intel and AMD/NVIDIA, without a huge shift, can only keep making better and better versions of what they already do. Eventually the physical limits of science will cap how good these CPUs and GPUs can be. Are they both willing to invest the billions into the next big (and potentially very different) thing?
Apple clearly are willing to take this risk.


I happen to agree with this, and it's my guess also. My reasoning is as I said above: I really believe Apple is working towards a world where there is no CPU and no GPU. It's all just one Apple SoC.

I largely agree, and think this is one of the smarter narratives that I've read regarding Apple's actual plans to move to their own silicon. I just looked at the Mac mini DTK native results on Geekbench. The two-year-old CPU design from the iPad Pro is within striking distance of my MacBook Pro 16. On Metal? The recently released A12Z upgrade is actually almost half the speed of the AMD Radeon Pro 5300M. Considering the GPU was only an incremental enhancement this year, it makes me wonder what we will see from an A14X.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
They have the best IPC, but not the best single-thread performance. It is still hard to beat a 5GHz x86 CPU, but I am not sure whether that really matters at the current level of single-thread performance, or whether Apple could design / test / validate just one core that goes up to 4GHz+.

An Apple A13 core running at 2.7GHz (5 watts) almost matches a 5GHz desktop Intel CPU. They don't have to do much to beat it, to be honest. A new process and slightly higher clocks (plus, knowing Apple, probably a yet wider backend) should be enough.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Apart from simply attacking it with brute force, I don't see how they could achieve economy of scale at a big die size without also using it in their own datacenters (which has its own set of issues and constraints).
Big package with lots of dies.

Apple already has the best IPC in the CPU segment. I doubt they will need to do anything special to excel in this department; neither AMD nor Intel will be able to catch up with them any time soon.
Define "special." I would not be surprised if Apple has exceptional cache sizes on some of their pro-level CPUs. But I agree insofar as you are suggesting there will not be major architectural changes between A14 cores as they appear on the iPhone and A14 cores as they appear on the Mac.

They have the best IPC, but not the best single-thread performance. It is still hard to beat a 5GHz x86 CPU, but I am not sure whether that really matters at the current level of single-thread performance, or whether Apple could design / test / validate just one core that goes up to 4GHz+.

Yield is not as big of a problem for Apple, and die cost doesn't matter as much since they don't sell it to anyone else. And the chiplet being cheaper and more cost-effective for AMD has been blown way out of proportion. Chiplets allow you to "design" and reuse 1 or 2 SKUs across multiple markets; the beauty is in the "design" cost reduction, less so in the die cost and yield. Using the same LP node greatly simplifies the design process and cost. Plus a 5nm 64-core Apple CPU is still less than 200mm2, hardly a "large die".

If they were to do chiplets, which are not as energy efficient, they might as well do it on an HP node. Apple would also need expertise in HyperTransport or some other form of external interconnect, doable, but more complex. Reusing LP to save cost, all while using a less energy-efficient chiplet design, is sort of counterintuitive in my opinion.
AMD is certainly a special case. I think they had an ongoing contract with Global Foundries that drove them to use the 14nm I/O die in so many places.

Nonetheless, chiplet designs are clearly the future. Intel has also embraced them and there's no reason to believe Apple won't as well. Your point about the interconnect is well-taken, but Apple does not need a smart silicon interposer from the future to create basic chiplet designs. AMD's infinity fabric is not complicated and Apple could replicate it easily.

Three advantages of chiplets over a monolithic die:

1. Chiplets allow you to "design" and reuse 1 or 2 SKUs across multiple markets. Have I read this somewhere before? An 8+4 CPU-core part with 8 GPU cores is suddenly very versatile if it can be plugged into a package with another die full of GPU cores, or a package with an HBM stack.

2. Binning. Especially relevant whenever they get to the Mac Pro; this machine currently ships with up to 28 cores. I'd surely like those on different dies to prevent my clocks from being dragged down by a laggard core.

3. Having increasing numbers of GPU cores on the same die as CPU cores that are pushing ever higher clocks creates heat/area concerns that can be mitigated with a chiplet design. The incredible die density you cite becomes a real liability on machines like the MacBook Pro.

An Apple A13 core running at 2.7GHz (5 watts) almost matches a 5GHz desktop Intel CPU. They don't have to do much to beat it, to be honest. A new process and slightly higher clocks (plus, knowing Apple, probably a yet wider backend) should be enough.
I would very much like to know where you found an A13 core (I assume Lightning??) using 5W. I have been under the assumption that these cores use around 1.5W.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,516
19,664
Keep in mind that SPECint and SPECfp not only measure a single CPU core but also RAM and compiler speed.

Why compiler speed? You probably mean compiler maturity? I wouldn't expect ARM64 codegen to be significantly worse than x86-64; it's a well-tested and well-understood target.

As to RAM, yes, it's kind of difficult to control for it. I suppose it depends on the problem size used in individual benchmarks. I am not too familiar with SPEC. Since it's widely regarded as a good microbenchmark suite, I would assume they make an effort to ensure that the data stays in cache.

You can't isolate the core power draw, so the result shows how much power the system used to run the test.

Is that what AnandTech did? If I am not mistaken, you can measure CPU power draw using Xcode, but it has been a while since I looked at it.
 