Have you seen the recent series of YouTube videos from Max Tech?
Yes I have, but I do think that it's wrong for the entire lineup.
There's plenty in Apple's own developer presentations at WWDC to indicate that they are looking at integrated GPUs on the SoC. There's even a slide with a table that excludes NVidia & AMD GPUs from the "GPU" column.
I'm not saying that they will *never* build a dGPU, and one may be necessary for the Mac Pro (something modular, similar to the Afterburner card).
That's pretty much exactly what I'm getting at, we're in agreement here.
But the first Apple Silicon Macs are very unlikely to have a dGPU as we have today. It's possible that in the MBP16 / iMac the GPU is "on-package" but not part of the SoC proper, i.e. built into the same chip package but on a separate silicon die.
Looking back on my post I think I wasn't communicating well and it sounded rude, I'm sorry.
But that scenario is the most likely I think. A separate silicon die at the very least is my guess for their "pro" models.
Given that modern consoles (Xbox Series X & PS5) both have integrated GPUs (with the Xbox, at 12 TFlops, only about 10% behind an NVidia 2080 Ti), what is the technical limitation on producing a fast on-SoC GPU? Happy to be educated!
As impressive as the new consoles are (and I don't mean that sarcastically, they are very impressive), I doubt Apple's "Pro" machines are going to have on-SoC GPUs. The ones in the lineup that currently have integrated graphics are likely to remain so.
We have to recognize that some workloads require more power than even an excellent iGPU can provide. On top of that, there are plenty of measurable benefits to having a separate GPU. The mere fact that dedicated GPUs still exist when integrated GPUs have been with us for nearly two decades supports this.
As for technical limitations, the number one thing that comes to mind is heat. Having the CPU and GPU on separate dies lets heat spread over a larger area and means a heavy workload on one doesn't slow down the other; video encoding and neural processing come to mind.
The second technical limitation that I think we have to recognize is the size of the SoC itself.
The A12Z has a minuscule die size of 10.1mm by 12.6mm (about 127 mm²), and the fact that it performs so well and packs as many features as it does into such a small package is mind-boggling for sure. That's roughly a third of the linear size of, say, the i7-9700K's 37.5mm-square package, and it leaves a lot of growing room, since the A14 (and its assumed Mac derivative) will be on the smaller 5nm node.
Now let's look at the latest high-end AMD desktop-class GPU, the Radeon RX 5700 XT. Its die sits at about 251 mm².
That's roughly double the area of the entire A12Z. Even if we give Apple some heavy leeway and assume they can build an equivalent GPU in half the area, folding it into the SoC would still roughly double the size of the die compared to what it would otherwise be.
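To put rough numbers on that, here's a quick back-of-the-envelope Python sketch; the die areas are approximate public figures, and the "build it in half the area" scenario is purely hypothetical:

```python
# Back-of-the-envelope die-area comparison (approximate public figures).
a12z_die_mm2 = 10.1 * 12.6      # A12Z die, ~127 mm^2
navi10_die_mm2 = 251.0          # Radeon RX 5700 XT (Navi 10) die, ~251 mm^2

# Naive "fold the GPU into the SoC" scenario, plus an optimistic one where
# Apple somehow builds an equivalent GPU in half the area (pure assumption).
combined = a12z_die_mm2 + navi10_die_mm2
combined_optimistic = a12z_die_mm2 + navi10_die_mm2 / 2

print(f"A12Z alone:          {a12z_die_mm2:6.1f} mm^2")
print(f"5700 XT alone:       {navi10_die_mm2:6.1f} mm^2  (~{navi10_die_mm2 / a12z_die_mm2:.1f}x the A12Z)")
print(f"Combined SoC:        {combined:6.1f} mm^2  (~{combined / a12z_die_mm2:.1f}x the A12Z)")
print(f"Optimistic combined: {combined_optimistic:6.1f} mm^2  (~{combined_optimistic / a12z_die_mm2:.1f}x the A12Z)")
```

Even in the optimistic case the die roughly doubles, which matters for the yield point below.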
The Xbox One X's SoC even has a die size of over 360 mm², gargantuan even next to Intel's heavyweight Xeon chips, whose package measures 76.16mm x 56.6mm (for the 8380HL).
So what does all this mean?
When you manufacture microchips, defects always show up, and the larger the chip, the more likely a defect lands on any given die. So if you make smaller chips, you have to throw out fewer of them.
Following this logic, there's a greater benefit to having a dedicated GPU chip and a separate CPU chip in their "Pro" machines.
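To see why die size hits cost so hard, here's a sketch using the simple first-order Poisson yield model (yield ≈ e^(−area × defect density)); the defect density below is a made-up illustrative number, not anything from a real fab:

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """First-order Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.002  # defects per mm^2 -- illustrative assumption, not a real fab figure

for name, area in [("A12Z-sized SoC", 127),
                   ("SoC + big integrated GPU", 378),
                   ("Console-class SoC", 360)]:
    print(f"{name:<26} {area:>3} mm^2 -> ~{poisson_yield(area, D0):.0%} good dies")
```

Same wafer, same defect density, but the bigger dies lose more than twice the fraction of chips to defects, which is exactly why splitting the CPU and GPU onto separate dies can make sense for the "Pro" machines.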
With that said, I am in complete agreement that the first Apple Silicon Macs will not have dedicated GPUs, but by the time they get to Mac Pro and MacBook Pro level machines I'd say it's more likely than not that they will have dedicated GPUs.