They also said that they will hit performance equivalent to dGPUs from AMD with power consumption lower than an integrated Intel GPU. Though I'm skeptical of that.
I’m not. The iPad Pro GPU is already comparable to a GTX 1050; give it a few extra cores and more thermal headroom, and you can easily get to the level of the Radeon Pro 5300M. Assuming it runs in TBDR mode, of course, otherwise the RAM bandwidth will limit it too much. To get more performance, though, they need faster RAM. My guess is that they plan to use something like HBM2 as system RAM to solve that particular issue.
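To make the TBDR-versus-bandwidth point concrete, here is a minimal Metal sketch: marking a render target as memoryless keeps it entirely in on-chip tile memory, so it never touches system RAM at all. (The texture dimensions are placeholders, and memoryless storage only works on TBDR GPUs.)

```swift
import Metal

// A depth attachment that lives only in tile memory on a TBDR GPU.
// It is never allocated in, read from, or written back to system memory,
// which is why TBDR mode is so much less bandwidth-hungry.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let desc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .depth32Float, width: 2732, height: 2048, mipmapped: false)
desc.usage = .renderTarget
desc.storageMode = .memoryless  // tile memory only; not supported on immediate-mode GPUs

let depthTarget = device.makeTexture(descriptor: desc)  // nil if the GPU can't do this
```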
Comparable to a GTX 1050? I haven't worked with graphics on iOS, but that seems high. Based on what data? Is that sustained or burst? I mean, I know Macs will have more thermal headroom, but if that's sustained in the iPad's chassis, that's pretty wild.
I doubt they'll use HBM2 for system memory, though. It's very pricey, and people wouldn't be satisfied with a maximum of 8GB of system memory.
At the aforementioned tech talk from WWDC, Apple did however show off their GPU running Dirt Rally at what looked like pretty good settings at a very consistent and seemingly high frame rate, with the immediate-mode rendering toggles forced on since that was an unoptimised title. And just like when they showed off Tomb Raider, it was even running under x86 translation. So I am not at all doubting that they can deliver good performance. But less-than-iGPU power consumption while competing with their dGPUs just seems too good to be true. If the A12X/Z is as good as you say, it may not be too far off, though.
I suspect a hybrid, a bit like Intel's Crystal Well chips, where you have regular DDR memory for system RAM but there's a level 4 cache on die - perhaps with something like an interposer attaching HBM2 to the die as an L4 cache for the GPU.
I should mention that at the Metal for Apple Silicon talk, their diagrams seemed to suggest that GPU memory may, at least in some cases, be separate from main memory: they talked a lot about when to flush to system memory and when to read from system memory, versus just relying on the data already cached in VRAM on the GPU with Apple Silicon.
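That "when to flush, when to read back" discussion maps onto Metal's load/store actions. A minimal sketch of the idea as I understood it from the talk (attachment textures omitted for brevity):

```swift
import Metal

// Load/store actions decide the traffic between tile memory and system memory.
let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].loadAction  = .clear     // don't read stale contents from memory
pass.colorAttachments[0].storeAction = .store     // flush the finished image out to memory
pass.depthAttachment.loadAction      = .clear
pass.depthAttachment.storeAction     = .dontCare  // depth never leaves tile memory
```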
My speculation is that it might make sense for the 16” system, which is already more expensive. Savings from simplifying the overall architecture, as well as not having to rely on IHVs, might make it cost-efficient - and it would sufficiently differentiate the 16” from the rest of the line. But that is my wishful thinking, of course.
I know the A12X is supposed to have similar graphics performance to an Xbox One S (approx 1.4 TFLOPs to the GTX 1050's 1.8*), and the A12Z ups that another notch - from what I've seen that's a perfectly plausible claim. Assuming double the TDP (~15W) and no need to thermally throttle itself (active cooling), I don't see why Apple couldn't get up to low-to-mid dedicated-GPU performance from an even more advanced 5nm-based A14 chipset.
*Yes, I'm aware this is just one measure, and of variable relevance depending on what you're measuring, but it's one easily available quantified marker of performance.
A bit of further info in this article:
The 2018 Apple iPad Pro (11-Inch) Review: Doubling Down On Performance
www.anandtech.com
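For what it's worth, those TFLOP figures are just ALUs × 2 FLOPs (one fused multiply-add) × clock. A back-of-the-envelope sketch; the ALU counts and clock speeds below are illustrative guesses, not published specs:

```swift
// Peak FP32 throughput: ALUs x 2 FLOPs (one FMA) per clock x clock in GHz.
// The ALU counts and clock speeds here are illustrative guesses, not published specs.
func tflops(alus: Double, ghz: Double) -> Double {
    alus * 2 * ghz / 1000
}

print(tflops(alus: 512, ghz: 1.34))  // ~1.37, around the A12X figure quoted above
print(tflops(alus: 640, ghz: 1.40))  // ~1.79, around the GTX 1050 figure
```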
It would be very odd if Apple spent the last several years polishing external GPU support in macOS, and developing the new Mac Pro’s MPX interface for PCIe cards, if they were going to drop support for discrete graphics cards.
I think there will be an Apple on-die GPU, but I can’t believe there will not be an option for discrete GPU upgrades.
To me it wouldn't make sense to switch to Apple Silicon in order to get control over the hardware-software integration and diminish reliance on 3rd-party chip manufacturers, and then... continue to rely on 3rd-party chip manufacturers. They do GPUs really well in the A-series X chips (A12X, A12Z), so while nothing has been said officially, all signs point to them making their own, IMO.
Whether those Apple-made GPUs will be integrated or discrete, well, I have no idea. We'll see soon enough!
Woah. I appreciate the discussion guys.
Take a look at this rumor from a video: if Apple's ARM iMac (Q4 2020 or later) gets just an Apple GPU, the Navi 22 GPU might be reserved for the Intel iMac. The video also estimates that Navi 2x will be announced or released in September or October 2020.
Yup, I see no reason why they couldn't do it. They leaned hard on the fact that they've been creating custom graphics modules for their X series of chips for 6 or 7 iterations now, and seemed very proud of the fact that the iPad's GPU is now 1000x more powerful than it was at launch.
Ya, but I guess the only reason to keep using 3rd-party GPUs would be if Apple *couldn't* do it themselves. It wouldn't make sense, but then 3rd parties have also been in this GPU game longer than Apple, I think, as well.
Usually Apple excels, I reckon, at performance and battery, so there may be some potential issues if they still used 3rd-party GPUs. You know the saying: "You make all the components, you have a better life."
It will be an Apple GPU, not an ARM GPU.
Hi all, I think I heard at the keynote that GPUs in ARM Macs will be ARM too, but I wasn't fully paying attention at that time, so I could be very wrong. Do you think ARM Macs won't have AMD GPUs but will have custom, integrated and/or ARM GPUs?
There will only be integrated Apple GPUs.
We will probably see a mix of integrated and discrete.
"And to know if a GPU needs to be treated as integrated or discrete, use the isLowPower API. Note that for Apple GPUs isLowPower returns False, which means that you should treat these GPUs in a similar way as discrete GPUs. This is because the performance characteristics of Apple GPUs are in line with discrete ones, not the integrated ones. Despite the property name though, Apple GPUs are also way, way more power-effficient than both integrated and discrete GPUs."Sorry, but it’s going to take more than that to convince me. For the reason’s i mentioned above (recent time spent on external GPUs and recent time spent on the Mac Pros MPX interface).
That slide’s just meant to set the table for comparison’s sake. They’re not announcing final product specs in such an obtuse manner.
"And to know if a GPU needs to be treated as integrated or discrete, use the isLowPower API. Note that for Apple GPUs isLowPower returns False, which means that you should treat these GPUs in a similar way as discrete GPUs. This is because the performance characteristics of Apple GPUs are in line with discrete ones, not the integrated ones. Despite the property name though, Apple GPUs are also way, way more power-effficient than both integrated and discrete GPUs."
Bring your Metal app to Apple silicon Macs - WWDC20 - Videos - Apple Developer
Meet the Tile Based Deferred Rendering (TBDR) GPU architecture for Apple silicon Macs — the heart of your Metal app or game's graphics...developer.apple.com
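In code, the advice from that quote boils down to something like this (a minimal sketch using the real isLowPower property; the treat-as branches are just placeholders):

```swift
import Metal

// macOS: enumerate every GPU and bucket it the way the talk suggests.
for device in MTLCopyAllDevices() {
    if device.isLowPower {
        // Integrated (e.g. Intel) GPU: shared memory, tighter bandwidth budget.
        print("\(device.name): treat as integrated")
    } else {
        // A discrete GPU, or an Apple GPU (which also returns false here).
        print("\(device.name): treat as discrete")
    }
}
```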
It sounds like Apple GPUs are just going to be part of the SoC with the CPU. Think iPad chips with more cores, clocked up with the greater thermal headroom, and now on a 5nm die. They're going to scream. No eGPU or dGPU needed.
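If it does end up as one SoC memory pool, Metal already exposes a property for checking that at runtime; a tiny sketch (hasUnifiedMemory reports true where the GPU shares the CPU's memory):

```swift
import Metal

// Unified memory means the GPU shares the CPU's memory pool,
// so there is no separate VRAM to copy resources into.
if let device = MTLCreateSystemDefaultDevice() {
    print("\(device.name) unified memory: \(device.hasUnifiedMemory)")
}
```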
Instead of quoting vague marketing images, we might as well just drop this whole conversation and wait and see instead.
It is on Apple to persuade its most demanding users about its platform's future: what happens if they need more power than Apple thinks is enough? Desktops aren't like phones; most people don't upgrade every year. In terms of business hardware depreciation, it's on average three years. Are these SoCs potentially swappable? If memory needs to increase, is it slot-and-swap, or is everyone stuck with some fixed configuration chosen at purchase? And if that integrated GPU design is lackluster on a multi-thousand-dollar workstation, what can be done about that?
"And to know if a GPU needs to be treated as integrated or discrete, use the isLowPower API. Note that for Apple GPUs isLowPower returns False, which means that you should treat these GPUs in a similar way as discrete GPUs. This is because the performance characteristics of Apple GPUs are in line with discrete ones, not the integrated ones. Despite the property name though, Apple GPUs are also way, way more power-effficient than both integrated and discrete GPUs."
Bring your Metal app to Apple silicon Macs - WWDC20 - Videos - Apple Developer
Meet the Tile Based Deferred Rendering (TBDR) GPU architecture for Apple silicon Macs — the heart of your Metal app or game's graphics...developer.apple.com
I should mention that at the Metal for Apple Silicon talk, their diagrams seemed to suggest that their GPU memory at least in some cases be separate from main memory as they talked a lot about when to flush to system memory and when to read from system memory versus just relying on the already cached to VRAM data on the GPU with Apple Silicon