
leman

macrumors Core
Oct 14, 2008
19,520
19,670
Saying an integrated GPU and an SoC are the same thing is not really correct. A CPU that has an iGPU has those two items on the CPU die, and they access system memory via a bus (like PCIe). An SoC also includes a lot of other co-processor blocks. And yes, this is one of the first times we have seen a full-fledged SoC on a laptop and/or desktop.

Depends on how you define what an "SoC" is. Intel has been calling their CPUs Systems on a Chip for a while now. An Intel CPU die includes multiple CPU cores, multiple GPU cores, caches, memory controllers, video encoder/decoder hardware, I/O controllers (including Wi-Fi and Thunderbolt), and AI coprocessors in some newer chips. What about that is not SoC enough for you?
 

the8thark

macrumors 601
Apr 18, 2011
4,628
1,735
Saying an integrated GPU and an SoC are the same thing is not really correct. A CPU that has an iGPU has those two items on the CPU die, and they access system memory via a bus (like PCIe). An SoC also includes a lot of other co-processor blocks. And yes, this is one of the first times we have seen a full-fledged SoC on a laptop and/or desktop.

Depends on how you define what an "SoC" is. Intel has been calling their CPUs Systems on a Chip for a while now. An Intel CPU die includes multiple CPU cores, multiple GPU cores, caches, memory controllers, video encoder/decoder hardware, I/O controllers (including Wi-Fi and Thunderbolt), and AI coprocessors in some newer chips. What about that is not SoC enough for you?

I think the short answer to this is that what we traditionally consider to be "integrated" is changing. In fundamental ways. Which in my opinion is a great thing. It means the industry is not stuck behind certain ways of thinking and certain ways of doing things. They are seeing the writing on the wall that the current way of thinking and doing things is not yielding the returns they and everyone else expect, so they have to take risks and move on to new ideas.

In my opinion, @Joelist is talking about the traditional concept of what integrated is, and @leman is talking about the future of integrated: the stepping stone to whatever the next futuristic version of integrated could be moving towards. I believe we don't know what that is yet, but what the industry is taking risks on now is the transitional step towards it.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Sad to say @leman I do not think HBM2E is viable as all-system memory after reading this study. Section 5.3 has the info on power consumption.

tl;dr HBM2 uses over 10W in some applications and hits about 10W on average in heavy lifting. Even if we were using HBM2E at around 2Gbps (~250GB/s) we would be lucky for it to stay under 5W a stack. That is going to create problems not just for the APU throttling due to heat but also for power consumption in general.

We should start considering HBM2E as L4 (ish) cache more seriously. A single stack has more leeway in where it's placed on the package (e.g. right by the well-behaved GPU cores, opposite the spiky, heat-sensitive CPU cores), and if it's just one stack it can be run fast. Samsung has parts that go upwards of 500GB/s. Depending on how power scales, you could potentially have a 16GB stack operating PDQ, and it wouldn't be a problem if that one stack were regularly pulling 12W or so.

Dual channel LPDDR5 does give a solid ~100GB/s for everything else. Is quad channel DDR5 unreasonable? It's 2/3 the bandwidth of a standard HBM2 part (~200GB/s vs ~300GB/s) at just 4 x 2W. It's a lot of pinouts but otherwise seems quite reasonable for bespoke performance?
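For reference, here is the arithmetic behind those ballpark figures, as a quick sketch (the 1024-bit stack width, the 64-bit channel convention, and the LPDDR5-6400 pin speed are assumptions used for illustration, matching the convention used later in this thread):

```python
def bandwidth_gbps(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * pin_speed_gbps / 8

print(bandwidth_gbps(1024, 2.0))    # one HBM2E stack at ~2 Gbps   -> 256 GB/s (the "~250GB/s")
print(bandwidth_gbps(1024, 2.4))    # a standard-speed HBM2 stack  -> ~307 GB/s (the "~300GB/s")
print(bandwidth_gbps(2 * 64, 6.4))  # dual-channel LPDDR5-6400     -> ~102 GB/s (the "~100GB/s")
print(bandwidth_gbps(4 * 64, 6.4))  # quad-channel LPDDR5-6400     -> ~205 GB/s (the "~200GB/s")
```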
 

StellarVixen

macrumors 68040
Mar 1, 2018
3,254
5,779
Somewhere between 0 and 1
The goal isn't to become the leader.

They won't support dGPUs with Apple Silicon. At least not dGPUs coming from third-party vendors. Apple doesn't need a dGPU with its SoC. It just adds another level of complexity to the software.

They will continue to support dGPUs on the Intel platform for maybe a couple of years, but they won't improve the drivers for AMD GPUs after the Mac Pro transitions to Apple Silicon.

An SoC GPU of any kind just cannot compete with dGPUs from AMD and Nvidia, and who knows if it will ever be able to. Apple HAS to include dGPU support unless they want to turn Macs into oversized iPads.
 

pldelisle

macrumors 68020
May 4, 2020
2,248
1,506
Montreal, Quebec, Canada
An SoC GPU of any kind just cannot compete with dGPUs from AMD and Nvidia, and who knows if it will ever be able to. Apple HAS to include dGPU support unless they want to turn Macs into oversized iPads.
It's not because Macs are going to be on the "same" architecture as an iPad (while not sharing the same chip at all) that Macs are going to be "oversized iPads". You are just totally wrong.

What I think is that they can compete with discrete GPUs, so they won't have to support them, especially from third-party vendors. Apple has the resources, financially and in people, to make great GPUs. The foundation of the Apple GPUs used in current chips is fully in step with the Metal library. They don't have to make a monster GPU to compete. Half the power of a 2080 Ti might be totally sufficient for 100% of use cases on Macs. And they already achieved this "without even trying" (Craig Federighi).
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
Sad to say @leman I do not think HBM2E is viable as all-system memory after reading this study. Section 5.3 has the info on power consumption.

tl;dr HBM2 uses over 10W in some applications and hits about 10W on average in heavy lifting. Even if we were using HBM2E at around 2Gbps (~250GB/s) we would be lucky for it to stay under 5W a stack. That is going to create problems not just for the APU throttling due to heat but also for power consumption in general.

We should start considering HBM2E as L4 (ish) cache more seriously. A single stack has more leeway in where it's placed on the package (e.g. right by the well-behaved GPU cores, opposite the spiky, heat-sensitive CPU cores), and if it's just one stack it can be run fast. Samsung has parts that go upwards of 500GB/s. Depending on how power scales, you could potentially have a 16GB stack operating PDQ, and it wouldn't be a problem if that one stack were regularly pulling 12W or so.

Dual channel LPDDR5 does give a solid ~100GB/s for everything else. Is quad channel DDR5 unreasonable? It's 2/3 the bandwidth of a standard HBM2 part (~200GB/s vs ~300GB/s) at just 4 x 2W. It's a lot of pinouts but otherwise seems quite reasonable for bespoke performance?

Maybe HBM3 will be what is used for Tile Memory, and then DDR5 for the Unified Memory Architecture...?

[Attached image: wwdc20_09.png]
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
Sad to say @leman I do not think HBM2E is viable as all-system memory after reading this study. Section 5.3 has the info on power consumption.

If this is indeed correct, it would be quite discouraging. How reliable are these results? I understand that they use software modeling; these are not actual measurements. Some things I find rather surprising, like the very low latency of GDDR5; I thought it would be higher?

The results of the paper also seem to be incompatible with other information. For example, Rambus marketing material claims that HBM2 offers a power advantage of 3-4x over GDDR6 — which already has lower power consumption than GDDR5... Also, isn't very low I/O power supposed to be the selling point of HBM? In the simulation the I/O power usage is through the roof...

Maybe HBM3 will be what is used for Tile Memory, and then DDR5 for the Unified Memory Architecture...?

We already talked about it. There is no point whatsoever in using HBM for tile memory; the latter is implemented using very fast on-chip cache that is faster than HBM.

An SoC GPU of any kind just cannot compete with dGPUs from AMD and Nvidia, and who knows if it will ever be able to. Apple HAS to include dGPU support unless they want to turn Macs into oversized iPads.

You say that based on what? The iPad already offers performance competitive with mid-range dGPUs. Scale that up to power consumption levels typical for thin and light laptops and RTX 2060-like performance is very achievable. The big question is memory implementation.


Nope, not quite. The mid 2011 had a Radeon HD 6630M option.
No - there was a discrete option in the 2011 model.

You are right, how curious. Looks like a very brief experiment :)
 

toke lahti

macrumors 68040
Original poster
Apr 23, 2007
3,293
509
Helsinki, Finland
There aren't going to be those. TBv3+ (probably v4 or better) will have PCI-e.
Whether an external GPU works or not has nothing to do with Thunderbolt. It only has to do with a driver being available for macOS.
This is why, after the first AS Macs are sold, there might be 2-3 years when Intel Macs support eGPUs and AS Macs do not.
At present the current Apple systems are using AMD 5300 (at different clocks), 5500 (different clocks), 5700 (different clocks), "Vega 64" (gen 1), and data center Vega (Vega 20). Vega 64 is somewhat an artifact of a mostly comatose model (iMac Pro 2017, creeping up on 3 years old). So there are about 4 levels now.

The 5300 level is probably at some risk. (The iMac 2020 5300 doing as well as the older 580X on some tasks shows that it is in the competitive toss-up stage, though. Even if Apple 'catches' the 5300 with an iGPU in 1-2 years, that AMD class will likely be in a higher performance zone.)
...
That doesn't work economically. The "bad core" defects are not going to generate enough volume to fulfill something that sells at much higher volume than the "bigger core count" model. You'd have to fill low-core orders with dies with perfectly good cores in a very large ratio to the defective ones.
You can fill a relatively low-volume product with leftovers, but that isn't how to do high-volume products. Making the die to fit is much better: far better wafer utilization (more dies out of a single wafer). [Doing a bigger die means getting fewer dies off the wafer, which means needing higher wafer throughput to get to the same number of units to sell. That costs more.]
4 levels; let's say the lowest level covers 15% of Macs, the 2nd 7%, the 3rd 2.5%, and the 4th 0.5%.
Say they'd manufacture two GPUs, where the smaller one covers the 2nd level when perfect and half get marked down to the 1st level.
The 4th level could be harder to manufacture, with a perfect-die yield of 17%.
Does this sound funny?

They could make the 1st level the "iGPU base level", but they can't cover the other 3 levels economically with one chip.
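To put rough numbers on that scenario, here is a toy sketch of the volumes implied by those percentages (the 20M total and the 50/50 perfect/binned split are hypothetical assumptions, not real figures):

```python
# Toy volume split using the tier shares assumed above.
total_macs = 20_000_000
tier_share = {"level 1": 0.15, "level 2": 0.07, "level 3": 0.025, "level 4": 0.005}

for tier, share in tier_share.items():
    print(f"{tier}: {int(total_macs * share):>9,} units")

# The proposed small die: sold as level 2 when perfect, binned down to level 1
# otherwise (assuming half of the dies end up in each bin).
level2_demand = int(total_macs * tier_share["level 2"])
small_dies_made = 2 * level2_demand          # half perfect, half binned down
level1_from_binning = small_dies_made // 2
level1_demand = int(total_macs * tier_share["level 1"])
print(f"level-1 demand {level1_demand:,} vs binned-down supply {level1_from_binning:,}")
```

Under those assumptions the binned-down parts cover less than half of the level-1 demand, which is roughly the volume mismatch the quoted reply is pointing at.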
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
If this is indeed correct, it would be quite discouraging. How reliable are these results? I understand that they use software modeling; these are not actual measurements. Some things I find rather surprising, like the very low latency of GDDR5; I thought it would be higher?

The results of the paper also seem to be incompatible with other information. For example, Rambus marketing material claims that HBM2 offers a power advantage of 3-4x over GDDR6 — which already has lower power consumption than GDDR5... Also, isn't very low I/O power supposed to be the selling point of HBM? In the simulation the I/O power usage is through the roof...
OK, strap in, this is gonna get a bit complicated. One useful tool I recommend picking up if you're not already familiar is: whenever someone quotes a "Gbps" speed for their memory, they mean the speed of a single pin. So multiply the number of pins x Gbps and divide by eight (converting Gb > GB) to get the full speed of the memory in GB/s.

For the power consumption of GDDR, it is because they are only using a single module. Normally GDDR would not be configured this way; it might use four modules (four "channels") - effectively quadrupling the number of pins - to hit high bandwidths. Look at figure five of the study for a good diagram. So a GPU might use 4 GDDR6 modules to get to the same place, capacity and bandwidth-wise, as 1 stack of HBM2E - but it's up to the designer.

HBM2, by contrast, has multiple channels within every package - two per die with four dies stacked on top of each other in its smallest configuration, which is what is tested here. So it's no surprise that it would use more power than GDDR.

Note that as you stack more dies, your bandwidth and power consumption for HBM2E could also increase. This is why AMD's Radeon Pro 5600M advertises a 2048-bit memory interface: it's using HBM2 (not E), one GB per stack, and eight stacks. Every stack has 2 channels of 128-bit width; 2 x 8 x 128 = 2048. You can check the math on their pin speed, total pins, and total bandwidth using the numbers on AMD's page. One thing I want to point out is how slowly they've clocked these pins - 1.54Gbps - presumably to keep the memory from melting.
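As a quick check of that arithmetic against AMD's published number (the ~394 GB/s result matches the bandwidth AMD lists for the 5600M; the inputs are the ones quoted above):

```python
# Checking the "pins x Gbps / 8" rule against the Radeon Pro 5600M figures above.
bus_width_bits = 2 * 8 * 128        # = 2048 bits, per the arithmetic in the post
pin_speed_gbps = 1.54
print(bus_width_bits * pin_speed_gbps / 8)   # ~394 GB/s
```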

However, I don't think this is what Samsung and SK Hynix are doing. If you look at press releases and do the math on the numbers Samsung presents: 3.4Gbps x (unknown bits) / 8 = 410GB/s... which yields me 1024 bits on what is confirmed to be an 8-Hi stack. It looks a lot like Samsung is disabling one of the two channels on each die and clocking them up. I am just guessing here, but I believe Samsung figured out that they could get better performance / power this way. Since HBM2E is theoretically on the low end of the power curve (wide and slow is the philosophy), this makes sense. And this is probably why Samsung and SK Hynix are regularly exceeding the speeds set by JEDEC: they've cut the lanes in half.

If that's true, then Samsung/SK Hynix's HBM2E might be able to stay within 15W at some speed or another, and superfast HBM2E could be viable as L4 cache.

One last note. HBM2E has already delivered the speeds and capacities promised for HBM3. If what I'm saying is true, it has also delivered on the last design promise: low-power configurations with half the bandwidth. I am more certain than ever now that HBM2E is, and has always been, HBM3. But because it fell short of the lofty visions and low costs JEDEC set out for it, they chose not to overpromote it.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
For the power consumption of GDDR, it is because they are only using a single module. Normally GDDR would not be configured this way; it might use four modules (four "channels") - effectively quadrupling the number of pins - to hit high bandwidths. Look at figure five of the study for a good diagram. So a GPU might use 4 GDDR6 modules to get to the same place, capacity and bandwidth-wise, as 1 stack of HBM2E - but it's up to the designer.

Wait, now I am totally confused. Looking at table 2 and figure 5, it seems that they are measuring different configs for the simulation: one chip for HBM2 (with 4GB total capacity), four chips for GDDR5 (with 4GB total capacity), 8 chips for LPDDR4 (with 6GB total capacity), etc... but then I don't understand the difference between their one-channel vs quad-channel LPDDR4 config, one seems to use 4 dies while the other one seems to use 4 (for a total capacity of 3GB)??? And yet the power consumption is comparable?

But you are saying that the power consumption figures are per module? In that case, HBM is amazingly good — after all, you'd need only 2 of these chips to get the same capacity as 8 DDR4 chips...

What I would want to know is the performance of these configs per GB of memory.

Note that as you stack more dies, your bandwidth and power consumption for HBM2E could also increase. This is why AMD's Radeon Pro 5600M advertises a 2048-bit memory interface: it's using HBM2 (not E), one GB per stack, and eight stacks. Every stack has 2 channels of 128-bit width; 2 x 8 x 128 = 2048. You can check the math on their pin speed, total pins, and total bandwidth using the numbers on AMD's page. One thing I want to point out is how slowly they've clocked these pins - 1.54Gbps - presumably to keep the memory from melting.

If they are using 8 stacks of HBM2, wouldn't that consume massive amounts of power (according to the paper)? And yet the 5600M manages to pack almost 20 extra CUs into the same TDP... clearly HBM2 must give them a massive power-saving advantage over GDDR6... they would need at least an extra 10 watts to fit a full Navi 10 chip into the same bracket as Navi 14...
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Saying an integrated GPU and an SoC are the same thing is not really correct. A CPU that has an iGPU has those two items on the CPU die, and they access system memory via a bus (like PCIe). An SoC also includes a lot of other co-processor blocks. And yes, this is one of the first times we have seen a full-fledged SoC on a laptop and/or desktop.

First, almost all iGPUs also include video/media encode/decode blocks, so it isn't just CPU and GPU.

Second, Intel's Y series and most of the current AMD implementations include several classic PCH function units on the chip package. Intel doesn't have them all on one die. But having one and only one die isn't a good definition guideline for SoC either. The "Chip" in "System on a Chip" is more about the chip package than the die.

Apple waves its hands at "SoC" for the Watch by wrapping plastic around a logic board and calling it a 'chip'. It is effectively a bill-of-materials unit. Bare dies typically aren't, to system builders; they have to be in some sort of carrier/package.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Nope, not quite. The mid 2011 had a Radeon HD 6630M option.

With 256MB of GDDR5 VRAM. The other two models with the iGPU technically had access to a larger memory footprint (slower, but still more).


If an app got to the point of 'spilling' out of the VRAM, that dGPU was sometimes in worse shape than the iGPUs. With high CPU memory pressure and a workload that could sit inside ~160-192MB of VRAM, then yes, it did much better. That was going back to what a 2008-2009 era MBP with a dGPU would have had for VRAM capacity (circa 2011, MBP dGPUs were up around 1GB of VRAM; more expensive, but past the 512MB stage). If you hooked up a 27" display, it was "less bad" than the iGPU, but there was not much VRAM left after accounting for the frame buffers for the large display. A dGPU that is 'starved' of VRAM also has issues.


Thermally, and in terms of board space for enough VRAM to make a difference, it really didn't work out all that well... which is why it disappeared back into oblivion for about a decade. It is an odd-ball, corner-case BTO configuration that some folks trot out like it was some kind of normal design mode that succeeded. It didn't. What current systems can do with a decent eGPU (TBv3 or better port) blows the doors off of that (especially with a high-res display).

So as the Intel iGPUs got better, it wasn't that hard to displace most of what the lowest-of-the-low dGPU placeholder was covering, with far less space and far less cost.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
...
If you have a look through the extensions viewer in system profiler, you will notice, that by now all relevant kexts have been ported over to arm64e and depending on their purpose are supporting either arm64e only or both arm64e and x86_64.

The interesting thing is that even stuff like the Apple Afterburner driver has been ported to arm64e already and is present as a universal kext for both ARM and x86. Considering that is the case at such an early stage is a clear hint at the fact that we will see a Mac Pro based on Apple Silicon rather sooner than later and that it will still support PCIe expansion.

1. Kexts are a dead end. A system extension, perhaps, but...
2. All Apple needs is a test-harness logic board for an Apple Silicon SoC supporting just one x16 connection to test out the Afterburner board (so an SoC aimed at the MBP 16" or iMac 27" class). Apple has more than just Afterburner to worry about. There are lots of other, more widely used non-GPU cards that go into a Mac Pro or a TB external PCI-e card enclosure. All of those kexts are about to be "blown up" by Apple's plan to kick drivers out of the kernel going forward.
They all need some work done, and the DTK doesn't do diddly poo to scaffold getting that work done (no TB port and no way to attach a PCI-e card). [I doubt Apple can really afford to wait until the Mac Pro prototype is in late beta to start handing something to the critical players here.]

3. You don't need 5-6 concurrent slots to test individual cards.

However, looking at the graphics drivers for AMD, those are the only relevant drivers available for x86_64 only. If Apple was planning to keep supporting AMD GPUs in their Apple Silicon Macs, those would have been ported as universal kexts already as well. So it really looks like we will only see Apple's own GPUs in the future. Quite a bummer IMHO, and I really don't think this will work out well.

That more so means that Apple isn't building drivers for current AMD GPUs for future Apple Silicon machines. That shouldn't be surprising. Apple largely only builds drivers for GPU implementation instances that have ended up inside of Macs at some point (lands in the MBP 16" and appears later with support in discrete card form; in the iMac 27" and later in discrete card form). None of the current AMD graphics are probably going into the first AS Mac. And Apple isn't going to put eGPU over Thunderbolt on some "super high priority" fast track either. When an Apple system uses an AMD dGPU, then the driver will get some reasonable resource allocation. If one isn't coming soon, then we won't see one.
Likewise, a "highly custom GPU" from some other vendor is not a good reason to drop drivers into this macOS branch of development.

Pretty likely the Mac Pro is coming late, which means the drivers will come late, and Apple disrupts external Thunderbolt PCI-e for a long while, since the DTK doesn't enable making progress on that in the slightest. No tool to make progress generally leads to no progress getting made.

If Apple's first Apple Silicon system is some one-port (or two-port) USB-C MacBook rebirth, then that is all the less surprising.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Wait, now I am totally confused. Looking at table 2 and figure 5, it seems that they are measuring different configs for the simulation: one chip for HBM2 (with 4GB total capacity), four chips for GDDR5 (with 4GB total capacity), 8 chips for LPDDR4 (with 6GB total capacity), etc... but then I don't understand the difference between their one-channel vs quad-channel LPDDR4 config, one seems to use 4 dies while the other one seems to use 4 (for a total capacity of 3GB)??? And yet the power consumption is comparable?

But you are saying that the power consumption figures are per module? In that case, HBM is amazingly good — after all, you'd need only 2 of these chips to get the same capacity as 8 DDR4 chips...

What I would want to know is the performance of these configs per GB of memory.



If they are using 8 stacks of HBM2, wouldn't that consume massive amounts of power (according to the paper)? And yet the 5600M manages to pack almost 20 extra CUs into the same TDP... clearly HBM2 must give them a massive power-saving advantage over GDDR6... they would need at least an extra 10 watts to fit a full Navi 10 chip into the same bracket as Navi 14...
Sorry, I explained poorly and caused a lot of confusion... I am just starting to understand everything in that study myself!

"Module" was the wrong term. They test one 2GB 4-Hi HBM stack (8 channels, 128bits per channel, @ 2Gbps per bit) and one 1GB GDDR5 part @ 6Gbps.

I caused more confusion by saying "eight stacks" when I meant "8 dies" when referring to the Radeon Pro. It uses one 8-die stack, which has two channels per die (16 channels). The newer Samsung and SK Hynix parts are also 8-Hi, but seem to disable one channel per die (thus 8 channels) and then run them at over twice the speed AMD uses.

What I can say for sure is this: The 2GB, 8 channel HBM2 part running at 2Gbps in the study used 10-12W.

A 16GB, 16-channel HBM2E running at any speed would probably use more than that. If we use a Samsung or SK Hynix part, which has half of the channels disabled, it's probably going to perform similarly: 10-12W at 2Gbps. If we just want one stack as L4 cache, we might get good returns clocking this up. Or we could take the same approach AMD did, use all 16 channels, and run them at around 1.5Gbps. Either way our bandwidth will be in the 400GB/s ballpark and our power consumption will be in the 15W+ ballpark.

But it's not possible to fuel four stacks at any speed. Even if we disable half the channels and back down to 1.5Gbps (8 x 128 x 1.5 / 8 = 192GB/s), we're likely using comparable power (maybe ~9W * 4 stacks = ~36W), and we could get better performance out of quad-channel LPDDR5 (64 x 4 x 6.4 / 8 = 204.8 GB/s) at that point, probably for less power.
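Spelling out the numbers in that comparison (the channel widths, pin speeds, and the ~9W-per-stack power guess are all the assumptions stated above, not measured figures):

```python
def bw_gbps(channels: int, bits_per_channel: int, gbps_per_pin: float) -> float:
    return channels * bits_per_channel * gbps_per_pin / 8   # GB/s

hbm_per_stack = bw_gbps(8, 128, 1.5)   # 8 x 128 x 1.5 / 8 = 192 GB/s per down-clocked stack
hbm_power_est = 4 * 9                  # ~9 W x 4 stacks = ~36 W (the estimate above)
lpddr5_quad = bw_gbps(4, 64, 6.4)      # 64 x 4 x 6.4 / 8 = 204.8 GB/s

print(hbm_per_stack, hbm_power_est, lpddr5_quad)
```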
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
"Module" was the wrong term. They test one 2GB 4-Hi HBM stack (8 channels, 128bits per channel, @ 2Gbps per bit) and one 1GB GDDR5 part @ 6Gbps.


OK, what about DDR and LPDDR then? How many dies of those are in the test? I am unable to find this info in the paper, but then again I am too lazy to read it properly...
 

Joe The Dragon

macrumors 65816
Jul 26, 2006
1,031
524
Apple has already been in the GPU business for years.

The set of workloads that an integrated GPU can cover has been growing larger. There isn't any substantive sign that Apple wants to make a "does everything for everybody" GPU any more than they want to make a "does everything for everybody" CPU. They are probably only out to do a subset.

About half of the Mac lineup has no discrete GPU (MacBook Air, MacBook Pro 13", Mac mini, [MacBook if brought back]): 3-4 systems.
About half of the Mac lineup does (MBP 16", iMac, Mac Pro, [iMac Pro if counted separately]): 3-4.

If Apple can drop the MBP 16" out of the second group with a future SoC, then it probably shuffles from one list to the other. [Technically the non-Retina, "Educational" iMac is sitting in the iGPU group already. If it stays after the conversion, that too.]

Unlike the Intel workstation chips (Xeon E5, or now W) in the iMac Pro and Mac Pro, there is a very good chance that Apple's top-end SoC will still have an iGPU (not a 'world beater', but something good enough to run an unmodified iPhone app reasonably well).

Apple probably is going to make the Apple GPU ubiquitous: present in every Mac, so that every Mac runs many/most iOS apps. They don't need to make the "biggest" Apple GPU to do that. They just need to make it present by default. Pretty likely the Apple GPU will always be there no matter what in a future Mac. So it is an option that developers "have to" work on.


dGPUs aren't dead. They just aren't the primary focus of effort for Apple's Macs. The higher-end Macs will probably still have them, but they also probably are not coming any time soon.
And what about multi-screen out? Driving 5K+ screens? Phone GPUs don't do that.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
And what about multi-screen out? Driving 5K+ screens? Phone GPUs don't do that.

iGPUs aren't limited to phone GPUs.

[Attached image: 202008172036171.jpg]


Four 4K streams (the 8K there is a bit of a stretch unless you have the LPDDR5 memory and a strong breeze as a tailwind; see
https://www.anandtech.com/show/1597...cture-deep-dive-building-up-from-the-bottom/6 ).
Tiger Lake Xe-LP also supports two e-DisplayPort connections (embedded screens), so that is six displays total. Apple is not going to do that with the relatively low allocation of die space it has used in the iPhone GPUs. But it would still be an integrated GPU.

Apple can't bring a toy gun to a real gun fight, but they also don't have to cover the whole discrete GPU space with their iGPU. It will be kind of lame, though, if Intel can cover 6 displays with their supposedly "failed" 10nm process and Apple can't cover that same space with 5nm. Apple is going to allocate room on the die though (and probably some corner-case memory bypass for the frame buffer, as the Xe-LP iGPU does).

When Apple Silicon comes to market it is going to have to be better than the Intel and AMD options. One or two 5K displays (e.g. 2 sets of a pair of DisplayPort 1.2 streams, or some compressed DP v1.4 streams) should be part of the baseline.
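As a rough sanity check on that 5K claim, using standard DisplayPort lane rates (blanking overhead is ignored, so real requirements run a bit higher than the active-pixel figure below):

```python
# Why 5K@60 needs a pair of DP 1.2 streams but fits in a single DP 1.4 stream.
width, height, hz, bpp = 5120, 2880, 60, 24
pixel_gbps = width * height * hz * bpp / 1e9
print(f"5K@60 active pixel data: {pixel_gbps:.1f} Gbit/s")          # ~21.2 Gbit/s

dp12_payload = 4 * 5.4 * 0.8   # 4 lanes x HBR2 with 8b/10b coding  -> 17.28 Gbit/s
dp14_payload = 4 * 8.1 * 0.8   # 4 lanes x HBR3 with 8b/10b coding  -> 25.92 Gbit/s
print(f"DP 1.2 payload {dp12_payload:.2f} Gbit/s -> needs a pair of streams")
print(f"DP 1.4 payload {dp14_payload:.2f} Gbit/s -> one stream (or DSC) is enough")
```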

Can this Xe-LP handle a highly complex 3D model on two 5K screens at 60 (or better) Hz? Probably not. There is still going to be a space for discrete GPUs. Apple chopping them off permanently long-term is highly unlikely to work. It is also unlikely, though, that Apple is going to sign up to do everything for everybody in the GPU space (jump into relatively low-volume, high-end discrete GPUs).
 

jinnyman

macrumors 6502a
Sep 2, 2011
762
671
Lincolnshire, IL
iPad Pro supports 5k external monitors.
I don't think the Pro can support "scaled" output. I've tried the Pro on a 4K display and it gives horrible black strips top and bottom (probably all around 4 sides, but I don't really remember; I pretty much gave up on an external display for the iPad after my first try).
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
I don't think the Pro can support "scaled" output. I've tried the Pro on a 4K display and it gives horrible black strips top and bottom (probably all around 4 sides, but I don't really remember; I pretty much gave up on an external display for the iPad after my first try).

Never tried it myself, so can't comment. At any rate, there is little doubt that the upgraded GPU in the Apple Silicon Macs will have good support for external displays.
 