
leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
Something that Apple stressed over and over again during WWDC is that Apple ARM Macs will be based on their integrated system on a chip. Among other things, it means that the CPU and GPU share system memory (you may say that this makes the GPU integrated, since having its own video RAM is the primary criterion for distinguishing between iGPUs and dGPUs). This approach does have its benefits: there is no need to transfer data between CPU and GPU, for example, the GPU can take advantage of the virtual memory system, the power consumption cost is much lower, and it works well for Apple GPUs, which because of their architecture need much lower memory bandwidth than current dGPUs. However, this won't scale to high-performance applications. Even if Apple uses LPDDR5 in their new Macs (RAM bandwidth approaching 50 GB/s), they won't be able to compete with modern GDDR6 etc. solutions that deliver bandwidths of 200 GB/s and higher.

This is where my speculation starts. What if Apple kept the unified memory approach and its many advantages, but used high-bandwidth memory instead? They already have a lot of experience with HBM2, and if I understand it correctly, its latency is comparable to the latency of regular RAM, so it can be used as CPU RAM (unlike typical video RAM, which trades latency for bandwidth). Combining an Apple SoC with 32GB of HBM2 would allow bandwidths of over 400GB/s, comparable to those of fast desktop GPUs, while also potentially allowing speed-ups on the CPU side.
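To put rough numbers behind the bandwidth figures above, peak bandwidth is just transfers per second times bus width in bytes. A quick sketch (the specific clock rates, bus widths and stack counts below are illustrative assumptions picked to land near the figures quoted in the thread, not confirmed specs of any product):

```python
# Back-of-envelope peak memory bandwidth: transfers/s * bus width in bytes.
# All configurations here are illustrative assumptions, not product specs.

def peak_bandwidth_gbps(megatransfers_per_sec, bus_width_bits):
    """Peak bandwidth in GB/s from MT/s and bus width in bits."""
    return megatransfers_per_sec * 1e6 * (bus_width_bits / 8) / 1e9

# LPDDR5 at 6400 MT/s on a 64-bit bus: ~51 GB/s
lpddr5 = peak_bandwidth_gbps(6400, 64)

# GDDR6 at 14000 MT/s on a 256-bit bus: ~448 GB/s
gddr6 = peak_bandwidth_gbps(14000, 256)

# HBM2 at 2000 MT/s per pin, two 1024-bit stacks: ~512 GB/s
hbm2 = peak_bandwidth_gbps(2000, 2 * 1024)

print(f"LPDDR5 ~{lpddr5:.0f} GB/s, GDDR6 ~{gddr6:.0f} GB/s, HBM2 ~{hbm2:.0f} GB/s")
```

The wide-but-slow pin arrangement is what lets HBM2 reach GDDR6-class bandwidth at a per-pin clock closer to regular DRAM, which is why its latency can stay CPU-friendly.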

There are reasons why I think Apple could pull this off where others haven't. This kind of system is very expensive (interposers are complex and cost a lot of money), which is probably why we don't see it much in everyday computing: companies prefer more conservative solutions that scale to different markets. But Apple doesn't care about this. They don't have to cater to different markets; they have their target pretty much locked in. The 16" MBP already costs a lot of money, and they might as well funnel the savings from using their own chips into a more expensive memory subsystem. This would also be advantageous to Apple, since nobody else would be even close to offering anything remotely comparable. It would be a very power-efficient design, potentially capable of very high performance, and at the same time in some ways simpler than traditional PC designs (no need for different types of memory, no bus between CPU and GPU, and the power delivery system can be radically simpler). A single SoC at an 80 W TDP could potentially deliver desktop-class CPU and GPU performance.

Note: Unified memory architectures are used by gaming consoles, I assume in order to simplify the design and optimize memory transfers. But consoles use high-latency memory, so programmers have to take this into account.

What do you think?
 

bluecoast

macrumors 68020
Nov 7, 2017
2,256
2,673
It sounds plausible OP!

In the WWDC opening video, Apple spent quite a lot of time saying that they’ve had 10 years of designing for retina screens - but they were holding back a little until the iPad Pro.

And they seemed to be giving the impression that they can do a lot better than the iPad Pro (which uses a processor design from 2018).

I think the graphics performance of the ARM Macs is going to be great.
 
  • Like
Reactions: MrUNIMOG

Apple Knowledge Navigator

macrumors 68040
Mar 28, 2010
3,690
12,911
I think there will be VRAM. One has to remember that the discrete GPUs many Macs previously (or currently) had will no longer be present, and therefore the space left by those chips could be used for other components such as VRAM.
 
  • Haha
Reactions: LeeW

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
I think there will be VRAM. One has to remember that the discrete GPUs many Macs previously (or currently) had will no longer be present, and therefore the space left by those chips could be used for other components such as VRAM.

Why do you think they would go the VRAM route instead of using fast unified RAM shared by both CPU and GPU? In the end, VRAM is a hack, used to build fast modular systems while keeping costs reasonable. Apple doesn't have to care about modularity (except on the Mac Pro), and I think they are more flexible with cost.
 

Trusteft

macrumors 6502a
Nov 5, 2014
873
971
I will tell you what I think. In gaming and everywhere else graphics speed is important, the new ARM computers will fall severely behind other (x86-based) computers.
There is a good reason only a handful of GPU companies remain, and it isn't because it's easy.

I am not optimistic at all about this new route, but I will wait till the final product is out.
 

Bug-Creator

macrumors 68000
May 30, 2011
1,783
4,717
Germany
I think there will be VRAM.

There is no point in having VRAM when the GPU is on the same chip as the CPU.

They might end up with multiple memory channels on the higher-end Macs (iMac, MacBook Pro, Mac Pro), and the SoC may, depending on the load, decide to make one channel exclusive to the GPU, but I'm not sure that would have much benefit.

On the new Mac Pro it might be different, as it is quite plausible that they won't be able to fit all the power onto one SoC, so they might do separate CPUs and GPUs.
 

Brian Y

macrumors 68040
Oct 21, 2012
3,776
1,064
Here's the thing: both routes have their advantages and disadvantages.

First off, having shared memory could be good. Rather than having 16GB of regular memory plus 8GB of video memory, you could have 24GB shared. If you need 20GB for a non-GPU workload, cool. Playing a game which requires loads of textures loaded into graphics memory, awesome. Having this adjust dynamically would be a game changer.
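The flexibility argument can be made concrete with a deliberately simple toy model (the 16+8 vs 24GB figures come straight from the post; the "fit" logic is a cartoon, not how any real memory manager works):

```python
# Toy comparison of a fixed CPU/GPU memory split versus one unified pool.
# Pool sizes follow the post's example (16+8 GB split vs 24 GB shared);
# the model is deliberately simplistic and purely illustrative.

def fits_fixed(cpu_need_gb, gpu_need_gb, cpu_pool=16, gpu_pool=8):
    """Each workload must fit inside its own dedicated pool."""
    return cpu_need_gb <= cpu_pool and gpu_need_gb <= gpu_pool

def fits_unified(cpu_need_gb, gpu_need_gb, pool=24):
    """Workloads only need to fit inside the combined pool."""
    return cpu_need_gb + gpu_need_gb <= pool

# A 20 GB CPU workload with a small GPU footprint overflows the fixed
# split but fits comfortably in the unified pool, and vice versa for a
# texture-heavy game that wants more than 8 GB of GPU memory.
print(fits_fixed(20, 2), fits_unified(20, 2))
print(fits_fixed(8, 14), fits_unified(8, 14))
```

Same total silicon, but the unified pool serves both lopsided workloads that the fixed split rejects.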

On the other side, current GDDR chips are much faster than regular old DDR4. Unlike console devs, PC game devs haven't needed to program for slower memory access, largely because iGPUs which share memory have been pretty crap, so there's been no real need to worry about their performance.

Will be interesting to see the approach they take. If I had to guess, I'd assume we'll see entry-level devices first with a unified memory architecture, followed by devices with the current higher-end GPUs (e.g. an A14Z with an AMD graphics chip).
 
  • Like
Reactions: jdb8167

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
On the other side, current GDDR chips are much faster than regular old DDR4. Unlike console devs, PC game devs haven't needed to program for slower memory access, largely because iGPUs which share memory have been pretty crap, so there's been no real need to worry about their performance.

That’s why I am speculating about using something like HBM2 for unified system memory. It could serve both CPU (low latency) and GPU (high bandwidth, multiple independent channels to feed an async processor), at the increased implementation cost.
 

DearthnVader

Suspended
Dec 17, 2015
2,207
6,392
Red Springs, NC
That’s why I am speculating about using something like HBM2 for unified system memory. It could serve both CPU (low latency) and GPU (high bandwidth, multiple independent channels to feed an async processor), at the increased implementation cost.
I like the idea, sell it to Apple.
 

Zackmd1

macrumors 6502a
Oct 3, 2010
815
487
Maryland US
Combining an Apple SoC with 32GB of HBM2

I would hate to see the price tag on 32GB of HBM2... There is a reason why it isn't used much, and only in high-end GPUs... That sh*t is expensive!

With that said, I can certainly see them running some type of high-performance system RAM to boost GPU performance, just not HBM2.
I will tell you what I think. Gaming and everywhere where graphics speed is important, the new ARM computers will severely fall behind other (x86 based) computers.
There is a good reason there are only a handful GPU companies remaining and it isn't because it's easy.

I am not optimistic at all about this new route, but I will wait till the final product is out.

I completely disagree. The A12Z is already comparable to an Xbox One S / GTX 1050. Give it the proper thermal and power budget with more RAM, and I don't think it is crazy to think they could match or exceed the current AMD Navi mobile graphics. They can certainly exceed the crappy iGPUs from Intel... the A12Z already does that.
 

Trusteft

macrumors 6502a
Nov 5, 2014
873
971
I would hate to see the price tag on 32GB of HBM2... There is a reason why it isn't used much, and only in high-end GPUs... That sh*t is expensive!

With that said, I can certainly see them running some type of high-performance system RAM to boost GPU performance, just not HBM2.


I completely disagree. The A12Z is already comparable to an Xbox One S / GTX 1050. Give it the proper thermal and power budget with more RAM, and I don't think it is crazy to think they could match or exceed the current AMD Navi mobile graphics. They can certainly exceed the crappy iGPUs from Intel... the A12Z already does that.
If you base what a good GPU is on what an Intel iGPU or even an AMD mobile chip can do... I don't know. I mean, that's your opinion and all. As for the "proper thermal and power budget", all I have to say is: have you used an Apple computer lately, or at least read/watched a review of one?
Obviously time will tell what is going to happen, and if you are optimistic, good for you. I wish I were.
 

bluecoast

macrumors 68020
Nov 7, 2017
2,256
2,673
I think the clue was that the dev boxes have 16GB RAM.

You could imagine that there will be ample shared RAM for most games to run at a reasonable level of performance, i.e. 'high'.

I suspect that's going to be the minimum RAM amount for ARM pro machines at least.

Knowing Apple, they'll likely put 8GB in their lowest-end machines (those meant for people browsing the web, writing email, writing a few docs, and playing Apple Arcade games).

But we just don’t know.
 

theluggage

macrumors G3
Jul 29, 2011
8,009
8,443
There is a good reason there are only a handful GPU companies remaining

...one of whom is Apple (or their subsidiaries/partners/whatever) who make the GPUs in one of the major gaming and digital photography platforms (iPhone/iPad) which deliver very decent 3D, video and photography performance compared to competing phones, tablets and STBs.

I think people get too bogged down in arguing over whether the A12 CPU and GPU benchmarks really mean they're as fast as a 16" MacBook Pro with an i7 and dGPU (clue: they probably don't) and miss the real point - which is that they thrash the MacBook Air and 12" MacBook, which are the sort of products that the A12 would be suitable for.

We haven't seen Apple's laptop/desktop silicon yet - is it somehow easier to design GPUs under incredibly tight space, battery power and thermal constraints than it is for full-size laptops and desktops with more cooling and better power supply?

It's not like, 18 months ago, Tim and Craig sat down with a copy of "GPU design for dummies" and said "how hard can it be?" - Apple already have serious skin in the game, and huge piles of cash with which to buy companies and hire experts.

In any case, Macs with Intel and AMD GPUs have been lagging behind x86 systems for years now - people don't buy Macs for the bleeding-edge graphics performance. The only job for Apple Silicon GPUs is to not suck.
 

theluggage

macrumors G3
Jul 29, 2011
8,009
8,443
That's one loser attitude if I ever saw one.
I am glad you are not in charge.

Why? It's about the best thing you can say about the GPUs in most of Apple's range at the moment - especially if you factor in price/performance.

Go look at Apple's own pages for the Mac Pro and see how it outclasses comparable PC solutions... oh, wait, no, all you'll see is that it beats the much cheaper iMac Pro and the 2013 trashcan which even Apple themselves wrote off as a flop. (...and we're not talking about the $6k entry level MP here). Pretty clear what criteria Apple are applying there - and it's not "be the best of the best".

For their biggest selling 13" models, the Mac Mini and the 21.5" iMac, Apple have a gloriously low bar to aim for in the form of Intel's iGPUs. Because Intel integrated graphics is the gold standard, right?
 

Zackmd1

macrumors 6502a
Oct 3, 2010
815
487
Maryland US
If you base on what a good GPU is by what an Intel igpu can or even an AMD mobile. I don't know. I mean, that's your opinion and all. As for the "proper thermal and power budget" all I have to say is, have you used an Apple computer lately or at least read/watched a review of one?
Obviously time will tell what is going to happen and if you are optimistic, good for you, I wish I were.

You are of course not going to get RTX-level performance out of an SoC, and that is not what I am saying. An SoC with a 15-20 watt TDP should give a substantial performance gain over one that is limited to 5-6 watts and passive cooling. Again, I wouldn't be shocked at graphics on par with the Navi 5300M. That being said, these Macs are not going to be gaming machines, so don't set your expectations that high.
 
  • Like
Reactions: Roode

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
That being said, these Macs are not going to be gaming machines, so don't set your expectations that high.

With an architecture similar to newest gaming consoles, faster hardware and full software control over that hardware, I am fairly confident that new Macs can be capable gaming machines. The question is whether game developers will be interested in targeting the platform.
 

Zackmd1

macrumors 6502a
Oct 3, 2010
815
487
Maryland US
With an architecture similar to newest gaming consoles, faster hardware and full software control over that hardware, I am fairly confident that new Macs can be capable gaming machines. The question is whether game developers will be interested in targeting the platform.

And that is my point. The 5300M is a capable gaming chip, so if Apple can match or exceed that, the power will be there. I doubt developers will target Macs for gaming, though. Macs will get the side benefit of games released on iOS being able to run on Apple silicon.
 

Pressure

macrumors 603
May 30, 2006
5,178
1,544
Denmark
You are of course not going to get RTX-level performance out of an SoC, and that is not what I am saying. An SoC with a 15-20 watt TDP should give a substantial performance gain over one that is limited to 5-6 watts and passive cooling. Again, I wouldn't be shocked at graphics on par with the Navi 5300M. That being said, these Macs are not going to be gaming machines, so don't set your expectations that high.
There is nothing stopping an SoC from performing like a high-end graphics card.

Both the PS5 and the Xbox Series X do, and they both have a shared memory pool as well.

What's stopping it at the low end is obviously cost, thermal design power and form factor.

A tile-based renderer can get away with lower specs (compared to immediate-mode rendering) as it only renders what is visible on the screen.
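The key step in a tile-based renderer is binning: geometry is first sorted into the screen tiles it overlaps, so later per-pixel work only runs on tiles that actually contain something. A minimal toy sketch of that binning step (tile size, the bounding-box test and the data layout are all simplifications for illustration):

```python
# Toy sketch of the binning pass in a tile-based renderer: each triangle
# is assigned to every screen tile its bounding box overlaps, so shading
# can later be confined to tiles that actually contain geometry.

TILE = 32  # tile size in pixels; real TBDR tiles are similarly small

def bin_triangles(triangles, width, height):
    """Map (tile_x, tile_y) -> list of indices of overlapping triangles."""
    bins = {}
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen.
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

# One small triangle on a 128x128 screen touches a single tile; the other
# 15 tiles generate no shading work at all.
tris = [[(5, 5), (20, 5), (5, 20)]]
print(sorted(bin_triangles(tris, 128, 128)))  # [(0, 0)]
```

Because each tile's working set fits in on-chip memory, the GPU touches main RAM far less per frame, which is the bandwidth advantage being discussed in this thread.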
 
  • Like
Reactions: Zackmd1

Zackmd1

macrumors 6502a
Oct 3, 2010
815
487
Maryland US
There is nothing stopping an SoC from performing like a high-end graphics card.

Both the PS5 and the Xbox Series X do, and they both have a shared memory pool as well.

What's stopping it at the low end is obviously cost, thermal design power and form factor.

A tile-based renderer can get away with lower specs (compared to immediate-mode rendering) as it only renders what is visible on the screen.

The Xbox and PS5 can use GDDR6 RAM without much worry. The same cannot be said for a general-purpose computer (although the ECC GDDR6 RAM in the Xbox is interesting...). An SoC will always be limited by shared RAM in applications other than dedicated gaming.
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
Why do you think that they would go VRAM route instead of using fast unified RAM shared by both CPU and GPU? In the end, VRAM is a hack, used to build fast modular systems and keep the cost reasonable. Apple doesn’t have to care about modularity (except on Mac Pro), and I think they are more flexible with cost.

It reduces bandwidth pressure on the system bus. Sending pixel data back and forth through system RAM is not as efficient as dedicated VRAM, with only much smaller commands going from the CPU to the GPU.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
It reduces bandwidth pressure on the system bus. Sending pixel data back and forth through system RAM is not as efficient as dedicated VRAM, with only much smaller commands going from the CPU to the GPU.

At the same time, not having to copy data between devices is also an efficiency win. I would say it depends on how well the memory subsystem can deal with heterogeneous concurrent requests. And of course, TBDR GPUs tend to exhibit much better cache locality in typical scenarios.
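The copy-avoidance win is easy to put a rough number on. A quick sketch (the link speed and buffer size are illustrative assumptions; PCIe 3.0 x16 tops out around 16 GB/s in practice):

```python
# Rough cost of shuttling a buffer to a discrete GPU over the bus versus
# a zero-copy unified pool. Link speed and buffer size are illustrative
# assumptions, not measurements.

PCIE3_X16_GBPS = 16.0  # ~16 GB/s practical ceiling for PCIe 3.0 x16

def transfer_ms(buffer_mb, link_gbps=PCIE3_X16_GBPS):
    """Milliseconds to move buffer_mb megabytes across the link."""
    return (buffer_mb / 1024) / link_gbps * 1000

# Streaming a 256 MB asset to VRAM costs ~15.6 ms over the bus, roughly a
# whole frame at 60 fps, while a unified pool can hand the GPU a pointer
# into shared memory instead of copying at all.
print(f"{transfer_ms(256):.1f} ms")
```

Which side wins in practice depends on exactly what the post says: how well the shared memory subsystem handles CPU and GPU traffic hitting it at the same time.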
 
  • Like
Reactions: PeteY48