
NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
The problem with onboard GPUs is that they will always be starved for bandwidth compared to dGPUs. Even tile-based rendering will have a hard time getting past that.
I'd be surprised if Apple goes the very expensive HBM route.
Can you elaborate on this a bit?
 

macsplusmacs

macrumors 68030
Nov 23, 2014
2,763
13,275
As long as Apple doesn't return to the crappy, deceptive advertising they used to do in the PPC days using Photoshop filters, I'm fine with it.

I hear you.

But I believe this time around is a different case, since we can see what they have done and are doing with the iDevices, and how that will carry over to the Mac. This time it's going to be the real deal.


The problem with onboard GPUs is that they will always be starved for bandwidth compared to dGPUs. Even tile-based rendering will have a hard time getting past that.
I'd be surprised if Apple goes the very expensive HBM route.

I have no inside information, and I don't think we will have the answer until they start to replace the higher-end Macs a year from now, but I have to believe this is the billion-dollar question that Apple is confident their tech will solve.
 

magbarn

macrumors 68040
Oct 25, 2008
3,016
2,380
Can you elaborate on this a bit?
A GPU built into an SoC like Apple's A12X/A13 has to share memory with the CPU. A dGPU always has memory dedicated to it, so it doesn't have to share with the CPU. Apple somehow gets around this using tile-based rendering.

The only workaround is HBM, which can be put on the same package as the SoC. But HBM is much more expensive, even compared to GDDR6.
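
For a rough sense of the gap being described, here is a back-of-the-envelope sketch in Swift (the bus widths and transfer rates are typical published figures for each memory type, not specific to any Apple product): theoretical peak bandwidth is just bus width times transfer rate.

```swift
// Theoretical peak bandwidth = (bus width in bits / 8) * transfer rate in GT/s.
struct MemoryConfig {
    let name: String
    let busWidthBits: Double
    let gigatransfersPerSec: Double
    var peakGBps: Double { busWidthBits / 8 * gigatransfersPerSec }
}

// Typical published figures for each memory type (not Apple-specific):
let configs = [
    MemoryConfig(name: "LPDDR4X, 128-bit (A12X-class SoC)", busWidthBits: 128, gigatransfersPerSec: 4.266),
    MemoryConfig(name: "GDDR6, 256-bit (midrange dGPU)", busWidthBits: 256, gigatransfersPerSec: 14),
    MemoryConfig(name: "HBM2, one 1024-bit stack", busWidthBits: 1024, gigatransfersPerSec: 2),
]

for c in configs {
    print("\(c.name): ~\(Int(c.peakGBps)) GB/s")
}
// Prints: LPDDR4X ~68 GB/s, GDDR6 ~448 GB/s, HBM2 ~256 GB/s per stack.
```

So an SoC fed by phone-class LPDDR has a fraction of the bandwidth even a midrange card takes for granted, which is exactly the starvation being described, and also why an on-package HBM stack closes most of that gap.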
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
A GPU built into an SoC like Apple's A12X/A13 has to share memory with the CPU. A dGPU always has memory dedicated to it, so it doesn't have to share with the CPU. Apple somehow gets around this using tile-based rendering.

The only workaround is HBM, which can be put on the same package as the SoC. But HBM is much more expensive, even compared to GDDR6.
So what’s to stop Apple’s current approach from working in a desktop context?

I’m asking because the fact that they’re “getting around” this contradicts the idea that they have to adopt the practices of the rest of the industry.

I’ve seen the common-knowledge notion out there that tile-based rendering is somehow inferior, but wouldn’t that all depend on the entirety of the system architecture, making this not a valid apples-to-apples (no pun intended) comparison?
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Yet... I still don't believe Apple will introduce Xeon- and 3080 Ti-level chips :p
Well, they have 2 years to reach the high end. The current Xeons and the soon-to-be-released 3080 Ti will probably be decent targets by that point. Whatever Nvidia brings out in 2 years I'd be skeptical about, but I don't see Xeon improving that much in performance at Intel's current rate, so I think it's a reasonable target.
 
  • Like
Reactions: macsplusmacs

pioneer9k

macrumors member
Oct 22, 2016
81
132
I've been supportive of this move since I first heard about it as a rumor over a year ago. Intel is a sinking ship in many ways. They are far too unreliable and they were causing Apple all sorts of problems.

Ever since the unveiling of Big Sur, for some reason I've had a change of heart. Something about that OS just doesn't appeal to me. Being locked into Apple's ecosystem without the chance to run Windows and Linux without emulation is concerning.

A possible lack of compelling apps. No old games like GTA III. I'm starting to think buying a final-generation Intel Mac is the best way to go, considering it will be supported by Apple for at least 5 years and will be fully compatible with Windows and Linux for as long as you own it.

Anyone else feel this way?
Sorry, I don't want to read through 7 pages, but GTA 3, Vice City, and San Andreas should be playable on macOS with ARM since they're available on iOS, I think.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
A GPU built into an SoC like Apple's A12X/A13 has to share memory with the CPU. A dGPU always has memory dedicated to it, so it doesn't have to share with the CPU. Apple somehow gets around this using tile-based rendering.

The only workaround is HBM, which can be put on the same package as the SoC. But HBM is much more expensive, even compared to GDDR6.

It all depends on what system RAM you use. For example:

  • LPDDR5 is viable in the lower tier; it gives you ~100 GB/s, which will outperform GDDR5.
  • HBM2 is viable in the upper tier; it offers bandwidth competitive with (or even surpassing) high-end GDDR6 setups, while offering low latency (suitable for CPU workflows) and low power consumption.
Yes, HBM is expensive, but then again, so are higher-end Apple computers. And you save some circuitry by not having to manage multiple chips with their own dedicated RAM. It's probably not too far-fetched to assume that dropping the dGPU, VRAM, DDR system RAM, and the power circuitry to feed the dGPU will save enough money to offset the higher cost of HBM plus an interposer. And it would make the logic board layout more compact. Besides, it would allow Apple to keep the current 16" MBP price while actually making the value proposition more attractive. One is more at ease spending $3000 on a laptop knowing that its CPU has access to 300+ GB/s of memory bandwidth.

Also, tile-based rendering is great and all, but you still need GPU RAM bandwidth for compute. HBM as system RAM would solve this, not to mention that it would allow completely new levels of system performance.
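
To put rough numbers on that compute point, here is a minimal roofline-style sketch (the peak-FLOPS, bandwidth, and intensity figures below are illustrative assumptions, not real Apple specs): a kernel that performs only a few arithmetic operations per byte fetched is capped by memory bandwidth, no matter how cleverly the rasterizer tiles.

```swift
// Roofline model: attainable GFLOP/s = min(peak compute, bandwidth * arithmetic intensity).
// Arithmetic intensity = floating-point ops per byte moved to or from memory.
func attainableGFLOPS(peakGFLOPS: Double, bandwidthGBps: Double, flopsPerByte: Double) -> Double {
    min(peakGFLOPS, bandwidthGBps * flopsPerByte)
}

let peak = 5000.0          // hypothetical 5-TFLOP GPU (assumed figure)
let intensity = 4.0        // assumed: a simple image filter doing ~4 FLOPs per byte

// The same GPU fed by two different memory systems:
print(attainableGFLOPS(peakGFLOPS: peak, bandwidthGBps: 100, flopsPerByte: intensity))  // 400.0  (LPDDR5-class)
print(attainableGFLOPS(peakGFLOPS: peak, bandwidthGBps: 400, flopsPerByte: intensity))  // 1600.0 (HBM2-class)
```

Tiling hides framebuffer traffic, but a bandwidth-bound compute kernel like this one sees the raw memory system directly, which is why HBM-class system RAM matters for GPGPU work.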
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
I’ve seen the common-knowledge notion out there that tile-based rendering is somehow inferior, but wouldn’t that all depend on the entirety of the system architecture, making this not a valid apples-to-apples (no pun intended) comparison?

This is an interesting one. That notion has several roots:

  • a preconceived perception of TBDR GPUs as slow, mostly because they originate in mobile GPUs, which tend to be slow (duh);
  • TBDR GPUs traditionally had issues with complex geometry, mostly because the vertex pipeline used to be fixed-function and mobile GPUs didn't allocate many resources there; modern unified shader pipelines mostly take care of that particular problem;
  • worries that the more complex TBDR design will have difficulties scaling up to the desktop use case, which is quite justified since nobody has yet managed to deliver it, and additional complexity often comes with inherent limitations;
  • concerns that TBDR might be incompatible with some modern trends such as mesh shading, although I believe that is solvable.
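
For anyone unfamiliar with the distinction, here is a deliberately toy sketch of the TBDR memory pattern (purely illustrative; real GPUs bin 2-D screen tiles in fixed-function hardware, while this collapses the screen to one dimension): geometry is binned per tile, each tile is depth-resolved in a small on-chip buffer, and DRAM gets one write per pixel instead of one per overlapping primitive.

```swift
// Toy TBDR flow over a 1-D "screen"; illustrative only.
struct Span {                 // stand-in for a triangle: covers pixels lo..<hi at one depth
    let lo: Int, hi: Int
    let depth: Float
    let color: UInt8
}

let screenWidth = 16, tileWidth = 4
let spans = [
    Span(lo: 0, hi: 12, depth: 0.9, color: 1),   // far background
    Span(lo: 2, hi: 10, depth: 0.5, color: 2),   // occludes most of it
    Span(lo: 6, hi: 16, depth: 0.2, color: 3),   // nearest surface
]

var framebuffer = [UInt8](repeating: 0, count: screenWidth)   // lives in "DRAM"

for tileStart in stride(from: 0, to: screenWidth, by: tileWidth) {
    let tile = tileStart..<(tileStart + tileWidth)
    // Pass 1 (binning): keep only the geometry that touches this tile.
    let bin = spans.filter { $0.lo < tile.upperBound && $0.hi > tile.lowerBound }

    // Pass 2: resolve visibility entirely in a small on-chip tile buffer.
    var tileDepth = [Float](repeating: .infinity, count: tileWidth)
    var tileColor = [UInt8](repeating: 0, count: tileWidth)
    for span in bin {
        for x in max(span.lo, tile.lowerBound)..<min(span.hi, tile.upperBound) {
            if span.depth < tileDepth[x - tileStart] {   // hidden surfaces rejected on chip
                tileDepth[x - tileStart] = span.depth
                tileColor[x - tileStart] = span.color    // only survivors end up shaded
            }
        }
    }
    // One DRAM write per pixel, instead of one per overlapping primitive.
    framebuffer.replaceSubrange(tile, with: tileColor)
}

print(framebuffer)   // [1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```

The bandwidth win is that overdraw is resolved in the tile buffer, not in DRAM; the open question in the scaling debate is whether the binning pass stays cheap when you throw desktop-sized scenes at it.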

All in all, I am really looking forward to seeing what Apple has cooked up. If high-performance, desktop-class TBDR GPUs are at all possible, Apple is certainly the company closest to implementing them. They have the technology and the finances.
 

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
There are worries that the more complex TBDR design will have difficulties scaling up to the desktop use case, which is quite justified since nobody has yet managed to deliver it...

Has anyone tried? I'm guessing Nvidia and AMD have been barrelling down their chosen path(s) for some time.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
Has anyone tried? I'm guessing Nvidia and AMD have been barrelling down their chosen path(s) for some time.

Not in recent years. IMR has been working very well for desktop, especially with the tiling optimizations Nvidia and AMD have recently borrowed. I guess you could call current desktop GPUs TBIR :)
 