
diamond.g

macrumors G4
Mar 20, 2007
11,455
2,685
OBX
LPDDR4 in the higher-end 13" MBP is faster than DDR4



Why do you think it's unlikely?
Uh, Newegg has DDR4 RAM that goes all the way up to DDR4-5100. Of course, this is desktop (288-pin) RAM and not mobile (260-pin), so there is that. Though if you are soldering the memory onto the board, I see no reason you have to stick with the RAM chips found on SO-DIMM sticks.
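For what it's worth, peak bandwidth is easy to estimate from transfer rate and bus width. A rough sketch (nominal module speeds only; sustained bandwidth is lower, and the LPDDR4X figure is the 2020 13" MBP's spec):

```swift
// Back-of-envelope peak bandwidth: transfers/s × bytes per transfer.
// Nominal module speeds, not measured sustained bandwidth.
func peakGBps(mts: Double, busBits: Double) -> Double {
    mts * (busBits / 8) / 1000  // MT/s × bytes -> GB/s
}

print(peakGBps(mts: 3200, busBits: 128))  // DDR4-3200, dual channel: ~51.2 GB/s
print(peakGBps(mts: 3733, busBits: 128))  // LPDDR4X-3733 (13" MBP): ~59.7 GB/s
print(peakGBps(mts: 5100, busBits: 128))  // DDR4-5100, dual channel: ~81.6 GB/s
```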
 

leman

macrumors Core
Oct 14, 2008
19,530
19,709
Uh, Newegg has DDR4 RAM that goes all the way up to DDR4-5100. Of course, this is desktop (288-pin) RAM and not mobile (260-pin), so there is that. Though if you are soldering the memory onto the board, I see no reason you have to stick with the RAM chips found on SO-DIMM sticks.

Those are custom, overclocked, limited-availability RAM modules for enthusiasts. I don't think using them in an argument makes much sense. It is always possible to get more performance by cherry-picking and carefully matching components while spending a lot of money on testing and cooling. But it's not a solution that is suitable for the mass market.
 

diamond.g

macrumors G4
Mar 20, 2007
11,455
2,685
OBX
Those are custom, overclocked, limited-availability RAM modules for enthusiasts. I don't think using them in an argument makes much sense. It is always possible to get more performance by cherry-picking and carefully matching components while spending a lot of money on testing and cooling. But it's not a solution that is suitable for the mass market.
If I am not mistaken, anything that isn't DDR4-2400 (or was it 2133?) is basically running an XMP profile (overclocking). Better memory can run at higher rates with tighter timings. But I get your point.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,465
959
Never stopped Apple before.
I don't remember Apple adopting cutting-edge technology that no one else uses (in a given context) unless said tech is user-facing. Yes, Apple were the first to adopt some expensive IO and display techs, but that's different: those make immediate selling points.
GDDR/HBM would have to make a big difference in performance for Apple to use either as main RAM, because they're not cheap. Apple isn't ready to lower their margins or reduce their sales for no good reason.
 

leman

macrumors Core
Oct 14, 2008
19,530
19,709
I don't remember Apple adopting cutting-edge technology that no one else uses unless said tech is user-facing. Yes, Apple were the first to adopt some expensive IO and display techs, but that's different: those make immediate selling points.
GDDR/HBM would have to make a big difference in performance for Apple to use either as main RAM, because they're not cheap. Apple isn't ready to lower their margins or reduce their sales for no good reason.

If I am not mistaken, Apple is the only company to offer HBM2 as laptop GPU RAM (Intel+AMD experiments notwithstanding). It is certainly the only company to offer a Navi GPU with HBM2.

HBM2 might not be cheap, but it would radically simplify mainboard design and enable a high-performance SoC. Instead of CPU + GPU + RAM + VRAM + all kinds of circuitry to power all that goodness, Apple could use a single SoC + HBM2. They save on power circuitry, they save valuable mainboard space, they save the PCIe lanes, and so on.
 
  • Like
Reactions: BigSplash

Unregistered 4U

macrumors G4
Jul 22, 2002
10,617
8,639
I would argue that the supply of HBM2 chips isn't as high as would be needed for Apple's sales volumes.
I question if it's even needed. Most of what we know about GPUs and their requirements for high-performance computing is related to the current leaders, AMD and NVIDIA. While both of those use some form of tile-based rendering, they're more hybrids than Apple's TBDR solutions will be. When you consider that the iPad Pro 12.9 has a processor essentially from 2018, does not include GDDR or HBM2, AND could hit 120 fps with Fortnite (hee hee) on a 5k screen, that shows there may still be a lot of performance to be obtained even before you get to GDDR/HBM2.
 
  • Like
Reactions: 2Stepfan

jeanlain

macrumors 68020
Mar 14, 2009
2,465
959
If I am not mistaken, Apple is the only company to offer HBM2 as laptop GPU RAM (Intel+AMD experiments notwithstanding). It is certainly the only company to offer a Navi GPU with HBM2.

HBM2 might not be cheap, but it would radically simplify mainboard design and enable a high-performance SoC. Instead of CPU + GPU + RAM + VRAM + all kinds of circuitry to power all that goodness, Apple could use a single SoC + HBM2. They save on power circuitry, they save valuable mainboard space, they save the PCIe lanes, and so on.
HBM2 isn't common in laptops yet, but it's a standard for VRAM. Using it as main RAM is in a different league entirely, IMO.
WWDC sessions tell us that Apple GPUs, at least the first models, won't have VRAM. So I guess HBM2 is out of the question for some time.
 

leman

macrumors Core
Oct 14, 2008
19,530
19,709
HBM2 isn't common in laptops yet, but it's a standard for VRAM. Using it as main RAM is in a different league entirely, IMO.

Just because HBM is commonly used for VRAM, it doesn't mean that it's not suitable for main RAM. It is in fact used as system RAM in some supercomputers. While something like GDDR doesn't make sense as main RAM in a PC (very high latency, high power consumption), HBM might be a different story.

WWDC sessions tell us that Apple GPUs, at least the first models, won't have VRAM. So I guess HBM2 is out of the question for some time.

WWDC sessions tell us that Apple Silicon will use unified memory. That doesn't mean "no VRAM". It means that VRAM as a concept becomes meaningless. You just have memory, accessible by the CPU, GPU, Neural Processor, or whatever other accelerator component is present in the system. For lower-end Macs, Apple could use LPDDR — it's fast enough to offer good performance, especially paired with Apple's ridiculous cache sizes and the efficiency of their graphics architecture. But if they want to achieve higher performance, at some point they face a choice: either abandon the unified memory concept (which I don't believe they will do — it's one of their main differentiating factors for the pro market) or use faster memory. HBM seems to fit the bill perfectly, provided there is a good supply of it.
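To make that concrete from the programmer's side, here is a minimal Metal sketch of what unified memory already looks like today (purely illustrative, not anything Apple has shown about the new chips): one allocation that both the CPU and the GPU address directly, with no separate VRAM copy to manage.

```swift
import Metal

// Grab the default GPU (assumes a Metal-capable machine).
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// .storageModeShared places the buffer in memory visible to both the CPU
// and the GPU -- there is no explicit upload to a separate VRAM pool.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes through a plain pointer...
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }

// ...and a compute kernel bound to this buffer would read the very same
// bytes. "Unified memory" just generalizes this model to the whole system.
```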

But I don't want to start the old discussion again :) We will see next year.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,465
959
WWDC sessions tell us that Apple Silicon will use unified memory. That doesn't mean "no VRAM".
Apple says "The Apple GPU doesn't have VRAM", or something to that effect. I don't remember the precise session, but that shouldn't be hard to find.

Edit: here at 1:10: "the GPU does not have video memory, so bandwidth could be a problem". They don't use the future tense, but I interpret this sentence as valid for Apple GPUs in Macs.
 

diamond.g

macrumors G4
Mar 20, 2007
11,455
2,685
OBX
Is there anything that shows Apple being bandwidth limited, where going with HBM2 is the appropriate solution?
 

mattyjoe

macrumors newbie
Oct 12, 2020
20
107
Los Angeles, CA
I firmly believe that for mainstream computing (aka the majority of what the consumer market space encompasses), dedicated GPUs will meet the same fate that serial controller cards, dedicated storage controllers, and network controllers met many years ago. Integrated GPUs will eventually get to a point where they are fast enough for a huge majority of users, and dedicated chips will subsequently be relegated to niche markets and special use cases, such as competitive gaming, professional video and photo editing, GPGPU applications, and last but not least AI.

At the end of the day it's all a matter of performance. Once integrated features work well enough to satisfy most demands, supplying dedicated chips to perform the same task becomes sort of a moot point.

Agree with this wholeheartedly. No one ever said we NEED to have dGPUs forever; it's just been the standard way to get better performance.

Apple shares your foresight on this, I believe. Better performance and better specs just come with time and more innovation. Eventually a Mac will be capable of running AAA gaming titles with graphics integrated into the SoC. Same with iPads and iPhones, most likely.
 

leman

macrumors Core
Oct 14, 2008
19,530
19,709
Is there anything that shows Apple being bandwidth limited, where going with HBM2 is the appropriate solution?

They will become bandwidth-limited if they strive for higher performance. Think of it as performance being a function of bandwidth. Even though Apple's TBDR technology delivers more performance per unit of bandwidth, at some point they will need more of it if they want to continue scaling the number of GPU cores. LPDDR5 (~100 GB/s) might be enough to feed, say, 16 GPU cores, but 32 cores will be starved.
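A crude sketch of that scaling argument (the even per-core split and the core counts are illustrative assumptions, not Apple figures):

```swift
// If total bandwidth stays fixed while GPU cores multiply, each core's
// share shrinks until cores idle waiting on memory ("starved").
let lpddr5GBps = 100.0  // the ~100 GB/s LPDDR5 figure from above
for cores in [8, 16, 32, 64] {
    let perCore = lpddr5GBps / Double(cores)
    print("\(cores) cores -> \(perCore) GB/s per core")
}
// 16 cores get 6.25 GB/s each; 32 cores only 3.125 GB/s each.
```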

I thought HBM had higher latency than GDDR.

As far as I understand it, HBM is basically regular DDR, just a lot of it in parallel. It's like using DDR with a lot of channels. Each module runs at a relatively low speed, but since you have a 1024- or 2048-bit bus, you can transfer tons of data at the same time. GDDR instead has a narrower channel but runs at a faster speed. To achieve that, it has to live with higher latency (not a problem for GPUs anyway, where memory access latency can be easily hidden).

This is also the reason why HBM is used in HPC applications, where latency can be more important than for games.
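The same kind of arithmetic shows the wide-and-slow vs narrow-and-fast trade-off (pin speeds are typical published figures, used only as illustration):

```swift
// Peak bandwidth = per-pin data rate × number of pins.
func peakGBps(gbitPerPin: Double, pins: Double) -> Double {
    gbitPerPin * pins / 8  // Gb/s × pins -> GB/s
}

// One HBM2 stack: ~2 Gb/s per pin across a 1024-bit bus.
print(peakGBps(gbitPerPin: 2.0, pins: 1024))  // ~256 GB/s per stack
// One GDDR6 chip: ~14 Gb/s per pin across a 32-bit bus.
print(peakGBps(gbitPerPin: 14.0, pins: 32))   // ~56 GB/s per chip
// HBM hits big numbers with low clocks on a huge bus; GDDR with high
// clocks on a narrow one, which is where the extra latency and power go.
```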
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,617
8,639
Ok, but the iPad Pro does not have a 5K screen. It has 2.6 times fewer pixels. And I suppose that the default settings correspond to rather medium-low settings on the PC.
LOL I AM SOO SORRY :) Wow, shows what happens when I skip coffee. What I meant is that the performance required to drive the iPad Pro screen's more than 5 million pixels at 120 fps is greater than what current-gen consoles need to push out 1080p at 60 fps. And that was the tech available to Apple two years ago. No exotic memory is required to iteratively achieve better performance than that in the same thermal footprint.
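The raw pixel-rate comparison, using the published panel resolutions and the frame rates mentioned above:

```swift
// Pixels per second = resolution × refresh rate.
let ipadPixels = 2732.0 * 2048.0  // 12.9" iPad Pro: ~5.6M pixels
let hdPixels   = 1920.0 * 1080.0  // 1080p: ~2.1M pixels

let ipadRate = ipadPixels * 120   // ~671M pixels/s at 120 fps
let hdRate   = hdPixels * 60      // ~124M pixels/s at 60 fps
print(ipadRate / hdRate)          // ~5.4x the raw pixel rate
```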
 

diamond.g

macrumors G4
Mar 20, 2007
11,455
2,685
OBX
LOL I AM SOO SORRY :) Wow, shows what happens when I skip coffee. What I meant is that the performance required to drive the iPad Pro screen's more than 5 million pixels at 120 fps is greater than what current-gen consoles need to push out 1080p at 60 fps. And that was the tech available to Apple two years ago. No exotic memory is required to iteratively achieve better performance than that in the same thermal footprint.
Using Jaguar CPU cores is more of the limiting factor there, especially when comparing the One S to the One X.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
If you really want to know what kind of memory technology Apple will be using in the AS Macs, it is as simple as looking at the memory technology they will be using in the iPhone 12. While people on this board like to indulge in fantasies of HBM2 and GDDRx, the truth of the matter is that a guiding principle of supply management is that you try to maximize the volume of fewer parts, rather than buying X amount of one part, Y amount of another part, and Z amount of a third part. Buying one part type allows for the cheapest price and flexibility in allocating parts.

Tim Cook is a manufacturing guy and a supply management guy. He knows this stuff like he knows how to brush his teeth. Unless there is an overwhelming (not just good or compelling) reason, he is not going to deviate from that. Apple will probably ship 180-220M iPhones this year. The pricing for the RAM in those phones will be the best in the industry, by far. Why would you throw that cost advantage out for the AS Mac? And while there is speculation about GPUs being starved of RAM bandwidth, Apple will do what is cost-effective and still performs well, and that is piggybacking RAM technology off of the iPhone.
 

jdb8167

macrumors 601
Nov 17, 2008
4,867
4,603
If you really want to know what kind of memory technology Apple will be using in the AS Macs, it is as simple as looking at the memory technology they will be using in the iPhone 12. While people on this board like to indulge in fantasies of HBM2 and GDDRx, the truth of the matter is that a guiding principle of supply management is that you try to maximize the volume of fewer parts, rather than buying X amount of one part, Y amount of another part, and Z amount of a third part. Buying one part type allows for the cheapest price and flexibility in allocating parts.

Tim Cook is a manufacturing guy and a supply management guy. He knows this stuff like he knows how to brush his teeth. Unless there is an overwhelming (not just good or compelling) reason, he is not going to deviate from that. Apple will probably ship 180-220M iPhones this year. The pricing for the RAM in those phones will be the best in the industry, by far. Why would you throw that cost advantage out for the AS Mac? And while there is speculation about GPUs being starved of RAM bandwidth, Apple will do what is cost-effective and still performs well, and that is piggybacking RAM technology off of the iPhone.
Apple hasn’t done that in the past for Macs. Why would they start now? It’s not like they don’t sell millions of Macs per year. Plenty to get volume advantages.
 