So, TechInsights’ teardown photos are up, and, interestingly, their A15 is marked with an SK hynix part number (as opposed to the Micron part number on the A15 Wikipedia page). TechInsights’ preliminary analysis is LPDDR4X, though.

Edit: I concur with TechInsights regarding the RAM: SK hynix’s LPDDR4X parts start with H9H, while their LPDDR5 parts start with H9J. The markings on TechInsights’ iPhone Pro A15 start with H9H.
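For anyone squinting at teardown photos themselves, the prefix check above is trivial to script. A minimal sketch (the two prefixes are the ones mentioned above; the example marking is a placeholder, not a real part number):

```python
# Toy decoder for the SK hynix prefixes discussed above:
# H9H -> LPDDR4X, H9J -> LPDDR5.
PREFIXES = {
    "H9H": "LPDDR4X",
    "H9J": "LPDDR5",
}

def guess_dram_type(marking: str) -> str:
    """Guess the DRAM type from the first three characters of a package marking."""
    return PREFIXES.get(marking[:3].upper(), "unknown prefix")

print(guess_dram_type("H9Hxxxxxxxx"))  # placeholder marking -> "LPDDR4X"
```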
 
Thanks - I was waiting for that earlier data point to be confirmed by benchmarks, but this seems rather conclusive.
On one hand it means that they are leaving performance on the table, but then again, it also means that they have some low-hanging fruit (particularly in graphics) to serve us next time around, and the money keeps flowing. I’m not sure if it means anything for the Macs, though. If the rumours of quadrupled GPU configurations turn out to be true, I really can’t see Apple starving them for bandwidth.
 
Yeah, it’s probably the same RAM as Apple has been using for the last couple of years. It’s not a bad thing per se; those phones are still faster than anything else on the market, and there’s not much reason for Apple to spend more money on new RAM tech.

It's a question of economics vs. chasing performance. Sure, LPDDR5 would probably make these chips run faster (especially in the graphics department). Then again, it wouldn't make them run that much faster. And it would have been more expensive to manufacture, plus, LPDDR5 is still not as readily available.

I don't think it means anything at all for the Macs. iPhones use a very different packaging technology (RAM over SoC) and are a different class of devices. Apple will have to increase the bandwidth of their RAM for the prosumer silicon, there is no way around it, so they will do it, one way or another. Whether they use LPDDR4, LPDDR5, or something else entirely doesn't really matter as long as the performance is satisfactory.
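To put rough numbers on "wouldn't make them run that much faster": peak theoretical bandwidth is just data rate × bus width, so for a phone-class 64-bit bus (an assumption about the A15, not a confirmed spec) the gap between top-speed LPDDR4X and LPDDR5 looks like this:

```python
# Peak theoretical DRAM bandwidth: data rate (MT/s) x bus width in bytes.
def peak_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000  # GB/s

BUS_BITS = 64  # assumed A-series phone bus width (4 x 16-bit channels)
print(peak_bw_gbs(4266, BUS_BITS))  # LPDDR4X-4266 -> ~34.1 GB/s
print(peak_bw_gbs(6400, BUS_BITS))  # LPDDR5-6400  -> ~51.2 GB/s
```

A ~50% jump on paper, but it only shows up where the workload is actually bandwidth-bound.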
 
I’ve been ruminating on this for a couple of days: machines that need higher bandwidth may have more RAM ICs, and thus more channels, so throughput can scale. I’m therefore expecting the new higher-end Mac SoCs to have many more memory controllers.

It would be nice to see the best available tech used, but as others have quoted Apple engineers (paraphrasing), throughput is only as useful as the systems that produce and consume it.

My extrapolation is that if that balance between compute and RAM throughput exists in a small chip, it can be maintained at a scale of 2x-6x (different parts of the SoC will scale differently), which covers what I expect for the mid-range Macs to be announced. It seems to me that LPDDR4X can scale adequately.
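As a sanity check on that 2x-6x range, scaling the M1’s published 68.25 GB/s (128-bit LPDDR4X-4266) purely by channel count gives the following (illustrative multiples only, not leaked specs):

```python
# Scaling the M1's peak bandwidth linearly with channel count.
M1_BW_GBS = 68.25  # 128-bit LPDDR4X-4266
for scale in (2, 4, 6):
    print(f"{scale}x channels -> {M1_BW_GBS * scale:.1f} GB/s")
# 2x -> 136.5 GB/s, 4x -> 273.0 GB/s, 6x -> 409.5 GB/s
```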
 
I think that the prosumer Macs will definitely benefit from having higher bandwidth (especially on the high-end GPU config), so I will be disappointed if we don’t see LPDDR5.
 
I agree, though my point was that you can scale slower memory so long as you have more controllers and RAM ICs. Perhaps it’s cheaper for Apple to design more memory controllers into the SoC than to buy supply-constrained higher-bandwidth RAM. The total bandwidth should be similar. IIRC, you made the point to someone earlier that HBM2 is just a stack of slower memory (though, TBH, I haven’t looked much at HBM2 and I didn’t verify if that’s correct).
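On the HBM2 aside: that’s roughly right in spirit. An HBM2 stack gets its bandwidth from a very wide 1024-bit interface running at a comparatively modest per-pin rate, the opposite trade-off from LPDDR5’s narrow-and-fast channels. Ballpark figures (2000 MT/s is a common HBM2 speed grade):

```python
# Same bandwidth formula, two opposite design points.
def peak_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000  # GB/s

print(peak_bw_gbs(2000, 1024))  # one HBM2 stack: wide and slow -> 256 GB/s
print(peak_bw_gbs(6400, 16))    # one LPDDR5-6400 channel: narrow and fast -> 12.8 GB/s
```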
 
I very much doubt that 6x LPDDR4X is going to be cheaper than 4x LPDDR5 (not to mention power consumption, package size, etc.). Unless we see some sort of as-yet-unannounced stacking technology make its debut…
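For what it’s worth, those two hypothetical configs land at almost exactly the same aggregate bandwidth, if we read “6x”/“4x” as channel-count multiples of the same-width interface (my reading, not necessarily the intended one):

```python
# Aggregate peak bandwidth of N 16-bit channels: N x rate x 2 bytes.
def agg_bw_gbs(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 2 / 1000  # GB/s

print(agg_bw_gbs(6, 4266))  # 6 x LPDDR4X-4266 -> ~51.2 GB/s
print(agg_bw_gbs(4, 6400))  # 4 x LPDDR5-6400  ->  51.2 GB/s
```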
 
I can’t comment on the cost differences of the current constrained supply…

I’ve only designed one CPU, back in undergrad ages ago, but I doubt one can just swap in new memory controllers while maintaining the efficiency of the old design without rejigging a lot of other stuff. OTOH, I don’t know much about the industry’s secret sauce, so maybe it’s trivial. Paging @cmaier 😊

Edit: Your statement about unannounced stacking tech is a confusing proposition (while the first sentence made sense to me). There is basically a one-to-one relationship between memory controllers and memory. Sure, you can encode a signal to address multiple modules, or encode multiple bits into a signal to one module, but ultimately scaling the number of channels is directly related to scaling the throughput.
 
Maybe it’s presumptuous of me, but I remember @cmaier saying that controllers shouldn’t be a problem. It is not uncommon for CPUs and GPUs to support multiple memory standards anyway. Like the 5x00M Pro series supporting both GDDR6 and HBM2…
 
LPDDR5 controllers are available from the usual IP suppliers, although I can’t see Apple not rolling their own.
As far as supply is concerned, it’s also simple: if Apple had wanted to use LPDDR5 in their phones, it would have been available. All three manufacturers (Micron, Samsung, Hynix) have a portfolio of the stuff. Apple’s iPhone/(iPad) volumes are so large that to cover their demand, the memory manufacturers would need to be pinged in advance; they have no reason to manufacture such volumes just hoping that someone will buy. But then again, that would be business as usual in Apple’s procurement process.
The volumes needed for new Macs are one to two orders of magnitude lower, but Apple is sure to have contracts in place even for these smaller volumes, unless they use exactly the same parts that the phones do. Which I hope and assume they won’t.
 
I suppose that with large caches, pulling from RAM at odd times shouldn’t introduce coamplifying* delays. I’m out of my wheelhouse, though.

* apologies, the right word is currently out of my grasp
 
Me too 😁 I really have no clue about 99% of what I’m rambling about; I’m a linguist, not a CPU designer 😅
 
I feel like we’ll cross paths sometime. I’m a language theorist at heart; Bob Harper’s teachings were formative for me. Human linguistics are also a passion, though that’s been more difficult recently.
 
Maybe it’s presumptuous of me, but I remember @cmaier saying that controllers shouldn’t be a problem. It is not uncommon for CPUs and GPUs to support multiple memory standards anyway. Like the 5x00M Pro series supporting both GDDR6 and HBM2…
Intel Tiger Lake supports both LPDDR4 and LPDDR5. It seems likely that any new memory controller from Apple could also support both, or that Apple would use the appropriate controller based on the intended use.
 
The A15 being LPDDR4X doesn't seem like too much of an issue, as I don't see the iPhone being as bottlenecked by RAM bandwidth as desktop environments are. In the context of games, most of them are going to be locked to 720-800p, as most games have demonstrated, so the current bandwidth seems fine. It will be interesting to see if the iPads also move up to a higher-bandwidth architecture (especially the Air), as Apple has traditionally paired higher-bandwidth modules to accommodate the higher-resolution display.
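The display-bandwidth point is easy to put numbers on: just scanning a frame out costs width × height × bytes per pixel × refresh rate. A rough sketch using the real panel resolutions (the 4 bytes/pixel and treating scan-out as the dominant cost are simplifications):

```python
# Raw scan-out bandwidth for a display: pixels x bytes per pixel x refresh rate.
def scanout_gbs(width: int, height: int, hz: int, bytes_per_px: int = 4) -> float:
    return width * height * bytes_per_px * hz / 1e9  # GB/s

print(scanout_gbs(2532, 1170, 120))  # iPhone 13 Pro @ 120 Hz -> ~1.4 GB/s
print(scanout_gbs(2360, 1640, 60))   # iPad Air (4th gen) @ 60 Hz -> ~0.9 GB/s
```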
 
It's a question of economics vs. chasing performance. Sure, LPDDR5 would probably make these chips run faster (especially in the graphics department). Then again, it wouldn't make them run that much faster. And it would have been more expensive to manufacture, plus, LPDDR5 is still not as readily available.
IMO, this is just how the scheduling worked itself out. Apple was probably more apt to let chip design changes slip out of one schedule if there was enough risk of them slipping the overall schedule (November Mac month?). There's always next year.

Maybe it was a result of brain drain, or of first-time-through issues with these new, more complex 'pro' chips. If only there were a forum on which to speculate.
 
I’ve only designed one CPU, back in undergrad ages ago, but I doubt one can just swap in new memory controllers while maintaining the efficiency of the old design without rejigging a lot of other stuff. OTOH, I don’t know much about the industry’s secret sauce, so maybe it’s trivial. Paging @cmaier 😊
I'm not Cliff and I have much less experience than him, but I'm pretty confident saying this isn't that big a deal. Not trivial, but also not something which requires serious (or any) redesign of blocks like CPU clusters or GPU cores.

Blocks like those aren't directly coupled to main memory. Instead, they all talk to some form of on-chip interconnection network, aka NoC (network-on-chip), which in turn has one (or more) memory controllers on it. So, for example, it's very unlikely that an A14/M1 performance or efficiency CPU cluster knows anything at all about the details of LPDDR4x transactions - they only speak whatever Apple's NoC interface looks like.

If that surprises you, keep in mind that there's a memory hierarchy, and the DRAM is the last, largest, and slowest member of that hierarchy. The type of memory a CPU core is designed around is its L1 cache. L2 cache isn't as tightly integrated, but still a bit painful to change.
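A toy model of that decoupling, with every name invented for illustration (this sketches the general NoC idea, not Apple's actual protocol): the CPU cluster only speaks the interconnect's interface, so swapping the DRAM controller behind it never touches the cluster.

```python
# Toy model: compute blocks talk to an interconnect, never to DRAM directly.
# Every class name here is invented for illustration.
from abc import ABC, abstractmethod

class MemoryController(ABC):
    @abstractmethod
    def read(self, addr: int) -> bytes: ...

class LPDDR4XController(MemoryController):
    def read(self, addr: int) -> bytes:
        return b"\x00" * 64  # pretend LPDDR4X burst

class LPDDR5Controller(MemoryController):
    def read(self, addr: int) -> bytes:
        return b"\x00" * 64  # same NoC-facing behaviour, different PHY underneath

class NoC:
    def __init__(self, controller: MemoryController):
        self.controller = controller
    def load(self, addr: int) -> bytes:
        return self.controller.read(addr)  # route to whatever controller is attached

class CPUCluster:
    def __init__(self, noc: NoC):
        self.noc = noc  # the cluster sees only the NoC, never the DRAM type
    def fetch(self, addr: int) -> bytes:
        return self.noc.load(addr)

# Swapping the memory controller leaves the CPU cluster definition untouched:
cluster_a = CPUCluster(NoC(LPDDR4XController()))
cluster_b = CPUCluster(NoC(LPDDR5Controller()))
print(len(cluster_a.fetch(0x1000)), len(cluster_b.fetch(0x1000)))  # 64 64
```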
 
I very much doubt that 6x LPDDR4X is going to be cheaper than 4x LPDDR5 (not to mention power consumption, package size, etc.). Unless we see some sort of as-yet-unannounced stacking technology make its debut…

Oh yeah, we spoke about that previously. Stacking could be part of the new ARMv9 architecture, but it wouldn’t be part of the current M-series architecture unless it was a major surprise bombshell.

This is what worries me about the M1X. It may be great and new, but it’s super late, and we could be <1yr from some insane M2 ARMv9 fabric-stacked HBM2e chip.
 
ARMv9 is an ISA; it describes CPU instructions and operation, not the physical design. Even if Apple uses ARMv9 at some point in the future (which is far from certain), the layout and packaging technology of their chips is a completely different matter. BTW, iPhones have used stacked packaging for a while now (RAM and SoC are stacked), and it's likely that the RAM modules on A12X/M1 are stacked as well.

As to stacking on the more powerful chips... how are you going to cool them?
 
Vapor Chamber?
 
Cooling the surface of the chip itself is a solved problem, Apple Silicon doesn’t run that hot to begin with. The problem is cooling the inside of the chip. If you are stacking multiple hot components on top of each other you have to make sure that no layer overheats. That’s a bit more tricky than sticking a heatsink to the chip cover.
 
Cooling both sides of the chip? How many heat-producing layers are we talking about?
 
How do you imagine doing cooling on both sides? How would you mount the chip? Anyway, google around; there is a lot of active research going on in this area. The complexity is mind-boggling (most of the solutions I’ve read about involve cutting small trenches in chips and pumping water or other materials through them).
 
I've recently been researching Nvidia 3090s for a build I am doing, and find it fascinating that the heat problems these cards are having are in the VRAM more than in the GPU itself.

If you compare the 3090 to the A6000, they basically have similar performance, but the 3090 uses GDDR6X, and the A6000 uses straight GDDR6. The "X" version is faster, but overheats easily. The "non-X" RAM does not see similar problems, even with much simpler coolers.

I only mention this because the RAM Apple is using for the M1's GPU might be limited by heat issues more than anything else.
 
How do you imagine doing cooling on both sides? How would you mount the chip? Anyway, google around; there is a lot of active research going on in this area. The complexity is mind-boggling (most of the solutions I’ve read about involve cutting small trenches in chips and pumping water or other materials through them).
Mount the chip normally, cool the PCB side. It won’t be as good as cooling between the interconnects, but it will be better than not cooling it at all.

Yup, the memory is cooled along with the GPU, though in most cases the memory can run at higher temps (I’ve read 110ºC) before there are issues.
 