
awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Just slap a big old heat spreader on that puppy and you are good to go ;) As you say, the RAM will probably be clocked a bit lower in a laptop, more like 2-3 watts per die. Pair it with a 2048-bit memory interface and you still have some incredible memory bandwidth in a compact laptop. Imagine what having 300GB/s of memory bandwidth will do for CPU performance.
I actually found a good article on this you might like. You're probably familiar with Kaby G - the only known consumer implementation of a CPU, GPU, and HBM on one package.

Take a look at this review of the Dell XPS 15, which used it. The article explains why special measures were needed to cool the dense 65W package. Among other things, there are three fat copper heat pipes and two big fans. There's also another measure in here that we've discussed before: Intel opts for a chiplet design that puts space between the GPU and CPU.

I've been eyeing roughly 65W as a target for a 15" MacBook Pro APU, and I now think this was the right number. I really wonder how much power HBM2E would use at around 2Gbps or 2.4Gbps, because that information is crucial to its viability as all-system-memory.
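Back-of-the-envelope, both numbers are easy to play with. Here's a rough sketch (the ~3.9pJ/bit energy figure for HBM2-class memory is a commonly cited ballpark from vendor presentations, not a spec, and HBM2E at higher data rates may differ):

```swift
// Peak bandwidth (GB/s) = bus width (bits) × per-pin data rate (Gbps) ÷ 8
func bandwidthGBps(busBits: Double, gbpsPerPin: Double) -> Double {
    busBits * gbpsPerPin / 8
}

// Crude DRAM power model: traffic × energy-per-bit.
// 1 GB/s = 8 Gbit/s, and pJ/bit × Gbit/s = mW, hence the ÷ 1000.
func powerWatts(gbPerSec: Double, picojoulesPerBit: Double) -> Double {
    gbPerSec * 8 * picojoulesPerBit / 1000
}

let bw = bandwidthGBps(busBits: 2048, gbpsPerPin: 1.2)   // ~307 GB/s: two stacks downclocked to 1.2Gbps
let w  = powerWatts(gbPerSec: bw, picojoulesPerBit: 3.9) // ~9.6W for the whole memory subsystem at peak
print(bw, w)
```

If ~3.9pJ/bit is anywhere near right, a downclocked two-stack configuration lands in the high single digits of watts at full tilt, i.e. a couple of watts per die, consistent with the 2-3 watts guessed above.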
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
Apple should be leaning on SK Hynix to become their priority customer for the forthcoming HBM3, which is supposed to be cheaper, use less power, & run cooler than HBM2e...
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
Re: HBM power draw vs. other memory types, this is from JEDEC. Note: twice the power efficiency, even over LPDDR5.

[Attached image: JEDEC chart comparing power efficiency across memory types]
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
If you want to get the best performance out of Apple GPUs (and simplify your code), yes, you need to use Metal and Apple-specific rendering techniques. At the same time, most developers use one of the popular game engines (Unity, UE, etc.) that take care of all the platform-specific stuff for you. There are also open-source wrappers that allow you to use standard APIs such as Vulkan on Apple's platforms. Finally, let's not forget WebGPU, an upcoming standard for high-performance GPU programming on the web, which is partly based on Apple's Metal.

At the same time, you don't need to code specifically for Apple GPUs in order to take advantage of their increased power and memory efficiency. They approach rendering differently from mainstream GPUs, and that benefits any application. It is just that in some cases, things can be done much more efficiently (and more simply) on the Apple GPU than on the popular GPUs due to their architectural differences.

I also would like to point out that Metal is the API used on iOS, and so far that platform seems to be thriving. Metal is very easy to learn, very flexible, and it comes with a set of good tools. If you are a graphics programmer, picking up Metal takes literally 30 minutes. It's a very straightforward API.
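To illustrate the "picking up Metal takes 30 minutes" point from the quote above, a complete Metal compute dispatch in Swift is about this long. A minimal sketch (assumes a recent macOS; dispatchThreads needs a GPU with non-uniform threadgroup support, which all Apple GPUs have):

```swift
import Metal

// A complete Metal compute pass: double every element of a buffer.
// Error handling via try! for brevity.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void doubler(device float *data [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0;
}
"""

let device   = MTLCreateSystemDefaultDevice()!
let library  = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "doubler")!)

var input: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride)!

let commands = device.makeCommandQueue()!.makeCommandBuffer()!
let encoder  = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: input.count, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

let out = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print((0..<input.count).map { out[$0] })  // [2.0, 4.0, 6.0, 8.0]
```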

Metal is also at about 50% of CUDA performance in rendering, and GPU render devs are struggling with it.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
Metal is also at about 50% of CUDA performance in rendering, and GPU render devs are struggling with it.

Octane X (currently in beta) seems to be doing alright on AMD Vega / Navi GPUs & the teases of it running on an iPhone 11 are pretty compelling...
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
Not sure why you think Metal is "a joke". I've seen several reports that it runs slightly faster for video rendering, even on Premiere Pro which competes against Apple's Final Cut Pro:


I said joke because it can't come close to CUDA rendering capabilities, so the Mac will still be crippled for GPU rendering.

Also, rendering video in anything but a CUDA workflow is a waste.

Resolve running OpenCL vs SCRATCH running CUDA

 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
I can only repeat myself here - Listen closely to what Apple has to say about the transition: "Intel Macs are here to stay for *many* years to come". With that in mind, I think people should start to separate wishful thinking from actual reality.

They stated support for Intel Macs for years, but more importantly they also stated that the full transition to Apple Silicon will take about 2 years. This is presumably their entire lineup (including the Mac Pro), especially if we look at their last transition from PPC to Intel.

They also said that their SoCs scale up really well for the Mac. Sure, we can speculate how long they have been planning this, but my guess is their move to 64-bit apps across iOS and macOS was the externally visible beginning of the transition, meaning they had already set this in motion before that. So at least 3 years of planning to get here, and another 2 to roll out the Xeon competitors.
 

aednichols

macrumors 6502
Jun 9, 2010
383
314
they also stated that the full transition to Apple silicon will take about 2 years
Apple loves to under-promise and over-deliver.

WWDC 2005:
"Starting next year, we will begin introducing Macs with Intel processors in them," he said at WWDC 2005. "So when we meet again this time next year, our plan is to be shipping Macs with Intel processors by then. And when we meet here again two years from now, our plan is that the transition will be mostly complete. And we think it will be complete by the end of 2007. So this is a two-year transition."

In fact, the transition finished with the Mac Pro unveiling in August 2006, just 14 months later.
 

MyopicPaideia

macrumors 68020
Mar 19, 2011
2,155
980
Sweden
How do you define "Xeon replacement ARM chips"? Are you comparing to low end, low energy, low-core Xeons? Are you comparing to HEDT Xeons like they are being used in the current Mac Pro? Are you comparing to AMD's Zen 4 Epycs that are going to hit the market next year (which you should)?

I am sure they are working on this in their R&D labs, but I am also 100% certain that Apple won't have an ARM chip that will outperform 2022 HEDT CPUs like Zen 4 - which is what they are going to have to compete against. Let's not forget that there are incredibly fast x86 CPUs out there, and they will only get better in the next two years, with major developments like AMD's Zen 4 architecture just around the corner.
I would define it as outperforming any configuration of their current Mac Pro offerings, AND outperforming in real-world use (mostly video rendering and audio production) using FCPX/LPX vs. a comparable x86-based CPU+GPU workstation using Premiere/DaVinci.
I can only repeat myself here - Listen closely to what Apple has to say about the transition: "Intel Macs are here to stay for *many* years to come". With that in mind, I think people should start to separate wishful thinking from actual reality.
You can repeat this misquote as many times as you like, but that doesn’t make it correct. In the keynote Tim Cook said specifically that Intel Macs would be supported for many years to come, not that Intel Macs are “here to stay” (wtf?? Where did you even get that from? You completely made that up.) for many years to come. He specifically said that the full transition would be complete in 2 years’ time. So by June 2022 at the latest, all new models will have moved over to Apple Silicon.

Will they continue to sell older models that do have Intel CPUs? Maybe there could be an iMac and Mac Pro holdout of the older models available for purchase, like they have done in the past when releasing new models (e.g. the MacBook Pro), but they will not be updated or spec-bumped, and will fall out of the line-up, probably when the 2nd-gen ASi models for that line are released.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
Supported means exactly what it means now. New versions of macOS will not exclude the Intel Macs for 5 years going forward; after that, macOS will likely cut off the Intel Macs. Just because macOS supports the Intel Macs does not mean that Apple will be building them for 5 years. This is the meaning of support today, on the Intel Macs. With each new release, Apple drops support for some older-model Intel Macs. With macOS 11/Big Sur, for example, Late 2012 MacBook Airs will not be supported by Apple. Doesn't mean they stop working, doesn't mean that all the software on them magically disappears; it means that they will not officially run macOS 11.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
I said joke because it can't come close to CUDA rendering capabilities, so the Mac will still be crippled for GPU rendering.

Also, rendering video in anything but a CUDA workflow is a waste.

Resolve running OpenCL vs SCRATCH running CUDA

Is this software written from scratch for Metal or OpenCL, or is it a port of software originally written for CUDA? Poorly optimised code can kill any GPU out there. Remember, CUDA was first, and lots of code is optimised for it.

Unsure why you bring in NVIDIA at all, as Macs have not had them for years.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Metal is also at about 50% of CUDA performance in rendering, and GPU render devs are struggling with it.

CUDA is a compute API, so when you say “rendering” I assume you mean offline rendering acceleration for 3D suites such as Blender. I’m not too familiar with that area so I can’t comment too much. I don’t know where your “50%” comes from but I don’t see a single reason why CUDA should be superior for this kind of work.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
I was making another itemised response but this is getting needlessly wordy.

My bottom line is that Apple has always sold refurbished Macs, and sometimes these are just leftover, out-of-date models. Very few of the machines you claim were kept available were actually kept in production, and those that were weren't available for very long. The ones that stuck around did so mostly for other reasons, not to prolong the life of an obsolete technology.

Apple has a history of ruthlessly dragging us along in a forward direction. It's in their DNA. They will want everything moving to their own silicon as soon as possible.

And Apple ALSO has had more recent examples of being gentler about that in certain areas. Whether that be for price point or not is almost a moot point (you're firmly arguing it one way despite any evidence to the contrary that I present; this is why I suggest we agree to disagree). With the iPhone, iPad, and MacBook Pro product lines, we've seen that numerous times and it's highly likely that we'll see it again during this transition albeit with only one or two products tops and certainly not on the lower-end of the Mac product line.

I agree. As this thread points out, Apple seems to have carefully positioned its current offerings, and it's clear where an APU + LPDDR5 will cut it. It's the exceptions that are the most interesting.

Do you think either of these solutions would be viable?

1. Using HBM2E as all-system-memory. I think the MBP16 is going to be deeply uncomfortable with four stacks of 16GB HBM2E. Would it work if they used 2.4Gbps or 2Gbps to limit power consumption?

2. Using HBM2E as cache. Apple could just put a stack of HBM on the GPU/APU package, call it cache, and stick with LPDDR5 as main memory. This seems like the efficient option.
Apple should be on SK Hynix to be their priority customer for the forthcoming HBM3; which is supposed to be cheaper, use less power, & run cooler than HBM2e...

I missed the part where we had reason to suspect that they'll do any form of HBM memory given that they've already stated numerous times that their GPUs and CPUs will share the same memory. Where's the information supporting this being plausible? I'm not trying to dismiss it. Just that I appear to have missed something that prompts this speculation to begin with.

Apple loves to under-promise and over-deliver.

WWDC 2005:


In fact, the transition finished with the Mac Pro unveiling in August 2006, just 14 months later.

That was certainly my thought when Tim Cook first announced two years; however, I think that they have enough working for them here (that wasn't even working for them back in 2006) that the two-year estimate is probably reasonable. The A12Z released this year in the iPad Pro is telling. The A12X it is essentially a souped-up clone of is two years old. If they had an A13X to stuff into an iPad Pro, that'd honestly be one thing, but my guess is that they didn't, and that's why, for this year's iPad Pro (and Apple Silicon DTK), we got another round of A12-based iPad Pro chips.

All that to say that, with that level of performance, again (and apologies for beating this horse to death), they can handily take on any 8th-gen Intel-based Mac in the line-up RIGHT NOW, and presently they sell three Macs with those processors in tow (the 2-port 13" MacBook Pro, the Mac mini, and the 21.5" iMac). Given that the 10th-gen CPUs in the current Intel MacBook Air do not beat out the 8th-gen CPUs in the 2-port 13" Pro, we can lump the 10th-gen-based 2020 MacBook Air into that list as well. The 10th-gen CPUs in the 2020 4-port 13" MacBook Pro are, relative to previous 4-port 13" MacBook Pros, impressive, but still not so substantially better as to escape the A12X/A12Z processors, so Apple is ready to take on those Macs today.

There's a sizable difference between all of the Macs I've mentioned and those on the higher-end. We got a 27" iMac update last week because Apple won't have an Apple Silicon replacement for that machine for at least a year. If it was coming sooner (say, around the time of the Intel 21.5" iMac's rumored Apple Silicon 24" iMac replacement), then Apple would have no reason to introduce an Intel model while not also updating the 21.5" iMac as well. Again, that is REALLY telling. But Apple will be able to replace the higher-end Macs, just not as soon as they did during the PowerPC transition. The 16" MacBook Pro and 27" iMac will both take longer to make the jump. And the Mac Pro will need EVEN LONGER. Given how long it took for them to give us the 2019 Mac Pro, two years is a reasonable estimate.

Unsure why you bring in NVIDIA at all as Mac has not had them for years.

At this point comparing NVIDIA CUDA implies a Windows PC based workflow. Many high-end Mac customers left the Mac platform because (a) the 2013 Mac Pro was not what they wanted and (b) Apple dropped NVIDIA entirely and seemingly forever after the discontinuation of the Mid 2014 15" MacBook Pro. It's my understanding that, while Metal rendering is still an improvement over OpenCL, it still pales in comparison to CUDA, meaning that I'm likely to not buy a Mac to do my Adobe Premiere and Adobe After Effects rendering if what I want is the maximum performance...that is, unless Apple Silicon GPUs are THAT MUCH more powerful to offset the difference. But that is a pretty big if at this point, unless Adobe REALLY gives them that degree of optimization. I suspect the Apple Silicon GPU with Metal on Apple Silicon Macs with macOS vs. NVIDIA GPUs with CUDA on x86-64 PCs with Windows will become a long-standing debate for years to come...
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
At this point comparing NVIDIA CUDA implies a Windows PC based workflow. Many high-end Mac customers left the Mac platform because (a) the 2013 Mac Pro was not what they wanted and (b) Apple dropped NVIDIA entirely and seemingly forever after the discontinuation of the Mid 2014 15" MacBook Pro. It's my understanding that, while Metal rendering is still an improvement over OpenCL, it still pales in comparison to CUDA, meaning that I'm likely to not buy a Mac to do my Adobe Premiere and Adobe After Effects rendering if what I want is the maximum performance...that is, unless Apple Silicon GPUs are THAT MUCH more powerful to offset the difference. But that is a pretty big if at this point, unless Adobe REALLY gives them that degree of optimization. I suspect the Apple Silicon GPU with Metal on Apple Silicon Macs with macOS vs. NVIDIA GPUs with CUDA on x86-64 PCs with Windows will become a long-standing debate for years to come...
Looks like OTOY got some good scores out of Octane X, which was written completely from scratch for Metal. Granted, they needed a Vega Duo to reach a score of 415 (a Titan V gets nearly 400). I hope Apple puts a dedicated ray tracer in ASi.

 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
I missed the part where we had reason to suspect that they'll do any form of HBM memory given that they've already stated numerous times that their GPUs and CPUs will share the same memory. Where's the information supporting this being plausible? I'm not trying to dismiss it. Just that I appear to have missed something that prompts this speculation to begin with.

What prompts the speculation regarding RAM architecture is exactly that Apple has stated that CPU and GPU will share the same memory.
For the next iPad Pro the extrapolation is straightforward - it will connect its 5nm SoC to 128-bit wide LPDDR5, upgrading from the LPDDR4x of the old models. This will offer 100GB/s nominal which is actually very good compared to the PC parts with integrated graphics Apple has used and will serve fanless MacBooks nicely.
However that is a SoC Apple supplies for fanless devices. What memory solutions will they provide for Mac SoCs that are geared to higher TDPs?
The current 16" MacBook Pro has an option with a 50W GPU+memory solution that using HBM provides 394GB/s to the GPU alone. In a laptop Apple basically has two options: extending the LPDDR5 interface to 256 bit wide, yielding roughly 200 GB/s, or going with a HBM interface similar to what the AMD GPU uses. Going with LPDDR5 would be cheaper, but would realistically limit graphics performance to below the current (excellent!) AMD option. If Apple want to be able to match their current highest end, they pretty much have to go with HBM, but using LPDDR5 would still be pretty damn performant, and allow a lower cost baseline configuration. Take your pick

When it comes to the replacement for the 27" iMac, the options for a performant graphics memory subsystem are again twofold. One is to do as the upcoming gaming consoles do, and use GDDR6(+). That would allow 600GB/s for a 256-bit interface, sufficient to outperform the 5700 XT of the current iMac. The power draw would be acceptable in an iMac enclosure. The alternative, again, would be HBM, at somewhat higher cost, but lower power draw and higher potential bandwidth, if needed.
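As a sanity check, all of these figures fall out of the same peak-bandwidth formula, bus width × per-pin data rate ÷ 8 (the pin rates below are my assumptions, chosen to match the numbers above):

```swift
// Peak bandwidth (GB/s) = bus width (bits) × per-pin data rate (Gbps) ÷ 8
func peakGBps(busBits: Double, gbpsPerPin: Double) -> Double {
    busBits * gbpsPerPin / 8
}

print(peakGBps(busBits: 128,  gbpsPerPin: 6.4))   // LPDDR5, 128-bit:   ~102 GB/s ("100GB/s nominal")
print(peakGBps(busBits: 256,  gbpsPerPin: 6.4))   // LPDDR5, 256-bit:   ~205 GB/s ("roughly 200 GB/s")
print(peakGBps(busBits: 2048, gbpsPerPin: 1.54))  // HBM2, two stacks:  ~394 GB/s (Radeon Pro 5600M)
print(peakGBps(busBits: 256,  gbpsPerPin: 18.75)) // GDDR6(+), 256-bit:  600 GB/s needs ~19Gbps pins
```

Note the last line: hitting 600GB/s on a 256-bit bus needs GDDR6X-class pin rates, which is presumably what the "(+)" is doing.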

Personally, seeing as HBM would be suitable for both higher end laptops and desktops, my money is on Apple going that path for their higher end Mac SoCs, but they may of course choose the cheaper option. The premium for HBM isn’t that bad, AMD used it for years on their moderately priced Vega graphics cards. And Apple would probably be able to negotiate favourable pricing, as they are not locked to a single source.

At the end of the day, if Apple want to provide higher graphics performance than their fanless SoCs can provide, they need to use suitably performant RAM. There aren’t all that many options around for that.
 

OldCorpse

macrumors 68000
Dec 7, 2005
1,758
347
compost heap
We got a 27" iMac update last week because Apple won't have an Apple Silicon replacement for that machine for at least a year. If it was coming sooner (say, around the time of the Intel 21.5" iMac's rumored Apple Silicon 24" iMac replacement), then Apple would have no reason to introduce an Intel model while not also updating the 21.5" iMac as well. Again, that is REALLY telling.

I'm not sure. After all, there are many, many other reasons why Apple did not come out with AS iMacs besides the purely technological one of not having sufficiently advanced AS chips. It could have to do with giving people enough time to make the transition: customers who absolutely need an Intel machine, and developers who need to port their apps and transition their workflows - it would be way too abrupt to have Tim stand up there, announce the transition, and a month later out come AS machines! So what are their options - NOT update their Intel iMacs and wait until enough time has passed for the AS machines to come out without it feeling like an abrupt sucker punch? That's too long with zero updates to their product lines. Therefore, the logical thing to do is one last Intel update on their higher-capability machines, such as the iMac line, to give everyone enough time to make a smoother transition. And here we are.

There may of course be other reasons. Perhaps the radical chassis redesign for the iMacs is not yet ready. Or perhaps there are many other technologies that need to be finished before the transition, like, say, the new 24" and 30"/32" screens. There may be purely non-Apple reasons. They have a ton of contractors they are working with. It takes time to introduce them to a new product line and have them set up production lines with sufficient volume and quality control - which means they'd need at least many months if not a year or two... and if Apple didn't want to show their hand and prevent leaks, that means they can't start the process until they've officially announced the transition, or were not far away from announcing the transition. So, in that case it would not be that Apple doesn't have their advanced chips ready, but rather that all the other components and designs that go into making an iMac have not been released to their contractors early enough to allow them to spring AS iMacs not long after the transition has been announced.

And finally, it makes sense to transition gradually. If you are transitioning to a whole new chip architecture, you don't want to jump into all your lines simultaneously - you'd rather start with the lower-powered ones and make sure everything is working and the inevitable bumps along the road are worked out before coming out with the other lines. That way they are not watching and cooking 10 pots at once, but only 2-3 low-heat ones at a time.

Bottom line, I believe that Apple has very, very advanced AS chips that absolutely destroy whatever Intel has out now or is likely to have over the next few years. But Apple wants to make sure everything goes smoothly with the transition, so they're going slow and easy and gradually. They'll pull out all the stops after 2022, when they no longer have to cater to any Intel machines in their lineup. At that point, it's war with all the armies out. YMMV.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
We got a 27" iMac update last week because Apple won't have an Apple Silicon replacement for that machine for at least a year. If it was coming sooner (say, around the time of the Intel 21.5" iMac's rumored Apple Silicon 24" iMac replacement), then Apple would have no reason to introduce an Intel model while not also updating the 21.5" iMac as well. Again, that is REALLY telling.

I'm not sure. After all, there are many, many other reasons why Apple did not come out with AS iMacs besides the purely technological one of not having sufficiently advanced AS chips. It could have to do with giving people enough time to make the transition: customers who absolutely need an Intel machine, and developers who need to port their apps and transition their workflows - it would be way too abrupt to have Tim stand up there, announce the transition, and a month later out come AS machines! So what are their options - NOT update their Intel iMacs and wait until enough time has passed for the AS machines to come out without it feeling like an abrupt sucker punch? That's too long with zero updates to their product lines. Therefore, the logical thing to do is one last Intel update on their higher-capability machines, such as the iMac line, to give everyone enough time to make a smoother transition. And here we are.

A simple explanation is that Apple always releases iPhone chips first on a new node or processor generation, so when the iPhone is released on 5nm/A14, the rest of the lines will follow. That has always been the case with iPads.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
What prompts the speculation regarding RAM architecture is exactly that Apple has stated that CPU and GPU will share the same memory.
For the next iPad Pro the extrapolation is straightforward - it will connect its 5nm SoC to 128-bit wide LPDDR5, upgrading from the LPDDR4x of the old models. This will offer 100GB/s nominal which is actually very good compared to the PC parts with integrated graphics Apple has used and will serve fanless MacBooks nicely.
However that is a SoC Apple supplies for fanless devices. What memory solutions will they provide for Mac SoCs that are geared to higher TDPs?
The current 16" MacBook Pro has an option with a 50W GPU+memory solution that using HBM provides 394GB/s to the GPU alone. In a laptop Apple basically has two options: extending the LPDDR5 interface to 256 bit wide, yielding roughly 200 GB/s, or going with a HBM interface similar to what the AMD GPU uses. Going with LPDDR5 would be cheaper, but would realistically limit graphics performance to below the current (excellent!) AMD option. If Apple want to be able to match their current highest end, they pretty much have to go with HBM, but using LPDDR5 would still be pretty damn performant, and allow a lower cost baseline configuration. Take your pick

When it comes to the replacement for the 27" iMac, the options for a performant graphics memory subsystem are again twofold. One is to do as the upcoming gaming consoles do, and use GDDR6(+). That would allow 600GB/s for a 256-bit interface, sufficient to outperform the 5700 XT of the current iMac. The power draw would be acceptable in an iMac enclosure. The alternative, again, would be HBM, at somewhat higher cost, but lower power draw and higher potential bandwidth, if needed.

Personally, seeing as HBM would be suitable for both higher end laptops and desktops, my money is on Apple going that path for their higher end Mac SoCs, but they may of course choose the cheaper option. The premium for HBM isn’t that bad, AMD used it for years on their moderately priced Vega graphics cards. And Apple would probably be able to negotiate favourable pricing, as they are not locked to a single source.

At the end of the day, if Apple want to provide higher graphics performance than their fanless SoCs can provide, they need to use suitably performant RAM. There aren’t all that many options around for that.

You're still looking at these GPUs as though they are designed with the same kind of architecture (and are essentially the same kind of GPUs) that are in current Intel Macs and x86-64 PCs. They're not. Apple is throwing the GPU playbook out of the window with its SoC GPUs. Plus, they've stated that the RAM being used will be shared with the system memory (which I presume will be the case on at least all non-Mac Pro Macs, if not ALL Macs). This is why I'm confused about why we're talking about an AMD VRAM technology when AMD GPUs and Apple Silicon integrated GPUs are about as Apples and Oranges as you can get.

We got a 27" iMac update last week because Apple won't have an Apple Silicon replacement for that machine for at least a year. If it was coming sooner (say, around the time of the Intel 21.5" iMac's rumored Apple Silicon 24" iMac replacement), then Apple would have no reason to introduce an Intel model while not also updating the 21.5" iMac as well. Again, that is REALLY telling.

I'm not sure. After all, there are many, many other reasons why Apple did not come out with AS iMacs besides the purely technological one of not having sufficiently advanced AS chips. It could have to do with giving people enough time to make the transition: customers who absolutely need an Intel machine, and developers who need to port their apps and transition their workflows - it would be way too abrupt to have Tim stand up there, announce the transition, and a month later out come AS machines! So what are their options - NOT update their Intel iMacs and wait until enough time has passed for the AS machines to come out without it feeling like an abrupt sucker punch? That's too long with zero updates to their product lines. Therefore, the logical thing to do is one last Intel update on their higher-capability machines, such as the iMac line, to give everyone enough time to make a smoother transition. And here we are.


My point is that if Apple didn't need more time on a proper Apple Silicon-based successor to the Intel 27" iMac, they wouldn't have updated it. They didn't update the Intel 21.5" iMac (when they just as easily could have) which only further lends credibility to the rumors that an Apple Silicon 24" iMac is imminently ready to replace it. Otherwise, why only update one of the iMac models? There's no reason for it. Apple is also very clearly not 100% all-in on 10th Generation Intel (having also opted to skip it for the 2-port 13" MacBook Pro and Mac mini when given the opportunity to include it).
 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
My point is that if Apple didn't need more time on a proper Apple Silicon-based successor to the Intel 27" iMac, they wouldn't have updated it.

There is also simply the explanation that Apple, Intel, and AMD have long ties, and they already had a contract to use 10th-gen chips and Navi GPUs, so this is them fulfilling that contract. My guess is Apple's contract with Intel ends after these final Macs are pushed out.

It’s not unheard of for them to do a spec bump, and then a refresh 6 months later, like they did with the 15”-to-16” MBP.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
There is also simply the explanation that Apple, Intel, and AMD have long ties, and they already had a contract to use 10th-gen chips and Navi GPUs, so this is them fulfilling that contract. My guess is Apple's contract with Intel ends after these final Macs are pushed out.

It’s not unheard of for them to do a spec bump, and then a refresh 6 months later, like they did with the 15”-to-16” MBP.

Right, but that doesn't explain why the 21.5" iMac didn't get updated.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
You're still looking at these GPUs as though they are designed with the same kind of architecture (and are essentially the same kind of GPUs) that are in current Intel Macs and x86-64 PCs. They're not. Apple is throwing the GPU playbook out of the window with its SoC GPUs. Plus, they've stated that the RAM being used will be shared with the system memory (which I presume will be the case on at least all non-Mac Pro Macs, if not ALL Macs). This is why I'm confused about why we're talking about an AMD VRAM technology when AMD GPUs and Apple Silicon integrated GPUs are about as Apples and Oranges as you can get.



My point is that if Apple didn't need more time on a proper Apple Silicon-based successor to the Intel 27" iMac, they wouldn't have updated it. They didn't update the Intel 21.5" iMac (when they just as easily could have) which only further lends credibility to the rumors that an Apple Silicon 24" iMac is imminently ready to replace it. Otherwise, why only update one of the iMac models? There's no reason for it. Apple is also very clearly not 100% all-in on 10th Generation Intel (having also opted to skip it for the 2-port 13" MacBook Pro and Mac mini when given the opportunity to include it).
I think software is the problem, rather than any technical problem with high-performance ASi. Users of 27-inch iMacs and upward in terms of performance usually use specialised software. It seems that Microsoft and also Adobe will release Mac ASi-optimised software in the autumn, which fits the user need of a home/office machine, i.e. a 24-inch iMac/MBP13/Air.

Fully agree with you that the 21.5-inch stagnation is a good indicator of a soon (within this year) ASi replacement. A 24-inch would attract low-end 27-inch and the 21.5-inch buyers.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
I think software is the problem, rather than any technical problem with high-performance ASi. Users of 27-inch iMacs and upward in terms of performance usually use specialised software. It seems that Microsoft and also Adobe will release Mac ASi-optimised software in the autumn, which fits the user need of a home/office machine, i.e. a 24-inch iMac/MBP13/Air.

Fully agree with you that the 21.5-inch stagnation is a good indicator of a soon (within this year) ASi replacement. A 24-inch would attract low-end 27-inch and the 21.5-inch buyers.

Most definitely. Specialized software that isn't updated super regularly (and therefore hasn't moved to Metal) is going to be a huge problem. Then again, Apple killing 32-bit support a year early very likely helped them in this regard. Though I can't imagine there aren't developers of software more commonly used on higher-end Macs (and by the users that own them) who moved to 64-bit earlier, but will still need time to make the jump to Apple Silicon.

The lower-end of the spectrum (both in terms of users and lower-end hardware) will be by far the easiest to get done. But I suspect that was all a part of Apple's plan. They clearly studied up on their PowerPC-to-Intel Transition history.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
You're still looking at these GPUs as though they are designed with the same kind of architecture (and are essentially the same kind of GPUs) that are in current Intel Macs and x86-64 PCs. They're not. Apple is throwing the GPU playbook out of the window with its SoC GPUs. Plus, they've stated that the RAM being used will be shared with the system memory (which I presume will be the case on at least all non-Mac Pro Macs, if not ALL Macs). This is why I'm confused about why we're talking about an AMD VRAM technology when AMD GPUs and Apple Silicon integrated GPUs are about as Apples and Oranges as you can get.
I'm struggling a bit to put this in a way that is guaranteed to be inoffensive, but the above signals serious confusion.

First off, Apple is not "throwing the GPU playbook out the window". The most notable difference between their GPUs and NVIDIA/AMD products is that Apple's GPUs are tile-based deferred renderers (TBDR), as opposed to immediate-mode renderers (IMR). This is by no means new; TBDRs have been around for over three decades, and many seminal patents are in the public domain.
Cutting it short, TBDRs can save some memory traffic for certain operations, which is nice for SoCs, but the difference vs. IMRs is not night and day, and TBDRs come with their own caveats. You'll notice that even though TBDRs have been around since the beginning of dedicated graphics silicon, they haven't taken the world by storm. Apple has a good implementation, but that's it. And yes, they still need bandwidth. If you want to dive deeper, Apple has published a bunch of videos, not least at WWDC 2020, but ImgTech has good stuff as well. Both are obviously quite biased as to the advantages.
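To make the TBDR idea concrete, here is a toy sketch of the two-pass structure (illustrative only, not Apple's or ImgTech's actual pipeline; primitives are modeled as screen-space rectangles to keep it short):

```swift
// Toy tile-based deferred rendering: bin primitives to tiles, resolve
// visibility per tile in fast on-chip "tile memory", write each tile once.

struct Tri { let x0, y0, x1, y1: Int; let depth: Float; let color: UInt32 }

let W = 256, H = 256, T = 32          // framebuffer and tile dimensions
let tilesX = W / T, tilesY = H / T

func render(_ tris: [Tri]) -> [UInt32] {
    var fb = [UInt32](repeating: 0, count: W * H)

    // Pass 1 (binning): list every triangle against each tile it overlaps.
    var bins = Array(repeating: [Tri](), count: tilesX * tilesY)
    for t in tris {
        for ty in max(0, t.y0 / T)...min(tilesY - 1, t.y1 / T) {
            for tx in max(0, t.x0 / T)...min(tilesX - 1, t.x1 / T) {
                bins[ty * tilesX + tx].append(t)
            }
        }
    }

    // Pass 2 (per-tile resolve): the depth test happens in tile-local
    // buffers; DRAM sees exactly one write per pixel, however deep the
    // overdraw. A real TBDR also defers shading to surviving fragments.
    for ty in 0..<tilesY { for tx in 0..<tilesX {
        var zbuf = [Float](repeating: .infinity, count: T * T)
        var cbuf = [UInt32](repeating: 0, count: T * T)
        for t in bins[ty * tilesX + tx] {
            for py in 0..<T { for px in 0..<T {
                let x = tx * T + px, y = ty * T + py
                guard x >= t.x0, x <= t.x1, y >= t.y0, y <= t.y1 else { continue }
                let i = py * T + px
                if t.depth < zbuf[i] { zbuf[i] = t.depth; cbuf[i] = t.color }
            }}
        }
        for py in 0..<T { for px in 0..<T {
            fb[(ty * T + py) * W + (tx * T + px)] = cbuf[py * T + px]
        }}
    }}
    return fb
}

// Two overlapping primitives; the nearer (smaller depth) one wins.
let frame = render([Tri(x0: 0, y0: 0, x1: 127, y1: 127, depth: 0.8, color: 0xFF0000FF),
                    Tri(x0: 64, y0: 64, x1: 191, y1: 191, depth: 0.2, color: 0x00FF00FF)])
print(frame[65 * W + 65] == 0x00FF00FF)  // true: closer primitive visible
```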

There is no such thing as VRAM technology. It doesn't exist. In the days of yore (the 80s), VRAM was shorthand for dual-ported RAM, but the concept died long ago. Fundamentally, DRAM is DRAM; the differences lie in packaging and communication protocols. For PC systems with "integrated" graphics, the GPU and CPU share system RAM, which unfortunately is the same RAM the CPU would otherwise use on its own, causing serious congestion. Today that is typically dual 64-bit channels of DDR4, which seriously hampers performance but allows easy expansion via DIMMs and really low cost.

GPUs really need higher-bandwidth RAM systems though, so when Sony and Microsoft designed their new gaming consoles, they supplied their SoCs with just under 500GB/s (which is still a significant drop in bandwidth/FLOP compared to the previous generation) in order not to hamstring their performance too much. That necessitates GDDR6, which has higher-frequency signalling; signal integrity makes putting the DRAM on the PCB rather than in DIMMs mandatory, and you lose any post-sales upgradeability. Which is par for the course in graphics, of course. Increasing bandwidth further is difficult with this approach, as you have to go wider, meaning more complex PCBs, more area dedicated to driving I/O pins on the GPU, and significant power draw from the memory subsystem alone. Still, it is done for high-end offerings.

A better solution is to put the memory even closer to the SoC, using an interposer, avoiding the need to drive signals through a PCB. This allows your data paths to be wider, and thus signalling frequencies and power draw to be lower. There is additional expense, but also cost savings in PCB design and power supplies.

Again, there is nothing that says these higher-bandwidth solutions cannot be used for SoCs as well as GPUs; in fact, it is already done! The major issue is that the PC infrastructure isn't built around such a paradigm, and users cannot upgrade memory capacity post purchase. Also, absolute maximum RAM capacity is constrained. But I would trade the capability of having 256GB of RAM for an order of magnitude or more in memory bandwidth any day of the week and twice on Sundays, both privately and professionally. And if Apple wants to be able to compete with discrete GPUs, whether in 3D rendering or general compute, it is an absolute necessity.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
You're still looking at these GPUs as though they are designed with the same kind of architecture (and are essentially the same kind of GPUs) that are in current Intel Macs and x86-64 PCs. They're not. Apple is throwing the GPU playbook out of the window with its SoC GPUs. Plus, they've stated that the RAM being used will be shared with the system memory (which I presume will be the case on at least all non-Mac Pro Macs, if not ALL Macs). This is why I'm confused about why we're talking about an AMD VRAM technology when AMD GPUs and Apple Silicon integrated GPUs are about as Apples and Oranges as you can get.
I think I can clear this up quite simply.

HBM2E is suitable for both graphics and computing applications, so it is an excellent choice for all-system memory. While we've primarily seen it used as VRAM, it is - unlike GDDR - very capable as primary RAM.

Again, there is nothing that says these higher-bandwidth solutions cannot be used for SoCs as well as GPUs; in fact, it is already done! The major issue is that the PC infrastructure isn't built around such a paradigm, and users cannot upgrade memory capacity post purchase. Also, absolute maximum RAM capacity is constrained. But I would trade the capability of having 256GB of RAM for an order of magnitude or more in memory bandwidth any day of the week and twice on Sundays, both privately and professionally. And if Apple wants to be able to compete with discrete GPUs, whether in 3D rendering or general compute, it is an absolute necessity.
Is maximum capacity an actual constraint? Heat/energy is a constraint, because 16 stacks @ 2.8Gbps would use 80W, and cost is a constraint, because 256GB would be around $3k. But is there another reason why you can't have 256GB of HBM2E?

The difference is important because only the Mac Pro is configurable with over 128GB of RAM. But the Mac Pro has ample cooling, and Apple could reasonably charge $6k for a 256GB configuration. So unless there is a hard limit on the amount of HBM2E you can use, I don't think using it as all-system-memory in any machine currently using HBM for graphics would constrain capacity, except maybe the MBP 16, which may have unresolvable thermal issues.

If there is a hard constraint on capacity, this is a real problem. Computing tasks vastly favor a large pool of memory over high bandwidth. I'm not sure Apple would be willing to live with a 128GB ceiling.
 