
awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Why do you think that? The A13 currently beats both AMD and Intel in single-threaded performance at the same clock by at least 40%. If they can boost their new chips to 3.5 GHz at the same IPC, the single-threaded performance will be excellent.
Well, I went to notebookcheck to find benchmarks comparing Intel's 45W 8 core part, the Core i9-10980HK, to Apple's A13. And what I found were a handful of single-core benchmarks that mostly indicated that the A13 was neck-and-neck with, if not slightly ahead of, Intel's part. So I'm open to changing my mind.

Do you remember the clockspeed/power curve I shared earlier? This is basically the problem. Getting to 3.5GHz might not be possible in the 16" space or maybe even any space, because at some point that curve is just a straight line going up. It may not be necessary, though. 3.2GHz would be a healthy 22% increase, and adding whatever is yielded from Apple's architectural changes would likely keep them ahead of even a theoretical 45W Tiger Lake part.
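As a quick sanity check on those percentages, here is a small Python sketch (assuming an A13 peak clock of about 2.66 GHz; the exact uplift depends on which base clock you pick, so with a slightly lower base the figure lands nearer 22%):

```python
def clock_uplift(base_ghz: float, target_ghz: float) -> float:
    """Percentage single-threaded uplift from clock alone, assuming constant IPC."""
    return (target_ghz / base_ghz - 1) * 100

# Assumed A13 base clock of 2.66 GHz (Apple does not publish official figures).
for target in (3.2, 3.5):
    print(f"{target} GHz -> +{clock_uplift(2.66, target):.0f}% over 2.66 GHz")
```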
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
leman:

Do you realize that the 2018 Mac Mini has end user upgrade-able RAM? 2 DDR4-2666 SODIMM slots.

It may or may not be available on the Apple Silicon Macs. We will all have to wait and see when the ARM based Macs arrive.
 

MandiMac

macrumors 65816
Feb 25, 2012
1,433
883
Seems simple to me. Soldered RAM in GDDR5 or GDDR6 speeds, so you won't notice anything in terms of GPU, and the normal CPU usage benefits from that blazingly fast RAM. Done.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
Seems simple to me. Soldered RAM in GDDR5 or GDDR6 speeds, so you won't notice anything in terms of GPU, and the normal CPU usage benefits from that blazingly fast RAM. Done.

That would be a bad idea. GDDR trades latency for bandwidth. CPU workloads would be significantly impacted if GDDR were used; GPUs don't care because they are good at hiding latency.
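The trade-off can be sketched with a toy model: a pointer-chasing CPU workload is serialized on latency, while a streaming GPU workload only cares about bandwidth. All figures below are illustrative assumptions, not real part specs:

```python
def pointer_chase_ms(accesses: int, latency_ns: float) -> float:
    """Serial dependent loads: each miss waits for the previous one,
    so total time is dominated by latency."""
    return accesses * latency_ns / 1e6

def stream_ms(gigabytes: float, bandwidth_gb_s: float) -> float:
    """Bulk sequential transfer: total time is dominated by bandwidth."""
    return gigabytes / bandwidth_gb_s * 1e3

# Illustrative, assumed figures (roughly DDR4-class vs GDDR6-class).
DDR4 = {"latency_ns": 70, "bw": 45}
GDDR6 = {"latency_ns": 120, "bw": 450}

for name, mem in (("DDR4", DDR4), ("GDDR6", GDDR6)):
    chase = pointer_chase_ms(1_000_000, mem["latency_ns"])
    stream = stream_ms(1.0, mem["bw"])
    print(f"{name}: 1M dependent loads = {chase:.0f} ms, stream 1 GB = {stream:.1f} ms")
```

Under these assumed numbers, GDDR loses the latency-bound race despite winning the streaming one by an order of magnitude.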
 
  • Like
Reactions: reallynotnick

throAU

macrumors G3
Feb 13, 2012
9,198
7,344
Perth, Western Australia
I very much suspect you'll see HBM in the higher-end models, probably as a cache between the SoC and main memory, much like AMD's HBCC in the Vega series (on PC cards at least; not sure the Mac drivers expose that functionality).

It makes sense. It will work with the SOC in a small form factor device and provide the performance required.

Further to this, Apple even had "high bandwidth cache" on the slide deck regarding Apple Silicon.

They will be using HBM as a cache between the GPU and main memory on higher end/future models no doubt.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
Further to this, apple even had "high bandwidth cache" on the slide-deck regarding Apple Silicon.

They could have been simply referring to the SoC-wide cache though. As far as caches go, HBM is not really high-bandwidth. SRAM-based caches can offer bandwidth in excess of TB/second with very low latency. And besides, if one already goes through the trouble of using HBM as a cache, one can just go the full way and use it as system memory directly — it would simplify the system, make it more efficient and won't actually cost that much more.
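The TB/s claim for SRAM is easy to sanity-check with some rough arithmetic. If each core's cache port moves one 64-byte line per cycle at ~3 GHz (illustrative figures, not any specific chip), aggregate bandwidth crosses a TB/s well before you run out of cores:

```python
def cache_bw_gb_s(line_bytes: int, ghz: float, ports: int) -> float:
    """Peak bandwidth if each port moves one cache line per clock cycle."""
    return line_bytes * ghz * ports  # bytes/cycle * 1e9 cycles/s = GB/s

print(cache_bw_gb_s(64, 3.0, 1))  # one core's port: 192 GB/s
print(cache_bw_gb_s(64, 3.0, 8))  # eight cores: 1536 GB/s, well past a TB/s
```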
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,344
Perth, Western Australia
They could have been simply referring to the SoC-wide cache though. As far as caches go, HBM is not really high-bandwidth. SRAM-based caches can offer bandwidth in excess of TB/second with very low latency. And besides, if one already goes through the trouble of using HBM as a cache, one can just go the full way and use it as system memory directly — it would simplify the system, make it more efficient and won't actually cost that much more.

Yeah, but HBM is used in Vega based cards explicitly for caching system memory when you put them in HBCC mode. This can be done to make more efficient use of video memory; entire textures don't need to be pre-loaded into video memory, the system can just cache them as and when required.

It's all about context; In the context of using system memory for video memory, HBM is "high bandwidth" vs. the shared system memory. Vs. system DRAM, HBM is WAY higher bandwidth.

It would very much not surprise me if Apple enable HBM as a video memory cache for main memory on their higher end parts. It would mean that all systems are shared memory architecture, the higher end parts just have more cache for improved performance.

It will simplify the architecture from a developer/coder perspective (it's all just unified memory as far as the developer goes). It seems very "apple" to me to handle things that way.

If I was a betting man, I'd bet that's what we see in 2021-2022.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
It's all about context; In the context of using system memory for video memory, HBM is "high bandwidth" vs. the shared system memory. Vs. system DRAM, HBM is WAY higher bandwidth.

It would very much not surprise me if Apple enable HBM as a video memory cache for main memory on their higher end parts. It would mean that all systems are shared memory architecture, the higher end parts just have more cache for improved performance.

It will simplify the architecture from a developer/coder perspective (it's all just unified memory as far as the developer goes). It seems very "apple" to me to handle things that way.

If I was a betting man, I'd bet that's what we see in 2021-2022.

The scenario you describe is definitely possible, even if technically a bit more involved than using HBM as system memory directly (one needs to maintain a hierarchy of memory controllers etc.). Regardless of how they end up implementing it, I think we both agree that we will see unified memory concept in all new Macs. Some sort of NUMA-based system is probably inevitable in a Mac Pro-class machine anyway.
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,344
Perth, Western Australia
The scenario you describe is definitely possible, even if technically a bit more involved than using HBM as system memory directly (one needs to maintain a hierarchy of memory controllers etc.). Regardless of how they end up implementing it, I think we both agree that we will see unified memory concept in all new Macs. Some sort of NUMA-based system is probably inevitable in a Mac Pro-class machine anyway.

Definitely. I see unified memory for most, and HBM cache for video memory on the high end. Programming model will be identical, the cache will be transparent to the programmer (controlled by hardware), and just make things faster if available.

As you say SRAM, etc. is faster but you just can't get multiple gigabytes of it at any sort of affordable cost. Sure SRAM might be involved as well, but that's generally on-die these days, HBM is a good speed for the next level out between SRAM and DRAM.

I'm thinking 4-8 GB of HBM cache (or more) on something like a Mac(book) pro as video memory cache - for working with data sets in the tens of gigabytes (high end 3d modelling work, etc.).

Regular MacBook Air, iMac, etc. won't get HBM cache and just use totally unified memory architecture with standard DRAM.
 
Last edited:

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Definitely. I see unified memory for most, and HBM cache for video memory on the high end. Programming model will be identical, the cache will be transparent to the programmer (controlled by hardware), and just make things faster if available.

As you say SRAM, etc. is faster but you just can't get multiple gigabytes of it at any sort of affordable cost. Sure SRAM might be involved as well, but that's generally on-die these days, HBM is a good speed for the next level out between SRAM and DRAM.

I'm thinking 4-8 GB of HBM cache (or more) on something like a Mac(book) pro as video memory cache - for working with data sets in the tens of gigabytes (high end 3d modelling work, etc.).

Regular MacBook Air, iMac, etc. won't get HBM cache and just use totally unified memory architecture with standard DRAM.
This is a really interesting idea and a good middle ground between full-on HBM2E as system memory or just having the GPU use LPDDR5 (bleh). It's also great from an engineering perspective to be able to have your system memory on a pinout and only need to worry about a single HBM die which can go on the farside of the GPU cores now.

They could have been simply referring to the SoC-wide cache though. As far as caches go, HBM is not really high-bandwidth. SRAM-based caches can offer bandwidth in excess of TB/second with very low latency. And besides, if one already goes through the trouble of using HBM as a cache, one can just go the full way and use it as system memory directly — it would simplify the system, make it more efficient and won't actually cost that much more.

I like it a lot in theory but I also wonder if it's really the best design. The GPU wants the HBM nearby, but now the CPU has to access it too, and you won't get by without at least two stacks of it if it's system memory. So that's a very dense 10W of power consumption that really wants to be placed in the middle of the action. It would be awesome to see. But it might not actually be the best solution if your CPU cores are getting thermal throttled and/or you had to cut GPU cores to accommodate the presence of those HBM stacks.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
I like it a lot in theory but I also wonder if it's really the best design. The GPU wants the HBM nearby, but now the CPU has to access it too, and you won't get by without at least two stacks of it if it's system memory. So that's a very dense 10W of power consumption that really wants to be placed in the middle of the action. It would be awesome to see. But it might not actually be the best solution if your CPU cores are getting thermal throttled and/or you had to cut GPU cores to accommodate the presence of those HBM stacks.

Unified memory implies that you are accessing the RAM through some sort of shared interface anyway. If I remember correctly, HBM actually uses less power than regular DDR, so it might be a win from the efficiency standpoint. Have a large enough SoC-level cache and use multiple memory controllers, and such a system should be fairly simple and very performant. The only downside really is cost (which again is less of a concern for Apple IMO).

I don't think that your worries about component heat are warranted; we've had huge SoCs that consume over 200 watts for a while, and they seem to manage.
 
  • Like
Reactions: throAU

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Unified memory implies that you are accessing the RAM trough some sort of shared interface anyway. If I remember correctly, HBM actually uses less power than regular DDR, so it might be a win from the efficiency standpoint. Have a late enough SoC-level cache and use multiple memory controllers, and such system should be fairly simple and very performant. The only downside really is cost (which again is less of a concern for Apple IMO).

I don't think that your worries about component heat are warranted, we've had huge SoCs that consume over 200Watts for a while, and they seem to manage.
It seems pretty clear you are thinking about desktop machines and I don't want to compare apples to oranges here. The machine in my mind is usually the MBP16.

That said I don't think you are correct on this one. Sure, there are SoCs that consume over 200 watts. Doesn't mean I can rush out tomorrow and make a 100mm^2 SoC that uses 180 watts. You should start thinking of heat/area as your primary constraint and treat total SoC power draw as an indication for what the "heat" part of that equation is.

Your problem is the other parameter - area. HBM uses less power than DDR, but who cares? DDR was never dense enough for pro levels of it to go on package, you have a pinout for it and it's over there somewhere being totally irrelevant to heat/area. If you chose to use DDR5 as your unified system memory, you just gave up on the design goal of having high bandwidth memory for your GPU.

If you are choosing HBM2E, you are putting it on package right up next to the GPU cores so it can take advantage of that high bandwidth. Now that's a tiny thing generating a lot of heat/mm next to another hot thing, but normally that's OK because it's just VRAM. It's siloed off over there with the GPU and we just need one stack of it.

But now it's also the system memory. So it also needs to be beside the CPU cores. And one stack isn't enough. You need two stacks, minimum, using 10W of power in a tiny little area nestled right between the CPU cores and the GPU cores which just so happen to be manufactured using the absolute densest state-of-the-art silicon manufacturing known to man. I hope this illustrates the problem better for you. Please keep heat/area in your thoughts and prayers.
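The heat/area point can be put in rough numbers. A Python sketch with illustrative, assumed die sizes and wattages (not real part specs): the same wattage on a much smaller die more than doubles the heat flux the cooler has to pull out of each square millimetre.

```python
def power_density_w_mm2(watts: float, area_mm2: float) -> float:
    """Heat flux the cooling solution must extract, in W/mm^2."""
    return watts / area_mm2

legacy = power_density_w_mm2(45, 150)  # assumed ~150 mm^2 14 nm mobile die
dense = power_density_w_mm2(80, 100)   # hypothetical ~100 mm^2 5 nm SoC
print(f"{legacy:.2f} W/mm^2 vs {dense:.2f} W/mm^2")
```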
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
If you are choosing HBM2E, you are putting it on package right up next to the GPU cores so it can take advantage of that high bandwidth. Now that's a tiny thing generating a lot of heat/mm next to another hot thing, but normally that's OK because it's just VRAM. It's siloed off over there with the GPU and we just need one stack of it.

But now it's also the system memory. So it also needs to be beside the CPU cores. And one stack isn't enough. You need two stacks, minimum, using 10W of power in a tiny little area nestled right between the CPU cores and the GPU cores which just so happen to be manufactured using the absolute densest state-of-the-art silicon manufacturing known to man. I hope this illustrates the problem better for you. Please keep heat/area in your thoughts and prayers.

I am not sure I see the problem. Why would the system RAM need to be physically this close to the CPU/GPU cores? The RAM - no matter which kind you choose, is probably going to be a separate chip, just as it is with current solutions. I don’t see a reason why it has to be in extreme proximity to the SoC itself. Even more, RAM probably doesn’t have to be actively cooled. HBM2, no matter which quantities we talk about, is going to use less power than regular DDR and current Macs have no problems with cooling a 60 watt CPU + 50 watt GPU (up to max combined of 80 watts) + a mix of DDR4 and GDDR5. I think a 80Watt SoC + some HBM2 on a separate chip won’t be a problem. And all this will occupy significantly less area than the components in the current 16”.
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,344
Perth, Western Australia
The RAM - no matter which kind you choose, is probably going to be a separate chip, just as it is with current solutions. I don’t see a reason why it has to be in extreme proximity to the SoC itself.

Part of the reason HBM is so fast is its very close physical proximity to the CPU/GPU die. Basically on the same package, you're talking traces measured in MM rather than CM.

Very short traces. Physical trace length is a thing at the memory speeds we're talking about here. It's also why various motherboards run faster with 2 sticks of RAM than 4, for example. We are pushing physics.

As far as heat goes, I think this is going to be why we haven't seen this setup with Intel. Intel processors are too hot. Apple Silicon is getting good performance in far less heat/power, which maybe makes on-package HBM less of a problem. Also, Intel processors are made for the regular market (not specifically for Apple's designs), where cost is also a thing.

Apple have the freedom to design for THEIR market, which is less cost sensitive. They're also not paying Intel's margin on every processor they make for themselves, which will offset this somewhat.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
I am not sure I see the problem. Why would the system RAM need to be physically this close to the CPU/GPU cores? The RAM - no matter which kind you choose, is probably going to be a separate chip, just as it is with current solutions.
I've talked to you a few times on here so I know you are very knowledgeable about computer technology. It'd probably be most helpful for you to go look at some GPU designs (or Kaby G) where HBM is used and look at where they are actually placing it and why. I'd also suggest taking another look at the MBP16 on iFixit and where the system RAM and VRAM on that machine are relative to the CPU and GPU.
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,344
Perth, Western Australia
I've talked to you a few times on here so I know you are very knowledgeable about computer technology. It'd probably be most helpful for you to go look at some GPU designs (or Kaby G) where HBM is used and look at where they are actually placing it and why. I'd also suggest taking another look at the MBP16 on iFixit and where the system RAM and VRAM on that machine are relative to the CPU and GPU.

Yeah put it this way - look at Fury/Vega (desktop cards) or Volta which have HBM and where it is. Then keep in mind that HBM is heat sensitive, and Vega has it literally butted up against the GPU die.

Why? Trace length. It's a tradeoff. In fact, with Vega tweaking, you can often get better performance by giving the GPU less power so you can run the HBM faster in the same heat.

If they didn't have to put the HBM that close, they wouldn't due to the heat. But they have to (to keep the traces short) so...

Vega 64 is a 300 watt GPU. HBM can live and function in those cards just fine. I think Apple will be able to make good use of it in their designs.
 
  • Like
Reactions: Roode

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
I've talked to you a few times on here so I know you are very knowledgeable about computer technology. It'd probably be most helpful for you to go look at some GPU designs (or Kaby G) where HBM is used and look at where they are actually placing it and why. I'd also suggest taking another look at the MBP16 on iFixit and where the system RAM and VRAM on that machine are relative to the CPU and GPU.

As @throAU points out, minimizing trace length is important. But in the world of silicon, the few mm of distance between chips is eternity itself. The RAM is a separate physical chip and I believe it can be cooled just as with any existing solution. It might have been a different story if the RAM were stacked on SoC itself, but I doubt we will see that.
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,344
Perth, Western Australia
It might have been a different story if the RAM were stacked on SoC itself, but I doubt we will see that.

Look up what intel is working on with 3d stacking. the things they're building with it suck right now (because the building blocks they are using suck), but yes, they are trying to do exactly that.

The big problem with it is shedding heat. You're essentially stacking a big silicon insulator (or worse, heat generator) on top of another silicon die (heat generator) which will make cooling hard.

With HBM on an interposer (not stacked on top) like AMD and Nvidia have done with their GPUs, you can at least cool it with a shared heat sink. Which isn't perfect itself (due to problems getting the die height exactly the same, and heat from one component heating the heat sink shared by the others) but it's better than 3d stacking in that respect.

Swings and roundabouts. If you can keep things cool, 3d stacking will let you run it faster due to shorter trace lengths. But running faster generates more heat which makes it impossible to cool, and the stacking itself makes it harder to cool. I'm sure there's a cross-over point there somewhere.

Stacking probably makes sense in very small/dense low power devices, where cooling is less of an issue (mobile, possibly ultrabook). In the high end with large heat generation (desktop, workstation laptop), probably not so much.
 
Last edited:
  • Like
Reactions: leman

Pressure

macrumors 603
May 30, 2006
5,178
1,544
Denmark
VEGA 10 is 486 mm² and can happily chow down 400 W if running unrestricted.

The only thing speaking against HBM2E is that it is limited to 24GB per stack (12-Hi), but that can be alleviated somewhat if the CPU cores have a large amount of cache on-die so they rarely need to hit main memory.

On the bright side, a single stack gives you up to 460GB/s of memory bandwidth at very low power. HBM1 uses an estimated 3.65W per stack.

HBM3 allows up to 64GB and 512GB/s of memory bandwidth per stack while having a lower core voltage than HBM2 (1.2V); it is less expensive and should be available here in 2020 H2.
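The per-stack bandwidth figures fall straight out of the interface width: HBM generations to date use a 1024-bit bus, so per-stack bandwidth is simply pins times per-pin data rate. A quick sketch (the per-pin rates are the commonly quoted figures for each generation):

```python
def stack_bw_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Per-stack bandwidth: interface width times per-pin data rate,
    converted from gigabits to gigabytes."""
    return bus_bits * gbps_per_pin / 8

print(stack_bw_gb_s(1024, 3.6))  # HBM2E at 3.6 Gb/s/pin: 460.8 GB/s
print(stack_bw_gb_s(1024, 2.4))  # HBM2 at 2.4 Gb/s/pin: 307.2 GB/s
```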
 
Last edited:
  • Like
Reactions: Boil

NotTooLate

macrumors 6502
Jun 9, 2020
444
891
Part of the reason HBM is so fast is its very close physical proximity to the CPU/GPU die. Basically on the same package, you're talking traces measured in MM rather than CM.

Very short traces. Physical trace length is a thing at the memory speeds we're talking about here. It's also why various motherboards run faster with 2 sticks of RAM than 4, for example. We are pushing physics.

As far as heat goes - I think this is going to be why we haven't seen this setup with intel. Intel processors are too hot. Apple Silicon is getting good performance in far less heat/power, which maybe makes on package HBM less of a problem. Also, intel processors are made for the regular market (not specifically for Apple's designs) where cost is also a thing.

Apple have the freedom to design for THEIR market which is less cost sensitive. They're also not paying intel's margin on every processor they make for themselves, which will offset this somewhat.
". It's also why various motherboards run faster with 2 sticks of RAM than 4, for example. "

That's not the reason. The reason is that most boards have a two-channel memory controller, where each channel supports dual rank. When you load two ranks onto a single channel you pay in performance, because they share the same data bus and so cannot be used in parallel (plus other channel-integrity issues that arise from reflections, requiring ODT, which also hurts performance). That's why most motherboards advise you to put your two sticks in certain slots, i.e. one per channel (they colour-code them to make it obvious): if you put both on the same channel (dual rank) and leave the other channel vacant, you've basically crippled yourself for no reason.

Physical trace length does impact performance, but not because of the time the signal spends traveling on the wire (that affects latency, not bandwidth). It's because you need to lower your frequency: a high frequency on a long channel creates margin issues with your data eye, and to compensate you run calibrations and/or reduce frequency (the lower the frequency, the easier it is to get the strobe into the middle of your data eye).
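The channel/rank point can be sketched as a toy model in Python: ranks sharing a channel share its bus, while separate channels add up. The per-channel bandwidth here is an assumed DDR4-3200-class figure, purely for illustration:

```python
CHANNEL_BW_GB_S = 25.6  # assumed per-channel peak, roughly DDR4-3200 class

def peak_bw_gb_s(dimms_per_channel) -> float:
    """Each populated channel contributes its bus once, no matter how many
    ranks share it: ranks on one channel cannot transfer in parallel."""
    return sum(CHANNEL_BW_GB_S for dimms in dimms_per_channel if dimms > 0)

print(peak_bw_gb_s([2, 0]))  # both sticks on one channel: 25.6 GB/s
print(peak_bw_gb_s([1, 1]))  # one stick per channel: 51.2 GB/s
```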

Other than that, you make some interesting points!!
 
  • Like
Reactions: Roode

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
As @throAU points out, minimizing trace length is important. But in the world of silicon, the few mm of distance between chips is eternity itself. The RAM is a separate physical chip and I believe it can be cooled just as with any existing solution. It might have been a different story if the RAM were stacked on SoC itself, but I doubt we will see that.
I agree about the RAM being layered on with one caveat that this could occur on the Air if it is just using the stock A14. But re HBM2E designs, I am usually thinking of a multi-die chiplet design with a high speed interposer mostly because this is what has been done. Possibly our confusion comes down to what constitutes a "separate physical chip." To the extent being on a separate die helps prevent heat/area problems, I would say that is true, just not to the extent you are asserting.

With HBM on an interposer (not stacked on top) like AMD and Nvidia have done with their GPUs, you can at least cool it with a shared heat sink. Which isn't perfect itself (due to problems getting the die height exactly the same, and heat from one component heating the heat sink shared by the others) but it's better than 3d stacking in that respect.
Which reminds me. The die height problem was a serious issue with Kaby G and also not one I'm sure Apple/TSMC can resolve for 8-Hi stacks. Still, interposer design is the most feasible of designs I can think of if Apple goes full chiplet.

The only thing speaking against HBM2E is that it is limited to 24GB per stack (12Hi) but that can be alleviated somewhat if the CPU cores has a large amount of cache on-die so it rarely needs to hit the main memory.
In theory. But no one is actually producing anything denser than 8-Hi stacks. There is always a bit of wish-fulfillment in theorycrafting, but I think we should stick to 16GB per stack of HBM in our designs until the viability of 24GB stacks is proven.
On the bright side a single stack gives you up to 460GB/s memory bandwidth at very low power. HBM1 uses an estimated 3.65W per stack.
Best I could find for HBM2E is 5W/stack. This is good for its speed but not good in the abstract world of thin-and-light designs. You can't put it in a Macbook Air, for instance.
HBM3 allows up to 64GB and 512GB/s memory bandwidth per stack while having a lower core voltage than HBM2 (1.2V), less expensive and should be available here in 2020 H2.
Big if true. Do we have any firm manufacturing commitments on that? HBM3 would make N5P chiplet designs in 2H 2021 a lot more viable.

Physical traces does impact performance but not because the amount of travel you need to do on the wire (i.e latency impact , not BW) , its because you need to lower your frequency , as a high frequency on a long channel will create margin issues of your data eye , to compensate you do calibrations and/or reduce frequency (the lower the frequency the easier it is to get the strobe into the middle of your data).
I didn't know this either - TIL. I love the knowledge sharing that goes on in these threads.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
I agree about the RAM being layered on with one caveat that this could occur on the Air if it is just using the stock A14. But re HBM2E designs, I am usually thinking of a multi-die chiplet design with a high speed interposer mostly because this is what has been done. Possibly our confusion comes down to what constitutes a "separate physical chip." To the extent being on a separate die helps prevent heat/area problems, I would say that is true, just not to the extent you are asserting.

For the 16" MBP, I envision a 70-80 watt SoC with a "side" of HBM2. Basically taking the current logic board layout and collapsing the multiple chips we have now into a large CPU/GPU complex + the RAM. I think it's feasible and follows the established principles as far as I am able to understand them.

How the SoC itself is implemented is a different question. For the 16" MBP, it's probably going to be a monolithic die. Larger Macs would probably have to use a chiplet design, similar to the new Threadripper.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
For the 16" MBP, I envision a 70-80Watt SoC with a "side" of HMB2. Basically taking the current logic board layout and collapsing the multiple chips we have now into a large CPU/GPU complex + the RAM. I think its feasible and follows the established principles as far as I am able to understand them.

How the SoC itself is implemented is a different question. For the 16" MBP, it's probably going to be a monolithic die. Larger Macs would probably have to use a chipset design, similar to the new Threadripper.
Yeah, this design is feasible and a lot of fun. I just don't think you get 70-80W to play with.

The problem is that your "large" CPU/GPU complex isn't large on a monolithic die. Remember, Intel's APU is on their 14nm process - your APU is on TSMC's ultradense 5nm process. So your 70-80W APU actually fits easily into a space formerly occupied by a 45W APU even with 40 GPU cores. This will melt!

The existence of two stacks of HBM2E nearby is a secondary concern at this point, but by way of explanation, look at the size of HBM stacks compared to the size of the DRAM module on the MBP. Bear in mind that HBM2E actually uses more power than LPDDR5 and you'll see why it becomes a liability when placed next to an APU with thermal concerns, even if it's on a separate chip that's just close by rather than on-package.

You can get farther with a chiplet design that puts your CPU and GPU on different dies. I made one here that takes advantage of all the reclaimed space from reducing redundant parts - it uses 56W ignoring whatever the neural engine etc use. You would probably prefer the HBM2E on a separate chip below-center the CPU/GPU package to mitigate the heating concerns, and you could pile on a few more GPU cores if you did it that way.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,516
19,664
The problem is that your "large" CPU/GPU complex isn't large on a monolithic die. Remember, Intel's APU is on their 14nm process - your APU is on TSMC's ultradense 5nm process. So your 70-80W APU actually fits easily into a space formerly occupied by a 45W APU even with 40 GPU cores. This will melt!

Good point. You think that a contemporary cooling solution won't be able to transfer heat from a small die fast enough?
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Good point. You think that contemporary cooling solution won't be able to transfer heat from a small die fast enough?
I think they can up to a point. I don't think you need to fit into a 45W threshold, especially because the rest of the interior is cool. I wouldn't push it beyond 55W on a single die, but that's a gut feeling, not something I did the math on.
 