
iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Yes...



[Image: periodic-table entry for silicon (Si)]
Ha ha, being trained in chemistry, I should have guessed.
 

anshuvorty

macrumors 68040
Sep 1, 2010
3,482
5,146
California, USA
You're thinking about this from the standpoint of it still being an Intel-based Mac. The Apple GPU doesn't even work the same way that your typical AMD, NVIDIA, or even Intel integrated GPU does. Apple GPUs work more efficiently, so while their on-paper specs won't be impressive compared to, say, the AMD Radeon Pro 5000 series in the 16" MacBook Pro or the 27" iMac, they'll still match or exceed that performance because they run many times more efficiently by design. The caveat is that developers need to build around Metal and actually optimize for that kind of GPU (and because this way of designing GPUs and engineering GPU performance isn't terribly common, developers may run into difficulty or annoyance that they wouldn't with AMD GPUs, whose mode of operation has been around much longer).

Apple has already stated that all GPUs in Apple Silicon Macs will be integrated.

That sounds like a pretty big gamble... it reminds me of the days of the Wii U and the PlayStation 3: two game consoles that failed to attract a significant portion of the game-developer community. The result was a lack of games on both consoles and, in the case of the Wii U, a shortened lifecycle; Nintendo quickly replaced it with a much easier-to-develop-for console, the Switch.

There are downsides to making a bespoke and completely proprietary platform that developers have to spend time learning - the Wii U and PS3 are just two examples that spring to mind.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
No - Apple has made it clear that they are going for higher performance AND much higher efficiency.

Apple's slide clearly shows this: greater-than-desktop performance while using less power than current laptop systems. We will see when they release at the end of the year.

In any case, what is the use of switching if they don't get higher performance? The A13 core, with no active cooling in a phone, already matches the top Intel processor - it isn't a great leap to imagine that an improved A14 core, given much more cooling, would exceed the best Intel has to offer.
True, but "performance" is not one-dimensional. See my post above. I agree that it seems nearly impossible not to get better or equal performance (as in compute) from AS compared to Intel as things stand. However, remember that GPU scaling is still an issue to consider.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
That sounds like a pretty big gamble... it reminds me of the days of the Wii U and the PlayStation 3: two game consoles that failed to attract a significant portion of the game-developer community. The result was a lack of games on both consoles and, in the case of the Wii U, a shortened lifecycle; Nintendo quickly replaced it with a much easier-to-develop-for console, the Switch.

There are downsides to making a bespoke and completely proprietary platform that developers have to spend time learning - the Wii U and PS3 are just two examples that spring to mind.

You're not wrong there at all. However, most of the big players are on board already, and much faster than they were in 2005-06's PowerPC-to-Intel transition. So the only real casualties I see here are Mac game developers whose games don't get regularly updated, as well as those that would've otherwise ported from a Windows/Xbox/PlayStation version. I do believe we will, most sadly, see a lot less of that than we saw even in the PowerPC era, let alone the Intel era.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
There are numerous benchmarks of the A12X (which is basically the A12Z with one fewer GPU core enabled) on iPadOS (so not running emulated in Rosetta 2), and it wipes the floor with every contemporary 2018 Mac except the 8th-gen hexa-core Core i9 15" MacBook Pro and the iMac Pro. Figure that, from today's standpoint, that mostly translates to the A12Z wiping the floor with anything that isn't a 16" MacBook Pro, 27" iMac (from either 2019 or 2020), iMac Pro, or 2019 Mac Pro. Pretty sure the quad-core 10th-gen Intel chips in the 2020 four-port 13" MacBook Pro aren't THAT much faster - not enough to beat the native performance of either the A12X or A12Z - and certainly no CPU that has ever graced a MacBook Air can beat it.

So, yeah, we're there already! And Apple has all but said that we're not getting the A12Z in a Mac, but rather something newer and faster.
Yes, I know, but are these benchmarks running macOS on AS? I am not fully convinced that there is a 1:1 relation between iPadOS and macOS numbers in, for instance, Geekbench. If there is, Apple will have a hard time not outperforming Intel. However, I do not think that raw performance as such will be the primary goal for the first release.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
What is required for Apple to credibly ditch discrete GPUs is a higher-bandwidth memory subsystem.
The upcoming game consoles (PS5/Xbox Series X) show that you can achieve excellent graphics performance from a SoC - if you can feed it. Die area won't be a problem on TSMC 5nm. Memory bandwidth still needs to be addressed.

On the low end, I guess iPad Pro level is fine, which should mean a 128-bit-wide bus to (upgraded) LPDDR5. That should allow pretty much twice the graphics performance of the A12Z.

The next step up, if you want to match the dGPU of the 16" MacBook Pro, is either a 256-bit-wide bus to LPDDR5 or HBM, just as on the dGPU. HBM will be a bit restrictive as far as total RAM size goes, but obviously offers great bandwidth.

As far as iMacs go, Apple pretty much needs to go with GDDR6 (as the new-generation consoles do) or HBM to match the best build-to-order dGPUs Apple currently offers. And those dGPUs are what Apple offers right now, not what will be on the market when the new AS iMacs are introduced.

Matching dGPU performance is a much taller order than matching CPU performance, and it absolutely requires a higher-bandwidth memory subsystem. It will still be difficult due to power-draw concerns.
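To put rough numbers on those tiers (back-of-the-envelope, and assuming LPDDR5-6400 and 14 Gbps GDDR6, which may not be the exact speed grades Apple would use):

128-bit LPDDR5-6400: 16 bytes × 6.4 GT/s ≈ 102 GB/s
256-bit LPDDR5-6400: 32 bytes × 6.4 GT/s ≈ 205 GB/s
One 1024-bit HBM2E stack at 2.4 Gbps/pin: 128 bytes × 2.4 ≈ 307 GB/s
256-bit GDDR6 at 14 Gbps/pin: 32 bytes × 14 = 448 GB/s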
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
Yes, I know, but are these benchmarks running macOS on AS? I am not fully convinced that there is a 1:1 relation between iPadOS and macOS numbers in, for instance, Geekbench. If there is, Apple will have a hard time not outperforming Intel. However, I do not think that raw performance as such will be the primary goal for the first release.

If there is a differential, it's not substantial. It's mostly the same OS anyway.

Also, I'm not sure why you think that matching or exceeding the performance of their Intel models won't be a main objective. It's THE objective, second only to performance-per-watt improvements.
 

thunng8

macrumors 65816
Feb 8, 2006
1,032
417
Yes, I know, but are these benchmarks running macOS on AS? I am not fully convinced that there is a 1:1 relation between iPadOS and macOS numbers in, for instance, Geekbench. If there is, Apple will have a hard time not outperforming Intel. However, I do not think that raw performance as such will be the primary goal for the first release.
Not sure if this counts - it's macOS running the iPad version of Geekbench:


More or less a 1:1 mapping of Geekbench CPU results. It does look like the DTK runs Geekbench Compute faster than an iPad, though (about 25%).

In any case, Apple has already stated that the ASi chips coming out far exceed the DTK's A12Z chip - that's one of the reasons they don't allow benchmarking on the DTK: it is in no way representative of what is coming out.

Apple doesn't want people judging the DTK by benchmarks run on it and concluding that this is what Apple will ship at the end of the year.
 

johngwheeler

macrumors 6502a
Dec 30, 2010
639
211
I come from a land down-under...
No, it's not.

Absolutely not.

Most ARM designs are highly specific designs meant to do one particular thing really well. Those chips in those supercomputers would do nothing in a Mac Pro. Your Photoshop would probably underperform.

Well, he did say "the most powerful supercomputing cluster is ARM-based" (https://en.wikipedia.org/wiki/TOP500), which is factually correct based on the metrics quoted.

You obviously wouldn't be using a Fujitsu A64FX in a desktop/workstation; that's understood. The point was that ARM can be used in powerful computers. Apple's SoC will be designed to work well with Apple hardware.

How well it really performs will be revealed later this year.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
Not sure if this counts - it's macOS running the iPad version of Geekbench:


More or less a 1:1 mapping of Geekbench CPU results. It does look like the DTK runs Geekbench Compute faster than an iPad, though (about 25%).

In any case, Apple has already stated that the ASi chips coming out far exceed the DTK's A12Z chip - that's one of the reasons they don't allow benchmarking on the DTK: it is in no way representative of what is coming out.

Apple doesn't want people judging the DTK by benchmarks run on it and concluding that this is what Apple will ship at the end of the year.

Even when running the Intel version of Geekbench 5 for Mac under Rosetta 2, the Apple Silicon DTK performed comparably to the native performance of the current 2020 MacBook Air. Given that we're most certainly getting faster silicon in the first Apple Silicon Macs, that means Rosetta 2 will AT LEAST MATCH the performance of the CURRENT lowest-end Mac. That's honestly not bad, considering the PowerPC-to-Intel Rosetta, at best, matched the performance of a PowerPC G3 (which, at that time, was MUCH slower than the lowest-end Mac sold at the onset of that transition).
 

johngwheeler

macrumors 6502a
Dec 30, 2010
639
211
I come from a land down-under...
If you are using a Mac for a living and not just for web browsing, Office typing & whatever, you will jump to Windows a lot quicker than you think when ARM comes into play. The software gap will be huge for a very long time. 3D rendering has been the Mac's Achilles' heel, but with ARM and the ditching of OpenGL it will become even more so; Metal is a joke.

People are fantasizing about some Mac Pro running ARM desktop silicone... my god. That's all I'm gonna say.


If you have a look at some of the recent Max Tech videos on YouTube, many developers using the Apple DTK are reporting very good performance from x86 apps translated with Rosetta 2, and relatively quick development cycles rebuilding x86 apps for ARM. So there may not be a long delay before software vendors have ARM versions of their apps, and Rosetta 2 provides reasonably good performance until they do.
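As an aside, an app can even check at run time whether it is currently being translated; Apple documents a sysctl for exactly this. A minimal Swift sketch (my own illustration, not from those videos):

```swift
import Foundation

// Ask the kernel whether this process is running translated under Rosetta 2.
// Uses Apple's documented "sysctl.proc_translated" sysctl: 1 = translated,
// 0 = native. On Intel Macs the sysctl doesn't exist, so the call fails
// and we report "native".
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let err = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return err == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")
```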

Not sure why you think Metal is "a joke". I've seen several reports that it renders video slightly faster, even in Premiere Pro, which competes against Apple's Final Cut Pro:


I don't see a technological reason why Apple won't be able to match Mac Pro performance (at least a 16-core Intel Xeon W) within 2-3 years. There are already ARM-based chips that can match these Xeon CPUs, and Apple can optimize their SoC to get the best out of their hardware & software ecosystem.

Do you really think Apple didn't analyse whether they could do this before making the decision to move to Apple Silicon? They are very unlikely to release a machine that is slower than the current Mac Pro, and there is no indication that they will exclude the Mac Pro from the transition.

BTW, the semiconductor element used in microprocessors is "silicon". Silicone is used in breast implants.
 

johngwheeler

macrumors 6502a
Dec 30, 2010
639
211
I come from a land down-under...
Any ASi Mac needs to outperform its predecessor to be seen as a tangible replacement...

Agreed. The question is by how much. The first AS iMac will be the 21"/24", and it should outperform the current 21" Intel iMac, but not by so much that it outperforms the current 27" Intel iMac that was just released! A year from now, when the next iteration of the 21"/24" comes out (which should be after the first 27"/30"/32" AS iMac), they can go to town with the improvement; at that point it can be as dramatic as all get-out.

I agree with this. New ASi Macs need to demonstrate an improvement over the equivalent Intel models, but not eclipse the flagship models. This is why the entry-level / mid-level machines will be the first to transition.

The ASi MacBook Pro 13" (maybe 14"?) will need to beat the current one by a reasonable margin - maybe 20% single-core, 50% multi-core, 50-100% better GPU, and better battery life. But it still needs to be somewhat less powerful than the MBP 16.

Same story with an ASi 24" iMac versus the current Intel 8-10-core 27" iMac.
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
They are very unlikely to release a machine that is slower than the current Mac Pro, and there is no indication that they will exclude the Mac Pro from the transition.

Given the uproar leading up to the 2019 Mac Pro, I can't fathom they'd exclude it. It WILL be fundamentally different in many ways on the other end of the Apple Silicon transition, but it surely won't be left out.

BTW, the semiconductor element used in microprocessors is "silicon". Silicone is used in breast implants.

?

I agree with this. New ASi Macs need to demonstrate an improvement over the equivalent Intel models, but not eclipse the flagship models. This is why the entry-level / mid-level machines will be the first to transition.

The ASi MacBook Pro 13" (maybe 14"?) will need to beat the current one by a reasonable margin - maybe 20% single-core, 50% multi-core, 50-100% better GPU, and better battery life. But it still needs to be somewhat less powerful than the MBP 16.

Same story with an ASi 24" iMac versus the current Intel 8-10-core 27" iMac.

The only important thing is that an ASi Mac's performance is greater than that of the Intel model it replaces. Given that, just as the Core Duo iMacs were faster than their G5 predecessors but not as fast as the Power Mac G5 Quad, you'll have ASi 13" Mac notebooks that eclipse the 2020 Intel four-port 13" MacBook Pro but still don't touch the Intel 16" MacBook Pro. But again, the ASi replacements for the higher-end Intel Mac models need more time in the oven anyway (hence the point of this thread to begin with).
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
There are downsides to making a bespoke and completely proprietary platform that developers have to spend time learning - the Wii U and PS3 are just two examples that spring to mind.

If you want to get the best performance out of Apple GPUs (and simplify your code), yes, you need to use Metal and Apple-specific rendering techniques. At the same time, most developers use one of the popular game engines (Unity, UE, etc.) that take care of all the platform-specific stuff for you. There are also open-source wrappers that let you use standard APIs such as Vulkan on Apple's platforms. Finally, let's not forget WebGPU — an upcoming standard for high-performance GPU programming on the web, which is partly based on Apple's Metal.

At the same time, you don't need to code specifically for Apple GPUs to take advantage of their increased power and memory efficiency. They approach rendering differently from mainstream GPUs, and that benefits any application. It's just that in some cases things can be done much more efficiently (and more simply) on an Apple GPU than on the popular GPUs because of that architectural difference.
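To make that concrete, here's a small Swift sketch (my illustration, with made-up dimensions) of one such Apple-GPU-only trick: a depth buffer that lives entirely in on-chip tile memory and is never allocated in RAM at all. On an immediate-mode AMD/NVIDIA GPU, the same attachment would have to occupy, and consume bandwidth in, video memory.

```swift
import Metal

// Sketch: a depth attachment that exists only in on-chip tile memory.
// .memoryless storage is only valid on tile-based (Apple) GPUs; depth
// testing is resolved per tile on-chip, so this texture costs no system
// memory and no memory bandwidth at all.
let device = MTLCreateSystemDefaultDevice()!
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                    width: 1920,
                                                    height: 1080,
                                                    mipmapped: false)
desc.usage = .renderTarget
desc.storageMode = .memoryless   // never backed by RAM
let depthTarget = device.makeTexture(descriptor: desc)
```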

I also would like to point out that Metal is the API used on iOS, and so far that platform seems to be thriving. Metal is very easy to learn, very flexible, and it comes with a set of good tools. If you are a graphics programmer, picking up Metal takes literally 30 minutes. It's a very straightforward API.
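To back that up, here is what a complete compute dispatch looks like in Swift. This is a toy sketch (a real app would compile the kernel from a .metal file rather than an inline string), but it's essentially the whole API surface you need for basic GPU compute:

```swift
import Metal

// A complete (toy) Metal compute dispatch: write 2*i into element i of a buffer.
let kernelSource = """
kernel void fill(device float *out [[buffer(0)]],
                 uint id [[thread_position_in_grid]]) {
    out[id] = float(id) * 2.0;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "fill")!)

let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let queue = device.makeCommandQueue()!
let commands = queue.makeCommandBuffer()!
let encoder = commands.makeeComputeCommandEncoder != nil ? commands.makeComputeCommandEncoder()! : commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

// Read the results back on the CPU (shared memory, so no copy is needed).
let results = buffer.contents().bindMemory(to: Float.self, capacity: count)
print(results[0], results[1], results[1023])  // 0.0 2.0 2046.0
```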
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
If there is a differential, it's not substantial. It's mostly the same OS anyway.

Also, I'm not sure why you think that matching or exceeding the performance of their Intel models won't be a main objective. It's THE objective, second only to performance-per-watt improvements.
Exactly: the performance/watt ratio is king. Performance will follow from the improved performance/watt ratio.

The aim would be to sell laptops with a full day of battery life under heavy load, or iMacs with new form factors or passive cooling. To sum up: to build significantly different computers, enabled by the improved performance-per-watt ratio.

For the high-end lines (27-inch-and-up iMacs, MBP 16, Mac Pro), performance will be the driver, at the expense of passive cooling, slimmer form factors, or increased battery life.
 

psingh01

macrumors 68000
Apr 19, 2004
1,586
629
Why is everyone assuming we will get a 13” MacBook Pro with Apple Silicon? Rumors have consistently claimed it will have a 14” screen.

I think there has only been one rumor that there will be a 14” MBP, and it came fairly recently (a month ago?). Almost everything else has been wishful thinking by forum posters, extrapolated from the 15” being replaced with the 16” going back to last year.
 

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
You're arguing semantics at this point. Apple made it a point to keep much more supply in those channels, for the reasons I stated, than is usually done. They didn't ordinarily do that (I think you agree with me on that, at least).


I was making another itemised response but this is getting needlessly wordy.

My bottom line is that Apple has always sold refurbished Macs, and sometimes these are just leftover, out-of-date models. Very few of the machines you claim were kept available were actually kept in production, and those that were didn't stay available for very long; the ones that did were mostly kept around for other reasons, not to prolong the life of an obsolete technology.

Apple has a history of ruthlessly dragging us along in a forward direction. It's in their DNA. They will want everything moved to their own silicon as soon as possible.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
If you are using a Mac for a living and not just for web browsing, Office typing & whatever, you will jump to Windows a lot quicker than you think when ARM comes into play. The software gap will be huge for a very long time. 3D rendering has been the Mac's Achilles' heel, but with ARM and the ditching of OpenGL it will become even more so; Metal is a joke.

People are fantasizing about some Mac Pro running ARM desktop silicone... my god. That's all I'm gonna say.
They showed all of the apps I use to make a living in the frickin' keynote. Running better than on my current Intel MBP, I might add.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
What is required for Apple to credibly ditch discrete GPUs is a higher-bandwidth memory subsystem.
The upcoming game consoles (PS5/Xbox Series X) show that you can achieve excellent graphics performance from a SoC - if you can feed it. Die area won't be a problem on TSMC 5nm. Memory bandwidth still needs to be addressed.

On the low end, I guess iPad Pro level is fine, which should mean a 128-bit-wide bus to (upgraded) LPDDR5. That should allow pretty much twice the graphics performance of the A12Z.

The next step up, if you want to match the dGPU of the 16" MacBook Pro, is either a 256-bit-wide bus to LPDDR5 or HBM, just as on the dGPU. HBM will be a bit restrictive as far as total RAM size goes, but obviously offers great bandwidth.

As far as iMacs go, Apple pretty much needs to go with GDDR6 (as the new-generation consoles do) or HBM to match the best build-to-order dGPUs Apple currently offers. And those dGPUs are what Apple offers right now, not what will be on the market when the new AS iMacs are introduced.

Matching dGPU performance is a much taller order than matching CPU performance, and it absolutely requires a higher-bandwidth memory subsystem. It will still be difficult due to power-draw concerns.
I agree. As this thread points out, Apple seems to have carefully positioned its current offerings, and it's clear where an APU + LPDDR5 will cut it. It's the exceptions that are the most interesting.

Do you think either of these solutions would be viable?

1. Using HBM2E as all the system memory. I think the MBP16 is going to be deeply uncomfortable with four 16GB stacks of HBM2E. Would it work if they ran them at 2.4 Gbps or 2 Gbps to limit power consumption?

2. Using HBM2E as cache. Apple could just put a stack of HBM on the GPU/APU package, call it cache, and stick with LPDDR5 as main memory. This seems like the efficient option.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
If you want to get the best performance out of Apple GPUs (and simplify your code), yes, you need to use Metal and Apple-specific rendering techniques. At the same time, most developers use one of the popular game engines (Unity, UE, etc.) that take care of all the platform-specific stuff for you. There are also open-source wrappers that let you use standard APIs such as Vulkan on Apple's platforms. Finally, let's not forget WebGPU — an upcoming standard for high-performance GPU programming on the web, which is partly based on Apple's Metal.
I'd also like to add to this that Unity and Unreal have already expressed that they'll support Apple Silicon.
 
  • Like
Reactions: leman

leman

macrumors Core
Oct 14, 2008
19,516
19,664
1. Using HBM2E as all the system memory. I think the MBP16 is going to be deeply uncomfortable with four 16GB stacks of HBM2E. Would it work if they ran them at 2.4 Gbps or 2 Gbps to limit power consumption?

2. Using HBM2E as cache. Apple could just put a stack of HBM on the GPU/APU package, call it cache, and stick with LPDDR5 as main memory. This seems like the efficient option.

From what I know, HBM2 uses less energy than DDR4, so that shouldn't be much of a factor (if a 16" can handle 64GB of DDR4, it should be able to handle 64GB of HBM). I assume multiple HBM stacks can be connected via a single bus? And even at 2.0 Gbps you are looking at a whopping 260GB/s of bandwidth — 5x faster than DDR4.
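(For reference, the back-of-the-envelope math, assuming HBM2's standard 1024-bit interface per stack: 2.0 Gbps per pin × 1024 pins ÷ 8 bits/byte = 256 GB/s per stack. Dual-channel DDR4-3200, by comparison, tops out around 51GB/s.)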

I don't think option 2 is viable. Using HBM as cache makes the entire system much more complex, and I am not sure the benefit would be that great compared to a large SoC-level cache. Once you are using HBM, you might as well go full HBM.
 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
There is no way in hell that ARM-based CPUs will match the performance of high-end x86 processors like Xeon within just two years. That would "break" Moore's law by such a wide margin that it is next to inconceivable. Even Apple's marketing is limited by physics, not to mention their engineers. And don't forget, high-end x86 CPUs will make substantial progress in these two years as well. Not just Intel's Xeon, but also (and probably more importantly) AMD's Epyc.

We don't know what they've had in their labs, presumably for years. We're comparing the A12Z/A13 chips to Xeons, but there's a chance they've had higher-end prototypes, similar to the i7-i9 range, working for years in their labs.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
I agree. As this thread points out, Apple seems to have carefully positioned its current offerings, and it's clear where an APU + LPDDR5 will cut it. It's the exceptions that are the most interesting.

Do you think either of these solutions would be viable?

1. Using HBM2E as all the system memory. I think the MBP16 is going to be deeply uncomfortable with four 16GB stacks of HBM2E. Would it work if they ran them at 2.4 Gbps or 2 Gbps to limit power consumption?
Limiting bandwidth to save energy won't be necessary. Remember that the current MacBook Pro already has a GPU option with two stacks of HBM, on top of an Intel CPU and its own memory subsystem!
Dual stacks of HBM2E offer a maximum of 2x24GB, or 48GB in total. That's passable for a top-end MacBook Pro or iMac, but it may be that capacities will have doubled by the time a big iMac is ready to be introduced.

A price we have to pay for all these high-bandwidth options is that RAM won't be user-installable, and there will be hard upper limits on capacity. I'd say the compromise is easily worth it, though.

2. Using HBM2E as cache. Apple could just put a stack of HBM on the GPU/APU package, call it cache, and stick with LPDDR5 as main memory. This seems like the efficient option.
I would tend to agree with leman that this is unnecessarily complex for an iMac. For a Mac Pro, I'm not quite as sure. The Mac Pro is far more difficult to assess unless you have Apple's data on how much memory is actually installed in the systems currently in use.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
From what I know, HBM2 uses less energy than DDR4, so that shouldn't be much of a factor (if a 16" can handle 64GB of DDR4, it should be able to handle 64GB of HBM). I assume multiple HBM stacks can be connected via a single bus? And even at 2.0 Gbps you are looking at a whopping 260GB/s of bandwidth — 5x faster than DDR4.

I don't think option 2 is viable. Using HBM as cache makes the entire system much more complex, and I am not sure the benefit would be that great compared to a large SoC-level cache. Once you are using HBM, you might as well go full HBM.
You know, it escaped me that the MBP16 uses DDR4 and not LPDDR4. I might have known it at one point and forgotten it again later...

That is good, though. I still have my reservations about the proximity to the APU, and you have heard me spin that yarn before. You can read my heat/area sermon again below :p

Limiting bandwidth to save energy won't be necessary. Remember that the current MacBook Pro already has a GPU option with two stacks of HBM, on top of an Intel CPU and its own memory subsystem!
Dual stacks of HBM2E offer a maximum of 2x24GB, or 48GB in total. That's passable for a top-end MacBook Pro or iMac, but it may be that capacities will have doubled by the time a big iMac is ready to be introduced.

A price we have to pay for all these high-bandwidth options is that RAM won't be user-installable, and there will be hard upper limits on capacity. I'd say the compromise is easily worth it, though.

I would tend to agree with leman that this is unnecessarily complex for an iMac. For a Mac Pro, I'm not quite as sure. The Mac Pro is far more difficult to assess unless you have Apple's data on how much memory is actually installed in the systems currently in use.

Regarding the first option, there are several things I would like to point out.

The first is that we have seen no evidence that 24GB stacks of HBM2E are being manufactured by anyone. I would not presume they exist or are viable at this stage.

The second is that the MacBook Pro 16 is already configurable with up to 64GB of RAM. For a lot of reasons, the case for 48GB of HBM2E over 64GB of DDR5 is weak, and I think we should design with capacity parity in mind.

Now, my concern is not about the energy itself so much as heat and proximity to the APU. 64GB means four stacks of HBM2E, all of which are more heat-dense than DDR. At 2.8 Gbps, each of those stacks would be consuming about 5W, and they are going to be flanking the APU.

Something like this:

[Attached image: four HBM stacks arranged around a central APU die]

is manageable with a desktop GPU, since GPU cores don't get too hot and there's lots of active cooling. But it's really not ideal when you have CPU cores in the center whose performance is directly constrained by heat.

Again, that's 5W for each of those four stacks, assuming they run at 2.8 Gbps. Running them at the 3.2 Gbps or 3.6 Gbps advertised by Samsung and SK Hynix is, I think, a total nonstarter in the 16" MacBook Pro. I'm still of the mind that speeds should be brought down further.
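(Rough aggregate math under those assumptions: each 1024-bit stack at 2.8 Gbps/pin moves 128 bytes × 2.8 ≈ 358 GB/s, so four stacks would deliver about 1.4 TB/s of bandwidth for roughly 20W of memory power. Dropping to 2.0 Gbps trades that down to about 1 TB/s for a correspondingly lower power draw.)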
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
is manageable with a desktop GPU, since GPU cores don't get too hot and there's lots of active cooling. But it's really not ideal when you have CPU cores in the center whose performance is directly constrained by heat.

Just slap a big old heat spreader on that puppy and you are good to go ;) As you say, the RAM will probably be clocked a bit lower in a laptop, more like 2-3 watts per die. Pair it with a 2048-bit memory interface and you still have some incredible memory bandwidth in a compact laptop. Imagine what having 300GB/s of memory bandwidth will do for CPU performance.
 