
crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Good response. And yeah, I guess two cut-down M1 Max dies make a lot more sense than doing a triplet M1 Pro die (which, as you said, there is no evidence of so far); it was just something I came up with while bored this morning, thinking out loud.
On the "what can an iMac support" line of thinking, I should've been more clear. It's not that I don't think an iMac Pro-like design could support an M1 Max Duo (or quad) multi-die setup in terms of power or thermals; it absolutely could (should be easy compared to Xeon/i9 + Vega/RDNA). I'm just not sure whether it's something Apple wants to do. On the one hand, with the MBP, Apple finally prioritized function over form for probably the first time in a decade. OTOH, the iMac has always been as much about making a "statement" (striking visual design) as it is about power, and given the direction they went with the 24" iMac, I get the feeling they're going to want to make the design sleeker, something they can very much do while still providing better performance than the 16" MBP. So, in that sense, this new 12-core rumor makes a lot of sense to me.

So I have a longer post about the change in direction for the MBP (in short: I generally think both those who love/hate Ive and the old/new design have all got it wrong. Some of it is an admission of error, but the biggest driver of change is that the new Mac chips represent a paradigm shift for Apple across its product stack).

What the new iMac design will be … I’m not sure. It might indeed go thinner with a chin, but the rumors say it will look both like the new 24” and like the Pro display. What they have in common is the new squared off design (shared with the MBP). Where they differ is that one has a chin and the other doesn’t. The other part of the above rumor is that the new iMacs are indeed “for the Pros” which makes me think they won’t design themselves into a thermal corner regardless of the aesthetics. At least I hope not.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I don't understand how this alleged 12-core M1 should work. Maybe it refers to 2 additional E-cores (which wouldn't make much sense)? Cores in M1 are generally organised as clusters of four cores, so I would be puzzled if Apple ships two full clusters and one half-cluster of Firestorm cores. Another — completely ridiculous — idea is that they add two faster A15-based cores to improve single-core performance, but honestly, that would be so weird.
 

magikow

macrumors newbie
Nov 22, 2014
7
13
Additionally I'm wondering if P cores in the new CPU would be clocked higher ;> In a bigger case there might be "thermal space" for higher frequency I guess?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Additionally I'm wondering if P cores in the new CPU would be clocked higher ;> In a bigger case there might be "thermal space" for higher frequency I guess?

That would be nice but would likely require a full redesign of all the relevant circuitry.
 

GiantKiwi

macrumors regular
Jun 13, 2016
170
136
Cambridge, UK
That would be nice but would likely require a full redesign of all the relevant circuitry.

Why do you think that? The only distinction is how it would affect the thermal envelope, and there is no way even a 30% increase in ICs is going to push one of these chips significantly out of scope.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Why do you think that? The only distinction is how it would affect the thermal envelope, and there is no way even a 30% increase in ICs is going to push one of these chips significantly out of scope.

I might have misunderstood something, but more knowledgeable people have speculated that one of the secrets of Apple's unmatched energy efficiency is that they use optimised logic gate layouts that can operate at lower energy usage. The drawback, however, is the inability to scale to higher frequencies (something to do with signal synchronisation, if I got it right). This is very different from mainstream x86 designs that allow very high frequencies but accept higher baseline power usage as a tradeoff.

Basically, it seems that Firestorm in its current form (as it is used in the A14 and M1 variants) simply cannot reach frequencies higher than 3.2 GHz. If that were easy, no doubt Apple would have already increased the frequency on the M1 Pro/Max - they easily have the thermal headroom to allow for a wider dynamic clock boost range. The fact that Firestorm ships in the exact same 3.2 GHz configuration on all products, be it the passively cooled Air or the 16" Pro capable of dissipating over 100 W of heat, is telling.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
I don't understand how this alleged 12-core M1 should work. Maybe it refers to 2 additional E-cores (which wouldn't make much sense)? Cores in M1 are generally organised as clusters of four cores, so I would be puzzled if Apple ships two full clusters and one half-cluster of Firestorm cores. Another — completely ridiculous — idea is that they add two faster A15-based cores to improve single-core performance, but honestly, that would be so weird.
It could be two clusters of 5 instead of 4. Maybe with increased cache to match.
 

CWallace

macrumors G5
Original poster
Aug 17, 2007
12,527
11,543
Seattle, WA
I don't understand how this alleged 12-core M1 should work. Maybe it refers to 2 additional E-cores (which wouldn't make much sense)?

The Tweet specifically notes that the reason for the two extra cores is to improve overall performance over the M1 Pro and M1 Max, so they would be performance cores, not efficiency cores.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
I might have misunderstood something, but more knowledgeable people have speculated that one of the secrets of Apple's unmatched energy efficiency is that they use optimised logic gate layouts that can operate at lower energy usage. The drawback, however, is the inability to scale to higher frequencies (something to do with signal synchronisation, if I got it right). This is very different from mainstream x86 designs that allow very high frequencies but accept higher baseline power usage as a tradeoff.

Basically, it seems that Firestorm in its current form (as it is used in the A14 and M1 variants) simply cannot reach frequencies higher than 3.2 GHz. If that were easy, no doubt Apple would have already increased the frequency on the M1 Pro/Max - they easily have the thermal headroom to allow for a wider dynamic clock boost range. The fact that Firestorm ships in the exact same 3.2 GHz configuration on all products, be it the passively cooled Air or the 16" Pro capable of dissipating over 100 W of heat, is telling.

Not disagreeing with anything you wrote - I also find this 12-core part rumor odd, but one slightly interesting thing to note about frequency: remember we had the discussion about why the frequency of the P-cores was only allowed to max out if one core was active? I believe @cmaier explained it as probably being due to the closeness of the P-cores to each other and the effect that had on power and heat. Unsurprisingly, he was probably right, as according to Anandtech the frequency behavior of the Pro/Max is different:

“The CPU cores clock up to 3228MHz peak, however vary in frequency depending on how many cores are active within a cluster, clocking down to 3132 at 2, and 3036 MHz at 3 and 4 cores active. I say “per cluster”, because the 8 performance cores in the M1 Pro and M1 Max are indeed consisting of two 4-core clusters, both with their own 12MB L2 caches, and each being able to clock their CPUs independently from each other, so it’s actually possible to have four active cores in one cluster at 3036MHz and one active core in the other cluster running at 3.23GHz.”

I know you probably saw that too, but in the context of your discussion I just thought that was a really interesting point, as that is *very* different from anyone else’s frequency design that I know of.
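
If you want to play with those numbers, here’s a toy Python sketch of the per-cluster behavior Anandtech describes (the frequency table is lifted straight from the quote above; everything else is just illustration, not anything Apple documents):

```python
# Toy model of the per-cluster DVFS behavior Anandtech describes for the
# M1 Pro/Max: the clock depends only on how many cores are active within
# a given 4-core P-cluster, and the two clusters scale independently.

# Peak MHz by active-core count within one cluster (from the quote above).
FREQ_BY_ACTIVE = {0: 0, 1: 3228, 2: 3132, 3: 3036, 4: 3036}

def cluster_freq_mhz(active: int) -> int:
    """Clock of one P-cluster, given its own active-core count."""
    if active not in FREQ_BY_ACTIVE:
        raise ValueError("a cluster has 0-4 P-cores")
    return FREQ_BY_ACTIVE[active]

# Four busy cores in one cluster and one busy core in the other gives
# 3036 MHz and 3228 MHz respectively: the mixed case Anandtech calls out.
print(cluster_freq_mhz(4), cluster_freq_mhz(1))  # 3036 3228
```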
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Dylandkt this morning "confirms" that the upcoming iMac Pro will have an M1 SoC with 12 CPU cores.

I presume this will be 10 performance cores and 2 efficiency cores and it will also offer at least the same 16/32 GPU cores as the M1 Max, though perhaps it will have more of them as well.
The report makes ZERO mention of the number of GPU cores. That is a major problem. If Apple has a two-chiplet/tile solution, then a very easy option here would be:

Two tiles of 6 CPU cores / 22-24 GPU cores each ==> a 12 CPU / 44-48 GPU package.


The Pro and Max are not the same package size. If the logic board is big enough (and the iMac 27" (or larger) can have a bigger board than a 14" laptop), then there could very well be space for a double-sized package.

If they were trying to compete with the modern equivalent of a Vega 64X with 16GB HBM, Apple would need something more than a 32-GPU-core Max.

Apple would probably need a modified Max die to compose the two-tile package (one with a die-to-die interconnect; they could swap out the second NPU+ProRes+video block for that and still have two of each per overall package). That would be easier than having to reflow the "top half" of the Max die to take more P cores. And if they're doing the double die for the "much much much smaller headless box," they need to do that package anyway.

Dropping from 8P cores to 4P cores doesn't really "waste" much space at all on an M1 Max die. The Max is basically just a GPU with some "other stuff" wrapped around it. If they were dropping 4P from the M1 or even the M1 Pro, the percentage of "used" space wasted would be substantively higher. However, on the Max die... between the GPU and the memory controllers, that is more than half the die. That is the space consumer.



In addition to this 12 core model, he notes that lower-end SoC options will also be offered (so at least M1 MAX and probably M1 PRO).

If they are doing "lowest common denominator" on the I/O to the rest of the logic board (i.e., the same set of ports), then there doesn't have to be much change for the bigger package except for space, as long as enough is allotted for thermals and the overall board is big enough.

They could cap the thermals short of a full 20 CPU + 64 GPU. That would likely keep the system cost down as well. Apple would likely charge many hundreds more to walk the ladder up from 12 to 20. For example, 12 -> 16 -> 18 -> 20 at $200-300 increments would give them a $600-900 mark-up mechanism just on CPU cores, and another similar mark-up ladder for the GPU cores on the headless model.
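
To make that mark-up arithmetic concrete, here is a quick sketch (the rungs and the $200-300 increments are the hypothetical numbers from the paragraph above, not anything Apple has announced):

```python
# Hypothetical CPU-core upsell ladder from above: 12 -> 16 -> 18 -> 20
# cores, at $200-$300 per rung. Three paid steps up the ladder.
RUNGS = [12, 16, 18, 20]

for per_step in (200, 300):
    total = per_step * (len(RUNGS) - 1)
    print(f"${per_step}/step: 12-core -> 20-core walk = ${total}")
# $200/step: 12-core -> 20-core walk = $600
# $300/step: 12-core -> 20-core walk = $900
```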

If Apple is having problems generating full Max-sized chips, then pairing up dies with P- and G-core defects could let them live partially out of the binned pile, if they hold the "top end" count to just 12 CPU and 44 GPU cores for the iMac this generation.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
The report makes ZERO mention of the number of GPU cores. That is a major problem. If Apple has a two-chiplet/tile solution, then a very easy option here would be:

Two tiles of 6 CPU cores / 22-24 GPU cores each ==> a 12 CPU / 44-48 GPU package.


The Pro and Max are not the same package size. If the logic board is big enough (and the iMac 27" (or larger) can have a bigger board than a 14" laptop), then there could very well be space for a double-sized package.

If they were trying to compete with the modern equivalent of a Vega 64X with 16GB HBM, Apple would need something more than a 32-GPU-core Max.

Apple would probably need a modified Max die to compose the two-tile package (one with a die-to-die interconnect; they could swap out the second NPU+ProRes+video block for that and still have two of each per overall package). That would be easier than having to reflow the "top half" of the Max die to take more P cores. And if they're doing the double die for the "much much much smaller headless box," they need to do that package anyway.

Dropping from 8P cores to 4P cores doesn't really "waste" much space at all on an M1 Max die. The Max is basically just a GPU with some "other stuff" wrapped around it. If they were dropping 4P from the M1 or even the M1 Pro, the percentage of "used" space wasted would be substantively higher. However, on the Max die... between the GPU and the memory controllers, that is more than half the die. That is the space consumer.





If they are doing "lowest common denominator" on the I/O to the rest of the logic board (i.e., the same set of ports), then there doesn't have to be much change for the bigger package except for space, as long as enough is allotted for thermals and the overall board is big enough.

They could cap the thermals short of a full 20 CPU + 64 GPU. That would likely keep the system cost down as well. Apple would likely charge many hundreds more to walk the ladder up from 12 to 20. For example, 12 -> 16 -> 18 -> 20 at $200-300 increments would give them a $600-900 mark-up mechanism just on CPU cores, and another similar mark-up ladder for the GPU cores on the headless model.

If Apple is having problems generating full Max-sized chips, then pairing up dies with P- and G-core defects could let them live partially out of the binned pile, if they hold the "top end" count to just 12 CPU and 44 GPU cores for the iMac this generation.

So I thought of this as well - dropping a full 4P cluster gets you there. And that makes a certain sense. The issue I have with it is that this 12-core part is then 8 P cores + 4 E cores, which is simply 2 E cores more than the lower parts and not much of a performance uplift - in fact maybe very little with the die-to-die interconnect. A performance uplift on the GPU with practically the same CPU is interesting, but Apple would be risking a CPU performance regression, wouldn’t they?

If the rumor were 12 P cores for 16 cores total, then this would track with how they binned the base Pro chip.
 

throAU

macrumors G3
Feb 13, 2012
9,201
7,354
Perth, Western Australia
Dylandkt this morning "confirms" that the upcoming iMac Pro will have an M1 SoC with 12 CPU cores.

I presume this will be 10 performance cores and 2 efficiency cores and it will also offer at least the same 16/32 GPU cores as the M1 Max, though perhaps it will have more of them as well.

In addition to this 12 core model, he notes that lower-end SoC options will also be offered (so at least M1 MAX and probably M1 PRO).

It might be 3x 4-performance-core clusters instead. Not sure why anyone would care about efficiency cores on a desktop pro machine.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I don't understand how this alleged 12-core M1 should work. Maybe it refers to 2 additional E-cores (which wouldn't make much sense)?

Easy. Go from 8P cores to 4P cores. 4P + 2E would be 6 cores. Then 2 * 6 = 12 (8P and 4E). Done.

The major problem with this "tweet leak" is that there is no number for the GPU cores. Apple's M1 design tightly couples the CPU cores and GPU cores. Without the GPU core count there is really no accurate way of sizing/matching which M1 die design we're talking about.

Apple could be using two "binned down" Max-sized dies to put "better than a Max" performance characteristics on the new "iMac Pro". Pulling back on the GPU cores to 22-24 per chiplet/tile would pull back the thermals as well (if Apple is thinning out the 27" model).

If Apple is trying to significantly supersede the 5700 XT 16GB GDDR6 and Vega 64X 16GB HBM GPUs, then the Max-sized GPU isn't going to work so well. On highly parallel CPU-specific benchmarks that don't lean on AMX or P-core-specific instructions, the four E cores will help turn in marks better than the 5-year-old 18-core W-2100 series in the discontinued iMac Pro.



Cores in M1 are generally organised as clusters of four cores, so I would be puzzled if Apple ships two full clusters and one half-cluster of Firestorm cores.

Again easy. Go from a cluster of 4 cores down to a cluster of 2 cores. Two clusters --> 4P cores with even better memory bandwidth headroom.

Or, if the defect fell in part of the shared cluster infrastructure... one cluster of 4P cores.

There are TWO ways to skin the cat down to this configuration, which makes it easier to bin for. If they're having trouble producing quantities of dies, then there is an upside to this (and, as outlined in another response above, it gives them a longer mark-up ladder to walk to full-sized two-chiplet/tile pricing... makes Apple more money too).
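
Purely as illustration, here are the two routes spelled out in a few lines of Python (the cluster layout, two 4-core P-clusters plus 2 E cores per die and two dies per package, follows the discussion above; none of this is confirmed silicon):

```python
# Illustrative sketch: two ways to bin a Max-style die (two 4-core
# P-clusters + 2 E cores) down to 4P + 2E, then pair two dies to get a
# 12-core (8P + 4E) package.

E_CORES_PER_DIE = 2

# Option A: keep both P-clusters but fuse off 2 cores in each, which
# preserves cluster-level bandwidth (no bisection-bandwidth loss).
option_a = {"p_clusters": 2, "p_per_cluster": 2}

# Option B: disable one whole 4-core P-cluster (forced if the defect
# sits in the shared cluster infrastructure).
option_b = {"p_clusters": 1, "p_per_cluster": 4}

for name, cfg in (("A", option_a), ("B", option_b)):
    p = cfg["p_clusters"] * cfg["p_per_cluster"]
    per_die = p + E_CORES_PER_DIE
    print(f"Option {name}: {p}P + {E_CORES_PER_DIE}E per die, "
          f"two dies -> {2 * per_die} cores ({2 * p}P + {2 * E_CORES_PER_DIE}E)")
# Both routes land on the same 12-core (8P + 4E) two-die package.
```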


Another — completely ridiculous — idea is that they add two faster A15-based cores to improve single-core performance, but honestly, that would be so weird.

Having two tiles/chiplets isn't ridiculous at all. It has been the rumor for a pretty long time now.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
It might be 3x 4-performance-core clusters instead. Not sure why anyone would care about efficiency cores on a desktop pro machine.

Having a few efficiency cores around to soak up background tasks and keep the P-cores clear for work makes sense, as they don’t take up much room (well, Apple’s don’t, but we’ve had the discussion on this forum about how Intel’s “E” cores aren’t actually E cores in the traditional sense and aren’t meant for the same purpose, so their inclusion is different).

Having said that, maybe this is indeed a new die design with 12 P cores. Could be interesting.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Easy. Go from 8P cores to 4P cores. 4P + 2E would be 6 cores. Then 2 * 6 = 12 (8P and 4E). Done.

The major problem with this "tweet leak" is that there is no number for the GPU cores. Apple's M1 design tightly couples the CPU cores and GPU cores. Without the GPU core count there is really no accurate way of sizing/matching which M1 die design we're talking about.

Apple could be using two "binned down" Max-sized dies to put "better than a Max" performance characteristics on the new "iMac Pro". Pulling back on the GPU cores to 22-24 per chiplet/tile would pull back the thermals as well (if Apple is thinning out the 27" model).

If Apple is trying to significantly supersede the 5700 XT 16GB GDDR6 and Vega 64X 16GB HBM GPUs, then the Max-sized GPU isn't going to work so well. On highly parallel CPU-specific benchmarks that don't lean on AMX or P-core-specific instructions, the four E cores will help turn in marks better than the 5-year-old 18-core W-2100 series in the discontinued iMac Pro.





Again easy. Go from a cluster of 4 cores down to a cluster of 2 cores. Two clusters --> 4P cores with even better memory bandwidth headroom.

Or, if the defect fell in part of the shared cluster infrastructure... one cluster of 4P cores.

There are TWO ways to skin the cat down to this configuration, which makes it easier to bin for. If they're having trouble producing quantities of dies, then there is an upside to this (and, as outlined in another response above, it gives them a longer mark-up ladder to walk to full-sized two-chiplet/tile pricing... makes Apple more money too).




Having two tiles/chiplets isn't ridiculous at all. It has been the rumor for a pretty long time now.

I agree that it’s an easy bin, disabling a P-core cluster, but the CPU performance uplift would then be negligible and risk regression with the die-to-die interconnect, no?
 

LonestarOne

macrumors 65816
Sep 13, 2019
1,074
1,426
McKinney, TX
The Tweet specifically notes that the reason for the two extra cores is to improve overall performance over the M1 Pro and M1 Max, so they would be performance cores, not efficiency cores.

Wild-haired speculation: they could even get rid of the efficiency cores and go with 12 performance cores, since this chip is apparently never going to be used in a laptop or mobile device.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Good response. And yeah, I guess two cut-down M1 Max dies make a lot more sense than doing a triplet M1 Pro die (which, as you said, there is no evidence of so far); it was just something I came up with while bored this morning, thinking out loud.
On the "what can an iMac support" line of thinking, I should've been more clear. It's not that I don't think an iMac Pro-like design could support an M1 Max Duo (or quad) multi-die setup in terms of power or thermals; it absolutely could (should be easy compared to Xeon/i9 + Vega/RDNA). I'm just not sure whether it's something Apple wants to do.

If Apple is trying to go to the iMac 24" design, where the vast majority of the logic board is stuffed into the "chin", then it is only about 3-4 inches wider than the 24" model (3" if the side-bezel shrink is heavy).
That could be a self-imposed "painting into a corner" that would cap the thermals on a dual chiplet/tile package. Tossing P and G cores to fit the thermals would then be something Apple "wanted" to do.
However, the dual package size (and two fans and multiple speakers) might use up the extra space.

The rumors that they had to put the 27" redesign on pause to get the 24" out the door suggest that they are whacking away at the old 27" chassis somehow. Otherwise, it wouldn't really have needed enough work for it to be a resource constraint. Decent chance they are hollowing out the thermal capacity somehow.

If there were an "iMac Max" with a 31" screen, then they could have more thermal headroom even while stuffing most of it into the chin (and an even more expensive device to sell).

This 12-core model is probably going to sit far closer to the mid-upper range of the old iMac Pro's price range than to anything that has classically been in the iMac 27" price range.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Not disagreeing with anything you wrote - I also find this 12-core part rumor odd, but one slightly interesting thing to note about frequency: remember we had the discussion about why the frequency of the P-cores was only allowed to max out if one core was active? I believe @cmaier explained it as probably being due to the closeness of the P-cores to each other and the effect that had on power and heat. Unsurprisingly, he was probably right, as according to Anandtech the frequency behavior of the Pro/Max is different:

“The CPU cores clock up to 3228MHz peak, however vary in frequency depending on how many cores are active within a cluster, clocking down to 3132 at 2, and 3036 MHz at 3 and 4 cores active. I say “per cluster”, because the 8 performance cores in the M1 Pro and M1 Max are indeed consisting of two 4-core clusters, both with their own 12MB L2 caches, and each being able to clock their CPUs independently from each other, so it’s actually possible to have four active cores in one cluster at 3036MHz and one active core in the other cluster running at 3.23GHz.”

I know you probably saw that too, but in the context of your discussion I just thought that was a really interesting point, as that is *very* different from anyone else’s frequency design that I know of.

Yeah, local heating is a constraint, because even if you are not drawing much current, heat doesn’t dissipate instantaneously. The use of clusters is probably not driven by that - probably more to do with memory ports and bus-width constraints and the like, but clustering helps (as long as the clusters are physically separated a bit). You could also do other fun things like interleaving P and E cores, spreading P cores without clusters, etc. Each has its own trade-offs.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I agree that it’s an easy bin, disabling a P-core cluster,

You don't have to disable a whole cluster. Two clusters of two amounts to 4 also, with no net decrease in overall bisection bandwidth at all. If they're "turning off" a fully functional die down to this configuration, there's really no big downside here.

But a small number could be scavenged by turning a whole cluster off (if the defect is in the common cluster infrastructure, they don't have a choice). Apple doesn't allow manual overclocking, so SoC-picking for clusters with incrementally lower performance isn't really an option. The "CPU" package is soldered down anyway, so that's not easy to test for regardless. There might be some small variability in some products, but most of the supply would be fully functional dies with parts "switched off" in a bandwidth-preserving way.


but the CPU performance uplift would then be negligible and risk regression with the die-to-die interconnect, no?

If you have 2x the number of GPU cores and are doing GPU-gated work, then the performance uplift is significant. If it is a "Max"-sized die you're working with, then "maximizing" CPU performance isn't the issue. The baseline Max die is a hefty GPU with other stuff sprinkled around it.

Apple isn't competing "head-to-head" with GPU-less, CPU-only workstation packages.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Highly unlikely.

It doesn't make sense for 2x M1 Max to only yield a working 12-core CPU. TSMC's 5nm node has very good yields.

Die yields have little to do with this. Binning down working cores allows Apple to have a taller pricing ladder up to more expensive (and higher-profit-percentage) packages with fully working cores.

"Binning" isn't purely filled with defects. For high yields can use fully functional cores and just turn cores off. If charging full die processing recovery costs in the price of the "lowest config" package then still making a profit.

As wafer sizes have gotten larger, there are just more dies coming off of a wafer. For a mid-sized die like the Max, even more so.

If you look at the BTO options for the MBP 14/16" models, you can see that Apple is charging about $100/core to walk up a +2 ladder. These are not super-slim-margin SoCs. Apple is fully using the performance value of the SoC to slap a substantive mark-up on these chips. Apple "suffering" under the labor of paying for defects... errrr, no. These are priced to make more-than-healthy margins. The binning here is driven far more by market segmentation than by recovering defect overhead.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
You don't have to disable a whole cluster. Two clusters of two amounts to 4 also, with no net decrease in overall bisection bandwidth at all. If they're "turning off" a fully functional die down to this configuration, there's really no big downside here.

But a small number could be scavenged by turning a whole cluster off (if the defect is in the common cluster infrastructure, they don't have a choice). Apple doesn't allow manual overclocking, so SoC-picking for clusters with incrementally lower performance isn't really an option. The "CPU" package is soldered down anyway, so that's not easy to test for regardless. There might be some small variability in some products, but most of the supply would be fully functional dies with parts "switched off" in a bandwidth-preserving way.




If you have 2x the number of GPU cores and are doing GPU-gated work, then the performance uplift is significant. If it is a "Max"-sized die you're working with, then "maximizing" CPU performance isn't the issue. The baseline Max die is a hefty GPU with other stuff sprinkled around it.

Apple isn't competing "head-to-head" with GPU-less, CPU-only workstation packages.

Sure, but I mentioned it would be interesting to increase the GPU while keeping the CPU basically constant (or maybe slightly worse, depending on the die-interconnect properties); it still seems odd to me to go 8+4 on the CPU. I dunno, maybe. The Pro-to-Max step made sense to me. This, I’m not as certain Apple would do. Maybe, though. I wrote about this last night:


But even then it feels … unsatisfying as a configuration.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Yeah, local heating is a constraint, because even if you are not drawing much current, heat doesn’t dissipate instantaneously. The use of clusters is probably not driven by that - probably more to do with memory ports and bus-width constraints and the like, but clustering helps (as long as the clusters are physically separated a bit). You could also do other fun things like interleaving P and E cores, spreading P cores without clusters, etc. Each has its own trade-offs.

I’m struggling to recall (still bleary), but I think you also mentioned something about a drop-off in the current when the cores are close together?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I’m struggling to recall (still bleary), but I think you also mentioned something about a drop-off in the current when the cores are close together?
I may have mentioned that at one point. As a general matter, you have to provide a power grid, which is a physical metal grid, for both power and ground (Vdd and Vss). This is a 3-dimensional grid. On some chips I’ve worked on, you even have entire layers dedicated to it (so you might have four metal layers stacked vertically, and then above that a power layer, and then a couple more signal layers, then a ground layer, or whatever). The resistance to any particular transistor can vary, which means you may or may not be able to get enough power where it needs to go. You get power supply drops across the rails, so if you are at 1.4 V at the top of the stack, by the time you get to the bottom you may be at 1.0 or whatever, if you haven’t taken care.

So what you should do is estimate how much current you are bringing through different parts of the power rails, and use that to figure out how much voltage you drop (it’s easy to calculate the resistance of the rails, so it’s just Ohm’s law to calculate the voltage drop). If you see that region A will drop too much voltage, you find a way to bring more current to that area. You can increase the dimensions of the rails, add rails, add more vias between layers, etc. Either that, or you start moving circuitry around.

If you stick a whole bunch of high-current circuits near to each other, you may not be able to get enough power metal in there (you need to leave room for signals to get in and out), so it can be a limiting factor.

The same thing happens at a higher level - you need to bring current into the package. You have a bunch of solder bumps on the package (hundreds probably) for power and ground, but if all the current ends up using just one small area of bumps, you have a problem.
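
If it helps, here is a minimal back-of-the-envelope sketch of that Ohm's-law calculation: a rail modeled as segments in series, with part of the load current tapped off at each node (all values are invented for illustration, not from any real chip):

```python
# Back-of-the-envelope IR drop: a power rail as N segments in series,
# with some load current tapped off at each node. The drop per segment
# is plain Ohm's law, V = I * R. All numbers are made up for illustration.

V_SUPPLY = 1.4         # volts at the top of the rail
R_SEGMENT = 0.02       # ohms per rail segment
TAPS = [1.0] * 5       # amps drawn at each of 5 nodes down the rail

v = V_SUPPLY
carrying = sum(TAPS)   # current entering the first segment
for node, tap in enumerate(TAPS, start=1):
    v -= carrying * R_SEGMENT   # IR drop across this segment
    print(f"node {node}: {v:.2f} V (segment carried {carrying:.1f} A)")
    carrying -= tap             # this node's current leaves the rail

# The farthest node ends up around 1.10 V. Pile more high-current
# circuits onto one region and the drop gets worse, which is the
# limiting factor described above.
```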
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Easy. Go from 8P cores to 4P cores. 4P + 2E would be 6 cores. Then 2 * 6 = 12 (8P and 4E). Done.

Fair enough, but does it make sense in the context of the M1 designs? This would be a radical departure, and more suggestive of a multi-chip solution (where each chip/tile consists of 4P+2E cores).


Again easy. Go from a cluster of 4 cores down to a cluster of 2 cores. Two clusters --> 4P cores with even better memory bandwidth headroom.

Again, fair enough, but if they do this kind of fundamental redesign, why even stick with Firestorm? Would it still be M1?
 