
leman

macrumors Core
Oct 14, 2008
19,505
19,638
Chip defects are nanometers in scale; of course they'll be contained within a GPU core. In fact, it could be a SINGLE defective transistor.

Sorry, I should have been clearer. What I was trying to say is that disabling defective GPU (or CPU) units is unlikely to be an effective binning strategy, because there won’t be that many chips with these kinds of defects. Or, to put it differently, it’s very unlikely that many of the binned chips actually have defective GPU cores. You’d really need the stars to align in a very particular way to get two defective GPU cores and nothing else (or two defective CPU cores in two different clusters). That’s why I believe this is revenue optimization: Apple simply disabling cores on otherwise fully functional chips, arbitrarily, to create different configurations at different price points. Note also that all M-series chips ship with their full caches; there is no binning there.
 
  • Like
Reactions: Chuckeee and chabig

1096bimu

macrumors 6502
Nov 7, 2017
459
571
Sorry, I should have been clearer. What I was trying to say is that disabling defective GPU (or CPU) units is unlikely to be an effective binning strategy, because there won’t be that many chips with these kinds of defects. Or, to put it differently, it’s very unlikely that many of the binned chips actually have defective GPU cores. You’d really need the stars to align in a very particular way to get two defective GPU cores and nothing else (or two defective CPU cores in two different clusters). That’s why I believe this is revenue optimization: Apple simply disabling cores on otherwise fully functional chips, arbitrarily, to create different configurations at different price points. Note also that all M-series chips ship with their full caches; there is no binning there.
[Attached image: M1PRO.jpg]

Take a look at this: the GPU and CPU cores are by far the largest single components that can be disabled.
The defect rate and number of chips per wafer are such that you'd most often get only 1 or 2 defects per chip, which, by the looks of this, have something like a 40% chance of landing in one of the CPU or GPU cores.
This means that if you have a defective chip, there's roughly a 40% chance it can be salvaged as a binned chip.
There are no stars that need to align.
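For what it's worth, here's that arithmetic as a tiny Python sketch. The 40% area figure and the 1–2 defect assumption are just eyeballed from the die shot, not real fab data:

```python
# Back-of-the-envelope sketch of the argument above. The numbers are
# assumptions for illustration: ~40% of the die area is CPU/GPU cores
# (eyeballed from the die shot), and a defective die is assumed to carry
# 1 or 2 point defects placed uniformly at random.

core_area_fraction = 0.40  # assumed fraction of die area that can be fused off

def salvage_probability(defect_count: int) -> float:
    """A die is salvageable as a binned part only if *every* defect
    lands inside a block that can be disabled."""
    return core_area_fraction ** defect_count

for n in (1, 2):
    print(f"{n} defect(s): ~{salvage_probability(n):.0%} chance the die is salvageable")
# 1 defect(s): ~40% chance the die is salvageable
# 2 defect(s): ~16% chance the die is salvageable
```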

BTW, sometimes they make the critical parts of a chip somewhat redundant or fault-tolerant so that a single defect will not disable it. They could also make the cache slightly larger than spec, so that it, too, can have disabled cells.

They can obviously also disable fully functional chips to get more low-end chips, but nobody knows how often that happens. Still, you'd definitely rather do that than go out of your way to deliberately manufacture a slightly different, partially disabled chip.
 
  • Like
Reactions: Chuckeee

theorist9

macrumors 68040
May 28, 2015
3,860
3,046
That's two different designs...
If you etch a chip with a core disabled, it's not the same design, even if it's just a single disconnected wire.

This is even more retarded than just designing another chip with 2 fewer cores, because the disabled cores take up wafer space, on top of being a different and non-interchangeable design.
You misunderstand the practice that's being discussed. It's not one in which some chips are etched with some cores disabled. Rather, it's one in which all chips are etched with all cores enabled, testing is done for process variation (PV), and the best way to address any PV is determined. In some cases that means deactivating blocks that don't work, or don't work well. That deactivation can be done either physically or with firmware:

[Attached image: 1662867415314.png]


Note also that this is just one of many ways to address PV.

Source: https://www.researchgate.net/public...ral_Techniques_for_Managing_Process_Variation
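To make the firmware-style deactivation concrete, here's a toy Python sketch; the fuse register and its bit layout are entirely hypothetical, not Apple's (or anyone's) actual scheme:

```python
# Toy model: every die is etched identical; after wafer test, a per-die
# "enable mask" is burned into fuses (or set by firmware). Software only
# ever sees the cores whose bits are set.

FUSED_GPU_ENABLE_MASK = 0b11_1111_1100   # hypothetical: 2 of 10 GPU cores fused off

def enabled_cores(mask: int, total: int) -> list[int]:
    """Return the indices of cores left enabled by the fuse mask."""
    return [i for i in range(total) if mask & (1 << i)]

print(enabled_cores(FUSED_GPU_ENABLE_MASK, 10))  # -> [2, 3, 4, 5, 6, 7, 8, 9]
```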

And let's tone down the language. Using terms like "retarded" gets you no points.
 
Last edited:

mr_roboto

macrumors 6502a
Sep 30, 2020
855
1,864
Hey, I’m trying to parse the seemingly small differences between M1 Pro 8- and 10-core laptops. I’m not sure there’s going to be a huge difference in performance for my uses. But I did wonder if the 8-core binned chips might be slightly less reliable, given that they are already ‘defective’. I plan on using this computer for a very, very long time, so maybe some discrepancies might show over its lifetime. Have you guys got any thoughts on this? I’d love to hear them if you do.
Reliability is not an issue. Yield engineering schemes like these involve adding extra circuitry. You have to identify up front the entire block that can be disabled if there's a defect inside, and provide for fully disconnecting power, ground, and signal connections to that block (defects can cause electrical shorts inside the block, so it needs to be electrically isolated). The end result is a defect that's just sitting there, inert, with no power flowing through it to create a hotspot that could cause worse problems to develop over time.

The other thing to contemplate is that even a 10 core M1 Pro has some things disabled, because there's a large amount of invisible yield engineering going on everywhere.

It's all about defect density - the average number of defects per unit area of the wafer. The bigger a block of circuitry, the more likely it will contain a defect.
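To put rough numbers on that, here's a quick sketch using the standard Poisson yield model; the defect density and area figures below are made-up, illustrative values, not TSMC data:

```python
from math import exp

# Illustrative assumptions only -- not real process figures.
defect_density = 0.1   # defects per cm^2 (assumed)
die_area_cm2   = 2.5   # roughly M1 Pro-class die size (assumed)

def p_block_has_defect(block_fraction: float) -> float:
    """Poisson model: P(at least one defect in block) = 1 - exp(-D * A_block)."""
    return 1 - exp(-defect_density * die_area_cm2 * block_fraction)

for name, frac in [("one GPU core", 0.02), ("all GPU cores", 0.2), ("whole die", 1.0)]:
    print(f"{name:>14}: {p_block_has_defect(frac):.1%} chance of a defect")
```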

So what's a big percentage of the floorplan of any modern chip, and therefore problematic from a yield perspective? Memory. Modern high performance SoCs have lots of on-chip memory: system-level cache, CPU L1/L2/L3 caches, GPU tile memory, and so on.

In any even moderately modern process node, every moderately large memory has redundancy built in. Say you set out designing a 128 KiB SRAM array. First you split it into subarrays - let's use an eight-way split as an example, so eight 16 KiB subarrays. Then you add an extra subarray, and the circuitry necessary to assemble a fully functional 128 KiB memory out of eight of the nine subarrays.
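A toy sketch of that 8-of-9 scheme, purely illustrative (real repair logic lives in fuses and muxes, not software):

```python
# Toy model of the 8-of-9 subarray scheme described above: nine physical
# 16 KiB subarrays are built, and after test the eight good ones are
# mapped to the eight logical positions of the 128 KiB memory.

def build_repair_map(subarray_ok: list[bool], logical_needed: int = 8):
    """Map each logical subarray to a known-good physical subarray,
    or report the memory as unrepairable."""
    good = [i for i, ok in enumerate(subarray_ok) if ok]
    if len(good) < logical_needed:
        return None  # more failures than spares: chip-level decision needed
    return {logical: good[logical] for logical in range(logical_needed)}

# Example: physical subarray 3 failed wafer test; the spare (index 8) absorbs it.
print(build_repair_map([True, True, True, False, True, True, True, True, True]))
# -> {0: 0, 1: 1, 2: 2, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8}
```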

People spend a lot of time analyzing the economics of this and yes, it can and does make a ton of sense to waste money (because die area is money) on redundant structures so that you can yield good and fully functional chips from die that had defects on them.

I guarantee you that there are lots of memory arrays in M1 and M2 chips designed this way. It is literally unthinkable that there are none.

I don't believe the 8 core chips are a result of binning. There are so many other parts to these chips that it would be vanishingly improbable that two GPU cores (and only two GPU cores) wouldn't work after production. These chips are manufactured with either 8 or 10 working GPU cores. That's all we know. There will be no difference between the 8 and 10 core versions over their lifespan other than performance.
Nah. The point of 8/10 core GPU chips is yield enhancement. GPU cores are large blocks, therefore they are a risk for the same reason that memory is a risk. In this case, the economics are a little different since whole GPU cores are larger structures than memory subarrays, so they're selling parts with fewer functional GPU cores at a lower price rather than scrapping every part that has so much as a single dead GPU core.

Yes, there will be some "8-core" chips which have 1 or 2 functional GPU cores disabled. Yield engineering includes trying to predict how pricing will influence demand for the different harvested variants. If pricing drives excess consumer demand for the 8-core parts, well, that's OK, you can just convert some of your 10-core output to 8-core and it's all good. But the reverse condition - a glut of 8-core and shortage of 10-core - is bad. You can't fix it by shipping some of the 8-core parts as 10-core, so consumers are going to be unhappy about the delays on 10-core parts.

So Apple works closely with TSMC on yield predictions and engineering, and internally on consumer pricing, trying to make it likely that they will reliably have somewhat more 10-core chips than they need. How much more? Who knows; they're paying someone (or several someones) big bucks to do the math trying to finesse that question. They don't want to generate lots of shortages, but they also don't want to leave money on the table by selling too many 10-core parts as 8-core.
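As a toy illustration of that balancing act, with every number invented:

```python
# Invented numbers, just to show the asymmetry described above:
# you can always turn a fully good die into an "8-core" SKU, never the reverse.

dies_with_10_good_gpu_cores = 900
dies_salvageable_as_8_core  = 80     # defective but harvestable dies
demand_10_core              = 700
demand_8_core               = 250

shortfall_8_core = max(0, demand_8_core - dies_salvageable_as_8_core)
down_binned      = min(shortfall_8_core, dies_with_10_good_gpu_cores - demand_10_core)

print(f"Fully good dies sold as 8-core anyway: {down_binned}")   # 170
print(f"10-core dies left after meeting demand: "
      f"{dies_with_10_good_gpu_cores - demand_10_core - down_binned}")  # 30
```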

Sorry, I should have been clearer. What I was trying to say is that disabling defective GPU (or CPU) units is unlikely to be an effective binning strategy, because there won’t be that many chips with these kinds of defects. Or, to put it differently, it’s very unlikely that many of the binned chips actually have defective GPU cores. You’d really need the stars to align in a very particular way to get two defective GPU cores and nothing else (or two defective CPU cores in two different clusters). That’s why I believe this is revenue optimization: Apple simply disabling cores on otherwise fully functional chips, arbitrarily, to create different configurations at different price points. Note also that all M-series chips ship with their full caches; there is no binning there.
See above re: caches. I 100% guarantee that Apple is using redundant structures in memories to enhance yield. It's the industry norm.

It's illogical to say that the stars need to align. Defects are (mostly) randomly placed, so the key question is: what structures are large, and therefore most likely to contain a defect? GPU cores are not a tiny structure, they're large, and if you're unlucky enough to have a defect hit outside a memory (which are often fully repairable) but inside a GPU, it's nice to have the option to turn that entire GPU core off and still sell the part.

Apple does even more yield harvesting than you see in the Mac product lineup; for example, IIRC several Apple TV models have had A-series chips with fewer enabled CPU or GPU cores than the phone versions.
 

leman

macrumors Core
Oct 14, 2008
19,505
19,638
It's illogical to say that the stars need to align. Defects are (mostly) randomly placed, so the key question is: what structures are large, and therefore most likely to contain a defect? GPU cores are not a tiny structure, they're large, and if you're unlucky enough to have a defect hit outside a memory (which are often fully repairable) but inside a GPU, it's nice to have the option to turn that entire GPU core off and still sell the part.

Oh of course, that goes without saying. What I mean is that I doubt that every single binned chip is binned because it has defective CPU or GPU cores. Some of them will be defective, sure, but I’d expect the majority of chips to be partially disabled to satisfy the demand, not the other way around.
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
In any even moderately modern process node, every moderately large memory has redundancy built in. Say you set out designing a 128 KiB SRAM array. First you split it into subarrays - let's use an eight-way split as an example, so eight 16 KiB subarrays. Then you add an extra subarray, and the circuitry necessary to assemble a fully functional 128 KiB memory out of eight of the nine subarrays.

Usually you have row and column redundancy instead of subarray redundancy. In other words, the selection is at line/row granularity instead of subarray granularity. I have never used, nor seen in practical usage, the scheme you describe.
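For anyone curious what row redundancy looks like in practice, a very rough sketch (illustrative only; real designs do this with fuse-programmed address comparators):

```python
# Toy model of row redundancy: an array has N normal rows plus a few
# spare rows. After test, each failing row address is remapped to a spare.

class RowRepairedArray:
    def __init__(self, rows: int, spares: int, bad_rows: set[int]):
        if len(bad_rows) > spares:
            raise ValueError("more failing rows than spare rows: array unrepairable")
        # Fuse-programmed remap table: bad logical row -> spare row index
        self.remap = {bad: spare for spare, bad in enumerate(sorted(bad_rows))}

    def physical_row(self, logical_row: int):
        """Address decode: failing rows are steered to spares."""
        if logical_row in self.remap:
            return ("spare", self.remap[logical_row])
        return ("normal", logical_row)

arr = RowRepairedArray(rows=512, spares=4, bad_rows={17, 300})
print(arr.physical_row(17), arr.physical_row(42))  # ('spare', 0) ('normal', 42)
```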
 
Last edited:

jav6454

macrumors Core
Nov 14, 2007
22,303
6,263
1 Geostationary Tower Plaza
Hey, I’m trying to parse the seemingly small differences between M1 Pro 8- and 10-core laptops. I’m not sure there’s going to be a huge difference in performance for my uses. But I did wonder if the 8-core binned chips might be slightly less reliable, given that they are already ‘defective’. I plan on using this computer for a very, very long time, so maybe some discrepancies might show over its lifetime. Have you guys got any thoughts on this? I’d love to hear them if you do.
Thanks in advance,
B
All chips are reliable unless there is a glaring manufacturing defect. However, some chips will always differ in temperature and performance from one another due to the silicon lottery.
 
  • Like
Reactions: blinkie

blinkie

macrumors 6502
Original poster
Sep 7, 2007
285
51
Guys that was great. Thank you for taking the time to comment. I found it very informative. Regardless of the chip I get (probably the 10) I’m looking forward to getting on the M ladder. It’ll be a whole new world of performance compared to my 2013 comp.

B
 
  • Love
Reactions: Gudi

mr_roboto

macrumors 6502a
Sep 30, 2020
855
1,864
Usually you have row and column redundancy instead of subarray redundancy. In other words, the selection is at line/row granularity instead of subarray granularity. I have never used, nor seen in practical usage, the scheme you describe.
I defer to your experience; I was thinking about caches when I wrote that. I've heard many times that cache designers use cache ways to provide redundancy. E.g. if your cache is 8-way set associative, provide a redundant 9th way.
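Same idea sketched at way granularity, purely as an illustration (not how any particular cache actually implements it):

```python
# Toy model: a set-associative cache built with 9 physical ways, of which
# any 8 are used. A fuse-programmed selection hides at most one defective way.

def select_ways(defective_way=None, physical_ways: int = 9, logical_ways: int = 8):
    """Pick the logical->physical way mapping, skipping the defective way."""
    usable = [w for w in range(physical_ways) if w != defective_way]
    assert len(usable) >= logical_ways, "more than one bad way: cannot repair"
    return usable[:logical_ways]

print(select_ways(defective_way=None))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(select_ways(defective_way=3))     # [0, 1, 2, 4, 5, 6, 7, 8]
```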
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
Hey, I’m trying to parse the seemingly small differences between M1 Pro 8- and 10-core laptops. I’m not sure there’s going to be a huge difference in performance for my uses. But I did wonder if the 8-core binned chips might be slightly less reliable, given that they are already ‘defective’. I plan on using this computer for a very, very long time, so maybe some discrepancies might show over its lifetime. Have you guys got any thoughts on this? I’d love to hear them if you do.
Thanks in advance,
B
QA will make sure it's reliable.
 

jikmgeo

macrumors newbie
Jul 25, 2012
3
0
It's artificial binning with purposefully disabled cores, in all likelihood, to fill the product stack. As mentioned above, it's extremely unlikely two GPU cores would be defective and none of the CPUs are affected. In addition, true binning would result in limited supplies of Apple's most popular base model MacBook Pro. And we never see that happen. Combined with a mature N5 process, it doesn't make sense for binning to occur except to satisfy marketing requirements.
Hey JPack, with the release of the M3 series, as well as the alleged binning on the M3 Max chips, do you still believe this statement holds true for the 2023 MacBook Pros? Asking largely because these chips use the 3nm process, which is newer now than 5nm was at the time.
 

JPack

macrumors G5
Mar 27, 2017
13,509
26,141
Hey JPack, with the release of the M3 series, as well as the alleged binning on the M3 Max chips, do you still believe this statement holds true for the 2023 MacBook Pros? Asking largely because these chips use the 3nm process, which is newer now than 5nm was at the time.

The current M3 Pro/Max options look far more natural as a result of true binning than before. There are more appreciable differences in CPU and GPU core counts than in the M1 series.
 

Chuckeee

macrumors 68040
Aug 18, 2023
3,044
8,690
Southern California
Back to the original question. Yes, binned chips are reliable.

Based on the discussion above, the lower-performance chips may or may not actually be binned. If they haven't been binned, then using a lower-performance chip has no impact on reliability. If the chip was actually binned because of a fault during the fabrication process, those faults do not grow, so a binned chip won't have lower reliability either.
 
Last edited:
  • Like
Reactions: picpicmac

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
Seems to be an accepted phrase applied to the 8-core chips. You’re the first person I’ve heard reject it.
Of course binned chips are defective! You've never heard this fairy story before because it's all made up on the fly. There is no need for artificial binning; defects on chips happen all the time. The GPU covers a large area, and one or two cores not working correctly is a common type of failure. Furthermore, the GPU is not as important for general performance as other areas of the chip. So these slightly defective chips get reused for entry-level machines. If two of four P-cores were defective, the resulting binned chip would lose half its CPU performance, so those kinds of chips go directly into the trash.

As for the risk of buying a binned chip: the rest of the chip passed all its tests with flying colors and isn't any better or worse than an un-binned chip that passed all its tests.
 

3Rock

macrumors 6502a
Aug 25, 2021
733
798
Posted this a day or so ago about binning. Check the video about 15 minutes in. They are all binned.

 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
For as far back as I can remember, binned chips have been a standard in chip fabbing.

Back in the day it was the difference between an 80MHz processor and a 100MHz processor.

If the chip can’t meet the requirements it gets binned out.
 

Chuckeee

macrumors 68040
Aug 18, 2023
3,044
8,690
Southern California
For as far back as I can remember, binned chips have been a standard in chip fabbing.

Back in the day it was the difference between an 80MHz processor and a 100MHz processor.

If the chip can’t meet the requirements it gets binned out.
Binning electronic components is older than that.

Resistors within +/- 20%: No 4th stripe
Resistors within +/- 10%: Silver 4th stripe
Resistors within +/- 5%: Gold 4th stripe
Etc.

All based on binning. I remember a science project where I measured 200 silver-striped resistors. They all had resistance accuracy between 5% and 10%; none were closer than 5%.
 
Last edited:

MacCheetah3

macrumors 68020
Nov 14, 2003
2,274
1,210
Central MN
You know what? I’m glad I asked this question. I wonder why the phrase is so widely used in reviews. Maybe it sounds cool.
“News” outlets and social media “stars” need their buzzwords. :)

Anyway… I’m certain Apple allows TSMC (and others) to do some SoC/SiP binning. However, there's little evidence to believe it's nearly as common as with "PC" components.

A blatant (and interesting) recent example:


There are also the F model Intel chips, as another example. (See below.)

Would it be possible to enable these cores? Has anyone tried it?
While I cannot find concrete evidence, there are scattered believable observations supporting the idea that manufacturers take steps to physically and fully disable the defective sub-components. For example:

 