
Probably the most profound memory I have of miniature CRTs is of the Friday a classmate brought their Watchman to school.

It wasn't exactly a typical Friday where we were. Our school let classes out for a special assembly, and some of our teachers let us go outside. A few of us, me included, gathered around our classmate's Watchman to tune in to live national TV close to the lunch hour, because the president was giving a speech three days after the Challenger was destroyed. The jets we saw flying in ceremonial formation on that little screen passed directly over our heads about seven seconds later.
 
"Slipped the surly bonds of Earth, to touch the face of God".

By comparison, the world we live in now seems to be a ludicrous soap opera with endless unbelievable plot twists.

I was never fond of him (I remember being one of three kids in my grade two class of, like, thirty who, in a mock election, voted for Carter over him in 1980, and I couldn't figure out what his allure was). Being where I was when he said the above quote, I didn't care much for his poetic licence either. When I got older, my disdain for him only deepened, especially over how his administration officially ignored the HIV plague for five years.

The world is always a ludicrous soap opera. What’s changed is tempo and volume: this online medium makes it so much quicker and louder.
 
I don't deny your experience that the devices you fixed worked for quite a while, and that reballing with leaded solder fixes lots of other hardware from this timeframe (e.g., the Xbox).
I believe NVIDIA's explanation for the 8600M's problems is an issue with the underfill layer that bonds the silicon die to the interposer board (which holds the BGA balls on its other side), and that the revised versions of these chips swapped the underfill formula for something more resilient.

(I find it unlikely that NVIDIA would have focused this hard on their flip-chip methodology if the problem were solely related to RoHS-compliant solder balls.)

My guess is that the reballing process helps the underfill and/or flip-chip bonding in some way due to the higher temperatures involved, but that the root problem is probably not solely due to the BGA solder balls themselves.
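For anyone wondering why the underfill would matter so much, a hedged back-of-envelope helps: the silicon die and the organic substrate expand at very different rates, and without underfill the outermost bumps absorb that mismatch as shear. Every number below is a generic "typical flip-chip" value I'm assuming, not a spec for these particular GPU packages.

```python
# Back-of-envelope: shear on a corner flip-chip bump from CTE mismatch.
# All constants are assumed textbook-typical values, not specs for
# these particular GPU dies.

CTE_SILICON = 2.6e-6     # 1/degC, thermal expansion of a silicon die
CTE_SUBSTRATE = 17e-6    # 1/degC, typical organic package substrate
DIE_HALF_WIDTH = 5e-3    # m, die center to a corner bump
BUMP_HEIGHT = 75e-6      # m, assumed bump standoff height
DELTA_T = 60.0           # degC, plausible idle-to-load temperature swing

# In-plane displacement mismatch at the outermost bump per thermal cycle:
mismatch_m = (CTE_SUBSTRATE - CTE_SILICON) * DIE_HALF_WIDTH * DELTA_T
shear_strain = mismatch_m / BUMP_HEIGHT

print(f"corner-bump displacement mismatch: {mismatch_m * 1e6:.1f} um")
print(f"bump shear strain without underfill: {shear_strain:.1%}")
# About 4.3 um and about 5.8% per cycle. Underfill exists to spread that
# shear across the whole die face instead of concentrating it in the
# bumps, which is why a weak underfill formula would matter so much.
```

If those assumed numbers are anywhere near reality, a change to the underfill formula plausibly moves the failure point more than the ball alloy ever would.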

EDIT: Interestingly, SemiAccurate's deep dive says that part of the problem could also be the solder balls on the flip-chip die itself (often called "bumps") using a high-lead formula rather than a eutectic one, making them more fragile under thermal stress.
Were you guys swapping the balls on the silicon GPU die itself, or on the green PCB substrate (i.e., die-to-interposer or interposer-to-motherboard layer)? I’ve done a bit of work at the latter scale and cannot imagine the steadiness of hand that would be required for the former...

I've never needed to change the solder balls on a die. And as I said, if it were a problem of this type, all chips (not just the GeForce 8xxx) would have the same problem.
 
Interesting - I thought that the 8xxx series was especially prone to having issues, to the point that NVIDIA released a revised version with different underfill for the die. Not sure why they would have bothered doing that if the problem were indeed the solder balls connecting to the motherboard?

...
If it's a problem with the chips, how do you explain ALL chips from ALL brands having problems from 2005 to mid-2012? From the GeForce 6100 to the GeForce GT 3xxM. From the ATI Radeon X300 to the ATI Radeon 58xx. Even Intel chips like the GMA 950, X3100, and GMA 4500MHD. All massively defective.
...

I'd never heard of universal GPU-specific solder ball defects in this timeframe, so I looked around a bit and found this video from acerbic repairman Louis Rossmann:
His thesis, from having worked on a wide range of hardware, is that premature GPU deaths are mostly due to the flip-chip die bumps warping off the interposer. He says this is why heating up to 140 °C (a la @bobesch) appears to work: it warps the die back so that it contacts the interposer again for a while, until it heats up and fails again. (If the problem were instead with the solder balls themselves, heating to 140 °C shouldn't do anything at all, since, as you've said, the melting point of solder is much higher.)
He of course recommends full chip replacement when parts can be found, though he also expresses the regrettable opinion that legacy Macs aren't worth fixing...

So I think it's possible that a wide range of GPUs would exhibit flip-chip bump problems from being run in thermally constrained environments and overheating. As Rossmann says, reballing can help temporarily, and the issue may not always return in well-ventilated or low-demand environments. Probably worth trying in some cases, especially when replacement chips are prohibitively expensive.
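As a footnote to the 140° logic above, here's a quick numeric sanity check. It's only a sketch using published ballpark alloy figures; which exact alloys these GPUs used is an assumption on my part.

```python
# Sanity check: can a 140 degC "oven bake" actually reflow any solder?
# Melting points are ballpark published figures; which alloys these
# particular GPUs used is an assumption on my part.

BAKE_TEMP_C = 140  # the bake temperature discussed above

melting_points_c = {
    "Sn63/Pb37 eutectic (classic leaded solder)": 183,
    "SAC305 lead-free (typical RoHS BGA balls)": 217,
    "Pb90/Sn10 high-lead (a common flip-chip bump alloy)": 300,
}

for alloy, melt_c in melting_points_c.items():
    print(f"{alloy}: ~{melt_c} C, {melt_c - BAKE_TEMP_C} C above the bake")
# All three stay solid with 40+ degC of margin, so a 140 degC bake cannot
# be reflowing joints; any relief it gives has to be mechanical (the warp
# and re-seat effect), which fits Rossmann's thesis.
```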
 

All manufacturers release various revisions of their chips. If it's a problem with the solder balls on the die, then it isn't a problem unique to the 8xxx series, not least because the 6xxx series is easily 80% more problematic than the 8xxx series. I've seen many videos of this type saying the same thing: technicians swapping chips for chips with the same serial number, claiming they are newer revisions because the compound around the die is lighter in color, or something like that. In my experience, most replacement chips are the exact same chips already on these motherboards; the difference in color is just use changing their color.

Desktop graphics cards are also problematic, and even so, most you'll find out there will still have their original chip (the ones that failed most likely did so because of capacitors).

Troubled MacBooks are just a tiny fraction of an entire world of laptops with failing GPUs. The king, top of the list, unsurpassed, will always be the HP DV6000; even after a recall they continued to have problems (not unlike Apple's recall of their MacBooks).

I believe that in 2007 NVIDIA manufactured these chips in huge volumes, and they were available on the aftermarket for many years (identical to the problematic OEM chips); in the end, I think the swaps produced a placebo effect in people.

I've worked at an authorized service center, and they did replace boards with "new" ones: there was a pile of defective boards that the technicians simply remanufactured, and after passing a quick test each one went into a recalled laptop to replace its defective board (which then went onto that same pile to be remanufactured). I don't know if this was legal, or if they were actually following the policies of the brands they represented; it just didn't feel right to me.
 

In short, NVIDIA acknowledged their design flaw with a chip revision; AMD, when faced with the same, didn’t.

And that's why so many 2011 dGPU MacBook Pros gather dust, sit in recycling bins, and get destroyed for e-waste reclamation now instead of being used. Which is a crying shame.
 
NVIDIA didn't do it because it was a good company; it did it because it was being crushed with lawsuits from big OEMs. It had been getting massacred since the GeForce 6xxx, and its chips were the main ones found in laptops (so there were more people wanting NVIDIA's head than AMD's). In NVIDIA's case, people organized to file large class-action suits against the OEMs; in AMD's case, people just sat back and decided to buy a new laptop.
 

Few companies will ever be proactive about redressing a design flaw of their own that causes units to fail under typical use. Enter AMD.

Whether through lawsuits or overt contractual coercion by a major buyer (like an Apple, HP, or Asus), such a company, as with NVIDIA, would need to do something about it to stave off potentially seismic legal penalties later. And that was basically what happened here.
 
I remember that chaotic time. I avoided laptops with NVIDIA graphics for a while and suffered with the GMA 950 and Intel X3100 instead. It was a total pain to watch GTA IV run at 6 fps with disappearing textures on the X3100. In fact, all games ran poorly on that thing; it could manage GTA SA at 30 fps, but not at max settings and without any anti-aliasing.

Then I got excited about the Intel 4500, but it was still a flop. Next I tried to put a little faith in the integrated graphics of the Core series: the Intel HD 3000 couldn't get more than 20 fps in Watch Dogs (on a 3rd-gen Core i7), and I only saw a real improvement with the 7th-generation Intel HD 630.

I thought it was a miniaturization problem, or that decent integrated graphics would just be very hard to do, but ATI/AMD always proved it was pure Intel slacking: the Radeon 3200 and Radeon 4200 were simple but capable, for example. NVIDIA outdid itself with the 9400 ION, a very low-power graphics solution (I was able to run GTA IV on an Atom N270!). The only respect I have for Intel's graphics solutions is that, of them all, they are last on the list when it comes to heating problems and solder balls.
 
Just posting this here for relevance. I found a July 1999 issue of Macworld that speculates on the release date of the G4. They did a fun render of a Grape G4 in Blue-and-White styling as a prediction.

From a continuity perspective, it would've been cool to see the G4 printed on the side like the Blue and White. But Apple's design language was moving too fast for that.
[Attached: scan of the Macworld Grape G4 render]
 