Neither is true. The GPUs were not defective; the failures came down to an internal design flaw.
The 2011 (and the 2012, 2010, and late-2009) models shipped with WD Black or Seagate HDDs that ran extremely hot; we're talking hell hot in there. That heat cooked the GPU while the machine was on, and everything cooled back down when it was off. After enough of those hot/cool cycles, the solder joints under the GPU would crack, which is why baking the GPU usually worked: it re-flowed the solder. The HDDs themselves often failed well before the three years were up, too. Apple eventually offered an extended warranty, but you had to know about it.
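If you're curious how hot one of those stock drives actually runs before you pull it, the drive's own SMART temperature attribute will tell you. This is just a rough sketch, assuming smartmontools is installed (e.g. `brew install smartmontools`) and that the internal drive is /dev/disk0; run `diskutil list` first to confirm the device name.

```python
#!/usr/bin/env python3
"""Rough check of a drive's SMART-reported temperature (sketch only)."""
import subprocess

DISK = "/dev/disk0"  # adjust to your internal drive; see `diskutil list`

def drive_temperature_c(disk: str) -> int | None:
    """Return the SMART temperature in Celsius, or None if not reported."""
    result = subprocess.run(
        ["smartctl", "-A", disk], capture_output=True, text=True
    )
    for line in result.stdout.splitlines():
        # Spinning drives usually report attribute 194 (Temperature_Celsius);
        # some use 190 (Airflow_Temperature_Cel) instead.
        if "Temperature_Celsius" in line or "Airflow_Temperature" in line:
            fields = line.split()
            # RAW_VALUE is the 10th column of smartctl's attribute table.
            if len(fields) >= 10 and fields[9].isdigit():
                return int(fields[9])
    return None

if __name__ == "__main__":
    temp = drive_temperature_c(DISK)
    if temp is None:
        print(f"No SMART temperature reported for {DISK}")
    else:
        print(f"{DISK} is reporting {temp} °C")
```

You may need to run it with sudo. Note this is only for satisfying your curiosity; the iMac's fan control doesn't read this value, which is why the in-line sensor mentioned below matters in the first place.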
A few of us figured out that heat was the real culprit, and so did Apple: the replacement HDDs they used ran a lot cooler and had built-in temperature sensors. When SSDs became affordable and the OWC temp sensors hit the market, many of us started replacing those WD and Seagate heat pumps with 128GB SSDs on adapter brackets* (which also improves the airflow), adding the OWC sensor, and swapping the backup batteries for CR2032s. Student Macs that got wiped clean every year never needed anything bigger than 128GB.
There's nothing wrong with the BR2032 other than cost, but you don't need its better heat tolerance if you're cooling down the insides. Electrically, the BR and the CR are the same 3 V battery.
As I've mentioned, I upgraded well over 300. I can do a 27" 2009–11 in 15 minutes one-handed (not a boast; my left arm became crippled in 2009). How many of those suffered GPU failure? Not one. Many teachers bought those machines after the district retired them, and I still hear from them now and then, usually to put a decent-sized SSD in or to break the news that High Sierra is the last macOS for everything except the 2012.
*Any adapter bracket that looks like this works fine. Do you really need one? Well, no; double-stick foam tape works, but a bracket does improve the air circulation, and if you ever have to go back in, swapping the drive out is even faster. I do not recommend leaving the old HDD in there, even disconnected. Do the job right.
[Attachment 1956111: photo of the adapter bracket referenced above]