
rp7777 · macrumors newbie · Original poster · Jun 30, 2020
I have an issue where PCIe Boost A does not appear to be supplying power.

My setup:

Mac Pro 2012
2 x 2.93 GHz 6 Core Intel Xeon
32 GB Memory
MSI Radeon RX 580 Armor MK2 8G

I was originally powering the GPU via a dual Mini 6-pin PCIe to single 8-pin cable (i.e. the GPU has a single 8-pin socket and I purchased the relevant cable to connect both logic board PCIe Boost connectors to a single 8-pin connector).

With this setup, using iStat as a monitor, I could see both PCIe Boost connectors reporting (varying) voltage, e.g.:

PCIe Boost A, 12V 12.16 V
PCIe Boost B, 12V 12.18 V

However, in the power draw (current) section I only ever saw something like the following:

PCIe Boost A 12V 0.00 A
PCIe Boost B 12V 1.26 A

Even when pushing the GPU, I would see Boost B current climb, but Boost A never moved from 0.
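To make the readings concrete, the per-connector wattage is just V × I. This small sketch uses the values quoted above (the connector labels are just for illustration):

```python
# Per-connector power from the iStat readings quoted above (P = V * I).
# Voltage/current values are the ones reported in the post.
readings = {
    "PCIe Boost A": (12.16, 0.00),  # volts, amps
    "PCIe Boost B": (12.18, 1.26),
}

for name, (volts, amps) in readings.items():
    watts = volts * amps
    print(f"{name}: {watts:.2f} W")
```

Boost A contributes 0 W even though voltage is present, which matches what the current column shows: voltage available, no draw.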

So I thought that maybe the GPU never draws enough current to "trigger" a draw from Boost A. After some research (mainly here) I settled on installing an EVGA PowerLink, with the understanding that this setup (both PCIe Boost connectors into the PowerLink) meant I "should" see a roughly equivalent power draw from each PCIe Boost connector.

Installed the PowerLink today.

I am not seeing the expected current draw on each PCIe Boost connector. Same as before: both report voltage available, but only Boost B reports any current draw.

Wondering what is (or is not :) ) going on here.

Do I have an issue with PCIe Boost A?

Is there something else I need to do to balance the power draw across the two Boost connectors?

Is there some other way (other than iStat) to check/confirm whether there is a problem?

Anybody else who has seen this/resolved etc.?

Any and all help is appreciated.
 
Pull the Mini 6-pin from connector B and just boot the machine; don't load the GPU. If the GPU is not working, Boost A is dead.

Also, you can swap both Mini 6-pin cables; if Boost B is now the one without current, it's a faulty cable.
 
Pull the Mini 6-pin from connector B and just boot the machine; don't load the GPU. If the GPU is not working, Boost A is dead.

Also, you can swap both Mini 6-pin cables; if Boost B is now the one without current, it's a faulty cable.
Good call. Thought I had done this but obviously not.

I removed the Boost B cable and restarted. The MP started OK, the GPU was active and the monitor was showing images.

Fired up GpuTest - as soon as I hit the start button, the MP shut down.

So it appears that Boost A is dead. The MP provides enough power from the PCIe slot for general use, but as soon as there is any load on the GPU with no "external" power, the GPU's attempt to draw more from the PCIe slot causes a shutdown.

Connected Boost B again and tested: all OK.

I have been running like this (additional power from Boost B only) for some time now (probably 2 years?), previously via the dual Mini 6-pin PCIe to single 8-pin cable and now with the two Mini 6-pin PCIe cables into the PowerLink.

Presumably the single Mini 6-pin PCIe Boost connector (Boost B) is supplying enough power for my GPU, as I have not had a shutdown with anything I run (not really a gamer, but I occasionally play Borderlands, Obduction, etc.).

According to MSI, the GPU is rated at 185 W: 75 W from the PCIe slot and 150 W (maximum possible) from a single Mini PCIe Boost power connection (Boost B in my case). Based on those figures there is some headroom left on the Boost B connection.
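The budget being described works out as follows. This is a rough sketch using only the figures quoted in the post (rated board power, assumed slot contribution, assumed per-connector limit), not measurements:

```python
# Rough power budget using the figures quoted above (assumptions, not measurements):
card_tdp_w = 185       # MSI's rated board power for the RX 580
slot_w = 75            # PCIe slot contribution assumed in the post
boost_limit_w = 150    # per-connector limit assumed in the post

boost_b_draw_w = card_tdp_w - slot_w          # what Boost B must carry alone
headroom_w = boost_limit_w - boost_b_draw_w   # remaining margin on Boost B

print(f"Boost B draw: {boost_b_draw_w} W, headroom: {headroom_w} W")
# → Boost B draw: 110 W, headroom: 40 W
```

Note this arithmetic assumes the card really pulls the full 75 W from the slot and that the connector limit really is 150 W; both assumptions are questioned later in the thread.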

I will leave it connected to the PowerLink (not really required now that only one Boost connector is in use).

Annoyingly, the Boost A connection appears to have never worked. I bought this MP (a 2010) about 4-5 years ago and it died (logic board failure) about 6 months later. A new (2012) logic board was installed (I am in Australia; the repair place had to order from the U.S.) and it appears that that "new" board had a broken Boost A from the start. I just did not pick up on it at the time, and now it is too late to do anything about it.

The other option would be the Pixlas mod, but I don't think I need to go down that path if the current (pun not intended) Boost B supply is sufficient.

Any thoughts on this or if I am missing something important?

Thanks for the help.
 
According to MSI, the GPU is rated at 185 W: 75 W from the PCIe slot and 150 W (maximum possible) from a single Mini PCIe Boost power connection (Boost B in my case). Based on those figures there is some headroom left on the Boost B connection.
Your power draw figures are wrong, starting with the fact that 185W is the total power draw BEFORE AMD PowerPlay.

The last card that drew 75W from the slot was the AMD RX 480 reference design, and days after the cards started to get into the hands of customers, photos of motherboards with melted/burned PCIe slots started to show up on hardware and GPU forums. AMD had to modify the drivers to limit the slot power draw days after the release and urgently modify the design of the power plane for the next revision.

After that stupid fiasco of melting PCIe slots all over the internet, every decent review site started measuring the PCIe slot power draw independently of the total GPU power draw, and no high-end card ever drew over ~50W from the slot again.

Also, the maximum non-continuous power draw for each of the backplane PCIe Boost connectors is ~120W, and the SMC shuts it down beyond that; it's a design rated for 75W.

If you run FurMark, the SMC will shut down your Mac Pro.
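The overcurrent condition being described can be sketched like this. This is purely illustrative: the real SMC/H8S firmware logic is not public, and the threshold and function names here are assumptions based on the ~120W figure above (at 12V that works out to ~10A):

```python
# Purely illustrative model of the overcurrent shutdown described above;
# the real SMC/H8S firmware logic is not public. 120 W at 12 V is ~10 A.
TRIP_WATTS = 120.0
RAIL_VOLTS = 12.0
TRIP_AMPS = TRIP_WATTS / RAIL_VOLTS  # 10.0 A

def smc_would_trip(measured_amps: float) -> bool:
    """Hypothetical check: shut down if a Boost connector exceeds the trip current."""
    return measured_amps > TRIP_AMPS

print(smc_would_trip(1.26))   # modest draw seen earlier in the thread → False
print(smc_would_trip(11.0))   # FurMark-style overload → True
```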
 
Also, the maximum non-continuous power draw for each of the backplane PCIe Boost connectors is ~120W, and the SMC shuts it down beyond that; it's a design rated for 75W.

If you run FurMark, the SMC will shut down your Mac Pro.
Thanks for the reply and input/information.

Regarding the 150/120W figures for the Boost connectors: I got the 150W figure from reviewing threads (mostly here) about providing additional power to GPUs; the 150W was quoted in a thread about using a PowerLink. You have more experience with cMPs, so I take your 120W figure as the baseline.

Regarding FurMark: I have run GpuTest selecting FurMark (Benchmark and Stress Test) and it appears to run OK; the cMP does not shut down.

Is there a period of time it should run before shutting down, or would you expect it to be immediate? (i.e. does FurMark try to run the GPU at max capacity, so the GPU demands max power immediately and, if this overloads the power input, the SMC shuts down; or does FurMark ramp up until it hits max power and triggers the SMC shutdown?)

I have been using the RX 580 for a couple of years now, first with the dual Mini 6-pin PCIe to 8-pin PCIe cable (obviously with only one PCIe Boost connector actually delivering power) and now with the two Mini 6-pin PCIe cables into the PowerLink, with the PowerLink connected to the GPU via an 8-pin to 8-pin cable (I cannot mount the PowerLink directly on the GPU as the connector on AMD cards is reversed). Obviously, again, only one Mini 6-pin PCIe connector is actually delivering power in this setup.

So (unless I am mistaken or missing something), it appears that the card I have, at least, does not require more power than what is provided by the PCIe slot and a single Mini 6-pin PCIe Boost connector.

Does that conclusion seem OK, or am I missing something (possibly risking something like burning out power traces on the logic board)?

Appreciate your help and insights.
 
Thanks for the reply and input/information.

Regarding the 150/120W figures for the Boost connectors: I got the 150W figure from reviewing threads (mostly here) about providing additional power to GPUs; the 150W was quoted in a thread about using a PowerLink. You have more experience with cMPs, so I take your 120W figure as the baseline.
Apple designed it for 75W with a relatively high tolerance, probably only for spikes; the receptacle and the connector itself are rated for 120W. The MacPro4,1 and MacPro5,1 are 2008-ish designs, from when GPUs had a skimpy power draw compared to modern GPUs.

The early-2009 backplane I tested with an Agilent 6060B DC load had the emergency SMC shutdown at 124W, while with a mid-2010 backplane the shutdown happened at 129W. Several people over the years have tested what each AUX Boost connector can provide under load, with results around 125W.

The power traces on the PCB and the fuses themselves are not designed for continuous usage at ~120W; this will burn the traces over time.
Regarding FurMark: I have run GpuTest selecting FurMark (Benchmark and Stress Test) and it appears to run OK; the cMP does not shut down.

Is there a period of time it should run before shutting down, or would you expect it to be immediate? (i.e. does FurMark try to run the GPU at max capacity, so the GPU demands max power immediately and, if this overloads the power input, the SMC shuts down; or does FurMark ramp up until it hits max power and triggers the SMC shutdown?)

I've noticed that there are two distinct modes of SMC shutdown: instantaneous spike and continuous load. Continuous load also has a temperature component; the SMC probably takes temperature into consideration, since I had several shutdowns in the summer and almost none in the winter with exactly the same config. That got me thinking about how the H8S firmware was programmed, and I've started to test with the 6060B load, but without the firmware itself it is difficult to see exactly what Apple did.
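The two shutdown modes described above could be sketched as follows. This is speculative: the firmware behaviour is unknown, and every threshold and derating number here is made up purely for illustration:

```python
# Speculative sketch of the two shutdown modes described above: an instantaneous
# spike trip plus a slower continuous-load trip whose threshold tightens as
# ambient temperature rises. All numbers are invented for illustration.
SPIKE_TRIP_W = 150.0           # hypothetical instantaneous limit
BASE_SUSTAINED_TRIP_W = 125.0  # hypothetical continuous limit at 20 °C
DERATE_W_PER_C = 0.5           # hypothetical derating per °C above 20 °C

def sustained_trip_w(ambient_c: float) -> float:
    """Continuous-load limit, lowered as ambient temperature rises."""
    return BASE_SUSTAINED_TRIP_W - DERATE_W_PER_C * max(0.0, ambient_c - 20.0)

def smc_shutdown(power_w: float, sustained: bool, ambient_c: float) -> bool:
    if not sustained:
        return power_w > SPIKE_TRIP_W
    return power_w > sustained_trip_w(ambient_c)

# The same 120 W continuous load: fine in winter, trips in a hot summer room.
print(smc_shutdown(120.0, sustained=True, ambient_c=10.0))  # False
print(smc_shutdown(120.0, sustained=True, ambient_c=35.0))  # True
```

A model like this would reproduce the seasonal pattern described (shutdowns in summer, almost none in winter) without any change in the load itself.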

I have been using the RX 580 for a couple of years now, first with the dual Mini 6-pin PCIe to 8-pin PCIe cable (obviously with only one PCIe Boost connector actually delivering power) and now with the two Mini 6-pin PCIe cables into the PowerLink, with the PowerLink connected to the GPU via an 8-pin to 8-pin cable (I cannot mount the PowerLink directly on the GPU as the connector on AMD cards is reversed). Obviously, again, only one Mini 6-pin PCIe connector is actually delivering power in this setup.

So (unless I am mistaken or missing something), it appears that the card I have, at least, does not require more power than what is provided by the PCIe slot and a single Mini 6-pin PCIe Boost connector.

Better designed GPU power rails, with fewer spikes, also work better within the SMC envelope. The SMC does the emergency shutdown when the sensors detect overcurrent; lower temperatures and the tolerance of the sensors affect when the SMC does it, so it's not exactly the same for each backplane. Ambient temperature also greatly affects resistance and load.

Maybe your card has PowerPlay tables disabled, your card could have a good power rail design (I never tested your specific card, so I can't say anything about it), or your backplane has a greater tolerance than most.

Remember that you are way over the design specification and your PCB traces can turn to a crisp over time.

Does that conclusion seem OK, or am I missing something (possibly risking something like burning out power traces on the logic board)?

Appreciate your help and insights.

Even with normal GPU prices, I wouldn't do what you are doing except for a short test. With the insanity going on right now with GPU prices, I would never even try.

Anyway, if it works for you…
 