
matthewpomar

macrumors member
Original poster
Oct 27, 2010
78
12
Hi,

I am running an RTX 2070 Super in my classic Mac Pro. I have it connected to the two AUX power connectors on the main board via a 6-pin and an 8-pin cable.

It's my understanding that the RTX 2070 Super Founders Edition has a TDP of 215W. As I have it connected, I should be able to provide 225W to the card. However, every now and then my GPU will cause my Mac Pro to just turn off. This only happens while playing a graphically intense game, like the new Modern Warfare.

I do remember there was a lot of hot air blowing out of the Mac/GPU around the time it last shut down due to power draw.

How can I make sure that the card does not use more than its rated 215W and cause my Mac to shut down?
 
Last edited:

tsialex

Contributor
Jun 13, 2016
13,455
13,602
Hi,

I am running an RTX 2070 Super in my classic Mac Pro. I have it connected to the two AUX power connectors on the main board via a 6-pin and an 8-pin cable.

It's my understanding that the RTX 2070 Super Founders Edition has a TDP of 215W. As I have it connected, I should be able to provide 225W to the card. However, every now and then my GPU will cause my Mac Pro to just turn off. This only happens while playing a graphically intense game, like the new Modern Warfare.

I do remember there was a lot of hot air blowing out of the Mac/GPU around the time it last shut down due to power draw.

How can I make sure that the card does not use more than its rated 215W and cause my Mac to shut down?
Why are you sure that you can provide 225W to the card?

It's extremely hard to find a GPU that draws 75W from the PCIe slot; the only known example is the first edition of the reference RX 480, and that design was later heavily modified not to do so because it was burning PCIe slots.

The majority of GPUs draw around 25 to 40W from the PCIe slot and take most of the power they need from the PCIe power connectors. Btw, GPUs don't draw the same amount from each PCIe power connector; most cards connect each power connector to a different internal power circuit and draw differently from each one. This is not a problem with PCs, but it is with a MP4,1 or MP5,1.

PC GPUs draw power in an unbalanced way; one way to make the Mac Pro backplane compatible with these GPUs is the eVGA PowerLink.
 
  • Like
Reactions: h9826790

matthewpomar

macrumors member
Original poster
Oct 27, 2010
78
12
Why are you sure that you can provide 225W to the card?

It's extremely hard to find a GPU that draws 75W from the PCIe slot; the only known example is the first edition of the reference RX 480, and that design was later heavily modified not to do so. The majority of GPUs draw around 25 to 40W from the PCIe slot and take most of the power they need from the PCIe power connectors. Btw, GPUs don't draw the same amount from each PCIe power connector; most cards connect each power connector to a different internal power circuit and draw differently from each one. This is not a problem with PCs, but it is with a MP4,1 or MP5,1.

Hi,

I'm not sure of anything :)

I've read elsewhere that the RTX 2070 Super should work in the Mac Pro, based on the theory that the PCIe slot and the two AUX connectors will provide the needed 225W.

Do you know if I will be able to use the RTX 2070 Super in the Mac Pro or is the card going to be too demanding? Are there any tweaks I can make to the config or connectors that I use to get the needed power?

Thank you.
 

tsialex

Contributor
Jun 13, 2016
13,455
13,602
225W is just theoretical: 75W from the PCIe slot, 75W from PCIe AUX A and 75W from PCIe AUX B. In real life you can't feed 225W to a GPU unless it draws around 75W from the PCIe slot, a scenario highly unlikely to ever happen.

While it is still out of spec and uses more power from PCIe AUX A and B than what Apple intended with the backplane design, with an eVGA PowerLink you can probably make your GPU work without shutting down your Mac Pro when the GPU is under load.

PC GPUs draw power in a very unbalanced way; this is not a problem with PCs, since the GPU is directly connected to the PSU, but it is a serious problem with the Mac Pro. One way to make the Mac Pro backplane compatible with these GPUs is the eVGA PowerLink. The PowerLink balances the power draw between the two mini-PCIe connectors, making both share the load, and usually makes GPUs that draw in the low 2xxW range work fine with a Mac Pro 4,1/5,1. Another way is Pixla's mod.
 
Last edited:
  • Like
Reactions: matthewpomar

matthewpomar

macrumors member
Original poster
Oct 27, 2010
78
12
225W is just theoretical: 75W from the PCIe slot, 75W from PCIe AUX A and 75W from PCIe AUX B. In real life you can't feed 225W to a GPU unless it draws around 75W from the PCIe slot, a scenario highly unlikely to ever happen.

While it is still out of spec and uses more power from PCIe AUX A and B than what Apple intended with the backplane design, with an eVGA PowerLink you can probably make your GPU work without shutting down your Mac Pro when the GPU is under load.

PC GPUs draw power in a very unbalanced way; this is not a problem with PCs, since the GPU is directly connected to the PSU, but it is a serious problem with the Mac Pro. One way to make the Mac Pro backplane compatible with these GPUs is the eVGA PowerLink. The PowerLink balances the power draw between the two mini-PCIe connectors, making both share the load, and usually makes GPUs that draw in the low 2xxW range work fine with a Mac Pro 4,1/5,1. Another way is Pixla's mod.

Thanks for the tip on the eVGA PowerLink. I'll give it a try before I throw in the towel on the RTX 2070 Super. I read about Pixla's mod, and I prefer not to go down that path.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
How can I make sure that the card does not use more than its rated 215W and cause my Mac to shut down?
I had a system with four Quadro RTX 6000s which had a similar symptom.

Looking at the cards, I could see power spikes far above the rated TDP. The Turing chips very rapidly Turbo up, and exceed the stated power max for an instant before throttling back.

The fix was to use nvidia-smi to limit the turbo boost clocks to what the power supplies could tolerate. (The system had 6000 watt power supplies, so it wasn't a simple matter of insufficient power supply capacity; it was the very rapid increase in power use that tripped the protective circuits. First-derivative problems.)

The command was added to the startup scripts (the setting is not persistent, so it must be applied at every boot):

/etc/systemd/system/nvidia-clock.service:ExecStart=/usr/bin/nvidia-smi --lock-gpu-clocks=300,1680
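
For reference, a sketch of how that might look as a complete unit file on a systemd-based distribution - the nvidia-clock.service name and the ExecStart line come from above, the rest is an assumption:

[Unit]
Description=Cap NVIDIA GPU boost clocks at boot
# Start after the persistence daemon mentioned below, so the driver is already loaded
After=nvidia-persistenced.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Lock the GPU clocks to the 300-1680 MHz range on every GPU in the system
ExecStart=/usr/bin/nvidia-smi --lock-gpu-clocks=300,1680

[Install]
WantedBy=multi-user.target

Enable it once with "systemctl enable nvidia-clock.service" and the cap is re-applied on every boot.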

This kept it from turboing above 1680 MHz. (Note that 1680 was determined by experiment. I had a script that would trigger a shutdown 2 out of 3 times within 5 minutes. The system would run it repeatedly for 48 hours at 1680 MHz, but seldom lasted more than a few hours at 1750 MHz.)

This caused a barely measurable performance loss - but stopped the shutdowns.

I also needed to enable the Nvidia persistence daemon earlier in the startup.

On a headless Linux server, the system would unload the driver if it had no active channels (and since there was no Xwindows or other GUI server, that was typical). The "lock-gpu-clocks" would be lost at the unload - and there would be a significant delay to reload the driver when CUDA was needed again. The persistence daemon keeps a channel open to the GPU, and prevents the unloads.
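
On a typical systemd-based distribution that boils down to something like this (a sketch, assuming the stock nvidia-persistenced packaging; the nvidia-smi persistence mode is the older, deprecated way of doing the same thing):

# Preferred: run NVIDIA's persistence daemon so the driver is never unloaded
sudo systemctl enable --now nvidia-persistenced

# Older fallback: turn on persistence mode directly (also lost at reboot)
sudo nvidia-smi --persistence-mode=1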

It's quite possible that the capacitors in the PowerLink could dampen the power spikes as well.
 
Last edited:
  • Like
Reactions: matthewpomar

flyproductions

macrumors 65816
Jan 17, 2014
1,087
465
One way to make the Mac Pro backplane compatible with these GPUs is the eVGA PowerLink. The PowerLink balances the power draw between the two mini-PCIe connectors, making both share the load,..
So the PowerLink bridges the two connectors internally instead of just extending them separately to the card's back? Interesting! I had suspected it to be a more or less cosmetic solution.
 

tsialex

Contributor
Jun 13, 2016
13,455
13,602
So the PowerLink bridges the two connectors internally instead of just extending them separately to the card's back? Interesting! I had suspected it to be a more or less cosmetic solution.
No, both PCIe inputs/outputs are bridged into just one rail. See these pictures, originally from this post:

[Photos of an opened eVGA PowerLink showing both PCIe power inputs bridged onto a single rail]
 
  • Like
Reactions: flyproductions

flyproductions

macrumors 65816
Jan 17, 2014
1,087
465
No, both PCIe inputs/outputs are bridged into just one rail.
Thanks! So it wouldn't matter at all whether a dual mini-6-pin to 8-pin or two separate mini-6-pin to 6+2-pin cables were used to feed it?

edit: Found the question answered in the thread! ;)
 

tsialex

Contributor
Jun 13, 2016
13,455
13,602
Thanks! So it wouldn't matter at all whether a dual mini-6-pin to 8-pin or two separate mini-6-pin to 6+2-pin cables were used to feed it?

edit: Found the question answered in the thread! ;)
I always use two mini-PCIe 6-pin to 6-pin cables when installing an eVGA PowerLink; this way every 12V and GND pin from the backplane is directly connected to the PowerLink.
 
  • Like
Reactions: flyproductions

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
But Turing is not supported at all with macOS, so it's a moot point.
The 2070 is Turing architecture.

The NVIDIA® GeForce® RTX 2070 SUPER™ is powered by the award-winning NVIDIA Turing™ architecture and has a superfast GPU with more cores and faster clocks to unleash your creative productivity and gaming dominance. It’s time to gear up and get super powers.

 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
NVIDIA supports up to Pascal with macOS web drivers. You can only use Volta or Turing GPUs installed in a Mac Pro if you are running Windows or Linux.
OK. The OP never mentioned which OS they were using, so I assumed OSX. My bad.

However, if they're running Windows or Linux, then 'nvidia-smi' is fully supported, and they can use a supported tool to keep the power draws within acceptable limits.

ps: Nvidia is very good about backwards compatibility. Older drivers usually work fine with new hardware - you just aren't able to use any or all of the new features. At least on Windows and Linux.
 
Last edited:

flyproductions

macrumors 65816
Jan 17, 2014
1,087
465
The majority of GPUs draw around 25 to 40W from the PCIe slot and take most of the power they need from the PCIe power connectors. Btw, GPUs don't draw the same amount from each PCIe power connector; most cards connect each power connector to a different internal power circuit and draw differently from each one.
To illustrate this with some "real life" numbers, I measured this with Bresink's Hardware Monitor during an Unigine Heaven benchmark run. The card is an EVGA GTX 1080 FTW Hybrid. This particular variant of the 1080 has two (instead of one) 8-pin connectors and, due to a factory overclock, an expected TDP of 215 instead of 180 watts. It was fed with the "not so recommended" setup of two single mini-6-pin to 6+2-pin cables.

Here are the peak values. I have not seen anything higher than the 92.x watts shown. I just modified the default sensor names to something a little more human-readable:

[Screenshot of Hardware Monitor showing the peak power readings]

As shown, it is 100% correct that the load on the PCIe slot is way lower than what would be theoretically possible; it rarely passed the 30-watt line. Strangely, even a 1070 exceeds this and peaks well over 40 watts for this value. But the individual load drawn through the card's two 8-pins, while not being exactly even, is not as imbalanced as one might expect. In fact, I have seen differences close to this even with a 1070 fed by a dual mini-6-pin to 8-pin cable, which in theory should split its demand exactly 50:50 between the two connectors.

While for this card, even under benchmarking, the load on a single port stayed well below the 75-watt limit 80 to 85% of the time, this might show that it's really not a good idea to feed something "bigger" like a 1080 Ti or a Radeon VII this way - not even with a PowerLink to balance the load a bit better.
 
Last edited:

igor_y

macrumors newbie
Mar 17, 2020
7
1
Hello,
I think I am experiencing a similar issue with my Mac Pro 1,1 and an Nvidia 980 Ti.
The Mac shuts down abruptly when I am flying in the flight simulator (X-Plane) in Windows 10, and it is really annoying. I have spent a lot of time trying to figure out the reason: I tried replacing RAM modules, playing with fan speeds, replacing the CPUs, and installing Windows 10 from scratch - but nothing helped.
In the system event log Windows only shows a Kernel-Power error (Event ID 41). And on the motherboard I can see that two LEDs turn red right after the shutdown: CPU A and CPU B overheat. So now I have found this thread. I suspect that the GPU drawing too much power is the issue, and maybe there is some false signal making it look like a CPU overheat, I don't know.
Could you please help?
Did I get it right that there are two options:
a) Limit the Nvidia card with nvidia-smi. Q: How can I determine the required limit, and how do I set it correctly?
b) The EVGA PowerLink. Q: Do I just buy it and plug it in, or are there any other tricks?
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
a) Limit the Nvidia card with nvidia-smi. Q: How can I determine the required limit, and how do I set it correctly?
I have a system with four Quadro RTX 6000 cards that was shutting down. When I capped the boost clocks, it stopped shutting down.

In the startup I put:
/usr/bin/nvidia-smi --lock-gpu-clocks=300,1680

This keeps the boost clock from going over 1680 MHz, which lowered the peak watts to an acceptable level. The effect on performance was slight - well within the run-to-run timings.

It was a matter of trial-and-error to pick the number. I set the number about halfway between the normal and max boost, and ran something that reliably caused a shutdown. I increased the cap by 50 MHz, and reran. Repeated this until it shut down, then lowered the cap to the last one that worked. I then ran the test 10 or 20 times to verify that it was stable. (nvidia-smi is dynamic - no need to reboot to try a new cap)
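
In shell terms the loop looks roughly like this (a sketch; the 1700 MHz figure is just a placeholder for whatever cap is being tried):

# Apply a candidate cap - takes effect immediately, no reboot needed
sudo nvidia-smi --lock-gpu-clocks=300,1700

# Watch actual clocks and power draw once per second while the stress test runs
nvidia-smi --query-gpu=clocks.sm,power.draw --format=csv -l 1

# Remove the cap again to return to default boost behaviour
sudo nvidia-smi --reset-gpu-clocks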

Yahoo! for "lock-gpu-clocks" for more tips.
 

igor_y

macrumors newbie
Mar 17, 2020
7
1
I have a system with four Quadro RTX 6000 cards that was shutting down. When I capped the boost clocks, it stopped shutting down.

In the startup I put:
/usr/bin/nvidia-smi --lock-gpu-clocks=300,1680

This keeps the boost clock from going over 1680 MHz, which lowered the peak watts to an acceptable level. The effect on performance was slight - well within the run-to-run timings.

It was a matter of trial-and-error to pick the number. I set the number about halfway between the normal and max boost, and ran something that reliably caused a shutdown. I increased the cap by 50 MHz, and reran. Repeated this until it shut down, then lowered the cap to the last one that worked. I then ran the test 10 or 20 times to verify that it was stable. (nvidia-smi is dynamic - no need to reboot to try a new cap)

Yahoo! for "lock-gpu-clocks" for more tips.
Thanks! So I tried this command and got an error: "Setting lock gpu clocks is not supported for gpu [my_GPU_id]". Then I did a search for this error and found that the 980 Ti is not in the support list for nvidia-smi...
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Thanks! So I tried this command and got an error: "Setting lock gpu clocks is not supported for gpu [my_GPU_id]". Then I did a search for this error and found that the 980 Ti is not in the support list for nvidia-smi...
To be more clear - nvidia-smi does support the 980Ti. The 980Ti does not support the lock-gpu-clocks function - that arrived with later chips.

Sorry that I didn't remember that bit of info.
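
If in doubt, nvidia-smi can report what a given card actually exposes before you try to set anything, for example (a sketch):

# Clock-related state and controls for GPU 0
nvidia-smi -i 0 -q -d CLOCK

# Power-management capabilities and limits for GPU 0
nvidia-smi -i 0 -q -d POWER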
 