I have a machine with unusual PSU temperatures on "part 2" of the power supply (info from Macs Fan Control).

Is there any maintenance I can do myself on the PSU component to lower the temp?

I guess some kind of cleaning, but is anything else possible besides replacing the PSU with another one?

Is your PSU issue solved? And may I know how unusual the PSU 2 temperature is? Do you just mean it's a bit high, or is there a big split between PSU 1 and PSU 2?
 
Just for reference, it seems the Titan Xp shuts off the MP when under very heavy load, probably from pulling too much through the 8-pin cable.

With the Pixlas mod, there are no problems on my MP. It's not using the logic board power sockets at all. I tested it with a CUDA-Z heavy load, and Heaven at 4K resolution with ultra quality. The fans ramped up slightly, and there were no problems at all. The only caveat is that my MP has its original X5675s, not X5690s.
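For anyone wondering why a single feed can be the weak point, here is a rough back-of-the-envelope sketch using the nominal PCI-SIG connector ratings and NVIDIA's published 250 W TDP for the Titan Xp (not measurements from this machine; exact wiring varies by setup):

```python
# Nominal ratings (PCI-SIG) and published TDP -- rough figures only.
SLOT_W = 75        # PCIe x16 slot
MINI_6PIN_W = 75   # each cMP mini 6-pin aux connector
PCIE_8PIN_W = 150  # standard 8-pin PCIe connector

TITAN_XP_TDP = 250

stock = SLOT_W + 2 * MINI_6PIN_W              # slot + both mini 6-pins = 225 W
pixlas = SLOT_W + PCIE_8PIN_W + MINI_6PIN_W   # slot + tapped 8-pin + one 6-pin = 300 W

print("stock headroom :", stock - TITAN_XP_TDP, "W")   # -25 W at full load
print("pixlas headroom:", pixlas - TITAN_XP_TDP, "W")  # +50 W
```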
 
  • Like
Reactions: LightBulbFun
Just for reference, it seems the Titan Xp shuts off the MP when under very heavy load…

So does it actually work fine with the Pixlas mod and X5675, or does it make the Mac shut down too?
 
Surely this hack of splicing into the power lines or soldering new leads directly into the cMP PSU does not get past the problem of multiple top-end GPUs drawing more power than the PSU can supply? If that is true, then a supplementary PSU for the GPUs seems like the only answer, or am I missing something?
 
Surely this hack of splicing into the power lines or soldering new leads directly into the cMP PSU does not get past the problem of multiple top-end GPUs drawing too much power…

The idea of this mod is to fully utilise the power of the 980W PSU without having to worry about only 2x mini 6-pin being available for GPUs. After the mod, it's totally OK to drive two high-end GPUs that each require 2x 8-pin input.

However, as you said, it doesn't make the PSU able to deliver more than 980W in total.
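To put that in perspective, here is a very rough total-draw sketch against the 980 W ceiling, using published CPU/GPU TDPs and a generous guess for everything else; actual draw depends heavily on workload:

```python
PSU_W = 980

dual_x5690 = 2 * 130    # CPU TDPs for a dual X5690 tray
two_gpus = 2 * 250      # e.g. two Titan-class cards at full load
rest = 120              # RAM, drives, fans, backplane (guesstimate)

total = dual_x5690 + two_gpus + rest
print(total, "W of", PSU_W, "W ->", PSU_W - total, "W headroom")
```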
 
Nothing I have tried has caused a sudden shutdown. (I don't want to try FurMark.)

Hi Guys,

New to the forum but been following the thread for some time.

Finally took a gamble and decided it was time to do this mod myself after purchasing a new Titan Xp. (Some pics below)

I have just been doing some testing and initially everything seemed fine. I ran Heaven in ultra mode with no problem. However, after running CUDA-Z in heavy mode I did have a sudden shutdown.

Now I am worried that I have messed up this mod in some way. I used the splice technique and now I am afraid some wires may not have a clean connection.

If anyone has any advice on troubleshooting measures I can take that will be greatly appreciated.

Just thought I should write this post in case anyone else is about to do this mod and may experience the same problem.
 

Attachments

  • IMG_4489.JPG
  • IMG_4491.JPG
From the picture, it looks like you're using the 6-pin connection from the logic board?

That is one thing different from what I'm doing with my Titan Xp: I have both the 6- and 8-pin cables going to the Pixlas cable.
Maybe the Titan Xp is overloading the 6-pin somehow? Try testing with a splitter so both of the card's power sockets run from your Pixlas cable.
 
  • Like
Reactions: Doomdr
I have just been doing some testing…

I wouldn't worry too much initially. I suggest you take a step back and start by doing thorough tests in apps or games you want to use. Some stress tests isolate a single component in the computer in a way that normal usage never will—and that can lead to wrong conclusions. Yes, we want a stable computer under any circumstance, but I'd still suggest you put focus on the things you really want to do with your computer.

And like Surrat says above, there are a couple of options: keep using the mainboard 6-pin cables, or go all in on the external cable. It might even be a good idea for load distribution to use the mainboard, but then I'd combine both 6-pins into a single 8-pin and put that into the card.

I've been wondering myself whether it's a good idea to only tap into the unmodulated external cable, or whether it's better to stay attached to the normal architecture via a dual 6-pin combiner cable. My thinking is the external cable might be more sensitive to surges or spikes in current, but I don't know. Even if you're only using the external power cable on one 8-pin, you're making sure the card isn't starved for power.
 
  • Like
Reactions: Doomdr
From the picture, it looks like you're using the 6-pin connection from the logic board?

Yes I was. I switched out the 6-pin from the logic board, and I am now powering the card entirely from the Pixlas cable.

I've successfully been running CUDA-Z in heavy mode for 30 minutes without any sudden shutdowns. Yesterday, with the 6-pin from the logic board connected, CUDA-Z caused a shutdown after about 40 seconds.

I now feel reassured that my Pixlas mod was successful and can fully power my card.

As Andree has stated, under normal usage I am confident that I could run the card from the logic board 6-pin and the Pixlas 8-pin, leaving room for another high-end card to be installed. But until then I will run it entirely off the Pixlas cable.

Thanks for the info guys, it's super appreciated!
 
Yes I was. I switched out the 6-pin from the logic board, and I am now powering the card entirely from the Pixlas cable…

The Titan Xp most likely draws more power than other Pascal NVIDIA cards when fed from the PCIe power on the backplane board. It looks like tapping into the power supply is the only option for the 5,1 tower.

Sadly, my country still doesn't have any Titan Xp to test with; only the GTX 1080 Ti is currently available. I plan to test on a hackintosh though, since it's easier to work with.

Btw, nice to hear your Titan Xp is running flawlessly.
 
  • Like
Reactions: derohan and Doomdr
I like the Pixlas mod and have it implemented on my Mac Pro. I had previously read of some issues with the "guillotine-style" vampire taps (more a problem for motorcycle enthusiasts than computer modders) and had these recommended to me: http://www.posi-products.com/posiplug.html . They work great. The tap seems solid, and it's less bulky in your case than the guillotine clips appear to be (I still can't quite close the guard shield all the way, but it's mostly closed, 90%-ish). Application was easy, and when you're done with the mod they're reusable.
 
I did the same, but soldered two 8-pin cables for a friend at the same position. I don't know if I should trust those crimp taps, especially as they get older from the heat.
 
A simple bridge in between should be able to share the load effectively.

e.g.

Dual mini 6pin -> single 8pin -> dual 6+2

The single 8-pin acts as a bridge to distribute the load evenly across both mini 6-pins, which should help avoid triggering the shutdown protection.
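Illustrating the idea with the same nominal 75 W per mini 6-pin figure (a sketch of the reasoning, not a measurement):

```python
MINI_6PIN_W = 75

# Unbridged: each mini 6-pin feeds its own plug on the card, so whichever
# plug the card loads hardest is still limited to one 75 W feed.
# Bridged: both feeds are pooled before the split, so the card draws from
# a single shared reservoir.
pooled = 2 * MINI_6PIN_W
print("pooled capacity:", pooled, "W")  # 150 W, same as one nominal 8-pin
```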

I want to use this method for my Titan X. My multimeter and my brain say this should work, but has this really been tested before?


I know about the pin 2 theory, and I want to test it as well. I've already studied how to mod the cable; however, I don't have any 8-pin card on hand, so I can't run the test at the moment.

I don't understand, can you explain this "theory"?
 
I want to use this method for my Titan X…

I don't understand, can you explain this "theory"?

You can use the EVGA PowerLink, sold for $9.99 on Amazon. (Sorry, it went up to $29.99 now.) o_O
https://www.amazon.com/EVGA-PowerLi...qid=1493685643&sr=8-1&keywords=EVGA+powerlink

It internally bridges and pools both connectors' 12V and feeds it to the 8+8 or 8+6. I mentioned this in my other post about the triple Titan X mod. I think you may have your answer with this product.
 
You can use the EVGA PowerLink, sold for $9.99 on Amazon…

Thx,

this is a better way to keep the cable short :)

What pin 2 theory? I don't know either.
If you have the EVGA PowerLink, why should it matter?

Curiosity?
 
I did the Pixlas mod on my cMP 5,1 so I could run two R9 280X GPUs with Final Cut Pro X. It was surprisingly easy, and seems to be working great. The cards are quiet, temps are normal.

With the latest FCPX (10.3.3) on El Cap 10.11.6, using a flashed GTX 980 (not the Ti version) I got a BruceX time of 43 seconds. With a single R9 280X I got 32 seconds. With two R9 280X cards I get 21 seconds, and a LuxMark of just over 26k.

============

I made a couple of changes to the standard procedure:
  1. I used the Posi-Tap connectors someone suggested earlier in this thread — they can be hand-tightened easily without a tool, even with the tapped wires against the far back wall of the case, so there's no need to remove the PSU; here's the posi-tap guide.
  2. I used a couple of standard off-the-shelf 8-pin PCIe cables, rather than having a custom cable created by modDIY. Here's the extension cable, and here's the splitter. I cut off the female end of the extension cable, pushed back the cloth covering on the wires, and stripped 1/2" off each end using this tool to get them ready for splicing with the posi-taps.
Although I ordered an extension cable with black and white wires, it turns out they just alternated the colors to "look cool"; they do NOT represent Ground and Positive! So I followed the 8-pin PCIe diagram carefully (that image is looking at the male end), and tracing from the male plug, I taped the wires into three bundles and labeled them: "+12" (3 wires), "Sense" (2 wires), and "Ground" (3 wires). Then I pushed the bundles (actually all wires at once) up through the gap at the back of the optical bay.

I attached each of three +12 wire ends onto a different +12 wire from the inner four PSU wires (see the diagram in post #1), and each of three Ground wire ends onto a different Ground wire from the outer four PSU wires. Finally I attached the two Sense wires onto the fourth outer PSU ground wire.
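As a sanity check, the bundling above can be summarised like this (the counts come straight from the description; verify the actual pin positions against the 8-pin PCIe diagram before cutting anything):

```python
# Wire bundles on the cut extension cable, grouped by function.
bundles = {"+12V": 3, "Sense": 2, "Ground": 3}
assert sum(bundles.values()) == 8  # all eight conductors accounted for

# PSU wires tapped: 3 inner (+12 V) wires, one per +12 lead, and
# 4 outer (ground) wires -- 3 for the ground leads plus a 4th shared
# by both sense leads.
taps = {"inner +12V": 3, "outer ground": 3 + 1}
print(bundles, taps)
```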

Here's what my taps look like ("+" is for +12, "S" is for Sense, and "G" is for Ground):

1.jpg


Here's the lower compartment, showing how the extension cable plugs into the 2-to-1 splitter, which then plugs into each of the R9 280X cards (each card also gets a 6-pin connection to the Mac Pro's backplane board):

2.jpg


One problem: the Posi-Taps may be too fat to allow replacing the power supply cable cover. I tried pushing them flat against the back wall, but couldn't get the tab on the bottom right side of the cover to engage, so I just left it off. But no problem getting the optical bay unit to latch back in place.

I used a multimeter and the 8-pin diagram to verify each pin was getting the correct voltage before attaching the splitter plugs to my GPU cards.

======

One of the problems with installing a 2nd GPU is that you lose *two* PCIe slots -- only slot 1 has double-wide clearance, which most graphics cards need. I really wanted to use both my USB 3.0 card and my PCIe SSD card, but it seemed one would have to go.

However, I tried the approach outlined by AndreeOnline in posts #136 and #151 in this thread: a short PCIe riser card to slightly elevate the 2nd GPU, and a flexible PCIe extension cable for the blocked slot that can be folded sideways and then up and around the 2nd GPU card.

Now I can use all four PCIe slots.

Here is my slot layout:

4: (blocked) extension cable to Amfeltec Squid M.2 AHCI carrier board
3: Secondary GPU - R9 280X (on short riser card)
2: Sonnet Allegro 4-port USB 3.0
1: Main GPU - R9 280X

I saw a post by user h9826790 suggesting this layout for better heat distribution (GPUs in slots 1 and 3), rather than having one directly above the other in slots 1 and 2. Although slot 3 is only x4 instead of x16, people report the performance penalty is minimal, something like 5% on benchmarks.
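For reference, here is the nominal bandwidth difference on the cMP's PCIe 2.0 bus (roughly 500 MB/s per lane per direction, ignoring protocol overhead), which is why the benchmark hit is small as long as the workload isn't constantly streaming data over the bus:

```python
MB_PER_LANE = 500            # ~PCIe 2.0, per lane, each direction
x16 = 16 * MB_PER_LANE       # 8000 MB/s
x4 = 4 * MB_PER_LANE         # 2000 MB/s
print(f"x16: {x16} MB/s, x4: {x4} MB/s ({x4 / x16:.0%} of the link)")
```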

Here's what it looks like with the Amfeltec temporarily hanging down in space (I also removed its rear bracket):

4.jpg


Here's a view looking up at the drive bays, showing how the extension cable snakes up and over the 2nd GPU:

5.jpg


And here's what it looks like with the cable folded up, and the Amfeltec card resting on the inner lip of the drive bays (I put insulating tape behind there):

6.jpg


Instead, I might try moving my two backup HDDs (for CCC and Time Machine) to drive bays 1 and 2, and make a little harness to hang the Amfeltec card in the unoccupied space of bays 3 and 4.

My Amfeltec card has three SSD blades on it: two Samsung SM951 @ 512GB, and one Kingston Predator @ 960GB. It's an x16 card running at x4 in slot 4, but single-drive access still runs at full speed since each blade is x4. I'm not using RAID striping, so I see no difference in Blackmagic Speed Test readings with this card in an x16 vs an x4 slot -- roughly 1300 r/w for each of the SM951 drives, and 1000 write / 1300 read for the Kingston. Of course, when I copy large files *between* the blades, I should get half the performance that I used to see in an x16 slot. If I ever need to do multi-blade RAID I will put the Amfeltec back in x16 slot 2, and the USB 3 card in slot 4 (and find a way to thread a cable out the back to a USB hub).
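A quick check on why the single-blade numbers don't change in the x4 slot (same nominal PCIe 2.0 figures as above; the ~1300 MB/s is the per-blade reading mentioned above):

```python
X4_LINK = 4 * 500   # ~2000 MB/s nominal host link in the x4 slot
BLADE = 1300        # per-blade reading mentioned above
print("single blade fits in x4 link:", BLADE <= X4_LINK)  # True
# Only when several blades move data at once does traffic have to share
# that ~2000 MB/s uplink, which is where an x4 slot starts to matter.
```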

One problem with putting the 2nd GPU on a riser card is that it is heavy and doesn't get enough support, so it tends to sag downward. The Mac Pro's card bracket can't hold it in place; in fact, I couldn't get the bracket/bar thing to fit at all in this config, so I used some short knurled thumbscrews to lock in the other cards. Also, at least one of the ports on the 2nd GPU is obstructed, and you could easily unseat the card while plugging/unplugging since it is only secured by friction. I don't plan to use any ports on the 2nd GPU, so I will put tape over them. Here's what the rear panel looks like:

7.jpg


To address the sagging problem, I inserted a piece of stiff cardboard to give the card more support, while not obstructing airflow from the fan. This helped a lot, now everything is in alignment again:

8.jpg


So... it does work, even if it's a little ugly and definitely fragile. But I'm very happy having two GPUs plus USB 3.0 and PCIe SSDs.
 
Quick question: does tapping into the main power like this do anything to the Mac Pro's sensors, etc.? I.e., is there a downside?
 
I did the Pixlas mod on my cMP 5,1 so I could run two R9 280X GPUs with Final Cut Pro X…

Congratulations, you made it!

Suggestions:

Since you need two GPUs, and I see you made a lot of modifications, you may want to solder a pair of 12V + GND leads from the PSU instead. Trust me, it's a lot easier, it saves your cables, and it's safer. I have been using this approach to drive two 1080 Tis (plus an extra 12V feed in the optical drive space for a 3rd). I haven't encountered any shutoffs. Strictly speaking, electrically, tapping is more or less for signal wires, not for a power source under as much load as two Titan X GPUs (12V 30A) would draw, and I think two 280Xs can be just about the same.

Another suggestion is that you don't have to dangle your PCIe riser like that for your PCIe drive and USB 3.0 card. Put your two GPUs in slot 1 and slot 4. That's not only better looking but also more stable. Putting a GPU in slot 4 is not hard at all. Change your spinning HDDs to SSDs and take the SSDs' metal cases off, then you have all the room you want. Save any serious bulk files on a NAS or a USB 3.0 drive. My two cents...
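The 12 V / 30 A figure checks out with a quick bit of arithmetic (published Titan-class TDPs and the nominal 75 W slot feed; rough numbers, not measurements):

```python
GPU_TDP = 250     # per Titan-class card
SLOT_W = 75       # each card can still pull up to ~75 W from its slot
via_taps = 2 * (GPU_TDP - SLOT_W)   # ~350 W through the tapped leads
print(via_taps, "W ->", round(via_taps / 12), "A at 12 V")  # ~29 A
```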
 