
Michael_80
macrumors newbie · Original poster · Nov 23, 2015
Hello.
So, my AMD HD 7970 reference (non-overclocked) card damaged the PCIe x16 slot of my Mac Pro 1,1. The slot is still working, but it no longer provides power.
I had the card connected via the two 6-pin motherboard headers only, using a 6-pin-to-8-pin adapter on one of them. I am fully aware that the power limit for Mac Pro graphics is 225 W (75 W through each of the two 6-pin connectors plus 75 W through the PCIe x16 slot). However, I ran this setup because the card was supposed to be within those limits (or very close to them), and people have been running these cards on their Mac Pros without problems. In any case, my motherboard is now partially fried, and I have to use slot 2 to run the 7970 at x8, slot 1 for my USB 3.0 card, and so on.
Please note that I never overclocked the card or increased its power limits in PC mode. On the contrary, I had the card undervolted (~0.925 V @ 925 MHz core) because it had good ASIC quality.
Let this be a warning to anybody running a similar setup.

So... I will give this another attempt, because I have ordered a new Mac Pro motherboard for £65 (≈ €80).
What I am thinking of doing is balancing the power load that is required from the 8-pin connector, which is supposed to provide 150 W, as follows (a quick sanity check of the arithmetic follows the list):
- Use the upper two Molex connectors for the optical drives to "create" a PCIe 6-pin that goes directly to the card (2 Molex → 1 PCIe 6-pin), along with a PCIe 6-pin extender. I will also use a Molex splitter so that the SuperDrive remains powered and functional.
- Use the two motherboard 6-pin headers to create an 8-pin PCIe connector (2 × 75 W = 150 W, i.e. a "true" PCIe 8-pin). This requires a splitter with two female 6-pin PCIe connectors and one male 8-pin PCIe connector.
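
Here is that sanity check as a minimal sketch. The wattage limits are the PCIe spec figures quoted above, and the ~250 W board power for a reference 7970 is its published TDP, so treat these as nominal figures rather than measurements:

    # Power available to the card under the proposed wiring.
    # All limits are PCIe spec figures, not measurements.

    SLOT_W = 75        # through the x16 slot itself
    SIX_PIN_W = 75     # spec limit per PCIe 6-pin connector
    EIGHT_PIN_W = 150  # spec limit per PCIe 8-pin connector

    # Two optical-bay Molex -> one 6-pin; two logic-board 6-pin headers -> one 8-pin.
    six_pin_from_molex = SIX_PIN_W          # 75 W
    eight_pin_from_headers = 2 * SIX_PIN_W  # 2 x 75 W = 150 W
    assert eight_pin_from_headers == EIGHT_PIN_W  # the home-made 8-pin meets the spec rating

    available = SLOT_W + six_pin_from_molex + eight_pin_from_headers
    print(f"{available} W available vs ~250 W board power for a reference 7970")
    # -> 300 W available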

Do you think this is a safe approach? I am aware that the safest option would be a card with two 6-pin connectors, but this is the card I have now and I have to work with it. I would also like to avoid the hassle of connecting a second PSU (which would, however, be the more orthodox and safer approach power-wise).

Here are my system specs; please tell me if the Mac Pro PSU can handle these components (a rough power tally follows the list):
Mac Pro 1,1 flashed to 2,1
2 × quad-core Xeon X5365 @ 3.0 GHz
32 GB RAM
4 drives (1 × WD 4 TB HDD, 2 × Samsung 850 EVO 256 GB SSD, 1 × Hitachi 1 TB HDD)
1 SuperDrive
1 Lycom PCIe adapter with a Samsung SM951 Pro M.2 (used as the boot drive)
1 PCIe USB 3.0 4-port card (no external power, but all of its ports in use)
1 Sapphire AMD HD 7970 reference card, professionally flashed for Mac use
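
For a rough answer to my own question, here is a back-of-the-envelope tally. Every per-component figure below is my own assumption of a typical draw (not a measurement), and I am assuming the stock 980 W PSU:

    # Back-of-the-envelope system power tally. All draws are assumed
    # typical figures, not measurements -- adjust to taste.

    assumed_draw_w = {
        "2x Xeon X5365 (150 W TDP each)":     300,
        "32 GB FB-DIMM RAM (~10 W/module)":    80,
        "WD 4 TB HDD":                         10,
        "Hitachi 1 TB HDD":                    10,
        "2x Samsung 850 EVO SSD":               6,
        "SuperDrive (mostly idle)":             5,
        "Lycom adapter + SM951 M.2":            8,
        "USB 3.0 card + bus-powered devices":  15,
        "HD 7970 (reference board power)":    250,
    }

    total = sum(assumed_draw_w.values())
    print(f"Estimated worst-case draw: {total} W of a 980 W PSU")  # ~684 W

Even with pessimistic figures there is plenty of headroom on the PSU as a whole; the real question is the per-connector and per-slot limits, not the total.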

Sorry for the long post, but I had to provide all the facts and data :)
 
So you're using 5 drives, plus a PCIe USB 3.0 card with all 4 ports in use, in conjunction with the video card? No wonder it blew.
 
Sounds a bit strange to me. The card should not pull more than 75 W from the slot, regardless of its TDP.

The 8-pin may kill the mini 6-pin, but it should do nothing to the slot's power supply.

If the slots' power supply is somehow connected to the mini 6-pin, then it should kill everything (or at least both the mini 6-pin and the slot power supply), not just the slot.

Anyway, may I know what happened when the slot was killed? Did you do anything physically to the slot (e.g., swap a PCIe card)? Did you ever monitor the actual power draw (e.g., via iStat)?

I am not saying that you're lying; I just doubt that the slot's death is directly related to running the undervolted 7970.
 
So, I would like to clarify a few things.
The computer worked for approximately 6 months, and I am typing this on my (damaged) Mac Pro, so the computer is still working. The PCIe x16 slot also still works if I install another PCIe card in it. However, if I install the graphics card in the PCIe x16 slot, the computer boots normally but I get no video. The fact that the card works when installed in slot 2 (albeit at x8) has led me to believe that the two 6-pin connectors are undamaged and that something happened to the PCIe x16 slot.
I did not do anything physically to this slot; it just partially died. The computer had shut down normally, and I became aware of the problem when I tried to turn it on the next day. From a visual inspection I cannot see anything wrong with the slot (no burn traces, etc.). It could be that some of the pins that make this slot x16 have been damaged, so the slot can no longer physically provide the pins needed to work at x16 or even x8. This could explain why the graphics card does not work in that slot while a PCIe x1 USB card does. In other words, it may have nothing to do with the slot failing to provide 75 W; maybe the slot pins were damaged by the increased power load.
I have been very careful in monitoring the power draw, via iStat on the Mac and via GPU-Z in PC mode, and everything was within normal limits. It is my understanding that, as long as you do not touch the card's "power settings" in MSI Afterburner, it is supposed to stay within its thermal envelope, which is within Mac Pro limits. I even undervolted the card in PC mode...
The Mac Pro can supply 75 W to each PCIe slot (4 × 75 W = 300 W in total), and it is designed to have all HD bays populated. I have an M.2 card and a USB 3.0 card; on the USB 3.0 card I have connected a powered hub, an external SSD, and an EyeTV USB TV tuner. There is no way these peripherals come close to the motherboard's power limit. Additionally, the 4 hard drives are actually 2 SSDs and 2 HDDs, so consumption is even lower.
I should also mention that I drive a 30" Dell through a USB-powered (from the motherboard) dual-link DVI to Mini DisplayPort adapter, plus 2 other monitors that do not require USB power. I am also using a FireWire iSight.

With all this in mind, I can only conclude that I did something wrong when I decided to install the card using just the two 6-pins and converting one 6-pin to 8-pin. This is why I would like to know whether the proposed method should work.
Two Molex connectors should normally be fine for creating a PCIe 6-pin connector:
P = I × V, so 75 W on the 12 V rail requires about 6.25 A.
According to Wikipedia, the official maximum current of a single Molex pin is 11 A: https://en.wikipedia.org/wiki/Molex_connector
There are several threads on the maximum power draw of a Molex connector; one of them is here: https://hardforum.com/threads/power-draw-limit-on-molex.1751688/

To be safe, I am planning to use two Molex connectors, which would put less than 4 A on each (worked out below). Can the PSU handle it?
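
Here is the arithmetic spelled out as a minimal sketch. It assumes the load splits evenly across the two Molex connectors, which real adapters only approximate:

    # Current per Molex for the 2x Molex -> 1x PCIe 6-pin adapter.
    # Assumes an even split between the two connectors.

    RAIL_V = 12.0        # PCIe aux power is delivered on the 12 V rail
    SIX_PIN_W = 75.0     # worst case the 6-pin should ever carry
    MOLEX_MAX_A = 11.0   # per-pin rating cited on the Wikipedia page above

    total_a = SIX_PIN_W / RAIL_V   # 75 / 12 = 6.25 A total
    per_molex_a = total_a / 2      # ~3.1 A each if shared evenly

    print(f"{per_molex_a:.2f} A per Molex vs {MOLEX_MAX_A} A rating")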
 

I agree. Having all the PCIe slots and drive bays populated should not be a problem, as that's what the machine was meant to do. Even with your USB and FireWire peripherals, at worst they should have only killed the ports themselves, not affected the PCIe slots.

My understanding is that when you overclock a video card, whether through the BIOS or through software, it increases the card's power draw.



As for the Molex plan: two Molex connectors should easily be able to provide 75 W for your video card.
 
Both the card and the slot still work when swapped? If the slot had fried, doesn't that seem unlikely? Can you test a different GPU, or check your PCIe lane configuration? Have you done PRAM and SMC resets? Could the board have shorted? Was it ever plugged in while you were changing hardware? Has your OS updated recently, have you installed new hardware, or could there be debris in the slot? Does the behavior change if you remove the Samsung 850 from your system, and is one of your drives a Windows 10 Boot Camp volume?
 

The GPU works in another slot, so the GPU and the two 6-pin connectors are OK.
I did PRAM and SMC resets and the problem remains.
I am running El Capitan 10.11.5.
I do not have another PCIe x16 card to test; I only have a PCIe x1 card (the USB 3.0 card), which is plugged into slot 1 and works. I am sure that if I install an x8 or x16 card it will not work, because the GPU that works in slot 2 no longer works in slot 1.
 
I am sure that if I install an x8 or x16 card it will not work, because the GPU that works in slot 2 no longer works in slot 1.

It would be good to verify this by testing it, though. Just because it doesn't work with one user-flashed GPU doesn't mean the slot is broken.

What about the Samsung SSD? Are you running a Boot Camp volume? Does anything change if you pull drives so that only one OS is present? Doesn't your Mac allocate PCIe lanes differently than later models? Have you double-checked your settings for that?
 
MacVidCards also has a 1,1 with a damaged PCIe slot 1.

[Screenshot: Screen Shot 2016-04-22 at 6.56.34 AM.png]


Source: http://forum.netkas.org/index.php/topic,8206.15.html
 
It would be good to verify this by testing it, though. Just because it doesn't work with one user-flashed GPU doesn't mean the slot is broken.

What about the Samsung SSD? Are you running a Boot Camp volume? Does anything change if you pull drives so that only one OS is present? Doesn't your Mac allocate PCIe lanes differently than later models? Have you double-checked your settings for that?

I could try the Lycom PCIe-to-M.2 adapter, which is x4, to see if it works, but I do not see the point, since the same card works in another slot.
With regard to the PCIe allocation, I have been using the Expansion Slot Utility located in /System/Library/CoreServices/.
I tried all the different combinations, but slot 1 no longer works with my 7970.
I also tried with a bare-bones system (no HDDs, no occupied PCIe slots except the graphics card) and I still have this problem.
In any case, what's done is done. I just need to know if the approach I am about to follow is the appropriate one.
MacVidCards also has a 1,1 with a damaged PCIe slot 1.

Source: http://forum.netkas.org/index.php/topic,8206.15.html

That is very interesting; the symptoms are quite similar to mine! But how could a 7950, which is well within the power envelope of the 1,1 and requires only two 6-pins, have caused this? Maybe it was a 7950 on a 7970 PCB that also had one 6-pin and one 8-pin...
 
He didn't mention how he damaged his slot or which video card was the culprit, just that it was damaged.

I just wanted to point out that it is a real possibility and that you are not the only one this has happened to.
 
Many people here have far exceeded the power limits, and in all of the reported examples, when the limit was exceeded far enough, the power supply shut off. Any power supply worth anything should have an overload cutoff like this.

I'm not trying to refute the OP's word; I'm just saying that what happened here is exceptional and differs from what people have reported in the past when they exceeded the power limit. It is an unusual case.
 
I could try the Lycom PCIe-to-M.2 adapter, which is x4, to see if it works, but I do not see the point, since the same card works in another slot.

I also tried with a bare-bones system (no HDDs, no occupied PCIe slots except the graphics card) and I still have this problem.


OK. The reason I suggested it is that several users had odd black-screen GPU boot events when using a Windows 10 Boot Camp volume plus a Samsung 850 SSD. Removing the Samsung and Boot Camp volumes, resetting the PRAM, and booting a clean OS X volume would probably rule out that particular issue, though. Sorry to hear of your troubles; good luck!
 

Several years ago, I remember reading on Netkas' forum about a guy who burnt out the traces to his mini PCIe power connectors with a heavily overclocked video card. While this sort of thing certainly isn't the norm, there have been some instances of it happening.
 
@pastrychef, you're right, I remember that topic. The system didn't shut down to prevent the damage.
But the 7970/R9 280X is a widely used GPU among cMP users, and that guy was overclocking a card that was way above the power limit even in its stock form (an Nvidia GTX 5xx, IIRC).
 
IIRC, that was a Fermi card with two 8-pin connectors, overclocked to the extreme. And then, on top of all that, he ran FurMark, software that has been known to destroy hardware even when the power connections are all to spec.

So I'll accept the point that it is possible to exceed the power supply's ability to shut down in time, but that is the only example I've heard of, and it's pretty extreme.

The OP's setup is much more modest than that and consumes less power than what others here are running.
 
UPDATE:
Got the new motherboard but have not installed it yet. The technician insisted we try another card first. We tried a GT 120 and it works at x16. We put the 7970 back and it only works at x8 in either slot (1 or 2). You could say, OK, it's the card.
So... I installed the card in my PC, and it works at x16 (please see the screenshot).
I will also try another card that requires external power (a 5770) in my Mac Pro with the old motherboard and will let you know. I am puzzled, because I have two contradicting indications of what to do.
[Screenshot: screenshot.gif]
 
Pastrychef was spot on linking my past failure to yours. Similar, but not identical. I still have that machine; I have to leave slot 1 set to x8 for all cards to work. If it's set to x16, all cards from one brand don't work at all, while the other brand works fine. (I don't remember which is which.)

That poor 1,1 has been around for years. Meanwhile, we are on our 3rd or 4th 4,1, and it has already lost its bottom slot.

I think physical wear from multiple insertions is the primary issue. But the 1,1 started having issues after I ran a 4890 in it (dual 4870s on one board).

I could reliably shut it off at a gas-station set piece in Crysis by blowing it up. Quite dramatic, having the explosion take out the computer!

I have left it at x8 so I can test cards in it ever since.

You can find the answer for yourself: switch the board.

I have found it interesting that AMD and Nvidia must be using one particular pin differently from each other, or from spec, in PCIe 1.0 x16 mode. And I somehow blew that pin.
 
So I just installed my old Apple 5770 in my Mac Pro 1,1, and it works at PCIe x16 with the old motherboard. This card uses only one PCIe 6-pin connector, though...
I consider myself fairly experienced with computers, but I am now very confused, because I have two contradicting indications:
If the 7970 had a problem and only worked at x8, the same would happen in my PC. But there the card works perfectly at x16 on PCIe 2.0. Maybe the pins are used differently in PCIe 1.1 x16 vs. 2.0?
On the other hand, if the old Mac Pro motherboard had a problem, then no card would work at x16. However, I tried both an AMD card (the official 5770) and an old Nvidia card (the GT 120), and they both work at x16.
Please see a screenshot of my current configuration with the 5770:
[Screenshot: Untitled.jpg]


I may very well put the new motherboard in. However, if the 7970 is faulty, I could just sell the motherboard (which is brand new) and the card, and buy a new graphics card that also supports Metal under El Capitan...
 
I'm sorry about your problem.

I'm running a similar setup: an MSI 280X Gaming (factory overclocked) with two 6-pin connectors (one using the 6-to-8-pin adapter MSI provided with the card).

I've typically had zero trouble. However, one game does occasionally cause my computer to crash hard, which has certainly made me wonder how close I am to damaging it. I suppose I'm taking my chances for now, but I'll switch back to my 5770 as soon as that 100%-certain nnMP shows up at WWDC. ;-)
 