
startergo · macrumors 603 · Original poster · Joined Sep 20, 2018
I have an R9 280X connected through DVI in slot 1 and an RX 580 connected through a DP-to-DVI adapter in slot 2.
I can see the boot screen through my R9 280X, but it cuts off at about 75% progress of the Apple boot logo.
When HS (High Sierra) is loaded, only the RX 580 outputs to a screen.
In Windows 10 the exact same setup works fine; both cards output to their screens. I can even see Crossfire enabled in the Furmark stress test.
Am I missing something here? Why does the R9 280X stop outputting in HS when the RX 580 is installed?
 

Crossfire a 280X with an RX 580? AFAIK, Crossfire doesn't require a "matched pair" of GPUs, but both GPUs MUST belong to the same family.
 
I was surprised too, but that is what Furmark said: Crossfire enabled.
 
[Screenshot attachment: upload_2019-2-3_10-28-57.png]
 

Thanks for sharing. Interesting info.

Crossfire enabled.

Both GPUs run at ~100%. (This is normal for Furmark.)

But the 280X only warms up from 61C to 64C. (This part is normal, assuming the 280X has a good cooler.)

The RX 580's clock speed shows 639.8MHz with VDDC 0V, performing worse than the 280X (definitely very wrong).

Did you try removing the 280X and running the same test with just the RX 580?
 
The voltage reads 0 VDC, perhaps because I am powering the card from SATA power and the sensor does not measure that accurately:

[Screenshot attachment: upload_2019-2-4_3-24-23.png]
 

That VDDC should be read from the graphics card, not from the power source. So something is wrong there (definitely a wrong reading; it may be a software issue, not necessarily hardware related).

It looks like your card was thermal/power throttling. I suspect that's because the OEM card has a lower-than-normal TDP limit, which makes it reach the power-throttling point earlier than a normal RX 580.

Anyway, at an average of 87 FPS vs 54, Crossfire really seems to be performing as expected.

Maybe all GCN GPUs can be Crossfired now. I haven't used Crossfire for a few years, so I'm not sure of the current requirements.
 
The TDP of the card is 130W. Maybe Crossfire is enabled because they use the same driver? I know that if I select the basic video driver on one of them, they won't perform the same way. What bothers me is that in HS the 280X turns off its output. Any idea why?
 

But your screen capture shows the TDP of the RX 580 as 185W. Be careful with that. The card's TDP is 130W, but that most likely means the cooling solution is good for up to 130W, while the software reading shows the card may draw up to 185W. That's definitely not a good sign for a single-6-pin card.

Anyway, I already sold all my HD 7950, R9 280, and R9 380 cards, so I can't test them with the RX 580. But if your 280X works flawlessly when booted from the original ROM, then apparently the Mac EFI is causing some compatibility issue.
 
It does not matter which ROM I use. I have no issues with single-card usage, only when both cards are installed. As far as power goes, the two SATA connectors provide 108W, so there is plenty of power on the 6-pin. I don't know how the card can draw 185W if it is limited in the ROM?
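
For reference, the 108W figure matches the SATA power connector spec if each connector supplies its full 12V allowance (three 1.5A pins per rail). This is a quick sanity check; the per-pin numbers come from the SATA spec, not from this thread:

```python
# Per the SATA power connector spec: three 12V pins rated at 1.5A each.
SATA_12V_VOLTS = 12.0
SATA_12V_AMPS_PER_PIN = 1.5
PINS_PER_RAIL = 3

watts_per_connector = SATA_12V_VOLTS * SATA_12V_AMPS_PER_PIN * PINS_PER_RAIL
total_watts = 2 * watts_per_connector  # two SATA connectors feeding the 6-pin

print(watts_per_connector)  # 54.0
print(total_watts)          # 108.0
```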
 

The ROM's default setting does NOT limit your card to 130W. The TDP is a pre-defined number with a PowerTune headroom of +50%, which means that with a 130W base and a 150% maximum, the card will be able to draw up to 195W.

[Screenshot attachment: Screenshot 2019-02-05 at 1.18.11 AM.png]

As you can see from your own screen capture, "Power max" is 150% (this is the PowerTune limit), and the reported TDP at that moment is 185W. If we assume 185W is +50% over the pre-defined TDP, then the base should be ~123W. And as long as PowerTune stays at 150%, your card may be able to draw up to 185W (if not thermal throttling, etc.).
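
The PowerTune arithmetic can be sketched as a quick sanity check (the 130W base TDP and the 150% cap are the numbers from this discussion; actual values vary per card):

```python
def max_board_power(base_tdp_w: float, powertune_max_pct: float) -> float:
    """Maximum power PowerTune allows: the base TDP scaled by the percentage cap."""
    return base_tdp_w * powertune_max_pct / 100.0

# A 130W base TDP with a 150% PowerTune cap allows up to 195W.
print(max_board_power(130, 150))  # 195.0

# Working backwards: if 185W is the +50% ceiling, the base TDP is ~123W.
print(round(185 / 1.5, 1))        # 123.3
```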

In Windows, the AMD driver control panel lets you see and adjust the PowerTune number. But macOS will only take that number from the ROM; in your case, +50% max.

Also, the problem with your card is NOT drawing too much from the 6-pin, but from the slot.

The cMP seems quite well built, so drawing more than 75W from a slot seems to have no adverse effect (at least for short periods of time). So far I haven't heard of a single case of a logic board burning because of that. However, quite a few gaming PC motherboards have been burnt that way.

The card doesn't know how much it can draw from the 6-pin (e.g. up to 108W); it only knows how to draw the demanded power according to its pre-programmed pattern. In your card's case, it seems to pretty much just divide the demand by two. Therefore, if it really draws 180W, that may mean 90W from the 6-pin and 90W from the slot.
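
The even-split behaviour can be sketched like this (the 50/50 split is inferred from the observed readings, not a documented rule, so `slot_share` is an assumption):

```python
def estimate_power_split(total_draw_w: float, slot_share: float = 0.5):
    """Estimate slot vs 6-pin draw, assuming the card splits demand evenly."""
    slot_w = total_draw_w * slot_share
    six_pin_w = total_draw_w - slot_w
    return slot_w, six_pin_w

PCIE_SLOT_LIMIT_W = 75.0  # PCIe spec limit for power drawn through the slot

slot_w, six_pin_w = estimate_power_split(180)
print(slot_w, six_pin_w)           # 90.0 90.0
print(slot_w > PCIE_SLOT_LIMIT_W)  # True -- slot draw exceeds the 75W spec
```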

Anyway, if the EFI makes no difference but having multiple GPUs does, then it sounds like a driver conflict.
 

GPU-Z measured only 130-135W maximum usage on the RX 580. It also shows the voltage properly, 1.025 VDC maximum. I tried another testing tool and the frequency jumped to ~1300MHz.
I also managed to get both cards to output to their displays in HS by using a miniDP-to-VGA converter. So both DisplayPorts work properly, but the DVI output turns off in a two-card configuration.
 

130-135W GPU consumption, or whole graphics card consumption?

Did you check in the AMD driver panel whether PowerTune is at 50%?

Anyway, it seems the 6xx MHz is due to power throttling. AMD released a "fix" (which prevents the card from drawing too much from the PCIe slot) via a driver update, so it's possible that your card's power draw is limited by the driver because of that. In macOS its behaviour can be different, since there is no extra power-draw limitation from the driver. Anyway, as long as your daily workflow is not running Furmark, it shouldn't be an issue.
 
I think there is something wrong with Furmark, as I tested with another tool (I forget which) and the card ran at full speed in a stress test. Plus, Furmark doesn't read the voltage draw properly. But yes, GPU-Z measured a 135W power draw, which corresponds to the ROM limit.
 

Apart from Furmark, you can try OCCT.
 

What is your GPU power cabling setup for both cards?
 