I had ordered a Sapphire RX 580 8GB Pulse in September from Amazon UK. Received a shipping notice this morning that it had been despatched.
Due to their price guarantee (it somehow dropped in price since my order), I paid £85 + £15 shipping to Australia, which is about US$110.

Bargain.
That’s rather annoying. I ordered my card on June 15th from Amazon UK and it still hasn’t shipped! WTH
 
That’s rather annoying. I ordered my card on June 15th from Amazon UK and it still hasn’t shipped! WTH

Yep. Makes no sense.
I'm still waiting for a 1080Ti FE that I ordered in July when it was still in stock. I'VE BEEN PROMISED about six times that they have the card and that it's being shipped to me that day, but it never happens. Perhaps because of the run-around on that order they expedited my RX 580.

Not sure the left hand knows what the right hand is doing at Amazon UK.
 
I am just going to mention this:

There's more than just cosmetics with the Pulse RX 580. One major thing the Pulse RX 580 has over other RX 580s in OS X is that it's assigned a proper frame-buffer personality, which means all the ports etc. are going to work 100%. It's also thanks to this frame-buffer that it gets a proper name in the first place.

RX 580s that are not the Pulse, and which get the generic R9 xxx name, are initialised by OS X differently, using a generic frame-buffer built from info in the card's VBIOS. While this generally works quite well, there have been reports of issues (especially when it comes to multi-monitor setups).

I just figured I'd throw this out there as it's useful info to have...
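
For anyone who wants to check which name macOS has assigned without opening System Information, the display data can be read programmatically. A minimal sketch in Python (my assumptions: Python 3 on the machine, and that `sppci_model` is the key that normally carries the "Chipset Model" string):

```python
import plistlib
import subprocess

# Ask macOS what it calls each GPU. A card matched to a proper
# frame-buffer personality reports its real name ("Radeon RX 580");
# one driven by the generic frame-buffer shows up as "AMD R9 xxx".
raw = subprocess.check_output(
    ["system_profiler", "-xml", "SPDisplaysDataType"])
for gpu in plistlib.loads(raw)[0]["_items"]:
    print(gpu.get("sppci_model", "unknown model"))
```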

This may be a problem for a customised-PCB RX 580, but for the reference card I'm pretty sure it's just cosmetic.

In fact, from memory, the Radeon frame buffer always works. The "port disabled" issue only happens when someone flashes the card with an EFI that was built for another card.

However, if you hex edit the part number inside an RX 580's ROM to make it identify itself as an RX 580 inside macOS, then macOS may apply the wrong frame buffer to that particular card as well.

If a user leaves that R9 xxx cosmetic issue alone and doesn't try to do anything to "fix" it, then it should be nothing more than cosmetic.

For a non-reference RX 580, there is no guarantee it will work from the beginning. That has nothing to do with whether it's the Pulse or not.
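
Before anyone hex edits anything, it's easy to at least inspect a ROM dump read-only. The sketch below follows the standard PCI expansion-ROM layout (0x55AA signature, a 16-bit pointer at offset 0x18 to the "PCIR" structure, vendor/device IDs at PCIR+4/+6); `rx580.rom` is a hypothetical dump filename:

```python
import struct

# Read the PCI vendor/device ID out of a VBIOS dump without
# modifying anything, using the standard PCI expansion-ROM layout.
with open("rx580.rom", "rb") as f:   # hypothetical dump filename
    rom = f.read()

assert rom[0:2] == b"\x55\xaa", "not a PCI expansion ROM"
pcir_off = struct.unpack_from("<H", rom, 0x18)[0]
assert rom[pcir_off:pcir_off + 4] == b"PCIR", "PCIR structure missing"
vendor, device = struct.unpack_from("<HH", rom, pcir_off + 4)
print(f"vendor 0x{vendor:04x}, device 0x{device:04x}")  # AMD is 0x1002
```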
 
  • Like
Reactions: Synchro3
Is the Pulse a two-slot card? I just got a Nitro+ from Amazon, and it covers PCIe slot 2. So it's a bit pointless, as I don't want to lose a precious PCIe slot! I'll probably return it soon, after a couple more tests.
Have to say, I'm a bit disappointed by the performance of the card. It scores the same in Resolve as my R9 280X with the rushes I have been transcoding for a current editing project (peaking at 35fps where my 980Ti does 60fps).
So it looks like an OK card if you get it for 250€/$. Definitely not worth what the sellers wanted during the mining craze.
 
Is the Pulse a two-slot card? I just got a Nitro+ from Amazon, and it covers PCIe slot 2. So it's a bit pointless, as I don't want to lose a precious PCIe slot! I'll probably return it soon, after a couple more tests.
Have to say, I'm a bit disappointed by the performance of the card. It scores the same in Resolve as my R9 280X with the rushes I have been transcoding for a current editing project (peaking at 35fps where my 980Ti does 60fps).
So it looks like an OK card if you get it for 250€/$. Definitely not worth what the sellers wanted during the mining craze.

Doesn't DaVinci Resolve make a lot of use of CUDA cores?
 
It does, but I kept reading here and there how amazing this card was... I'll keep it for a few more days - I can deal with one less PCIe slot for now, but I'll need it back soon. But frankly, I don't know why Apple keeps insisting on working with AMD when clearly Nvidia has the upper hand.
 
It does, but I kept reading here and there how amazing this card was... I'll keep it for a few more days - I can deal with one less PCIe slot for now, but I'll need it back soon. But frankly, I don't know why Apple keeps insisting on working with AMD when clearly Nvidia has the upper hand.

Is there an OpenCL function you can use? Would that be better with an AMD card? The RX 580 is double-slot, but I wonder if the fans are making it thicker than other cards.
Just looking at some pictures, the blower-cooler single-fan reference RX 580 appears to have a normal double-slot plate. The double-fan Pulse and Nitro versions appear to have a thicker plate that may actually be thicker than a normal double slot. Without one in person it's hard to tell.
 
What makes good use of OpenCL? I ran some benches today and the RX 580 is not massively faster than the R9 280X. It does decently in OpenCL, which would explain why Valley and Cinebench got the highest scores, better than the 1080Ti.
But in the real world, I don't know where I would benefit from it. I don't use FCPX, and in Resolve / Premiere, any Nvidia GPU has the upper hand.
 
What makes good use of OpenCL? I ran some benches today and the RX 580 is not massively faster than the R9 280X. It does decently in OpenCL, which would explain why Valley and Cinebench got the highest scores, better than the 1080Ti.
But in the real world, I don't know where I would benefit from it. I don't use FCPX, and in Resolve / Premiere, any Nvidia GPU has the upper hand.

Why not stick with Nvidia?
 
I'm running an MSI Gaming X RX 580 8GB, powered with a 2x 6-pin to 8-pin converter, and that seems to work fine, so you definitely don't *need* the Sapphire. It may not be optimal, but the MSI works perfectly well out of the box (although the fan is a little louder than I'd like).
 
What makes good use of OpenCL? I ran some benches today and the RX 580 is not massively faster than the R9 280X. It does decently in OpenCL, which would explain why Valley and Cinebench got the highest scores, better than the 1080Ti.
But in the real world, I don't know where I would benefit from it. I don't use FCPX, and in Resolve / Premiere, any Nvidia GPU has the upper hand.

Valley is an OpenGL benchmark, and it's hard to believe that an RX 580 can do better than a 1080Ti.

Cinebench is a well-known benchmark that is limited by CPU single-thread performance. Any modern GPU will hit that limit straight away, so it can only measure "CPU single-thread speed minus driver overhead". Since the AMD driver has less overhead in macOS, almost all AMD GPUs will perform better than Nvidia GPUs in that Cinebench OpenGL test; an RX 460 can outperform a Titan X.

IMO, for OpenCL, LuxMark v3 is a very decent tool to measure relative performance.

The RX 580 should be quite a bit faster. However, the 280X optimisation works so well in macOS (most likely because of the nMP) that it can beat stronger GPUs on particular occasions. In fact, almost all HD 7000-series GPUs benefit from it, e.g. in BruceX my HD 7950 can beat my 1080Ti. Of course, that doesn't mean the 7950 is really faster. Look at the power draw: the 7950 can draw up to around 100W during FCPX rendering (about 140W under normal full load), but the 1080Ti only draws ~50W (out of 250W); the software simply can't utilise the hardware at all.
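
As a quick sanity check before comparing OpenCL numbers, it can help to confirm what macOS actually exposes. A small sketch using the third-party `pyopencl` package (an assumption that it installs cleanly on your setup; LuxMark's own device list shows the same information):

```python
import pyopencl as cl  # third-party: pip install pyopencl

# Enumerate every OpenCL platform/device the OS exposes, with the
# figures that bound compute throughput: compute units and clock.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{dev.name}: {dev.max_compute_units} CUs "
              f"@ {dev.max_clock_frequency} MHz, "
              f"{dev.global_mem_size // 2**20} MB")
```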
 
Why not stick with Nvidia?
I love my Nvidia GTX 980Ti, but it's giving me a hard time with Avid Media Composer. Since then, though, I noticed that I only get those glitches if I do full-screen monitoring with the LEFT monitor. I can swap monitors around physically, but the one on the left side of the interface will always have those glitches. How weird...

Also, I love the idea of a card with built-in drivers; that's in line with the Mac being a Mac - simplicity, not much tuning required, it just works. Which is why I am/was a big fan of the R9 280X: once flashed, it was just like a stock Mac.



Valley surprised me too, but here it is...

RX 580:
[ValleyRX580.png]

GTX 1080Ti:
[ValleyGTX1080Ti.png]

I know a benchmark is not real life, but in my case it's a good comparison tool as it's the exact same machine, and the same OS.

Here is Cinebench, which, I agree, is not really relevant these days.

RX 580:
[CinebenchRX580.png]

GTX 1080 (NOT the Ti, I forgot to bench that one):
[CinebenchGTX1080.png]

And the R9 280X, far from ridiculous:
[CinebenchR9280X.png]

I love LuxMark too, and again, same machine, the Mic render:

RX 580:
[MicRX580.png]

Again, the R9 280X is not ridiculous:
[MicR9280X.png]

My fav', the GTX 980Ti, pretty decent:
[MicGTX980.png]

Of course the GTX 1080Ti blows them away, nearly double the R9 280X:
[MicGTX1080Ti.png]

But as I said in another thread, real life is different from benches, and the GTX 1080Ti doesn't perform much better than the 980Ti when I use it in DaVinci Resolve - the bottleneck probably becomes the hard drives. On the specific type of video files I'm dealing with at the moment, transcoding to proxy files runs at 60fps with the 980Ti and 62fps with the GTX 1080Ti, but drops to 35fps with the RX 580, marginally faster than the R9 280X - the files are 23.976fps, and I have 300 hours to convert...
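
For a rough sense of what those rates mean for the 300-hour job, a back-of-the-envelope estimate (a sketch, assuming the quoted throughput holds constant across the whole batch):

```python
# Rough wall-clock estimate for transcoding 300 hours of 23.976fps
# footage at the proxy-render rates quoted above (assumed constant).
footage_hours = 300
source_fps = 23.976

for card, render_fps in [("GTX 980Ti", 60), ("GTX 1080Ti", 62),
                         ("RX 580", 35)]:
    wall_hours = footage_hours * source_fps / render_fps
    print(f"{card}: ~{wall_hours:.0f} hours")
# -> GTX 980Ti ~120 h, GTX 1080Ti ~116 h, RX 580 ~206 h
```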
 

I suspect it's because you selected a relatively low resolution, and only medium quality, in Valley. So the benchmark is again measuring CPU single-thread performance, and any time that happens, an AMD card will beat an Nvidia card in macOS.

In fact, your 1080Ti's result is almost identical to my 1080Ti's under the ExtremeHD preset. So it's a sign that the benchmark is no longer GPU-limited.
[Valley Extreme.jpg]

Maybe you can try the ExtremeHD preset, or even run at a "super resolution" (e.g. 4K), up to the point that the 1080Ti's performance drops. Then you should be back to GPU-limited territory, which also means you are benchmarking the GPU and not something else.

Anyway, I did some reading about DaVinci Resolve, and I can't be sure it can really utilise the CUDA / video engine of the 1080Ti in macOS. If it's just using the GPU's OpenCL ability, not fully utilising it, or being further limited by something else when you hit ~60fps, then that could explain your situation.
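
To picture the suggestion: think of the frame rate as the minimum of a resolution-independent CPU/driver ceiling and a GPU throughput that falls with pixel count; raising the resolution until fps starts dropping is what pushes the test back into the GPU-limited regime. A toy illustration with made-up numbers:

```python
# Toy model: fps is capped by whichever is slower, the CPU-side
# ceiling (resolution-independent) or the GPU (scales with pixels).
CPU_CEILING_FPS = 90          # driver/CPU single-thread cap (made up)
GPU_PIXELS_PER_SEC = 300e6    # GPU shading throughput (made up)

for w, h in [(1280, 720), (1920, 1080), (3840, 2160)]:
    gpu_fps = GPU_PIXELS_PER_SEC / (w * h)
    regime = "GPU-limited" if gpu_fps < CPU_CEILING_FPS else "CPU-limited"
    print(f"{w}x{h}: {min(CPU_CEILING_FPS, gpu_fps):.0f} fps ({regime})")
```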
 
Maybe you can try the ExtremeHD preset, or even run at a "super resolution" (e.g. 4K), up to the point that the 1080Ti's performance drops. Then you should be back to GPU-limited territory, which also means you are benchmarking the GPU and not something else.
That's interesting, I'll try this when I get a chance!

Anyway, I did some reading about DaVinci Resolve, and I can't be sure it can really utilise the CUDA / video engine of the 1080Ti in macOS. If it's just using the GPU's OpenCL ability, not fully utilising it, or being further limited by something else when you hit ~60fps, then that could explain your situation.
Where did you see that? I have the full version of DaVinci Resolve, not the crippled version they sell on the App Store which can't do CUDA. I think it's more a hard-drive limitation at this point, because if I check the drives during the video conversion, they are at their maximum throughput.
I'm still interested in getting a GTX 1080Ti for real, and for future-proofing this Mac Pro. Actually, I read your signature link and got some good info about the model/brand to get! Thanks for that!
 
Maybe you can try the ExtremeHD preset, or even run at a "super resolution" (e.g. 4K)
I remember MVC saying exactly the same thing: let it "breathe" at higher resolutions and you'll see the power of it.
the software simply can't utilise the hardware at all.
And that was the most disappointing thing when I tested a GTX 980 in FCPX: the software just won't use it.

I want to go back to the RX series just for a moment.
The only differences between the RX 480 and 580 are clocks and revision (C7 vs E7). Any reason not to get a 480 instead?
I mean, the device ID is the same, so there shouldn't be any problem with it, right?
 
That’s rather annoying. I ordered my card on June 15th from Amazon UK and it still hasn’t shipped! WTH
Out of the blue, I got a despatch email from Amazon today (£102)... wow.
I will keep this one as a backup, as I gave up waiting for Amazon and paid three times that price from another supplier.
But it balances out with this one being so cheap.
 
I remember MVC saying exactly the same thing: let it "breathe" at higher resolutions and you'll see the power of it.

And that was the most disappointing thing when I tested a GTX 980 in FCPX: the software just won't use it.

I want to go back to the RX series just for a moment.
The only differences between the RX 480 and 580 are clocks and revision (C7 vs E7). Any reason not to get a 480 instead?
I mean, the device ID is the same, so there shouldn't be any problem with it, right?

The main reason to avoid the RX 480 is that it can draw more than 75W from the PCIe slot.

On the Windows side, this issue was fixed by a driver update. However, that kind of fix is not available in macOS, which means the "bug" may still be there. I personally believe the cMP is well built with some buffer, but if the RX 580 is basically the same thing and can stay within the limit, then why not go for the RX 580?

The only time I would go for the RX 480 is if I wanted to run dual cards. That's when I'd want the card to draw the max from the slot, and relieve the mini 6-pins a bit.

But TBH, unless our daily job is running Furmark, I don't think the RX 480 will constantly draw that much from the slot. Especially if you use it for FCPX etc., the load is pretty low (relatively speaking).
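
To put rough numbers on that budget, a small sketch (assumptions: the 75W figures are the PCIe-spec limits per source, and the ~80W slot draw for the launch RX 480 is the widely reported reviewer measurement; the exact split varies by card and load):

```python
# PCIe power-budget sketch for a cMP GPU (illustrative numbers).
# Spec limits: 75 W from the x16 slot, 75 W per mini 6-pin cable.
SLOT_LIMIT_W = 75
MINI_6PIN_LIMIT_W = 75

def headroom(slot_draw_w, cable_draw_w):
    """Watts of margin left on the slot and on the booster cable."""
    return SLOT_LIMIT_W - slot_draw_w, MINI_6PIN_LIMIT_W - cable_draw_w

# Launch RX 480: roughly 80 W slot draw under sustained load
# (reviewer measurements; illustrative), so the slot goes over spec:
print(headroom(80, 70))   # (-5, 5)  -> slot is 5 W over budget
# RX 580 boards shift more of the load to the external connector:
print(headroom(65, 85))   # (10, -10) -> slot is fine; a single mini
                          # 6-pin becomes the tight spot instead
```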
 
The main reason to avoid the RX 480 is that it can draw more than 75W from the PCIe slot.
Totally forgot about that, thanks for reminding me!
But yes, you're right: I only play games in Windows, and FCPX (the most GPU-demanding task I do in macOS) won't tax it that much.
There's no reason to choose a 480 instead of a 580, but I'm fishing for a GPU in the ads and don't know what will come up.
Prices for new ones are coming down slowly but are still waaaay overpriced, so I don't know what will appear, but I have to be fast when it does...
I mean, I'm very close to buying a GTX 1070 and switching to Resolve (your 1080Ti purchase doesn't help at all, haha).
unless our daily job is running Furmark
You know me too well, AKA I love it when you write this :D
 
The issue with DaVinci Resolve is where you buy it from: if you get it from Blackmagic Design it is full-featured, but if you buy it from the App Store it has limitations.

"DaVinci Resolve 14 App Store Limitations
The Mac App store version of DaVinci Resolve 14 Studio works with OpenCL only, does not support some external control panels, and may not be compatible with all 3rd party OpenFX or VST plugins. If you need these features, please purchase DaVinci Resolve 14 Studio from a Blackmagic Design reseller."

https://itunes.apple.com/us/app/davinci-resolve-studio/id900392332?mt=12
 