Thanks for sharing. :cool: If XFX is calling out Mac compatibility and including an 8-pin to 6-pin power converter, they may have inside info on macOS compatibility.


XFX must be targeting macOS users with the Radeon 5770 reference (mistake) in the published specs...
 
Maybe XFX realizes a little goodwill toward Mac owners goes a long way. If they're willing to come out and say “we support macOS”, then I'm inclined to give them a second look.

For whatever reason, Sapphire never mentions macOS compatibility, only Windows compatibility. Maybe they don't want people to get confused and think their MacBook Air can handle a Vega 64 without an eGPU? Maybe something is in the works, but Sapphire is always tight-lipped about it. I wouldn't count them out yet.
 
A single user guide covering both the RX 5700 and RX 5700 XT is now available from AMD:
https://www.amd.com/system/files/documents/radeon-rx-5700-series-quick-start-guide-en.pdf

The system requirements state:

"Minimum 600W (recommended 700W) PSU with up to an 8-pin + 6-pin PCI Express Auxiliary connectors. This PSU recommendation is only for one Radeon RX 5700 series GPU installed per system. Additional GPUs will require more capable power supplies."

"Windows® 10 or Linux® operating system (64-bit operating system is highly recommended)."


Looks like waiting for modified, lower-power versions from individual vendors would be the best plan, or else supplementing with an external PSU or some internal "hack" or modification. It's still likely best to use an Apple-recommended RX 580 for the vast majority of MP5,1 users.
 

According to Tom's Hardware's review, if we can balance the power draw between the 6-pin and 8-pin, each mini 6-pin only needs to deliver ~86W (average), which is good enough to stay below the shutdown threshold even under an artificial power-virus load. Even if we assume the extra 40W (peak) all goes into the 6+8-pin cables, each mini 6-pin still stays below the tested 120W shutdown protection limit.
[Chart: RX 5700 XT power draw per connector, from Tom's Hardware]


The 5700 is even better: the 8-pin only draws 75W during FurMark, so it should always stay below 75W in normal daily usage, even with the 8-pin connected directly to one of the mini 6-pins. And if we balance the power draw, there will be nothing to worry about.
[Chart: RX 5700 power draw per connector, from Tom's Hardware]
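To make the arithmetic above easier to re-check, here is a minimal Python sketch of the per-cable budget. The ~86W average, 40W peak, and 120W shutdown figures come from the post above; the even split between the two aux cables is an assumption.

```python
# Per-cable power budget for a 5700 XT on MP5,1 mini 6-pins (a sketch).
# Figures from the post above / Tom's Hardware; a 50/50 split is assumed.
AUX_AVG_TOTAL_W = 2 * 86.0   # balanced 6-pin + 8-pin average draw
PEAK_EXTRA_W    = 40.0       # worst case: all of the peak hits the aux cables
MINI6_LIMIT_W   = 120.0      # tested mini 6-pin shutdown threshold

per_cable_avg  = AUX_AVG_TOTAL_W / 2                    # ~86 W
per_cable_peak = (AUX_AVG_TOTAL_W + PEAK_EXTRA_W) / 2   # ~106 W

print(f"average per mini 6-pin: {per_cable_avg:.0f} W")
print(f"peak per mini 6-pin:    {per_cable_peak:.0f} W")
print("within limit" if per_cable_peak < MINI6_LIMIT_W else "OVER LIMIT")
```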
 
XFX has not published their user guide for the RX 5700 yet. They advertise Mac compatibility (through the Best Buy website) and include 8>6 and 6>4 power cable adapters in the box. Curious whether anything is specifically mentioned about that setup.
 
TBH I don't know if there would be any noticeable performance increase for me compared to my Pulse RX 580; the 5700 has the same 8GB and the same number of SPs.
Also, aren't they geared towards gaming?
 

I prefer to believe that the poor compute result comes from an immature driver.

This is from one of the review websites, and it seems to agree with the other reviews.
[Chart: LuxMark RX 5700 results from the review]


And these are my own results (10.14.6 beta, core at 1340MHz, VRAM at 2150MHz with timings patched, undervolted to 1000mV):
[Screenshot: LuxMark Hotel, 10.14.6, 1340MHz / 2150MHz]
[Screenshot: LuxBall, 1340MHz / 2150MHz at 1000mV]

The 5700 is weaker in the Hotel scene but way faster in the LuxBall test, which is strange. As you said, both GPUs have the same amount of VRAM, so the 5700 can't be bottlenecked by VRAM capacity and suddenly perform much worse in the more complex scene.

But the good performance in the LuxBall test shows the potential of this GPU in compute.

So hopefully it's just a driver issue, and the 5700 will be able to catch up to Vega's performance, but with a much lower power draw.
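For anyone who wants to poke at this locally, below is a rough sketch of the coherent, ALU-bound kind of OpenCL work that LuxBall rewards, as opposed to the more complex Hotel scene. It assumes pyopencl is installed; the kernel, loop count, and work size are arbitrary illustration values, not LuxMark's.

```python
# Rough ALU-bound OpenCL microbenchmark (a sketch, not LuxMark itself).
import time
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

SRC = """
__kernel void fma_loop(__global float *out) {
    // Seed from the work-item id so the compiler can't constant-fold the loop.
    float a = (float)get_global_id(0) * 1e-9f;
    float b = 0.5f;
    for (int i = 0; i < 4096; ++i) {   // 2 FMAs per iteration, values stay bounded
        a = fma(a, 0.9999f, 1e-4f);
        b = fma(b, 1.0001f, -1e-4f);
    }
    out[get_global_id(0)] = a + b;
}
"""
prog = cl.Program(ctx, SRC).build()

n = 1 << 20                                     # one million work-items
out = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, size=n * 4)

prog.fma_loop(queue, (n,), None, out).wait()    # warm-up / JIT build
t0 = time.time()
prog.fma_loop(queue, (n,), None, out).wait()
dt = time.time() - t0

# 2 FMAs x 2 flops each, 4096 iterations, per work-item
print(f"~{n * 4096 * 4 / dt / 1e12:.2f} TFLOPS sustained")
```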
 
Did you lower the memory frequency? I remember your thread where you used 2200MHz on the VRAM while still keeping it stable at 0.95V, and a 1340MHz core at 1000mV.
 

Yes, the VRAM can run at 2250MHz, but since I can't really monitor the VRAM/VRM parameters in macOS, I prefer to lower the VRAM clock a bit, to 2150MHz, for daily use now (to reduce hardware stress). If I keep the VRAM clock at 2250MHz, I can achieve 18000 in the LuxBall test.
[Screenshot: LuxBall at 1340MHz core / 2250MHz VRAM]
 
No, the poor compute result isn't down to an immature driver: RDNA is a pure gaming architecture, similar to GeForce.

If you want compute, buy GCN.


Now Navi looks more interesting than it did initially, at least on PC.
 
Then why does it perform so well in LuxBall?
It must be doing different kinds of calculations.

It's still worse than Vega for the price.
"AMD Radeon RX 5700 Series With First Gen Navi Have a Hybrid RDNA and GCN Architecture"
https://wccftech.com/amd-radeon-rx-5000-7nm-navi-gpu-rdna-and-gcn-hybrid-architecture/
No, the microarchitecture is RDNA. The instruction set is GCN.

Of course a new microarchitecture can keep elements of the previous one.
 

It's a newer card with much lower power consumption; being more expensive is expected.

But that's not the original direction of the discussion. We were debating whether RDNA is really that bad at compute, or whether it's currently limited by the driver/software, not the performance-to-price ratio.

If LuxBall isn't really measuring compute performance, then fine, we can say that result isn't valid. But I don't think that's the case.

If LuxBall does represent compute performance in some way, and it computes in a different way that the RX 5700 handles well, that means the RX 5700 has that compute power; the software engineers just need to write the software properly.
 
It's not a programming-quality thing. These cards are likely good at FP16 and FP32 but bad at FP64, for example (the traditional GeForce-style gimping).
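If anyone wants to check what their own card's driver exposes, here is a small sketch (assuming pyopencl) that asks each OpenCL device whether FP64 is available at all. Note this only reports support, not the 1:16 vs 1:32 rate; the rate would have to be benchmarked.

```python
# List each OpenCL device's advertised FP64 support (a sketch; needs pyopencl).
# An empty double_fp_config / missing cl_khr_fp64 means no native FP64.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        has_fp64 = "cl_khr_fp64" in dev.extensions
        print(f"{dev.name.strip()}: cl_khr_fp64={has_fp64}, "
              f"double_fp_config={dev.double_fp_config}")
```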
 

Good, thanks for sharing! Maybe you should update the thread with this latest setting.

I have an RX 480 which I'm going to VBIOS-mod to a 580 and apply your settings. I might go for 2150MHz just to be on the safe side.
 

You have to test that before modding the VRAM clock. Do NOT just copy that figure.

My card can run at 2250MHz, but some other members' cards can't do anything faster than the default 2000MHz. Even with just an extra 50MHz, the card crashes.

My post there showed how far I could push things with the high-performance mod setting, not my daily-use setting.

In fact, a few weeks back I didn't need that much compute power, so I was using the WX7100 setting for daily use.

But these past few weeks I've been doing lots of hwaccel tests and often push the card to its limit. So I wanted a higher-performance setting to test stability, but I didn't want to go all the way to 2250MHz because I can't really monitor the VRAM/VRM parameters.

Anyway, for this topic, we'd better go back to the RX 580 VBIOS study thread.
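For anyone tempted to copy VRAM figures anyway, a crude write/read-back pattern test is one way to smoke-test a clock bump before trusting it. Below is a sketch assuming pyopencl and numpy; the buffer size and round count are arbitrary, and it only catches gross corruption, so it is no substitute for a proper stress tool.

```python
# Crude VRAM smoke test (a sketch): write a random pattern to a large GPU
# buffer, read it back, compare. Bit errors after a VRAM overclock show up
# as mismatches well before an outright crash.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

MB = 1024 * 1024
pattern = np.random.randint(0, 2**32, size=512 * MB // 4, dtype=np.uint32)
readback = np.empty_like(pattern)

buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=pattern.nbytes)

for rnd in range(10):                             # repeat to give errors a chance
    cl.enqueue_copy(queue, buf, pattern).wait()   # host -> VRAM
    cl.enqueue_copy(queue, readback, buf).wait()  # VRAM -> host
    bad = int(np.count_nonzero(readback != pattern))
    print(f"round {rnd}: {bad} mismatched words")
    if bad:
        break
```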
 
I'm waiting for https://www.pugetsystems.com/all_articles.php to give them a good workout.

For me, Resolve is my main interest:
https://www.pugetsystems.com/labs/a...ve-14-NVIDIA-GeForce-vs-AMD-Radeon-Vega-1213/
Nice to see the RX 580 is close to a GTX 1070,
and the Radeon VII is about the same as an RTX 2080 Ti:
https://www.pugetsystems.com/labs/articles/DaVinci-Resolve-15-AMD-Radeon-VII-16GB-Performance-1382/

And there will be lots of bugs, so it will need some time to even out; plus, if apps need to be optimized for the new cards, there may be a year's lag before that happens.

I can't see any real Apple support any time soon. It's only in 10.14 that I've seen reports of RX/Vega support really getting a lot better, so it may take ages.
 
I found these FP64 specs:

Vega II: 1:4
Navi: 1:16
Vega: 1:16
Polaris: 1:16
RTX: 1:32
GTX 10: 1:32

I have found errors in that database before, so I would not be surprised if Navi is actually 1:32.
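For context, here is what those ratios mean in absolute terms. The FP32 TFLOPS figures below are approximate public spec-sheet numbers I've assumed for one representative card per line, not measurements.

```python
# Rough FP64 throughput implied by the ratios above (illustrative numbers).
cards = [
    # (name, approx FP32 TFLOPS, FP64 divisor)
    ("Vega II / Radeon VII", 13.8, 4),
    ("Navi / RX 5700 XT",     9.8, 16),
    ("Vega 64",              12.7, 16),
    ("Polaris / RX 580",      6.2, 16),
    ("RTX 2080 Ti",          13.4, 32),
    ("GTX 1080 Ti",          11.3, 32),
]

for name, fp32, div in cards:
    print(f"{name}: {fp32:.1f} TFLOPS FP32 -> {fp32 / div:.2f} TFLOPS FP64")
```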
 