I think it is supposed to be natively supported in the next update, 10.12.6, so you'll have to wait for that release.

Apparently a bunch of drivers for a variety of AMD GPUs are in 10.13. Whether they are included in 10.12.6 remains to be seen.
 
...
P.S. To me it is absolutely bonkers that reviewers use SPECviewperf rather than other professional applications to test the GPU. Why?

SPEC benchmarks are based on real programs. They have typically been open/free programs, so there was no problem in getting them and running them through their paces. The ability to collect independent measurements is important. For example, gcc is a real program. Not everybody uses it, but it is real and has been used for SPECint. There is a mild compromise with SPECviewperf (https://www.spec.org/gwpg/gpc.static/vp12.1info.html): the "open/free" requirement has to be traded off against programs that cost thousands of dollars. viewperf is based upon traces of extremely real professional applications. For example,

"The catia-04 viewset was created from traces of the graphics workload generated by the CATIA V6 R2012 application from Dassault Systemes. Model sizes range from 5.1 to 21 million vertices. ... "
https://www.spec.org/gwpg/gpc.static/catia04.html

De facto, what you get is a version of the app that only renders one single model (or a narrow set of 'replays'). It is crippleware in that you can't run a random model with it. But for a benchmark, that 'do whatever you want' functionality is less than worthless in utility: for a benchmark, everyone has to do the same thing (to minimize differences). The app's publisher is directly involved in the creation of just about every one of these viewperf entries.
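To make the trace idea concrete, here is a minimal sketch of capture-and-replay benchmarking in Python. It is a toy illustration of the concept, not actual SPEC tooling; all the function names here are made up.

```python
import time

# Toy sketch of trace-and-replay benchmarking (the SPECviewperf approach).
# Everything here is illustrative; none of it is real SPEC code.

def capture_trace(app_calls):
    """Freeze the exact sequence of graphics calls the real app made."""
    return list(app_calls)

def replay(trace, draw):
    """Replay the frozen call sequence on the system under test and time it."""
    start = time.perf_counter()
    for call, args in trace:
        draw(call, args)  # same calls, same order, same data on every system
    return time.perf_counter() - start

# A frozen trace: every tester renders exactly this and nothing else.
trace = capture_trace([
    ("load_model", {"vertices": 5_100_000}),  # model size as in the catia-04 notes
    ("rotate",     {"deg": 90}),
    ("render",     {"frames": 100}),
])

elapsed = replay(trace, lambda call, args: None)  # stub standing in for a GPU backend
print(f"replayed {len(trace)} calls in {elapsed:.6f} s")
```

The 'do whatever you want' UI is gone; the only thing the benchmark build can do is reproduce the recorded workload, which is exactly what makes the results comparable across machines.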
 
If Ryzen is so great, why isn't Apple using it in the iMac Pro? The answer is that it's already been passed by Skylake-X. Once again AMD has to play the value card.
It has nothing to do with the situation I am talking about. Ryzen was described as a failure when it showed subpar performance in games. The problem is that the software was never designed for Ryzen, and AMD did not send Ryzen to developers before the release, so that its performance would not leak. Now, as the software matures, we are seeing huge improvements on this platform.

Why do you believe this will not be the same with Vega? Especially if the drivers are reporting new features but the software is not using them, because it was not designed with them in mind.

Crappy launches are not how to make "MONEYZ". When Nvidia releases a GPU, whether it's a Titan or an entry-level model, they try to keep rumors to a minimum, they announce it publicly, reviews go up around the same time pre-orders start, and then the card hits the shelves a week or two after that. They don't need their fanboys justifying why it doesn't perform very well, because they have been delivering the best performance for years now. That seems like a pretty good recipe for making "MONEYZ".
I see that compute performance is where it should be, especially considering this GPU has none of the professional workload optimizations that are in the Radeon Pro WX drivers. So where is the crappy launch?

The only gripe I have is with the drivers. AMD does not tie either gaming or professional drivers to it, so basically... you are on your own, and you are buying an unfinished product. Will the next driver release bring optimizations? Possibly only in gaming mode, once the new features are properly reported.

What's bonkers is that AMD didn't send out review samples of their first flagship card in two years, so we are stuck with one single review until everyone has time to do all their testing.
This is NOT a gaming card. AMD will send out review samples of RX Vega GPUs at the end of this month.

Two things should raise a red alert for everyone reading the reviews. First: AMD Vega does not exhibit tile-based rasterization in the tile-based rasterization test, despite that being a technique the GPU uses. What does this mean?
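For context on what such a test measures, here is a minimal sketch in Python of the idea behind tile-based rasterization detection. It is a toy simulation, not the actual test used in the reviews, and the tile size is an arbitrary assumption:

```python
# Toy simulation of how a tiling test distinguishes rasterizer behavior.
# An immediate-mode rasterizer emits all fragments of one triangle before
# starting the next; a tile-based rasterizer bins triangles first and then
# emits all fragments inside one screen tile before moving to the next tile.

TILE = 4  # assumed tile size in pixels, purely illustrative

def immediate_mode(triangles):
    order = []
    for tri_id, pixels in triangles:
        order.extend((tri_id, p) for p in pixels)  # triangle by triangle
    return order

def tile_based(triangles):
    bins = {}
    for tri_id, pixels in triangles:               # bin fragments per tile
        for x, y in pixels:
            bins.setdefault((x // TILE, y // TILE), []).append((tri_id, (x, y)))
    order = []
    for tile in sorted(bins):                      # then walk tile by tile
        order.extend(bins[tile])
    return order

# Two overlapping triangles, reduced to the pixels they cover.
tris = [(0, [(0, 0), (5, 0)]), (1, [(1, 0), (6, 0)])]
print(immediate_mode(tris))  # all of triangle 0, then all of triangle 1
print(tile_based(tris))      # fragments interleave within each tile
```

If the fragment order observed on real hardware looks like the first list, the binning rasterizer is not active; that is what the test reportedly showed on Vega FE.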

And lastly, let me show you what Ryan Smith from AnandTech posted on another forum:
https://forum.beyond3d.com/posts/1989183/

Oh it's totally valid to benchmark it. I just have to tread a little more carefully: as AnandTech, people often take what we say as the final word, and while I'm proud of the trust readers have in us, it means I need to be careful not to write a preview article and have 10 news aggregators posting "AnandTech proves that Vega sucks!" in the morning. :eek: Most readers get that a preview is a preview, but not everyone does.

The only thing you can do with the GPU right now is preview it, if it does not have final drivers and the software is not able to use the hardware's features because the architecture is so different.

The sole purpose of this release is to earn money, and to let developers get their hands on this architecture and optimize their software for it, because that will take some time, with learning and changing approaches.
 
....
What's bonkers is that AMD didn't send out review samples of their first flagship card in two years, so we are stuck with one single review until everyone has time to do all their testing.

You can generally get more honest reviews out of folks who independently bought the device under review than out of folks who whore themselves out to write reviews for whatever freebie they are given. The vast majority of the tech porn 'press' would only be doing low-hurdle, pandering gaming reviews anyway. If AMD is at this point primarily targeting >$1000 cards that folks run custom and semi-custom apps on, what's the point?

It is not that the Frontier Edition is a money maker. It is also not a money loser while AMD preps the real mainstream entry for later in the year. This current product isn't their "gaming" solution.
 
LuxMark, Hotel scene.

Fury X at about 1 GHz: around 3500 points

Vega Frontier at 50-60% higher clock speed and a higher TDP: around 4700 points

Looks about the same clock for clock.

http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/24

The best single cards are the 295x2 and the Titan Xp, which average around 5500.
Vega is running at 1.4 GHz, so that is only a 33% higher clock speed for 50% higher scores.

Also check other compute applications, in which Vega is almost twice as fast as Fiji.
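A quick way to sanity-check the clock-for-clock argument is to normalize each score by clock speed. This is a back-of-the-envelope sketch using the round numbers quoted in this thread; actual sustained clocks vary:

```python
# Clock-normalized LuxMark comparison, using the rough figures above.
# Clocks are nominal round numbers from the thread, not measured values.
cards = {
    "Fury X":        {"score": 3500, "clock_ghz": 1.05},
    "Vega Frontier": {"score": 4700, "clock_ghz": 1.40},
}

for name, c in cards.items():
    per_ghz = c["score"] / c["clock_ghz"]
    print(f"{name}: {per_ghz:.0f} points per GHz")
# Fury X:        ~3333 points per GHz
# Vega Frontier: ~3357 points per GHz
```

On these particular figures the two come out nearly identical per clock, so the per-clock case for Vega rests on the other compute applications, where the gap is larger.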
 
I'll check but doubt those scores.

Then we have the Radeon Pro Duo, which would fly past any of the cards above in compute and can be found 30% cheaper than a Frontier at about the same power consumption.
 
It has nothing to do with the situation I am talking about. Ryzen was described as a failure when it showed subpar performance in games. The problem is that the software was never designed for Ryzen, and AMD did not send Ryzen to developers before the release, so that its performance would not leak. Now, as the software matures, we are seeing huge improvements on this platform.

Huge improvements? In what, some of the newer AMD-sponsored games? There hasn't been some magical leap in performance since release except in a few games; Kaby Lake is still running all over Ryzen. But that shouldn't really come as a surprise, and personally I couldn't care less about the gaming performance. It's good enough, and you gain a lot more in multitasking capability that would cost you 2-3x+ from Intel.

That said, the nonsense of "it will improve over time" needs to stop with AMD. When I buy a product today, I want it to perform at its best. If I have to wait 6+ months to get the best performance out of it, why buy it now? I might as well just get the rival's product, which works perfectly from day 1. Stop making excuses for AMD's laziness with their drivers.
 
Huge improvements? In what, some of the newer AMD-sponsored games? There hasn't been some magical leap in performance since release except in a few games; Kaby Lake is still running all over Ryzen. But that shouldn't really come as a surprise, and personally I couldn't care less about the gaming performance. It's good enough, and you gain a lot more in multitasking capability that would cost you 2-3x+ from Intel.

That said, the nonsense of "it will improve over time" needs to stop with AMD. When I buy a product today, I want it to perform at its best. If I have to wait 6+ months to get the best performance out of it, why buy it now? I might as well just get the rival's product, which works perfectly from day 1. Stop making excuses for AMD's laziness with their drivers.
Have you been paying attention to the improvements in Ryzen?

Yes, it is getting better over time as the software matures. And no, you cannot get sponsored titles for CPU performance.

Otherwise we could call every single game out there an Intel-sponsored title, because those are the CPUs that 99% of games are optimized for, since AMD was irrelevant for years.

P.S. How does Ryzen compare in gaming to Broadwell-E and Haswell-E, which it should compete with?
I'll check but doubt those scores.

Then we have the Radeon Pro Duo, which would fly past any of the cards above in compute and can be found 30% cheaper than a Frontier at about the same power consumption.
I'll give you an example.

According to AMD, the Raven Ridge GPU is 40% faster than the 7th-generation APU's GPU, in a 50% lower thermal envelope.

Which means they are comparing a 35 W Raven Ridge APU with the 65 W Bristol Ridge A12-9800.

The A12-9800 has 512 GCN cores at a 1108 MHz core clock.
Raven Ridge has 704 GCN cores at an 800 MHz core clock.

Which actually means it has lower peak compute performance than Bristol Ridge. So what are they touting as an increase in performance?

Per clock, Vega should be around 30% faster than previous generations of GCN GPUs.
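The arithmetic behind that comparison: peak FP32 throughput for a GCN GPU is shader count × 2 ops per clock (one FMA counts as 2 ops) × clock speed. Plugging in the two APUs above:

```python
# Peak FP32 throughput for a GCN GPU: cores * 2 (one FMA = 2 ops) * clock.
def gcn_peak_gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000.0

print(gcn_peak_gflops(512, 1108))  # Bristol Ridge A12-9800: ~1134.6 GFLOPS
print(gcn_peak_gflops(704, 800))   # Raven Ridge:            ~1126.4 GFLOPS
```

So on paper the Raven Ridge GPU is marginally slower in raw throughput, and any real-world gain would have to come from architectural efficiency rather than peak FLOPS.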
 
Per clock, Vega should be around 30% faster than previous generations of GCN GPUs.

I'm only thinking about cost for performance. I just need to rationalise why a Frontier costs so much for so little benefit when you can find slightly older and cheaper hardware that beats it. If it is supposed to be the 'Frontier', I expect it to be untouchable.
 
The sole purpose of this release is to earn money, and to let developers get their hands on this architecture and optimize their software for it, because that will take some time, with learning and changing approaches.

I was thinking this was just a beta-testing release for developers. "Frontier Edition" does sound like something of that sort... for pioneers only. Not for the rich kids who'd buy it just because it is expensive.
 
Some mining news I missed.

A new build of Ethminer came out last week. This one has some 'debug code' removed. It has more CUDA optimisations, especially targeting the GTX 1060.

I have two mining stations - one Nvidia and one AMD. I have only been running them on the new build for two hours, but I have just hit a new megahash record.

Dual GTX 1070 - this one has just hit 75 MH/s peak, system power draw 320 W.

Radeon 295x2 + RX 580 - this one just hit 95 MH/s peak, system power draw 550 W.

I will need to run for over 24 hours to see whether the average hash rate has improved. The CUDA improvements being reported on three forums suggest there will be a rush for GeForce cards.
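For what it's worth, hash-per-watt from those peak numbers (system power at the wall, so CPU and platform overhead is included in both figures):

```python
# Efficiency of the two rigs from the peak numbers reported above.
rigs = {
    "2x GTX 1070":    (75, 320),   # MH/s, watts at the wall
    "295x2 + RX 580": (95, 550),
}

for name, (mhs, watts) in rigs.items():
    print(f"{name}: {mhs / watts:.3f} MH/s per watt")
# 2x GTX 1070:    ~0.234 MH/s per watt
# 295x2 + RX 580: ~0.173 MH/s per watt
```

On these peaks the Nvidia rig is roughly 35% more efficient per watt, which is consistent with the expected rush for GeForce cards.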
 
You can generally get more honest reviews out of folks who independently bought the device under review than out of folks who whore themselves out to write reviews for whatever freebie they are given. The vast majority of the tech porn 'press' would only be doing low-hurdle, pandering gaming reviews anyway. If AMD is at this point primarily targeting >$1000 cards that folks run custom and semi-custom apps on, what's the point?

It is not that the Frontier Edition is a money maker. It is also not a money loser while AMD preps the real mainstream entry for later in the year. This current product isn't their "gaming" solution.

On the flip side, if you give reviewers ample time to review a product, they have enough time to ask the manufacturer about any problems or performance issues they encounter during testing. Right now there are a lot of eager AMD fans who have been waiting for Vega, and all they have to look at is a couple of reviews thrown together as quickly as possible that show very poor performance. I have to imagine more than a few people saw those numbers, gave up waiting, and picked up an Nvidia card.

It has nothing to do with the situation I am talking about. Ryzen was described as a failure when it showed subpar performance in games. The problem is that the software was never designed for Ryzen, and AMD did not send Ryzen to developers before the release, so that its performance would not leak. Now, as the software matures, we are seeing huge improvements on this platform.

Why do you believe this will not be the same with Vega? Especially if the drivers are reporting new features but the software is not using them, because it was not designed with them in mind.

I see that compute performance is where it should be, especially considering this GPU has none of the professional workload optimizations that are in the Radeon Pro WX drivers. So where is the crappy launch?

The only gripe I have is with the drivers. AMD does not tie either gaming or professional drivers to it, so basically... you are on your own, and you are buying an unfinished product. Will the next driver release bring optimizations? Possibly only in gaming mode, once the new features are properly reported.


This is NOT a gaming card. AMD will send out review samples of RX Vega GPUs at the end of this month.

Two things should raise a red alert for everyone reading the reviews. First: AMD Vega does not exhibit tile-based rasterization in the tile-based rasterization test, despite that being a technique the GPU uses. What does this mean?

And lastly, let me show you what Ryan Smith from AnandTech posted on another forum:
https://forum.beyond3d.com/posts/1989183/



The only thing you can do with the GPU right now is preview it, if it does not have final drivers and the software is not able to use the hardware's features because the architecture is so different.

The sole purpose of this release is to earn money, and to let developers get their hands on this architecture and optimize their software for it, because that will take some time, with learning and changing approaches.

If the drivers are incomplete, it's very concerning, especially this late in the game. You can't just dismiss the abysmal performance as "it's not a gaming card", since it's aimed at content creators, which includes creating content for games and VR. Right now this card consumes double the power of a GTX 1080 with lower performance. Not to mention, if the drivers are incomplete now, how are they going to release a gaming card within a month?

If the drivers are complete, well, then AMD will have completely lost the high-end gaming market for the foreseeable future. If they have to sell the ~480 mm² Vega with exotic memory for less than a ~320 mm² GTX 1080, they are in a heap of trouble.
 
I'm only thinking about cost for performance. I just need to rationalise why a Frontier costs so much for so little benefit when you can find slightly older and cheaper hardware that beats it. If it is supposed to be the 'Frontier', I expect it to be untouchable.
What is cheaper? The P4000, which costs twice as much and offers professional GPU driver optimizations that bring a 10-30% performance increase?

Do not get me wrong.

I have one big gripe, which I actually do not understand about this release: will this GPU get the professional WX optimizations as well? There is an upcoming WX 9100 based on Vega, which will get them. But will this GPU get them too, or will you have to pay another $500-1000 for them?

If the drivers are incomplete, it's very concerning, especially this late in the game. You can't just dismiss the abysmal performance as "it's not a gaming card", since it's aimed at content creators, which includes creating content for games and VR. Right now this card consumes double the power of a GTX 1080 with lower performance. Not to mention, if the drivers are incomplete now, how are they going to release a gaming card within a month?

If the drivers are complete, well, then AMD will have completely lost the high-end gaming market for the foreseeable future. If they have to sell the ~480 mm² Vega with exotic memory for less than a ~320 mm² GTX 1080, they are in a heap of trouble.
The features that increase performance are not visible in software for two main possible reasons.

First: they are not enabled in the drivers.
Second: they are enabled in the drivers, but the software is not optimized to use those features.

There is a dev who tore down the drivers and found out that there are no Radeon Pro optimizations available.

As I have said, this GPU's purpose is to give devs a tool, one they have to pay for.
 
What is cheaper? The P4000, which costs twice as much and offers professional GPU driver optimizations that bring a 10-30% performance increase?

Do not get me wrong.

I have one big gripe, which I actually do not understand about this release: will this GPU get the professional WX optimizations as well? There is an upcoming WX 9100 based on Vega, which will get them. But will this GPU get them too, or will you have to pay another $500-1000 for them?

The features that increase performance are not visible in software for two main possible reasons.

First: they are not enabled in the drivers.
Second: they are enabled in the drivers, but the software is not optimized to use those features.

There is a dev who tore down the drivers and found out that there are no Radeon Pro optimizations available.

As I have said, this GPU's purpose is to give devs a tool, one they have to pay for.

The Frontier is pitched against the Titan range though, not the Quadro.

For that matter, I think workstation cards like the Quadro or FireGL are mostly a crock of **** and many companies have no idea that they would be fine with a prosumer-level card.
 
The Frontier is pitched against the Titan range though, not the Quadro.

For that matter, I think workstation cards like the Quadro or FireGL are mostly a crock of **** and many companies have no idea that they would be fine with a prosumer-level card.
The good thing is that, quite possibly, Apple will get Radeon Pro drivers for Vega regardless.
 
The Frontier is pitched against the Titan range though, not the Quadro.

For that matter, I think workstation cards like the Quadro or FireGL are mostly a crock of **** and many companies have no idea that they would be fine with a prosumer-level card.

From what I've gathered, the extra $2-4k you pay for Quadro/FireGL is for the signed drivers. I have no idea what that actually means, though, since I don't work in that industry.
 
From what I've gathered, the extra $2-4k you pay for Quadro/FireGL is for the signed drivers. I have no idea what that actually means, though, since I don't work in that industry.
Compare GTX 1080 performance with the Quadro P4000.

Both use the same chip, with the Quadro having a (slightly) lower core clock. And regardless, it gives higher compute performance in applications.
 
From what I've gathered, the extra $2-4k you pay for Quadro/FireGL is for the signed drivers. I have no idea what that actually means, though, since I don't work in that industry.
It used to mean better numerical accuracy, error checking, an enhanced Z-buffer, and more texture memory. But I sometimes see these cards being fitted in video editing machines or mid-level CG modelling setups where these extra features mean nothing.
 
Vega is already so incredibly late, and now you're basically saying, hey, just wait a few more months for AMD's terrible software team to catch up.
 
Vega is already so incredibly late, and now you're basically saying, hey, just wait a few more months for AMD's terrible software team to catch up.
We do not know what is happening.

In games, per clock, it is slower than Fiji.

In compute, per clock, it is faster than Fiji.

It is a brand-new architecture, after all.
 
We're talking Windows, of course. macOS-wise, I would only talk about natively supported cards. I got tired long ago of kext hacking, card switching, and rebooting.
 