And yet it barely matches or is outright beaten by a one-year-old GPU that costs less and consumes way less power...
In games? Maybe. And we still do not know whether this is a hardware or a software issue, as it was with Ryzen. What if Vega turns out to be a Volta competitor, not a Pascal one?

So far, in every other scenario, compute shows that it is most of the time faster than its direct competitor. A similar scenario to Ryzen: gaming - slower; compute - faster, or on par with the direct competitor. And it does not even have signed Radeon Pro drivers. Those are reserved for the WX 9100.
 
In games? Maybe. ... What if Vega turns out to be a Volta competitor, not a Pascal one?

This would only be possible if Volta were worse than Pascal, as Pascal is giving Vega quite a run even though it's nearly a year older.

So far, in every other scenario, compute shows that it is most of the time faster than its direct competitor. ...

Only if that direct competitor is the one-year-old GTX 1080. It is trading blows with the Titan Xp, but there is no clear winner between them. It pulls ahead in some tests and falls behind in others. It would have to receive a massive boost from new drivers, beyond what most analysts believe is possible, to really be ahead. And then the lack of CUDA, which is used in many content-creation applications, will come back and bite it in the yahoo...
 
The most concerning thing I have seen with Vega so far is its power efficiency. It can't even sustain its advertised 13 TFLOPS because it can't get close to its 1600 MHz clock rate; it's staying below 1500 MHz. If gaming Vega competes with the GTX 1080 but has power consumption higher than the 1080 Ti's, it's hard to see where the advantage is compared to the competition.
 
The most concerning thing I have seen with Vega so far is its power efficiency. ...

There's that playing against that architecture as well.
 
The most concerning thing I have seen with Vega so far is its power efficiency. ...

It's okay though, I'm sure the AMD fans will claim victory in a "core for core, clock for clock" or some other arbitrary (and meaningless) comparison.
 
This would only be possible if Volta were worse than Pascal, as Pascal is giving Vega quite a run even though it's nearly a year older.
Well, let's wait and see.
Only if that direct competitor is the one-year-old GTX 1080. ...
SPECviewperf - 9 tests, in 6 of which Vega is faster in the PCPer review. Luxmark - here the Titan Xp is faster.

Why do you deliberately mislead people with your uneducated opinions about the hardware? I thought you were a professional.

The GTX 1080 will never be faster in compute than Vega if we can already see that the RX 480 can be faster than the GTX 1070 in professional applications.

P.S. You do not need CUDA to get the best performance out of your application, especially when you have a compiler from CUDA to OpenCL to optimize for AMD GPUs. So it is a problem only in your mind.

P.S. Nobody cares about CUDA in the Apple ecosystem. You are using faulty logic here to prove your point about Vega being a waste of time and money.
The most concerning thing I have seen with Vega so far is its power efficiency. ...
Compute throughput is higher than Pascal GPUs'. So I would not worry about performance. It should be on par with the GP100 chip in pure compute throughput. All you need is software optimization.

As for gaming: let's wait and see what happens with the RX Vega release.
 
Well, let's wait and see. ... SPECviewperf - 9 tests, in 6 of which Vega is faster in the PCPer review. ... As for gaming: let's wait and see what happens with the RX Vega release.

Why are you attacking me right now? I'm only repeating what most reviewers are currently saying.

And why are you twisting what I've said and claiming that I said the GTX 1080 was faster at compute? Come on.

Again with the attacks and lies, this time about CUDA... The cross-compilers are quite limited and only support a subset of CUDA's capabilities. A tech pro like yourself would know this.

And yes, people care about CUDA, or the lack thereof, on Apple products. You would have to be blind not to see all the people asking for Nvidia hardware to come back.

The fact is that even though, on paper, Vega has greater compute capability, at this time it doesn't translate to real performance in the field, as it seems to struggle to even reach its targeted clock speed.
 
Why are you attacking me right now? ... The fact is that even though, on paper, Vega has greater compute capability, at this time it doesn't translate to real performance in the field...
Well, I look at the data from reviews, and Vega is faster than the Titan Xp in compute in most cases - 6 out of 9, to be precise, at least in SPECviewperf.

So why do you deliberately mislead people by saying that it merely trades blows with the Titan Xp?

Yes, people who are tied to the CUDA ecosystem care about it on Apple hardware. But nobody developing software will care about CUDA when there is no CUDA hardware available. So what is the point of CUDA as an argument here?

HIP compiles 99.96% of CUDA code automatically. You should know that. The remaining 0.04% of CUDA is those custom features you described, which require manual optimization.
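
For anyone wondering what that porting actually looks like, here is a minimal sketch. The saxpy kernel is a hypothetical example, not taken from any real project; hipify-perl, hipMalloc and hipLaunchKernelGGL are real parts of AMD's HIP toolchain. The point is that the kernel body itself usually compiles unchanged; only the runtime API calls are renamed.

[code]
// Original CUDA kernel -- hypothetical example.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // y = a*x + y
}

// After running hipify-perl over the host code, only the API calls change:
//   cudaMalloc(&d_x, bytes)                  -> hipMalloc(&d_x, bytes)
//   cudaMemcpy(..., cudaMemcpyHostToDevice)  -> hipMemcpy(..., hipMemcpyHostToDevice)
//   saxpy<<<blocks, threads>>>(n, a, d_x, d_y)
//     -> hipLaunchKernelGGL(saxpy, dim3(blocks), dim3(threads), 0, 0, n, a, d_x, d_y)
// The __global__ kernel body compiles as-is for AMD GPUs.
[/code]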

If on paper Vega has higher compute throughput than any consumer Pascal GPU, and is on par with the GP100 chip, but this is not reflected in all cases, whose problem is that? Hardware or software? And what will happen when the software is optimized for it?

First case: if you want to use FP16/Rapid Packed Math, you need to optimize your software for it. It does not work out of the box, unfortunately.
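
To make concrete what "optimize your software" means here, a minimal sketch: the kernel below is hypothetical, while __half2, __hmul2 and __hadd2 are the packed-FP16 intrinsics exposed by CUDA's and HIP's fp16 headers. On Vega, each packed operation maps to a single instruction working on two FP16 lanes at once (v_pk_mul_f16 / v_pk_add_f16), which is where the doubled FP16 rate comes from - plain FP32 code gets none of this for free.

[code]
#include <hip/hip_fp16.h>  // __half2 and packed-math intrinsics

// Hypothetical kernel: each thread applies y = a*x + b to TWO fp16 values
// at once. On Vega the packed intrinsics compile to single v_pk_* ops,
// doubling throughput vs. fp32 -- but only because the code was rewritten
// around the packed __half2 type.
__global__ void scale_bias_fp16(int n, __half2 a, __half2 b,
                                const __half2* x, __half2* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = __hadd2(__hmul2(a, x[i]), b);  // two lanes per op
}
[/code]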
 
Why are you misleading people by relying on only one test, SPECviewperf, while the majority of reviewers based their findings on multiple tests? Who's misleading whom here? It's as if you truly believe that your opinion is worth more than that of every person who, contrary to you, has actually used and tested the card. As always, you're taking AMD's stats as divine truth even when they're proven wrong by people actually using the damn card!
 
The GTX 1080 will never be faster in compute than Vega if we can already see that the RX 480 can be faster than the GTX 1070 in professional applications.

Citation needed. Vega running at 1440 MHz is very close compute-wise (9.6 TFLOPS) to a retail GTX 1080 at 1800 MHz (9.2 TFLOPS). I haven't seen any tests directly comparing the GTX 1080 and Vega FE, only tests that compare Vega FE to the GTX 1080 Ti, which it seems to trade blows with depending on the test.

I agree with tuxon86, you seem to attack anyone who disagrees with you.
 
The GTX 1080 will never be faster in compute than Vega...
[strike]Where can we buy this mythical "Vega" that you talk about. What are its clocks?[/strike]
Never mind - I see the Frontier Edition is available at Newegg now....

And what does "in compute" actually mean? Ridiculous theoretical GFLOPS, or measured performance on real, useful applications?

You're investing all of this effort in comparing unknown, unsold Vega cards to first generation Pascal cards - while carefully ignoring the latest Pascal cards and Volta.

And when someone points out better benchmarks for Nvidia, you fall back to "performance per watt" and "performance per dollar". Measurements that aren't that important for most of the hobbyists and pro-sumers here.

[Image: football goalposts on dollies - i.e., moving the goalposts]
 
We should ALL want Vega RX to succeed. It benefits everyone when competition happens: prices drop.

Right now, I feel like Moses in Hardware (1990), scavenging wastelands (Craigslist) for GPUs, because I don't want to pay $500 for one that is "of standard" for today's gaming. The 1070/1080 are breathtaking and maybe the best GPUs ever released in this era. That said, they've been out for over a year now, and they are still priced higher than MSRP.

We ALL NEED Vega RX to be successful. Even if you don't like AMD, it will BENEFIT you. Sign me up for team agnostic!
 
We should ALL want Vega RX to succeed. It benefits everyone when competition happens: prices drop. ... Sign me up for team agnostic!

I think we all want AMD and Vega to succeed, but any time we try to have an honest discussion about any GPU, we get jumped on by Koyoot. Clearly Apple has reasons to use AMD GPUs, whether they are performance, financial, or other technical reasons, so if we want to have the best Macs, we also want AMD to make great GPUs.
 
I think we all want AMD and Vega to succeed, but any time we try to have an honest discussion about any GPU, we get jumped on by Koyoot. ...
And we STILL want the option to use off the shelf GPUs from Nvidia if we choose.
 
Where can we buy this mythical "Vega" that you talk about. What are its clocks? ... And what does "in compute" actually mean? Ridiculous theoretical GFLOPS, or measured performance on real, useful applications? ...
Tuxon replied to my post saying that Vega is smoked by the GTX 1080. I replied that in games - maybe. But not in compute.

[Images: SPECviewperf results from the PCPer review (quadro-spec2/3/4)]


The PCPer review tested 9 situations in SPECviewperf, which indicates that compute performance, without optimized and signed drivers, is stronger than that of Nvidia GPUs. It appears that I am the one being attacked, by the usual suspects with uneducated opinions about hardware.

I read the other day that you believe you did a good job ordering GTX 1080 Tis. Yes, you did a good job ordering GTX 1080 Tis, because they are great GPUs.

However, if you judged that Vega is a failure compared to the GTX 1080 Ti based on gaming benchmarks, not on the compute benchmarks that were available - I would not want to be your professional partner.

The whole review is available here, for those unable to type it into Google - Vega Frontier Edition Review: https://www.pcper.com/reviews/Graph...B-Air-Cooled-Review/Professional-Testing-SPEC
Citation needed. Vega running at 1440 MHz is very close compute-wise (9.6 TFLOPS) to a retail GTX 1080 at 1800 MHz (9.2 TFLOPS). I haven't seen any tests directly comparing the GTX 1080 and Vega FE, only tests that compare Vega FE to the GTX 1080 Ti, which it seems to trade blows with depending on the test.

I agree with tuxon86, you seem to attack anyone who disagrees with you.
Well, first of all, if you do the math properly, you will find that Vega has 11.8 TFLOPS of compute power at 1440 MHz.

The Titan Xp's theoretical maximum is 12.76 TFLOPS, and yet it is still slower in most of the cases we have seen so far. There is more information in the upper part of this post.
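
For anyone who wants to check this arithmetic themselves: peak FP32 throughput is just shader count × 2 FLOPs per clock (one fused multiply-add) × clock rate. A quick sketch using the public shader counts and the clocks discussed in this thread (the Titan Xp figure assumes its typical observed boost clock rather than the spec-sheet 1582 MHz):

[code]
#include <cstdio>

// Peak FP32 throughput = shaders * 2 FLOPs/clock (one FMA) * clock rate.
// clock_mhz * 1e6 gives Hz; dividing by 1e12 yields TFLOPS, hence / 1e6.
double peak_tflops(int shaders, double clock_mhz) {
    return shaders * 2.0 * clock_mhz / 1.0e6;
}

int main() {
    std::printf("Vega FE  @ 1600 MHz: %.1f TFLOPS\n", peak_tflops(4096, 1600)); // ~13.1
    std::printf("Vega FE  @ 1440 MHz: %.1f TFLOPS\n", peak_tflops(4096, 1440)); // ~11.8
    std::printf("Titan Xp @ 1663 MHz: %.1f TFLOPS\n", peak_tflops(3840, 1663)); // ~12.8
    std::printf("GTX 1080 @ 1800 MHz: %.1f TFLOPS\n", peak_tflops(2560, 1800)); // ~9.2
}
[/code]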

You appear to be another one of those with uneducated opinions.
If you want a citation for the RX 480 being faster than the GTX 1070:

Why are you misleading people by relying on only one test, SPECviewperf, while the majority of reviewers based their findings on multiple tests? ...
Because the only other compute test reviewers ran was Luxmark Hotel 3.1, which tests theoretical TFLOPS performance. The 11.8 TFLOPS Vega GPU @ 1.44 GHz is slower in it than the 12.8 TFLOPS Titan Xp.
[Image: Luxmark Hotel 3.1 results from the PCPer review]

Do I have to point out what compute power the other GPUs have, and why they score so low compared to Vega and the Titan Xp in this test?

You are misleading, because I was writing about the compute throughput of the Vega architecture from the beginning, compared to previous years, and you jumped out of the cannabis field saying that Vega is smoked by the GTX 1080, because you felt that your beloved Nvidia was being attacked. So why do you deliberately mislead people by saying that it is smoked by the GTX 1080 in gaming, as if that were the only thing in the world that matters?

In games - maybe. But not in compute, if a more powerful GPU than the GTX 1080 - the Titan Xp - is slower than Vega.


One more thing, this time news from Facebook's AI guy:
https://www.reddit.com/r/MachineLea...leased_by_amd_deep_learning_software/djpfmu1/
For PyTorch, we're seriously looking into AMD's MIOpen/ROCm software stack to enable users who want to use AMD GPUs.

We have ports of PyTorch ready and we're already running and testing full networks (with some kinks that'll be resolved). I'll give an update when things are in good shape.

Thanks to AMD for doing ports of cutorch and cunn to ROCm to make our work easier.
 
Your post was specifically talking about DX12, and that is what I was replying to. As always, you moved the goalposts to compute and made a passing remark concerning Volta performance:

"In games? Maybe. And we still do not know whether this is a hardware or a software issue, as it was with Ryzen. What if Vega turns out to be a Volta competitor, not a Pascal one?" - Koyoot

Which would only be possible if there were some big regression in performance or power usage on Volta compared to Pascal, WHICH YOU CAN'T EVEN KNOW ABOUT SINCE IT ISN'T RELEASED YET!

And BTW, it isn't DX 12.1; it's feature level 12_1, which has existed since DX12 originally launched and was supported by Nvidia before the advent of Vega.


https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D#Direct3D_12
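
For reference, a minimal sketch of how an application checks for feature level 12_1 support. This relies on the documented behavior of D3D12CreateDevice: passing a null output pointer just tests whether device creation would succeed, without creating anything.

[code]
#include <d3d12.h>
#pragma comment(lib, "d3d12.lib")

// Returns true if the default adapter can create a D3D12 device at
// feature level 12_1. With a nullptr device output, D3D12CreateDevice
// performs the support check only (returning S_FALSE on success).
bool SupportsFeatureLevel12_1() {
    return SUCCEEDED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_1,
                                       __uuidof(ID3D12Device), nullptr));
}
[/code]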


And again, stop with the personal attack.
 
Your post was specifically talking about DX12, and that is what I was replying to. ... And again, stop with the personal attack.
It appears the argument occurred because of a misunderstanding...
Let me properly quote myself so that you can understand the context, because you either deliberately omitted that part or, in your outrage, forgot about it. The first post was about DX12.1 and the throughput of Vega's cores. The second one was about:

koyoot said:
In games? Maybe. And we still do not know whether this is a hardware or a software issue, as it was with Ryzen. What if Vega turns out to be a Volta competitor, not a Pascal one?

Vega has higher compute throughput per core than Fiji had, and higher graphics throughput than any AMD GCN GPU before it. Something is holding the GPU back. What is it? Who knows.

The compute throughput and the graphics throughput factor into the performance you get from the GPU, but they are highly reliant on software. Is it software that holds Vega back? We have to wait and see; hence my words in post #2131: "well, let's wait and see".

The compute throughput of the GPU is higher than Pascal's. I should have made my point about the broader picture and the analysis of GPU layouts clearer. That would have been far better.

Volta will bring an increase in performance per core if Nvidia uses the GP100 or GV100 chip architecture layout. If they stick with the consumer Pascal layout, it will not change anything per core.

But my words stand: what if Vega actually turns out to be a Volta competitor?


Two GPUs from the same company:
[Images: Vega FE vs. Fury X SPECviewperf comparison charts]

In compute, the GPU has higher throughput; in gaming, despite having much higher throughput per clock than Fiji, it loses.

It's hard to look at those per-clock and per-core comparisons and say: "Yes, I do not see anything wrong with that".

So again: what if it is not a hardware problem?

However, I do apologize for saying that you deliberately mislead people.
 
Koyoot, you are right that I miscalculated Vega's TFLOPS. It's also important to note that you only listed the charts that have AMD winning.

How many of those SPECviewperf tests are for apps that work on the Mac? Why do we care about DX12? Gamersnexus showed that the 1080 Ti smoked Vega in Blender. That seems relevant for a professional Mac user.

I think it's great that AMD is trying to catch up in the AI game, but Nvidia has been leading here for years. Almost every NN toolbox supports CUDA.
 
How many of those SPECviewperf tests are for apps that work on the Mac? Why do we care about DX12? Gamersnexus showed that the 1080 Ti smoked Vega in Blender. That seems relevant for a professional Mac user.
Erm...

https://twitter.com/themikepan/status/881339581525762048
https://developer.blender.org/rBeb293f59f2eb9847b8fd593ac2dde2781ac8ace1

Even they have said that their tests crashed because of a bug in Blender. Blender is still buggy on Vega and, even once that is fixed, it does not extract 100% of the performance from Vega.

SPECviewperf actually indicates the compute throughput of the GPU, not the actual performance.

What this means is that you can get similar results with software optimized for your hardware. And that is what is important for the Apple ecosystem, because Metal 2 is very well optimized for AMD hardware.

If you want to post results that show AMD GPUs losing in SPECviewperf or gaming, go ahead. I have posted the review.

P.S. You say you do not care about DX12, yet you all always judge GPUs based on gaming benchmarks.
 
And if it were competing with other AMD cards, those graphs would be great.

But.

[Images: SPECviewperf results from the PCPer review (quadro-spec1/5/7/9)]


Link: https://www.pcper.com/reviews/Graph...B-Air-Cooled-Review/Professional-Testing-SPEC

It's trading blows with the Titan Xp and even loses to a $550 Quadro P2000 in one test (last slide).
And since AMD made the statement that this card would be the best for game/content creators, since it has a game mode so you can create and test on the same machine, it's kind of ironic that it underperforms in this test in the two most common applications suited for that task, 3ds Max and Maya...

And it is doing so while burning way more power and costing just as much as the competition.
 
Regardless of who's on top, it makes no difference if they all end up out of stock and unobtainable again.
 
And if it were competing with other AMD cards, those graphs would be great. ... And it is doing so while burning way more power and costing just as much as the competition.
It appears that my question should stand...

Vega does not have signed, professional drivers like the Quadro/Radeon Pro Duo has.

How come, then, is the Quadro P5000 faster than a 12.78 TFLOPS GPU, despite having just over 9 TFLOPS of compute power?

DRIVERS. How can you not factor in something like this?


Why do you think it will not be faster with proper, professional, signed drivers?
What if the GPU is a real Volta competitor?

Vega, lacking professional, signed drivers, is faster than the Titan Xp - the GPU to which it was supposed to be compared - in 6 out of those 9 situations. Yet you claim it is trading blows with it. In what world is 6 out of 9 "trading blows"?

No, my friend. Vega is faster than the Titan Xp in SPECviewperf in most cases. The only GPUs that are faster have proper, signed, professional drivers that extract all of their capabilities, and cost up to 4 times more. I am having a hard time understanding how you can point out that in the last slide a $550 GPU is faster than Vega, yet completely fail to see that both of those GPUs are faster than the Titan Xp, and then say: "I see nothing wrong with that! Vega is a failure".

Back to the topic, and a quick recap of interesting technologies AMD has patented in the past that may relate to Vega:
https://forum.beyond3d.com/posts/1990969/
The first and last are very interesting...
 
How come, then, is the Quadro P5000 faster than a 12.78 TFLOPS GPU, despite having just over 9 TFLOPS of compute power?

DRIVERS. How can you not factor in something like this?

Or, as I've mentioned many times in the past, these (and plenty of other) tests are not solely limited by raw compute horsepower (i.e., TFLOPS). Many of those SPECviewperf tests render models with tens of millions of triangles and should be considered more of a graphics test than a raw compute test.
 