Where can we buy this mythical "Vega" that you talk about? What are its clocks?
And what does "in compute" actually mean? Ridiculous theoretical GFLOPS, or measured performance on real, useful applications?
You're investing all of this effort in comparing unknown, unsold Vega cards to first-generation Pascal cards, while carefully ignoring the latest Pascal cards and Volta.
And when someone points out better benchmarks for Nvidia, you fall back to "performance per watt" and "performance per dollar" - metrics that aren't that important to most of the hobbyists and prosumers here.
Tuxon replied to my post saying that Vega is smoked by the GTX 1080. I replied: in games, maybe. But not in compute.
PCPer's review tested nine SPECviewperf workloads, which indicate that Vega's compute performance, even without optimized and signed drivers, is stronger than Nvidia's GPUs. And yet it appears I am the one being attacked, by the usual suspects with uneducated opinions about hardware.
I read the other day that you believe you did a good job ordering the GTX 1080 Ti. Yes, you did a good job ordering GTX 1080 Tis, because they are great GPUs.
However, if you judged Vega a failure compared to the GTX 1080 Ti based on gaming benchmarks rather than on the compute benchmarks that were available, I would not want to be your professional partner.
The whole review is available here, for those unable to type it into Google: Vega Frontier Edition Review:
https://www.pcper.com/reviews/Graph...B-Air-Cooled-Review/Professional-Testing-SPEC
Citation needed. Vega running at 1440 MHz is very close compute-wise (9.6 TFLOPS) to a retail GTX 1080 at 1800 MHz (9.2 TFLOPS). I haven't seen any tests directly comparing the GTX 1080 and Vega FE - only tests that compare Vega FE to the GTX 1080 Ti, which it seems to trade blows with depending on the test.
I agree with tuxon86, you seem to attack anyone who disagrees with you.
Well, first of all, if you do the maths properly, you will find that Vega has 11.8 TFLOPs of compute power at 1440 MHz.
Titan Xp's theoretical max is 12.76 TFLOPs, and yet it is still slower in most cases from what we have seen so far. There is more information in the upper part of this post.
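For anyone wanting to check the numbers being argued over: theoretical peak FP32 throughput is just shader count × 2 FLOPs per clock (one fused multiply-add) × clock speed. A quick sketch - the shader counts are the publicly listed specs, and the clocks are the figures quoted in this thread, not guaranteed boost behaviour:

```python
# Theoretical peak FP32 throughput: each shader retires one FMA
# (counted as 2 floating-point ops) per clock cycle.
def peak_tflops(shaders: int, clock_ghz: float) -> float:
    # shaders * 2 ops * clock in GHz gives GFLOPS; divide for TFLOPS
    return shaders * 2 * clock_ghz / 1000.0

# Vega FE: 4096 stream processors at the 1440 MHz clock quoted above
print(round(peak_tflops(4096, 1.44), 1))  # -> 11.8
# GTX 1080: 2560 CUDA cores at ~1800 MHz boost
print(round(peak_tflops(2560, 1.80), 1))  # -> 9.2
```

The 9.6 vs 11.8 disagreement above comes down entirely to which shader count and clock you plug into this formula.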
You appear to be another one of those with uneducated opinions.
If you want a citation for the RX 480 being faster than the GTX 1070:
Why are you misleading people by relying on only one test, SPECviewperf, while the majority of reviewers based their findings on multiple tests? Who's misleading whom here? It's as if you truly believe your opinion is worth more than that of every person who, unlike you, has actually used and tested the card. As always, you take AMD's stats as divine truth even when they're proven wrong by people actually using the damn card!
Because the only other compute test reviewers ran was LuxMark Hotel 3.1, which measures theoretical TFLOPs performance. The 11.8 TFLOPs Vega GPU at 1.44 GHz is slower in it than the 12.8 TFLOPs Titan Xp.
Do I have to point out the compute power levels of the other GPUs, and why they score so low compared to Vega and Titan Xp in this test?
You are the one misleading, because from the beginning I was writing about the compute throughput of the Vega architecture compared to previous generations, and you jumped out of the cannabis field saying that Vega is smoked by the GTX 1080, because you felt that your beloved Nvidia was being attacked.
So why do you deliberately mislead people by saying it is smoked by the GTX 1080 in gaming, as if that were the only thing in the world that matters?
In games, maybe. But not in compute, if a more powerful GPU than the GTX 1080 - the Titan Xp - is slower than Vega.
One more thing, this time news from Facebook's AI guy:
https://www.reddit.com/r/MachineLea...leased_by_amd_deep_learning_software/djpfmu1/
For PyTorch, we're seriously looking into AMD's MIOpen/ROCm software stack to enable users who want to use AMD GPUs.
We have ports of PyTorch ready and we're already running and testing full networks (with some kinks that'll be resolved). I'll give an update when things are in good shape.
Thanks to AMD for doing ports of cutorch and cunn to ROCm to make our work easier.