Status
Not open for further replies.

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
LOLOLOLOLOLOL You have just proven me right and supported the argument I made. Reading comprehension.
IF an AMD GPU is bottlenecked by software, it will show lackluster performance. If software is not bottlenecking the AMD GPU, it will be on the same level as Nvidia. The GTX 980 Ti has 6 TFLOPs of compute power within a 250W TDP. The R9 390X has around 6 TFLOPs of compute power and a 250W TDP. The only one doing PR here is you. You don't even understand what is written in these posts.

I just proved that you don't tell anything close to the truth.

Look at the chart, it proves you haven't got a clue what the truth looks like.

"Who are you gonna believe, me or your lyin' eyes" doesn't work.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I just proved that you don't tell anything close to the truth.

Look at the chart, it proves you haven't got a clue what the truth looks like.

"Who are you gonna believe, me or your lyin' eyes" doesn't work.
Look again at my post after the edit. Sorry, but you have no clue what you are talking about. We were talking about a situation where an AMD GPU is bottlenecked by software, and in that scenario it performs much worse than normal. Then you came out of left field, completely ignored the factor of COMPUTE power per watt, showed gaming benchmarks where AMD GPUs are bottlenecked by software, and now you are happy that you have proven me wrong.

The only thing you have proven is that you have no clue what you are talking about.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
Better let Techpowerup know that they "tested it wrong".

From the conclusion:

The Radeon R9 290X has been notorious for its high power consumption and the R9 390X is no exception. MSI's R9 390X Gaming actually consumes much more power than the R9 290X, requiring around 350 W during typical gaming, with peaks at up to 370 W. The only card requiring more power is the R9 295X2. But not only gaming power consumption is high as multi-monitor and Blu-ray power consumption are increased too. Those two scenarios have been an issue on AMD cards for a long time, and things are even worse now. 98 W GPU power consumption just to playback a Blu-ray is simply insane. NVIDIA does the job with around 10 W, so there is no way a R9 390X should go into your media PC. Overall, the MSI R9 390X Gaming has one of the worst performance-per-watt ratings, worse than the R9 295X2. NVIDIA's GTX 980 is over twice as power efficient!
 
  • Like
Reactions: tuxon86

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
OK, I did not want to put it this way. DirectX 11, because it is a serial API, will always bottleneck AMD GPUs. What happens when we look at DirectX 12 benchmarks? A completely different story.

Let's compare a 180W GPU with a 180W GPU, both of which have around 4 TFLOPs of compute power. http://cdn.wccftech.com/wp-content/uploads/2016/03/Hitman-PC-DirectX-12-Benchmarks_3.jpg
Can you count the TFLOPs numbers? And the performance of both GPUs?

Now let's look at 250W GPUs, both with the same core count and similar TFLOPs performance. http://cdn.wccftech.com/wp-content/uploads/2016/03/Hitman-PC-DirectX-12-Benchmarks_2.jpg Reference GPUs.

How come a 4 TFLOPs, 180W GPU has almost exactly the same level of performance as a GPU with 4 TFLOPs of compute power and 180W of power draw?
How come a 6 TFLOPs, 250W GPU has similar performance to a 250W GPU with 6 TFLOPs of compute power?

Because the software is not bottlenecking it. You know what is even funnier? In the latest DX11 games the R9 290X is ALMOST on par with the GTX 980 Ti. How come? Because it has a similar level of raw performance, and the DX11 drivers are much better now. The same thing goes for the 4 TFLOPs R9 380X and the 4 TFLOPs GTX 980. Like here, for example: http://pic.yupoo.com/ztwss/Ftoyshb7/medish.jpg
http://pic.yupoo.com/ztwss/FtoysJQA/medish.jpg

Because of mindshare, and people like MVC, AMD is not considered comparable to Nvidia, even though their software has caught up. Most of the industry has caught up on the software side. That was the whole point of the beginning of my discussion with Zarniwoop.

But hey, MVC has a contract with Nvidia. He will always threadcrap about how awful AMD is, and how awful Apple is for using their hardware.
 
  • Like
Reactions: hollyhillbilly

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
OK, I did not want to put it this way. DirectX 11, because it is a serial API, will always bottleneck AMD GPUs. What happens when we look at DirectX 12 benchmarks? A completely different story.

Let's compare a 180W GPU with a 180W GPU, both of which have around 4 TFLOPs of compute power. http://cdn.wccftech.com/wp-content/uploads/2016/03/Hitman-PC-DirectX-12-Benchmarks_3.jpg
Can you count the TFLOPs numbers? And the performance of both GPUs?

Now let's look at 250W GPUs, both with the same core count and similar TFLOPs performance. http://cdn.wccftech.com/wp-content/uploads/2016/03/Hitman-PC-DirectX-12-Benchmarks_2.jpg Reference GPUs.

How come a 4 TFLOPs, 180W GPU has almost exactly the same level of performance as a GPU with 4 TFLOPs of compute power and 180W of power draw?
How come a 6 TFLOPs, 250W GPU has similar performance to a 250W GPU with 6 TFLOPs of compute power?

Because the software is not bottlenecking it. You know what is even funnier? In the latest DX11 games the R9 290X is ALMOST on par with the GTX 980 Ti. How come? Because it has a similar level of raw performance, and the DX11 drivers are much better now. The same thing goes for the 4 TFLOPs R9 380X and the 4 TFLOPs GTX 980. Like here, for example: http://pic.yupoo.com/ztwss/Ftoyshb7/medish.jpg
http://pic.yupoo.com/ztwss/FtoysJQA/medish.jpg

Because of mindshare, and people like MVC, AMD is not considered comparable to Nvidia, even though their software has caught up. Most of the industry has caught up on the software side. That was the whole point of the beginning of my discussion with Zarniwoop.

But hey, MVC has a contract with Nvidia. He will always threadcrap about how awful AMD is, and how awful Apple is for using their hardware.

Move the goalposts much?

We were talking about perf per watt.

Where AMD gets obliterated.

I understand it is hard to reach the screen and keyboard when you have a very long nose. (Pinocchio)

I have no contract with Nvidia, none at all. I am an Art Director by trade, taking TV classes again at UCLA since Apple is leaving the Pro space.

AMD being considered to sell hand warmers has nothing to do with me, has more to do with tests like the linked ones.

Try again.
 
  • Like
Reactions: tuxon86

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
And to sum all of this up, you know what is funniest? Nvidia needed a new node to catch up with AMD on compute performance per watt.

Fury Nano uses 180W and brings 7 TFLOPs of compute power (average 850 MHz core clock). That is around 39 GFLOPs/watt.

And only 2 GPUs (GP104-400, GP104-200) are better than Fury Nano in this regard, because even GP100 manages only 35.3 GFLOPs/watt.
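The GFLOPs/watt figures here are just spec-sheet TFLOPs divided by board power. A quick sketch of the arithmetic (the Fury Nano numbers are the ones quoted above; the 10.6 TFLOPs / 300W GP100 figures are my assumption, back-calculated from the 35.3 GFLOPs/watt claim, and all of these are theoretical peaks, not measured draw):

```python
# GFLOPs per watt = peak single-precision GFLOPs / board power in watts.
# Values are spec-sheet peaks as quoted in the thread, not measured draw.
cards = {
    "Fury Nano (28 nm)": (7000, 180),   # (GFLOPs SP, watts)
    "GP100 (16 nm)":     (10600, 300),  # assumed specs for the 35.3 figure
}

for name, (gflops, watts) in cards.items():
    print(f"{name}: {gflops / watts:.1f} GFLOPs/W")
```

Running this reproduces the ~39 and 35.3 GFLOPs/watt numbers being argued over.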

Move the goalposts much?

We were talking about perf per watt.

Where AMD gets obliterated.

I understand it is hard to reach the screen and keyboard when you have a very long nose. (Pinocchio)

I have no contract with Nvidia, none at all. I am an Art Director by trade, taking TV classes again at UCLA since Apple is leaving the Pro space.

AMD being considered to sell hand warmers has nothing to do with me, has more to do with tests like the linked ones.

Try again.
I'm guessing you are not able to do maths, are you? Why do you take an aftermarket, non-reference, overvolted GPU as the primary example of your point? I could also take an aftermarket GTX 980 Ti and use it as an example, but you know why I don't? Because, contrary to you, I want to see the GPUs as they are, not the way I want them to be.

That is why I will always compare reference GPUs to reference GPUs. Just like the whole industry does when it measures efficiency.
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
From this list we can see that Fury Nano is a lot better than it's been given credit for. The R9 390 series is not efficient, but at least it's the best card for double precision computing. For other purposes it's far from efficient. The GTX 980 Ti does a similarly bad job.

GeForce GTX 980, 4612 GFLOPs SP / 144 GFLOPs DP, TDP 165W
GeForce GTX 980 Ti, 5632 GFLOPs SP / 176 GFLOPs DP, TDP 250W
GeForce GTX Titan X, 6144 GFLOPs SP / 182 GFLOPs DP, TDP 250W

Radeon R9 390, 5120 GFLOPs SP / 640 GFLOPs DP, TDP 275W
Radeon R9 390X, 5913 GFLOPs SP / 739 GFLOPs DP, TDP 275W
Radeon R9 Nano, 8192 GFLOPs SP / 512 GFLOPs DP, TDP 175W
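To make the efficiency comparison in this list explicit, here is a quick sketch dividing the quoted spec-sheet GFLOPs by TDP. The numbers are exactly the ones listed above (theoretical peaks, not measured power draw, so take the ratios with the usual grain of salt):

```python
# Spec-sheet efficiency: GFLOPs (SP and DP) divided by TDP in watts.
# All values are the ones quoted in the post above.
specs = [
    # (name, SP GFLOPs, DP GFLOPs, TDP W)
    ("GTX 980",     4612, 144, 165),
    ("GTX 980 Ti",  5632, 176, 250),
    ("GTX Titan X", 6144, 182, 250),
    ("R9 390",      5120, 640, 275),
    ("R9 390X",     5913, 739, 275),
    ("R9 Nano",     8192, 512, 175),
]

for name, sp, dp, tdp in specs:
    print(f"{name:12s} SP: {sp / tdp:5.1f} GFLOPs/W   DP: {dp / tdp:4.2f} GFLOPs/W")
```

On these spec numbers the Nano leads everything in SP per watt (~46.8), while the 390/390X lead by a wide margin in DP per watt (~2.3 vs ~0.7 for the GTX 980 Ti), which is the point being made here.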
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Will Apple even consider NVIDIA cards in the nMP or do they have some sort of contract with AMD?
The Apple bean counters have a set of rules.


http://www.sjtrek.com/trek/rules/

If Nvidia can give Apple better margins and more profit than AMD, then it will be "Hello CUDA!" and Jen-Hsun Huang will be on stage for the MacWorld SF '16 keynote.
Fury Nano uses 180W and brings 7 TFLOPs of compute power (average 850 MHz core clock). That is around 39 GFLOPs/watt.

And only 2 GPUs (GP104-400, GP104-200) are better than Fury Nano in this regard, because even GP100 manages only 35.3 GFLOPs/watt.
When you're losing on GFLOPs, suddenly GFLOPs/watt is the important metric. (AKA "moving the goalposts")

Or maybe you're dealing with a poorly designed chassis with severe constraints on power and cooling, and getting the watts down is more important than getting the GFLOPs up.
 
Last edited:
  • Like
Reactions: tuxon86

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
When you're losing on GFLOPs, suddenly GFLOPs/watt is the important metric.
Maybe I have missed something, but it is Nvidia who caught up with AMD on the compute field. And yet they still needed a new node to do it. So either way, both in raw compute power and in compute power per watt on 28 nm, Nvidia was destroyed. Were you able to get 7 TFLOPs of compute power from 180W on 28 nm using Nvidia hardware?

Yes, I am trolling now a bit ;).
 
  • Like
Reactions: pat500000

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Let's compare a 180W GPU with a 180W GPU, both of which have around 4 TFLOPs of compute power. http://cdn.wccftech.com/wp-content/uploads/2016/03/Hitman-PC-DirectX-12-Benchmarks_3.jpg
Can you count the TFLOPs numbers? And the performance of both GPUs?

The only 180 W AMD GPU in this test is the 380X.

Now let's look at 250W GPUs, both with the same core count and similar TFLOPs performance. http://cdn.wccftech.com/wp-content/uploads/2016/03/Hitman-PC-DirectX-12-Benchmarks_2.jpg Reference GPUs.

I don't see any power consumption numbers here. MacVidCards and I already provided a link showing that the 390X is a 350 W card.

And to sum all of this up, you know what is funniest? Nvidia needed a new node to catch up with AMD on compute performance per watt.

Fury Nano uses 180W and brings 7 TFLOPs of compute power (average 850 MHz core clock). That is around 39 GFLOPs/watt.

This is ridiculous. At best the Fury Nano is roughly on par with the GTX 980 in performance per watt.

I am guessing Hawaii's terrible performance per watt is why we never saw it in the Mac Pro. Since the Nvidia 900 series and the AMD Fury cards are very much gaming-oriented, with little double precision compute, Apple never saw much benefit in replacing Tahiti.
 
  • Like
Reactions: tuxon86

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
GUYS, THOSE ARE GAMING BENCHMARKS, NOT COMPUTE PERFORMANCE PER WATT BENCHMARKS. FOR GOD'S SAKE, START READING!

Is the Nvidia reality distortion field so strong that people confuse gaming benchmarks (of DX11) with compute power per watt?

And no, the R9 390X Gaming is not a 250W GPU. It is the worst example of overvolting and overclocking a GPU on the AMD side and getting a spike in power consumption. No question why MVC used it. Stacc, I will give you the exact context.
Nope, because their reference blower sucked. Not air, it just sucked. It happened after the R9 290 and 290X.

The R9 390 and 390X did not have this problem. They used even less power (4W less, but it was a difference ;) ) than their predecessors despite having more RAM, higher core clocks and higher memory clocks.
http://tpucdn.com/reviews/Powercolor/R9_390_PCS_Plus/images/power_average.gif Compare the R9 390 reference to the R9 290 reference. And again: http://tpucdn.com/reviews/Powercolor/R9_390_PCS_Plus/images/power_peak.gif
Higher core clocks, higher memory clocks, twice the amount of RAM. Lower power consumption.
Performance per watt? It was on the same level as Nvidia. The R9 390X and GTX 980 Ti have a similar thermal envelope and compute performance. It was software that held back AMD GPUs in games, and from that came the Nvidia mindshare about performance per watt.

Look what happened lately when software caught up. The R9 380X is on the level of the GTX 980, the R9 390X is on the level of the GTX 980 Ti. And in some cases performance per watt was much better than Nvidia's offerings (Fiji, overall).

People are staggered when the R9 390X is much faster in OpenCL than the GTX 980. But it has to be! It has almost 6 TFLOPs of compute power, compared to 4.6. Again, mindshare played the key role in driving the perception of AMD GPUs.
Only Fiji outperforms Nvidia on performance per watt. You get the point?

P.S. The MSI R9 390X Gaming in this review draws 258W of power. http://www.guru3d.com/articles-pages/asus-radeon-r9-390x-strix-8g-review,8.html
The Asus Strix draws 286W. Explain this.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Maybe I have missed something, but it is Nvidia who caught up with AMD on the compute field. And yet they still needed a new node to do it. So either way, both in raw compute power and in compute power per watt on 28 nm, Nvidia was destroyed. Were you able to get 7 TFLOPs of compute power from 180W on 28 nm using Nvidia hardware?

Yes, I am trolling now a bit ;).
Trolling, and pointing the gun at your own feet and pulling the trigger. I accuse you of shifting to GFLOPs/watt because AMD is losing on GFLOPs - and you respond with a GFLOPs/watt question.

MP6,1 fans have blinders on, focused on "how fast can I go in a 450 watt power envelope". The rest of the computing world has a focus on "how fast can I go". Big difference. 50 watts more or less is one of the minor factors.

I recently posted this picture of one of my new set of systems:
catwoman-5.jpg
That's 33 TFLOPs, by the way - five Titan X cards and 72 physical cores.

I have three of them so far, and I've been told to replace the Titan-X cards with GTX 1080s ASAP. (Actually, first get a 4th system with five GTX 1080s, and buy 15 more GTX 1080s and eBay the Titans if it looks good.)
 
  • Like
Reactions: tuxon86

Stacc

macrumors 6502a
Jun 22, 2005
888
353
GUYS, THOSE ARE GAMING BENCHMARKS, NOT COMPUTE BENCHMARKS. FOR GOD'S SAKE, START READING!

Is the Nvidia reality distortion field so strong that people confuse gaming benchmarks (of DX11) with compute power per watt?

You haven't shown any compute-per-watt benchmarks. They are hard to find. Anandtech does look at a few compute benchmarks, and we see that Nvidia and AMD trade blows: basically, Nvidia wins single precision tasks, while AMD wins double precision tasks and video encoding.

And no, the R9 390X Gaming is not a 250W GPU. It is the worst example of overvolting and overclocking a GPU on the AMD side and getting a spike in power consumption. No question why MVC used it. Stacc, I will give you the exact context.

Fine, find me a 390X that is reasonably clocked. You keep linking to a lower-clocked 390 that is still not power efficient. In the same Anandtech article they downclock a 390X to 1050 MHz, and it is still off the charts in power consumption.
 
  • Like
Reactions: tuxon86

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
Only Fiji outperforms Nvidia on performance per watt. You get the point?

How soon we forget.

This is called "changing your tune".

Performance per watt? It was on the same level as Nvidia. The R9 390X and GTX 980 Ti have a similar thermal envelope and compute performance. It was software that held back AMD GPUs in games, and from that came the Nvidia mindshare about performance per watt.

Look what happened lately when software caught up. The R9 380X is on the level of the GTX 980, the R9 390X is on the level of the GTX 980 Ti. And in some cases performance per watt was much better than Nvidia's offerings (Fiji, overall).

People are staggered when the R9 390X is much faster in OpenCL than the GTX 980. But it has to be! It has almost 6 TFLOPs of compute power, compared to 4.6. Again, mindshare played the key role in driving the perception of AMD GPUs.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
You haven't shown any compute-per-watt benchmarks. They are hard to find. Anandtech does look at a few compute benchmarks, and we see that Nvidia and AMD trade blows: basically, Nvidia wins single precision tasks, while AMD wins double precision tasks and video encoding.



Fine, find me a 390X that is reasonably clocked. You keep linking to a lower-clocked 390 that is still not power efficient. In the same Anandtech article they downclock a 390X to 1050 MHz, and it is still off the charts in power consumption.
Erm... Compute performance per watt is, as far as I know, calculated from teraflops performance and power consumption... O_O

Reasonably clocked? The MSI Gaming in that review was. So was the Asus Strix from that review. And the Gaming runs reference clocks, while the Strix is OC'ed. The Strix has 6.05 TFLOPs of compute power and draws 286W of power. The Gaming has presumably reference core clocks, 1050 MHz, and 5.9 TFLOPs of compute power, and draws 258W. Is that far from a GTX 980 Ti?
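The TFLOPs numbers being traded here all come from one formula: peak SP FLOPs = 2 (one FMA per cycle) x shader count x clock. A quick sketch, where the shader counts (2816 for the R9 390X, 4096 for Fiji) are the publicly listed specs, an assumption on my part since the posts quote only clocks and TFLOPs:

```python
# Peak single-precision TFLOPs = 2 FLOPs/shader/cycle * shaders * clock.
def tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz * 1e6 / 1e12

print(tflops(2816, 1050))  # R9 390X at reference 1050 MHz -> ~5.91
print(tflops(2816, 1070))  # Asus Strix at 1070 MHz        -> ~6.03
print(tflops(4096, 850))   # Fury Nano at ~850 MHz average -> ~6.96
```

The 1070 MHz result comes out at ~6.03 rather than the 6.05 quoted, but within rounding it matches the figures in the thread.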
How soon we forget.

This is called "changing your tune".
Your reading comprehension is faulty.
Look what happened lately when software caught up. The R9 380X is on the level of the GTX 980, the R9 390X is on the level of the GTX 980 Ti. And in some cases performance per watt was much better than Nvidia's offerings (Fiji, overall).
From the beginning I was saying that only Fiji was better in performance per watt!
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Erm... Compute performance per watt is, as far as I know, calculated from teraflops performance and power consumption... O_O

So instead of looking at actual performance benchmarks you are just going to read off the specs? That sounds like a poor way of choosing the best tool for a given task.

Reasonably clocked? The MSI Gaming in that review was. So was the Asus Strix from that review. And the Gaming runs reference clocks, while the Strix is OC'ed. The Strix has 6.05 TFLOPs of compute power and draws 286W of power. The Gaming has presumably reference core clocks, 1050 MHz, and 5.9 TFLOPs of compute power, and draws 258W. Is that far from a GTX 980 Ti?

There is a specific note in that review in which they say they downclocked the 390X to 1050 MHz. Even downclocked, it draws 90 W more than the GTX 980 Ti while playing a game.

MR readers will be able to draw their own conclusions.

You might want to stick to the facts in the future.

Can't believe I am saying this, but I agree with MacVidCards :p
 
  • Like
Reactions: tuxon86

pat500000

Suspended
Jun 3, 2015
8,523
7,515
The power consumption and performance per watt of the 390X are terrible, at least in gaming.
Those are brutal charts....dang....
Trolling, and pointing the gun at your own feet and pulling the trigger. I accuse you of shifting to GFLOPs/watt because AMD is losing on GFLOPs - and you respond with a GFLOPs/watt question.

MP6,1 fans have blinders on, focused on "how fast can I go in a 450 watt power envelope". The rest of the computing world has a focus on "how fast can I go". Big difference. 50 watts more or less is one of the minor factors.

I recently posted this picture of one of my new set of systems:
View attachment 630838
That's 33 TFLOPs, by the way - five Titan X cards and 72 physical cores.

I have three of them so far, and I've been told to replace the Titan-X cards with GTX 1080s ASAP. (Actually, first get a 4th system with five GTX 1080s, and buy 15 more GTX 1080s and eBay the Titans if it looks good.)
WHOA...that's some skynet (terminator reference) products....
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
So instead of looking at actual performance benchmarks you are just going to read off the specs? That sounds like a poor way of choosing the best tool for a given task.



There is a specific note in that review in which they say they downclocked the 390X to 1050 MHz. Even downclocked, it draws 90 W more than the GTX 980 Ti while playing a game.



Can't believe I am saying this, but I agree with MacVidCards :p
Stacc, please, I have already given a link on this page of this thread to THIS: http://www.guru3d.com/articles-pages/asus-radeon-r9-390x-strix-8g-review,8.html

Asus Strix - 1070 MHz core clock: 286W power draw. MSI Gaming R9 390X - 258W power draw. Titan X - 254W power draw. GTX 980 Ti - 250W power draw. Explain this. If that review is inaccurate, then the actual power draw of the GPUs is lower than they calculated.
 