Status
Not open for further replies.

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Stacc, please. I have already given, on this page of this thread, a link to THIS: http://www.guru3d.com/articles-pages/asus-radeon-r9-390x-strix-8g-review,8.html

Asus Strix at 1070 MHz core clock: 286W power draw. MSI Gaming R9 390X: 258W. Titan X: 254W. GTX 980 Ti: 250W. Explain this. If this is inaccurate, that means the actual power draw of the GPUs is lower than they calculated.

It does seem that the MSI card has the highest power consumption of the bunch. That said, no 390X is going to win any efficiency awards. Nvidia pretty clearly made efficiency a priority with the 900 series and did a very good job. There seems to be a reasonable chance AMD could do the same with Polaris.

Yet again, this topic has devolved into childish personal attacks.

GROW UP AND GIVE IT A REST! ALL of you!

Oh come on, this is the internet. It was basically made for nerds arguing about computers. :p
 

linuxcooldude

macrumors 68020
Mar 1, 2010
2,480
7,232
koyoot has many valid points. Performance benchmarks are typically gaming benchmarks that don't do much good on a workstation that is not geared for gaming, especially on the Mac Pro, which has zero interest in gaming. AMD does not do well in software that has Nvidia-specific code: CUDA, GameWorks, PhysX, etc. But put Nvidia and AMD on compute tasks not hindered by Nvidia-specific code and we get similar results.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
It does seem that the MSI card has the highest power consumption of the bunch. That said, no 390X is going to win any efficiency awards. Nvidia pretty clearly made efficiency a priority with the 900 series and did a very good job. There seems to be a reasonable chance AMD could do the same with Polaris.
And where did I claim it is more efficient? I have claimed from the beginning that Fiji is more power efficient. What does efficiency mean? It means how much performance you get from 1 watt of power consumed. Let's look at the numbers for reference cards, shall we?

A 1050 MHz R9 390X has 5913 GFLOPs and a TDP around 258W. That gives 22.9 GFLOPs/Watt.
The Titan X has 6144 GFLOPs and a 254W TDP. That gives 24.2 GFLOPs/Watt. Not on a similar level? (Both from Guru3D numbers.)
The R9 380X Strix has a 1030 MHz core clock, which gives 4.2 TFLOPs of compute power. It consumes 179W according to TPU numbers. 23.6 GFLOPs/Watt.
The reference GTX 980 has 4.6 TFLOPs of compute power and consumes 184W. That gives around 25 GFLOPs/Watt. Not on a similar level of efficiency?

For comparison: Fiji at an 850 MHz core clock gives 7 TFLOPs of compute power, with a power draw of 209W according to TPU numbers (peak, because that is equivalent to heavy OpenCL/CUDA work). That is 33 GFLOPs/Watt.
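As a sanity check on the arithmetic above, here is a minimal sketch of the performance-per-watt calculation. The GFLOPs figures are the theoretical numbers quoted in this post (4218 and 6963 are the exact cores x clock x 2 products behind the rounded 4.2 and 7 TFLOPs), and the wattages are the Guru3D/TPU review figures cited in this thread:

```python
# Theoretical FP32 throughput (GFLOPs) and measured power draw (W)
# for the cards compared above.
cards = {
    "R9 390X (1050 MHz)":  (5913, 258),
    "Titan X":             (6144, 254),
    "R9 380X Strix":       (4218, 179),
    "GTX 980 (reference)": (4600, 184),
    "Fiji @ 850 MHz":      (6963, 209),
}

for name, (gflops, watts) in cards.items():
    # Efficiency = compute throughput divided by power consumed.
    print(f"{name:>20}: {gflops / watts:5.1f} GFLOPs/W")
```

Run it and Fiji at 850 MHz comes out well ahead (~33 GFLOPs/W), while the 390X/Titan X and 380X/980 pairs land within about 1-1.5 GFLOPs/W of each other.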

THAT WAS THE WHOLE POINT FROM THE BEGINNING! I claimed that the R9 380X is currently on a similar level of efficiency, and the same goes for the R9 390X and GTX 980 Ti. And that is reflected in current games. Radeons are not faster than Maxwell GPUs in DX11, but they are finally very close to Nvidia GPUs.

P.S. I am sorry if anyone felt offended by my post and rhetoric.

koyoot has many valid points. Performance benchmarks are typically gaming benchmarks that don't do much good on a workstation that is not geared for gaming, especially on the Mac Pro, which has zero interest in gaming. AMD does not do well in software that has Nvidia-specific code: CUDA, GameWorks, PhysX, etc. But put Nvidia and AMD on compute tasks not hindered by Nvidia-specific code and we get similar results.
Not exactly what I meant. I meant that, because of mindshare, AMD Radeons are not considered to be on a similar level of efficiency to their Nvidia counterparts. That perception was driven by DX11 gaming benchmarks. Now that the software has caught up, the performance is reflected even in games (Killer Instinct benchmarks), yet people still consider Radeons not to be on the same level as Nvidia GPUs. In current gaming benchmarks the R9 290X is close to the GTX 980 Ti and the R9 380X is close to the GTX 980; each pair has similar power consumption and similar compute power. THAT is what I meant. Differences are still reflected, however.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
I'm not going to bother to post actual measured numbers again.

You were proven wrong once, that's enough. Nobody reading the rhetoric is fooled.

"I claimed that R9 380X is on similar level of efficiency currently, the same goes for R9 390X and GTX 980 Ti."

Does not represent reality on the planet Earth. It is in fact a lie.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
MVC, look at the Guru3D numbers and work out the performance per watt.

I know that you want to use TechPowerUp numbers from gaming benchmarks, but those have absolutely NOTHING to do with the merit of the argument; you only use them to prove your own logic. Fair enough. Read everything again, though I do not believe it will help. You will not understand the merit.
 

netkas

macrumors 65816
Oct 2, 2007
1,198
394
>1050 MHz R9 390X has 5913 GFLOPs and has TDP around 258W. That gives 22.9 GFLOPs/Watt
TITAN X has 6144 GFLOPs and 254 TDP. That gives 24.18 GFLOPs/Watt. Not on similar level/same level?(Both from Guru3D Numbers)

These gigaflops don't represent real performance in 3D, only GPGPU, so they are absolutely useless for most consumers.
 

antonis

macrumors 68020
Jun 10, 2011
2,085
1,009
The aforementioned GPUs, according to this, are not even close. I know the specific test is not a gaming one; rather, it represents the raw power of each GPU. But I guess it tells a significant part of the story.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
>1050 MHz R9 390X has 5913 GFLOPs and has TDP around 258W. That gives 22.9 GFLOPs/Watt
TITAN X has 6144 GFLOPs and 254 TDP. That gives 24.18 GFLOPs/Watt. Not on similar level/same level?(Both from Guru3D Numbers)

these gigaflops don't represent real performance in 3d, only gpgpu. so absolutely useless for most of consumers.
Erm, have you seen the benchmarks I provided in this thread over the last two pages? Why are the R9 380X and R9 280X suddenly at the GTX 980's level of performance in a DX11 game? What is more, the result is within the COMPUTE margin of both cards: 4.6 TFLOPs vs 4.2 TFLOPs, about 10%. And what have I been talking about all this time? MINDSHARE, based on the perception created by gaming benchmarks. People tend to believe that a 250W, 6 TFLOPs GPU from Nvidia is MUCH faster than an AMD GPU with a similar thermal envelope and compute power. That was true when the software was rubbish. Now the software (the drivers) is much better, which is why even in the latest gaming benchmarks, with the most recent drivers, a 180W, 4.2 TFLOPs GPU is 10% slower than a 4.6 TFLOPs, 180W GPU, and a 5.9 TFLOPs, 250W GPU is 5-10% slower than a 6.0 TFLOPs, 250W GPU.

Everything I have been writing about over the last few pages revolves around that. Current DX11 and DX12 benchmarks, where the software is NOT BOTTLENECKING the hardware, show that the R9 390X is actually on a similar level of performance to the GTX 980 Ti. Why did MVC use the worst-case scenario to prove his imaginary point of view? Why did he use benchmarks from long ago, when software WAS bottlenecking AMD hardware?

All of you guys have EXACTLY proven what Zarniwoop and I discussed: the perception of inefficiency in AMD GPUs came from gaming benchmarks taken when software was bottlenecking them. Right now, with the latest drivers and the latest APIs, it doesn't. A 6 TFLOPs, 250W GPU is on the same level of performance regardless of brand. You get the point? What's more, the reference GTX 1080 is 5-10% faster in DX12 games than the reference Fury X. And what level of compute performance do the two GPUs have? 9 vs 8.6 TFLOPs. Why is this important? Because gaming and VR performance will reflect the COMPUTE performance of the GPUs. This is my last post on this matter.

I am sorry for going on for so long, but I wanted to make my point clear.
 

DEMinSoCAL

macrumors 603
Sep 27, 2005
5,075
7,297
Apple's MacBook sales plummeted 40% after the holiday season.

http://www.cnet.com/news/apple-hit-by-huge-drop-in-notebook-shipments/

Tim Cook's clock is ticking..

One quarter of decline won't offset years of increased sales compared to the PC market.

Apple continues to gain in PC market share despite year-over-year shipment drop

How did MR not get this story on the front page?

@linuxcooldude -- one quarter may not, but how about two? Three? First iPhones, now MacBooks. Then what? iMacs? Pretty soon, all products are in decline. It's a sign of things to come unless Apple does something. Sure, they're not going bankrupt any time soon, but if you read practically any thread on MR regarding sales and products, an overwhelming majority feels that Apple is set for a big decline, with no innovative products and the abandonment of some lines (like the Mac Pro).

Who knows, but these aren't good signs for Apple.
 

linuxcooldude

macrumors 68020
Mar 1, 2010
2,480
7,232
How did MR not get this story on the front page?

@linuxcooldude -- one quarter may not, but how about two? Three? First iPhones, now MacBooks. Then what? iMacs? Pretty soon, all products are in decline. It's a sign of things to come unless Apple does something. Sure, they're not going bankrupt any time soon, but if you read practically any thread on MR regarding sales and products, an overwhelming majority feels that Apple is set for a big decline, with no innovative products and the abandonment of some lines (like the Mac Pro).

Who knows, but these aren't good signs for Apple.

It will certainly take more than one quarter of dismal sales to convince me. You have to expect fluctuations every quarter; it's the year-to-year sales that will ultimately tell the tale, especially since we are also seeing even more decline in PC sales.
 

ManuelGomes

macrumors 68000
Original poster
Dec 4, 2014
1,617
354
Aveiro, Portugal
1080 PCB in the nude (oops, can I say this here?!):

http://www.techpowerup.com/222408/nvidia-geforce-gtx-1080-reference-pcb-pictured
Vega 11 comes after Vega 10 (of course):

http://www.techpowerup.com/222403/amd-pulls-radeon-vega-launch-to-October

Originally from:
http://videocardz.com/59818/nvidia-geforce-gtx-1080-taken-apart

GDDR5X enters mass production:
http://videocardz.com/59839/micron-gddr5x-memory-enters-mass-production
Good news.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
What I want to say about tech journalism right now is that it is quite funny that, a day after the "leak" that Vega will come sooner, there is a rumor that AIBs are angry at AMD for not giving them information about the upcoming Polaris GPUs.

Yeah, it is quite funny to observe these things. However, AMD has only itself to blame for this situation, though not from a technical point of view.
 

Sinx2oic

macrumors regular
Mar 26, 2009
142
0
Does anyone really still think of the nMP as a gaming machine?
A gaming machine? No, I doubt it, but I game on mine all the time (CrossFire D700s). I would love it if they stuck two 1080s in the next update, though, as I can't afford two separate machines. Overall I have enjoyed my 2013 8-core for After Effects, but it was very expensive, and I feel Apple needs to listen to what pros want. I don't want to go Windows except for gaming.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Here is an interesting rumor tidbit from VRWorld:

"According to sources in the know, the Polaris for PlayStation Neo is clocked at 911 MHz, up from 800 MHz on the PS4. The number of units should increase from the current 1152. Apparently, we might see a number higher than 1500, and lower than 2560 cores which are physically packed inside the Polaris 10 GPU i.e. Radeon R9 400 Series."

This is the first time I've seen anyone specifically mention 2560 cores on Polaris 10. So, based on what Nvidia was able to do going from the Titan X to the 1080, let's try to figure out what sort of performance we may see from Polaris 10.

Titan X -> 1080
Transistors (billion): 8 -> 7.1
Die size (mm²): 601 -> ~320
Base clock speed (MHz): 1000 -> 1607
Cores: 3072 -> 2560
GFLOPS: 6144 (base) -> ~8200 (base), ~9000 (boost)

Just a note here: when Nvidia quotes performance for the 1080, it is calculated at the boost frequency.

So let's assume that Polaris 10 is more akin to a shrunken Hawaii:

AMD 390X -> Polaris 10
Transistors (billion): 6.2 -> 5.5?
Die size (mm²): 438 -> ~232
Base clock speed (MHz): 1050 -> 1400? 1500?
Cores: 2816 -> 2560
GFLOPS: 5900 -> 7200? 7700?

I've indicated my assumptions with a "?". This lands us with performance somewhere between the 390X and the Fury X. If they can get this into a power envelope of 125-150W, it will be a pretty good chip. The biggest uncertainty here is the final clock speeds: AMD has tended to keep them lower, and if that is still true, performance would fall below what I have guessed.
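The GFLOPS rows in both tables come from the standard formula for theoretical single-precision throughput: cores x clock x 2 FLOPs per cycle (each shader can issue one fused multiply-add per clock). A quick sketch, where the Polaris 10 clocks are my guesses rather than known figures:

```python
def gflops(cores: int, clock_mhz: float) -> float:
    """Theoretical FP32 throughput: cores x clock x 2 FLOPs/cycle (FMA)."""
    return cores * clock_mhz * 2 / 1000.0

print(gflops(3072, 1000))  # Titan X at base clock -> 6144.0
print(gflops(2816, 1050))  # R9 390X -> 5913.6
# Hypothetical Polaris 10 configurations (2560 cores, guessed clocks):
print(gflops(2560, 1400))  # -> 7168.0
print(gflops(2560, 1500))  # -> 7680.0
```

Note that Nvidia's ~9000 GFLOPS marketing figure for the 1080 plugs the boost clock into this same formula, so like-for-like comparisons should use base clocks on both sides.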

Polaris 10 probably won't touch the Nvidia 1080, but it would likely beat the 1070. Another note: we don't know about any architectural enhancements from AMD, or how they will compare to the ones Nvidia has made.
 