http://www.3dmark.com/compare/spy/16288/spy/17096/spy/15835#

1530 MHz on the core of the RX 480. A custom model with an 8-pin connector. More than 170 W power consumption.

In DX12 and Vulkan this GPU will be close to, or faster than, the GTX 1070.

However, this is absolutely pointless: the RX 480 is bottlenecked by memory bandwidth.

Still, an over-20% OC turns into around 20% higher performance, which is not bad (Graphics scores).
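As a rough sanity check of that scaling, here is a minimal Python sketch, assuming the 1266 MHz reference boost clock of a stock RX 480; the two Graphics scores below are placeholders rather than the numbers from the linked results, so substitute your own:

# Back-of-the-envelope check of how the Graphics score scales with core clock.
STOCK_CLOCK_MHZ = 1266   # reference RX 480 boost clock
OC_CLOCK_MHZ = 1530      # the overclocked custom card discussed above

stock_score = 4100       # hypothetical stock Graphics score (placeholder)
oc_score = 4950          # hypothetical overclocked Graphics score (placeholder)

clock_gain = OC_CLOCK_MHZ / STOCK_CLOCK_MHZ - 1
score_gain = oc_score / stock_score - 1

print(f"Core clock gain:     {clock_gain:.1%}")                 # ~20.9%
print(f"Graphics score gain: {score_gain:.1%}")
print(f"Scaling efficiency:  {score_gain / clock_gain:.0%}")    # ~100% means near-linear scaling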
 

That's not bad. It would be interesting if AMD released an RX 490 that is Polaris 10 with an 8-pin connector and GDDR5X. Power consumption wouldn't be great, probably 175-200 W, but if they can get enough chips that run reliably at ~1500 MHz they could compete with the GTX 1070.
 
It will not be a Polaris 10 GPU. The only two possibilities are dual Polaris 10, or... something else.

P.S. The GDDR5 on the RX 480 uses 37 W of power; the GDDR5X on the GTX 1080 uses 18 W.
 

You can't say that definitively. I don't like the way you state rumors as fact. The only thing that is probably true is that if the RX 490 comes this year, it will be based on GPUs that exist now: either a higher-performing Polaris 10 or dual Polaris 10.
 
Yes I can. The RX 490 will have a memory bus wider than 256-bit, and in the P10 die shot there is only a 256-bit memory controller.

So it would have to be a technically different die if it used GDDR5X with a bus wider than 256-bit.
And this is not a rumour; this is what AMD stated about the RX 4XX lineup.
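For context, peak memory bandwidth is just bus width times effective data rate. A minimal Python sketch, using the publicly quoted 8 Gbps GDDR5 of the RX 480 and 10 Gbps GDDR5X of the GTX 1080 (both on 256-bit buses); the 384-bit case is purely hypothetical:

# Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective data rate in Gbps
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 8.0))    # RX 480:   256-bit, 8 Gbps GDDR5   -> 256 GB/s
print(bandwidth_gbs(256, 10.0))   # GTX 1080: 256-bit, 10 Gbps GDDR5X -> 320 GB/s
print(bandwidth_gbs(384, 8.0))    # hypothetical 384-bit GDDR5 card   -> 384 GB/s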

And thirdly: in the video I linked on one of the previous pages, from AMD's presentation in Australia, there was also a question in the Q&A session about GDDR5X memory. The answer was short and simple: no.

Polaris 10 will be GDDR5 only.

P.S. AMD is supposed to show something at the SIGGRAPH conference.
 
koyoot is right, the 490 seems to be >256-bit according to AMD. I'm thinking dual GPU, although >256-bit is a bit misleading.
At SIGGRAPH it's supposed to be the Polaris-based FirePro, it seems.

The lack of GDDR5X support in Polaris still puzzles me (unless it is supported and AMD is waiting on something before releasing it, but by now that seems unlikely).
 
My Time Spy result with a modest 200 MHz OC on the CPU and GPU, using the basic free version of 3DMark that has no settings:

http://www.3dmark.com/3dm/13276967

I managed to break 6000 points with the CPU at 4500 MHz.

The CPU at 4600 MHz was unstable. The GPU with a 250 MHz OC was unstable. You need water cooling for these settings.

Do you have a pre-2013 Mac Pro to test this in? I'd like to see if it'd be bottlenecked by a W3670/W3680/W3690 :)

From what I can tell online, if I'd stuck with my W3530 it'd bottleneck a 1070/1080, but I'm hoping the W3670 manages to overcome that :)
 

I have a cMP, but Windows is not installed on it.

This is a seriously heavy benchmark that is also hungry for CPU clock speed and extra cores, so it's not purely GPU-bound. The nMP CPUs would probably take quite a hit, and the Westmere and Nehalem CPUs in the cMP would suffer heavily.
 

Running DOOM on my W3670/Nvidia 680 maxes the GPU out, but the CPU is at 22-25% utilisation. That's at 1440p at Ultra settings. Since that's quite a demanding game, I'd say I should be OK, but happy to be corrected :)
 

DOOM is intensive, but much less so on the CPU than 3DMark. It's also less intensive on the GPU than Rise of the Tomb Raider.
 
http://techreport.com/review/30382/radeon-rx-480-performance-revisited-with-amd-16-7-1-driver/2 Not bad gains.

Vulkan and all of the Mantle-derived APIs were designed to lift CPU bottlenecks, so even with a weak or old CPU you will not see huge differences. That's at least the case for AMD GPUs.

Nvidia, on the other hand, needs a fast CPU to push the highest possible number of commands, because the CPU handles the scheduling (static scheduling).

Edit: http://www.linleygroup.com/newsletters/newsletter_detail.php?num=5514
High Bandwidth Memory 2 (HBM2) has several incremental but critical improvements over the previous-generation technology: it increases the memory capacity and bandwidth, adds ECC support, and improves memory-controller efficiency. These enhancements are essential for high-performance computing (HPC) and workstation tasks and will lead to wider industry adoption. Samsung’s HBM2 chips are already in production, and the memory will appear in new graphics cards later this year.
 
Yeah, depressingly my machine performs dramatically worse with Vulkan in DOOM. OpenGL 4.5 is a good 25% faster on average.
 
Do you have a pre-2013 Mac Pro to test this in? I'd like to see if it'd be bottlenecked by a W3670/W3680/W3690 :)

From what I can tell online, if I'd stuck with my W3530 it'd bottleneck a 1070/1080, but I'm hoping the W3670 manages to overcome that :)

Don't know whether this is of interest to you, but my MP 5,1* scores 2090 in Time Spy. The sub-scores are 1919 for GPU and 4236 for CPU.

* W3690, 24GB, vanilla GTX 680 2GB @ PCIe 2.0 x16, Win10

(As an aside, I don't find the visuals in Time Spy to be particularly impressive. It doesn't really matter that a game uses the latest graphics technology... The best looking games are the ones with the most talented/inventive artists.)
 
Remember when I said that in the future GPUs will be CPU-independent?

https://translate.google.com/transl...impress.co.jp/docs/column/kaigai/1010459.html
https://translate.google.com/transl...impress.co.jp/docs/column/kaigai/1009584.html
In the future, the hardware scheduler is expected to be able to perform all of the tasks related to GPU scheduling, so the CPU will not be involved at all. However, in that case complexity similar to OS task scheduling arises, so a variety of challenges await (according to Mantor of AMD).
AMD Polaris further enhances the microcontroller that controls the GPU via the hardware scheduler, moving one more process from the CPU to the GPU, so the GPU's dependence on the CPU is reduced further. Constrained task scheduling, which was a big weakness of GPUs, is being eliminated. In the long run the GPU is going to become an independent processor, and AMD can be considered one step ahead in that process.

What that means: if you have a computer like the Mac Pro trash can and you want to connect a rack with 50 GPUs inside, you do not have to worry about scheduling for them all. All your CPU has to run is the application; the GPUs and the API will handle the rest of the job.
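Purely as an illustration (this is not AMD's actual mechanism, just a toy analogy in Python): instead of the CPU statically assigning every task to a specific GPU, the devices pull work themselves from a shared queue, which is roughly what a hardware scheduler enables:

import queue
import threading

# Toy model only: the "GPUs" are threads pulling from a shared work queue,
# so the "CPU" (main thread) never decides which device runs which task.
work = queue.Queue()
for task_id in range(200):          # the application just enqueues work
    work.put(task_id)

def gpu(name):
    done = 0
    while True:
        try:
            work.get_nowait()       # each device fetches its own work
        except queue.Empty:
            break
        done += 1
    print(f"{name} processed {done} tasks")

threads = [threading.Thread(target=gpu, args=(f"GPU{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()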

All of this was planned by AMD back in 2007, when GCN development started.

And all of this is related to increasing production costs and a shift in all of the markets.

I like this feeling of satisfaction, being proven right after all these years ;).
 

If Commodore hadn't gone bust 20 times, the Amiga would have achieved this long ago. They were so far ahead of the PC and the Mac when it came to dedicated graphics processors. Progress would have been faster if they had had a decent desktop PC strategy.
 
If AMD's managers are smart (and they seem to be), they should already have an nVidia 1080 competitor in advanced development, derived from Polaris 10 and using HBM2, and it should be available Q4'16/Q1'17. I don't care that no leaks confirm it: an RX 490 on HBM2 and a single GPU (not a dual-GPU card) is what the market expects (just as the nVidia 1060 was never leaked yet was hardly "unexpected"). If AMD didn't develop it, they lose.
 
There is a rumour that the Vega architecture could have an integrated ARM coprocessor working as a next-generation scheduler, or that it could simply bring next-generation hardware schedulers.

Now everything AMD has developed in the past few years makes more sense, especially in the context of Mantle and low-level access to GPUs.

Mago, you consider it a consumer GPU, which accounts for less than 20% of the GPU market. The GTX 1080 is a 9 TFLOPs GPU, and it is only 5-10% faster in DX12 games than the Fury X, a GPU from the previous generation.

With Polaris, AMD focused on the mainstream market because it will bring them much more revenue this year than competing in the high-end market would. All in all, they played it well this time.

Also, it appears that Vega is designed for the HPC market from the ground up, with a heavy focus on compute performance. This direction appears to be the right thing to do, knowing that most applications will rely on GPU compute (tessellation in Metal, anyone?).
 
If AMD's managers are smarts (and yet seems so), they should have in advanced development an nVidia 1080 competitor, derived from polaris10 and using HBM2, and it should be available q4'16/q1'17, I dont care if no leaks confirm that, the RX490 on HBM2 and on a single gpu (no dual GPU card) is market predicted (as nvidia 1060, never leaked "un expected" ), if AMD didn't develop it they loss.


AMD works from the bottom up and nVidia from the top down, which means nVidia releases an architecture like Pascal at the fastest possible speeds, such that even OC'ing won't give it more than a 5-10% performance increase. AMD works in reverse: it releases an architecture like Polaris and scales it up over the years.

Really different strategies, to be honest. I prefer nVidia for Windows because of their raw power and much, much better drivers. AMD drivers are terrible.

The reason Apple likes AMD is its support for OpenCL compute (an open standard), and possibly high volumes at lower prices and lower clock speeds.

There's also a history of MacBook Pros with nVidia GPUs dying; there were a couple of lawsuits over it.

CUDA is far superior and is used by the scientific community at large, but Apple, as always, has other ideals.
 
SDAVE, what you write are myths from the distant past. Today the world is different: AMD's drivers are much better than they were before, and Nvidia's drivers are worse than they were before. At this moment they are roughly equal in terms of driver quality.

P.S. Apple likes AMD because it is easy for them to write Metal drivers, whereas Nvidia's architecture is not documented in this area; Nvidia likes to keep control of its drivers.

Secondly, AMD hardware offers much higher compute power than Nvidia's, at least on the 28 nm process, and is better suited to the purposes Apple targets (video editing, OpenCL, etc.).
 
history with MacBook Pros and nVidia GPUs dying
Actually, the MBP with the dying GPU was the 2011 model with AMD (MBP 15/17 2011); iMacs with nVidia GPUs were the other Macs I remember recently having dying GPUs. For the MBPs the issue was related to thermal compound degradation (a 2011 MBP 15 was my last MBP, it died very early due to the GPU, and then I got an iPad Pro 12").
You consider it as consumer GPU

It's still a market worth multiple billions, even if it accounts for only 5%. nVidia developed the 1060 quietly, as AMD is surely doing with the RX 490; the RX 490 surely is not targeted at HPC (no ECC HBM) and should be different silicon from HPC's Vega (with ECC HBM).
 
http://forums.anandtech.com/showpost.php?p=38360343&postcount=20
http://forums.anandtech.com/showpost.php?p=38360817&postcount=31
These two posts relate to the speculated RX 490 board.

So it looks like it is a single GPU. Also, in that video I posted on previous pages, from the Australian AMD briefing on the Polaris lineup, there was a question in the Q&A session about the RX 490. The answer was that he could not talk about it, but he said that the RX 490 being a dual GPU is a possibility, not a certainty ;).

We have to keep in mind what Vega means: a whole family of GPUs. The Vega architecture appears to be aimed at HPC, not particular GPU dies ;). And HBM2 has an ECC mode ;). So if the RX 490 sports two HBM2 stacks with 512 GB/s and 8 GB of RAM, then it would also be viable as an HPC GPU.
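A quick check of those numbers with the published HBM2 per-stack figures (1024-bit interface, up to 2 Gbps per pin), sketched in Python; the stack counts below are just the configurations being speculated about here:

# HBM2: 1024 bits per stack, up to 2 Gbps per pin
def hbm2_bandwidth_gbs(stacks, pin_rate_gbps=2.0):
    return stacks * 1024 / 8 * pin_rate_gbps

print(hbm2_bandwidth_gbs(2))    # 2 stacks -> 512 GB/s, matching the figure above
print(hbm2_bandwidth_gbs(4))    # 4 stacks -> 1024 GB/s
# Capacity: 4 GB per 4-Hi stack, so 2 stacks -> 8 GB of RAM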

But that remains to be confirmed.
 