Amazing that a card that tiny can still use 26 watts more than a 980 (non-Ti).
Yes, while being around 50% faster in OpenCL compute. That is pretty remarkable.
I have to say, I'm pretty disappointed. I was expecting more gaming performance, but I put the blame here on DirectX 11. DX12 will make this GPU absolutely shine.
I would love to see some tests of this GPU in the Ashes of the Singularity DX12 benchmark.
The one where it gets smoked by the 980 Ti?
If this does go into the Mac Pro... these would be in CrossFire... so wouldn't they be faster than a single 980 Ti?
Apple may end up being their best customer.
AMD said:There has been some misconception out there that the Nano is an uber binned part. Actually, both Nano and Fury X share the same ASIC configuration. What differs is the audience and form factor the products are targeted for.
https://community.amd.com/thread/187797
It is the same ASIC that is in the Fury X. That is astonishing! I'm having a hard time wrapping my head around it. I'm wondering what is possible with the truly best-binned chips in the Fiji design.
At the very least I was expecting the Fury Nano to have HDMI 2.0, or even DisplayPort 1.3. The DisplayPort 1.3 spec was published a year ago now.
And current Nvidia Maxwell cards already have HDMI 2.0.
980 performance at a 980 Ti price.
Nope, it isn't. Asynchronous compute is all about context switching between graphics and compute. On Nvidia cards the commands are dispatched in order, so the pipeline cannot be kept fed, because there is only one asynchronous compute engine. That is why the pipeline stalls when it goes from graphics to compute and back to graphics again. Because GCN cards have 8 ACEs, or 4 ACEs plus 2 HWS units, the work can be dispatched across them and the pipeline is better utilized. The pipeline is not stalled by context switching on AMD GPUs; it is effectively a form of out-of-order execution of the pipeline.
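To make the distinction concrete, here is a minimal D3D12 sketch (my own illustration, not code from any benchmark discussed here) that creates a separate graphics queue and compute queue. On a GPU with multiple hardware front-ends like GCN's ACEs, work submitted to the two queues can genuinely overlap; on a single-engine design it ends up serialized by context switches:

```cpp
// Minimal sketch: graphics and compute work submitted on separate D3D12
// command queues. Error handling is omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One DIRECT queue for graphics...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...and one COMPUTE queue for asynchronous compute work.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> compQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

    // The application would record command lists and call
    // ExecuteCommandLists on each queue independently; any ordering it
    // needs is expressed with ID3D12Fence objects rather than by stalling
    // the whole pipeline.
    return 0;
}
```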
Nothing Nvidia does will change the fact that there is only one ACE, which is incapable of running graphics and compute at the same time. That is where you get stalled. Well, planned obsolescence, to be precise.
What is worse for Nvidia, they cannot do much to optimize around it, because the application talks to the GPU through the API, and the GPU only gets a generic API driver rather than the game-specific driver optimizations we had with DirectX 11 - there is little room left to optimize in the driver. Optimizing for the hardware is now done by the developers, who have to add vendor-specific code to the application, which is exactly the case with the Ashes of the Singularity benchmark and Nvidia GPUs.
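As a hypothetical illustration of the kind of per-vendor path that now lives in the application rather than the driver, a developer might gate the async-compute path on the adapter's PCI vendor ID. The helper name and the policy below are my own assumptions for illustration, not code from Ashes of the Singularity:

```cpp
// Hypothetical application-side vendor check deciding whether to use the
// async-compute path; under DX12 this kind of decision moves into the game.
#include <dxgi1_4.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

bool ShouldUseAsyncCompute()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))
        return false;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // PCI vendor IDs: 0x1002 = AMD, 0x10DE = Nvidia.
    // Keep the multi-queue async-compute path only where it is known to help.
    return desc.VendorId == 0x1002;
}
```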
Look at this: http://arstechnica.co.uk/gaming/201...-win-for-amd-and-disappointment-for-nvidia/1/ The R9 290X ties the GTX 980 Ti. But look at the 4K results on Nvidia hardware, and at the results with 4 and 6 CPU cores: to get more performance at higher resolutions you need a higher CPU core count. That is exactly a symptom of the missing context switching - the CPU has much more work to do.
It's pretty ironic. Low-level APIs came to life to reduce the amount of work the CPU has to do. With Nvidia hardware that is not the case, given its inability to switch contexts quickly. The tables have turned: under DX11, AMD GPUs were the ones bottlenecked by the CPU and unable to be utilized in parallel, while Nvidia's Maxwell performance came from optimized drivers. Under DirectX 12, Nvidia struggles in a parallel environment, where GCN starts to spread its wings. It will get worse as developers push more compute complexity into their games and applications. I'm really curious to see the effects, but from everything we know so far the gap between AMD and Nvidia will widen - in AMD's favour and to Nvidia's detriment.
Sapphire might be tempted again. Regardless of the niche, I think their Mac 7950 was a commercial success... I suppose it is too much to hope for a Mac Edition with a boot screen and no issues with OS X updates however...