
netkas

First reviews show it sits between the GTX 980 and the R9 Fury.

Meh, the price is 650 USD, but the thing fits fine into a cMP; just need to wait for OS X support.
 
The Nano has proven the thing I said about the Fury X weeks ago.

They had to OC the heck out of it to get a glimpse of GM200's taillights.

Looks like 175 watts of extra power to gain a couple of fps.

This is the card it was always meant to be: a very able and suitable contender for the role of true competitor to Nvidia's second-tier cards.

And it will likely make a reasonable 7,1 GPU.

Amazing that the tiny card can still use 26 watts more than a 980 (non-Ti).

"The catch for AMD here is that while their energy efficiency has improved over the Fury series, it’s still not fantastic, at least under Crysis 3. The counterpoint to the R9 Nano here is the GTX 980, a card that by and large targets a similar power profile. Furthermore the GTX 980 also gets an almost identical framerate to the R9 Nano here – 65.7fps vs. 65.3fps – which gives us an interesting opportunity to rule out the impact to the CPU of generating more frames. To that end the R9 Nano ends up drawing more power than the GTX 980, 327W vs. 301W."
 
Nice! Let's hope OS X support starts with El Capitan.
 
Amazing that the tiny card can still use 26 watts more than a 980 (non-Ti).
Yes, while being around 50% faster in OpenCL compute. That is pretty remarkable.

I have to say, I'm pretty disappointed. I was expecting more gaming performance, but I put the blame here on DirectX 11. DX12 will make this GPU absolutely shine.

I would love to see some tests of this GPU in the Ashes of the Singularity DX12 benchmark.
 
I would love to see some tests of this GPU in the Ashes of the Singularity DX12 benchmark.

Check out the Guru3D review.
 
[Attached image: DX 12.PNG]
Check out the Guru3D review.

The one where it gets smoked by the 980 Ti?
 
If this does go into the Mac Pro... these would be in CrossFire... so wouldn't they be faster than a single 980 Ti?
 
Depends on what you are looking for. In compute, a single Nano will be faster than a GTX 980 Ti, even at an 850 MHz core clock.

In games it really depends on the test, but a dual Fiji XT setup will mop the floor with a GTX 980 Ti at the same power envelope.

Again, it really comes down to whether the Nano is capable of maintaining an 850 MHz core clock at the 125 W per-GPU power draw needed for the next revision of the Mac Pro. If that's the case, that's 14 TFLOPs of compute power from 250 W.
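For anyone checking the math, that 14 TFLOPs figure is just back-of-envelope arithmetic; a rough sketch below, assuming the Nano keeps the full 4096 stream processors of Fiji and actually sustains 850 MHz:

```python
# Rough FP32 throughput estimate for a full Fiji chip at a hypothetical 850 MHz.
shaders = 4096          # stream processors on Fiji XT
flops_per_clock = 2     # one fused multiply-add = 2 FLOPs per shader per clock
clock_ghz = 0.850       # assumed sustained clock under a ~125 W cap

tflops_single = shaders * flops_per_clock * clock_ghz / 1000.0
tflops_dual = 2 * tflops_single  # two GPUs, as in the dual-GPU Mac Pro layout

print(f"single GPU: ~{tflops_single:.1f} TFLOPs")  # ~7.0 TFLOPs
print(f"dual GPU:   ~{tflops_dual:.1f} TFLOPs")    # ~13.9 TFLOPs
```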

Thanks to the CrossFire scaling, you can simply double the results from the tests.

http://www.computerbase.de/2015-09/amd-radeon-r9-nano-test/6/
http://www.computerbase.de/2015-09/amd-radeon-r9-nano-test/8/ is one more interesting benchmark: total power draw under 300 W.
 
Eurogamer's exhaustive benchmarks put the Nano in 980 territory for gaming. But it's almost a year behind, and so is the price. The form factor is nice. I don't see the point of upgrading to this card unless you have something much weaker, like a 7x00 Radeon or a 6x0 GeForce.
 
The one where it gets smoked by the 980 Ti?

Smoked like a salmon.

Obviously, for a standard PC this is a horrible deal.

(GTX 980 performance at a 980 Ti price, yay!)

But for mini PCs (like the nMP) it makes more sense.

The real incongruity is that it would make a great living-room gaming/Blu-ray PC card, except 99% of 4K TVs don't have DisplayPort, so a 30 Hz HDMI 4K output is worthless. Someone at AMD needs to be fired; what were they thinking?

I know this for a fact; I just moved heaven and earth to end up with a 4K TV with DP. Not easy on the West Coast.
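For what it's worth, the 30 Hz limit over HDMI is simple bandwidth math; here is a rough sketch (the ~12% blanking overhead is my own approximation, not an exact CEA timing):

```python
# Approximate pixel clock needed for 4K at a given refresh rate.
# HDMI 1.4 tops out around a 340 MHz TMDS clock; HDMI 2.0 raises that to 600 MHz.
def pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=0.12):
    # Pad the active resolution to account for horizontal/vertical blanking.
    return width * height * refresh_hz * (1 + blanking_overhead) / 1e6

for hz in (30, 60):
    print(f"3840x2160 @ {hz} Hz needs ~{pixel_clock_mhz(3840, 2160, hz):.0f} MHz")
# ~279 MHz at 30 Hz fits HDMI 1.4; ~557 MHz at 60 Hz needs HDMI 2.0 or DisplayPort.
```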

And it really shows how poor the Fiji chip's wafer yields must be; only the best chips work this well.

Apple may end up being their best customer.
 
If this does go into the Mac Pro... these would be in CrossFire... so wouldn't they be faster than a single 980 Ti?

OS X doesn't support CrossFire. Unless you mean under Boot Camp, in which case, probably.

Apple may end up being their best customer.

They probably already almost are, aside from the embedded console business. The only places AMD has made any inroads these days are where they can lure a company into building in their chips at a bottom-dollar price that makes it impossible to say no. Their bread & butter is "units moved".
 
https://community.amd.com/thread/187797

AMD said:
There has been some misconception out there that the Nano is an uber binned part. Actually, both Nano and Fury X share the same ASIC configuration. What differs is the audience and form factor the products are targeted for.

It is the same ASIC that is in the Fury X. That is astonishing! I'm having a hard time getting my head around this. I'm wondering what is possible with the really best-binned chips in the Fiji design.
 
It is the same ASIC that is in the Fury X. That is astonishing!

OMG! OMG!!!

Does the AMD promo department write this stuff for you or what?

980 performance at a 980 Ti price.

Yippity.

So, if you have a tiny PC it's almost a good deal. At least until NVIDIA releases an ITX 980 for $500 and shuts AMD down.

The fact that they have to jack the same ASIC up with 200 more watts to get 10-15% more performance actually proves my "overclocked all to hell" quote that you argued with earlier.

They had to more than double the current going in (and the heat coming out) to get a brief glimpse of Nvidia's taillights. The Nano is the normal version; the X was the desperation version.

It will make a fine nMP card, though it's going to be a hard sell for the Geniuses to explain the backward VRAM slide from the D700. The newer Apple customers have shorter attention spans; 4 GB is less than 6 GB no matter how you slice it.
 
There is a misconception that "binning" is some magic thing that can drop the power by a factor of two or double the performance per watt, etc.

In general, there will be only minor variations in performance and power between the same designs fabricated on the same line.
 
At the least I was expecting the Fury Nano to have HDMI 2.0, and maybe even DisplayPort 1.3. It is now a year since the DisplayPort 1.3 spec was published.

And current Nvidia Maxwell cards have HDMI 2.0.
 
At the least I was expecting the Fury Nano to have HDMI 2.0, and maybe even DisplayPort 1.3. It is now a year since the DisplayPort 1.3 spec was published.

And current Nvidia Maxwell cards have HDMI 2.0.

Probably would have taken a bigger architecture change than AMD was willing to pay for. Hopefully we'll see DP 1.3 and HDMI 2.0 across the board next year.
 
980 performance at a 980 Ti price.

The Nvidia cards are looking really poor these days under DirectX 12 and compute. Nvidia may have optimized for certain benchmarks, but I think as an all-around card they're a lot worse. They optimized for DirectX 11 games, and now they're getting hammered in the newer, broader APIs.

Next I expect them to throw a fit that they didn't get to influence any of the next-gen APIs and drag their feet on Vulkan, all the while still sucking in performance.

AMD took a lot of performance hits before now, but even already-released cards are really shining with the new API.
 
Nope, it isn't. Asynchronous compute is all about context switching between graphics and compute. On Nvidia cards the commands are dispatched in order, so the pipeline cannot be refilled, because there is only one asynchronous compute engine. That's why the pipeline stalls when it goes from graphics to compute and back to graphics again. Because GCN cards have 8 ACEs, or 4 ACEs and 2 HWS units, you can dispatch the work accordingly and keep the pipeline better utilized. The pipeline is not stalled by context switching on AMD GPUs; it is a form of out-of-order execution of the pipeline.

Nothing Nvidia does, no matter what, will change the fact that there is only one ACE, and it is incapable of doing graphics and compute at the same time. That's where you will get stalled. Well, planned obsolescence, to be precise.

What is worse for Nvidia, they cannot do much to optimize around it, because the API talks to the GPU directly and there is only an API driver, not game-specific drivers as with DirectX 11 games - there is no room to optimize in the driver. Every optimization of hardware performance is done by the developers, who have to add specific code to the application, which is exactly the case with the Ashes of the Singularity benchmark and Nvidia GPUs.

Look at this: http://arstechnica.co.uk/gaming/201...-win-for-amd-and-disappointment-for-nvidia/1/ The R9 290X ties the GTX 980 Ti. But look at the 4K results on Nvidia hardware and the results with 4 and 6 cores: to get more performance at higher resolutions you need a higher CPU core count. That is exactly the manifestation of the missing context switching - the CPU has much more work to do.

It's pretty ironic. Low-level APIs came to life to reduce the amount of work the CPU has to do. With Nvidia hardware that's not the case, because of the inability to switch contexts. The tables have turned. Under DX11, AMD GPUs were the ones bottlenecked by the CPU and not utilized in a fully parallel way, while Nvidia's Maxwell power came from optimized drivers. Under DX12, Nvidia struggles in a parallel environment, where GCN starts to spread its wings. It will get worse as developers put more compute complexity into their games and applications. I'm really curious to see the effects, but from everything we know so far the gap between AMD and Nvidia will keep growing: a plus for AMD, a minus for Nvidia.
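To make the queue argument concrete, here is a toy illustration (hypothetical numbers, not real driver code): with one in-order queue, compute work waits behind graphics work; with independent queues, the two streams overlap.

```python
# Toy model of serialized vs. asynchronous graphics + compute dispatch.
# Durations are made-up milliseconds, purely to show the overlap effect.
graphics_passes = [4, 4, 4]   # e.g. shadow map, g-buffer, lighting
compute_passes  = [3, 3, 3]   # e.g. post-processing, physics, culling

# One in-order queue: everything runs back to back, stalling on each switch.
serial_ms = sum(graphics_passes) + sum(compute_passes)

# Independent queues (GCN-style ACEs): compute overlaps with graphics,
# so the frame time is bounded by the longer of the two streams.
async_ms = max(sum(graphics_passes), sum(compute_passes))

print(f"in-order dispatch: {serial_ms} ms per frame")   # 21 ms
print(f"async dispatch:    {async_ms} ms per frame")    # 12 ms
```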
 
Nope, it isn't. Asynchronous compute is all about context switching between graphics and compute. […]

These companies have been leapfrogging each other like this for the last 15+ years, and whenever an issue like this occurs, someone on an online forum says 'Company X is screwed, they cannot compete now!'

And what happens? Within 6-9 months a new generation of products comes out and the tables are turned again.

It's boring to see the same old debates repeating yearly. You'll be upgrading one day anyway and will wonder why you sat there pulling your hair out over a synthetic benchmark that didn't represent how much you enjoyed the games regardless. You might even laugh at the first-world problem of gamers crying that the 100 fps they got wasn't as good as the 110 fps another GPU was getting.
 
Look at this: http://arstechnica.co.uk/gaming/201...-win-for-amd-and-disappointment-for-nvidia/1/ The R9 290X ties the GTX 980 Ti. […]

Koyoot, these benchmarks make it seem like AMD is crushing it with DirectX 12 if an R9 290X can keep up with an Nvidia GTX 980 Ti. I was reading another review with the AMD Nano and the R9 390X, which should both be faster than the 290X, but they fail to match the GTX 980 Ti. What is the disconnect here? Are there better Nvidia drivers that make the difference?

My take is that it is too early to tell from this one game that Nvidia will not be able to compete against AMD in DirectX 12. Windows 10 has only been out a month; more driver support and more games are needed to determine how effective AMD and Nvidia are at DirectX 12. Nvidia's Maxwell has crushed AMD in power efficiency and performance in DirectX 11 titles, so it's simply too early to write it off for DirectX 12.
 
This card looks interesting as an AMD GPU solution (except for the price, and perhaps a little more VRAM would be nice for future-proofing).

It looks like it might fit within the power limits of the cMP without an extra PSU or other concerns, yet offer about twice the performance of a 7950?
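A quick sanity check on the power budget, assuming the commonly quoted figures of 75 W from the PCIe slot plus 75 W from each of the cMP's two mini 6-pin connectors (a rough sketch; actual sustained draw will vary):

```python
# Rough power-budget check for an R9 Nano in a classic Mac Pro (4,1/5,1).
slot_w = 75                # PCIe x16 slot power
aux_w = 2 * 75             # two mini 6-pin auxiliary power connectors
budget_w = slot_w + aux_w  # ~225 W available without an extra PSU

nano_board_power_w = 175   # AMD's rated typical board power for the Nano
print(f"headroom: {budget_w - nano_board_power_w} W")  # ~50 W to spare
```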

I suppose it is too much to hope for a Mac Edition with a boot screen and no issues with OS X updates however...
 
.. I suppose it is too much to hope for a Mac Edition with a boot screen and no issues with OS X updates however...
Sapphire might be tempted again. Regardless of the niche, I think their Mac 7950 was a commercial success.
 