The RX 480's 150W TDP is the maximum board power available from the power connector and the PCIe slot. The GPU does not actually draw 150W; it draws between 90 and 100W under gaming load. So yes, dual RX 480s might beat a single GTX 1080 on efficiency, price, and performance.
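To put rough numbers on that claim, here's a quick sketch (the RX 480's 5.833 TFLOPs rating and the GTX 1080's reference figures of roughly 8.9 TFLOPs at a 180W TDP are assumptions pulled in for illustration, not measurements):

```python
# Rough totals for dual RX 480 vs. a single GTX 1080.
# Assumed figures: RX 480 at 5.833 TFLOPs and ~100W gaming draw (as claimed
# above); GTX 1080 at ~8.9 TFLOPs and 180W reference TDP.

rx480_tflops, rx480_watts = 5.833, 100
gtx1080_tflops, gtx1080_watts = 8.9, 180

dual_tflops = 2 * rx480_tflops  # raw combined throughput, ignoring scaling
dual_watts = 2 * rx480_watts    # combined gaming draw

print(f"Dual RX 480: {dual_tflops:.2f} TFLOPs at ~{dual_watts}W")
print(f"GTX 1080:    {gtx1080_tflops:.1f} TFLOPs at {gtx1080_watts}W")
```

Keep in mind that raw FLOPs flatter the dual-card setup: CrossFire scaling in games is well below 100%, so real-world results will be closer than these totals suggest.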

The TDP of an RX 480 (150W) matches the TDP of a GTX 1070 (150W), yet the latter is nearly twice as fast. That means the 1070 is nearly twice as efficient as the 480, using the standard perf per watt metric. You can't just shave off 1/3 of the TDP and claim the AMD chip is more efficient, sorry.

At the end of the day, if you want to buy a Polaris GPU, go for it. Personally, I'm waiting for a 1080 Ti before I upgrade.
 
The TDP of an RX 480 (150W) matches the TDP of a GTX 1070 (150W), yet the latter is nearly twice as fast. That means the 1070 is nearly twice as efficient as the 480, using the standard perf per watt metric. You can't just shave off 1/3 of the TDP and claim the AMD chip is more efficient, sorry.

At the end of the day, if you want to buy a Polaris GPU, go for it. Personally, I'm waiting for a 1080 Ti before I upgrade.

It's not quite fair to go off the TDP listed in the specs, as it's a bit arbitrary. Nvidia's TDP tends to be an estimate of average power consumption under load, whereas AMD's tends to be a maximum. The RX 480 could be just as efficient as the 1070, just at a lower performance level. We have to wait for the RX 480 benchmarks to know the actual power usage and efficiency.
 
The TDP of an RX 480 (150W) matches the TDP of a GTX 1070 (150W), yet the latter is nearly twice as fast. That means the 1070 is nearly twice as efficient as the 480, using the standard perf per watt metric. You can't just shave off 1/3 of the TDP and claim the AMD chip is more efficient, sorry.

At the end of the day, if you want to buy a Polaris GPU, go for it. Personally, I'm waiting for a 1080 Ti before I upgrade.
http://wccftech.com/amd-rx-480-faster-than-nano-980/

So they have confirmed what has been rumored: the RX 480's power draw under load is around 100W, not 150W.
 
It's not quite fair to go off the TDP listed in the specs, as it's a bit arbitrary. Nvidia's TDP tends to be an estimate of average power consumption under load, whereas AMD's tends to be a maximum. The RX 480 could be just as efficient as the 1070, just at a lower performance level. We have to wait for the RX 480 benchmarks to know the actual power usage and efficiency.

In other words, you keep believing what the AMD marketing department feeds you instead of the cold hard fact that AMD struggles to even match the upper mid-range to lower high end of Nvidia's GPUs...
 
It varies between 130 and 151W, depending on the site, the testing method, and the gaming scenario.

So, nearly double the raw perf for an extra 30-50% power. Sounds like the 1070 is a more efficient architecture, right?
 
So, nearly double the raw perf for an extra 30-50% power. Sounds like the 1070 is a more efficient architecture, right?
No. The RX 480 delivers 5.833 TFLOPs of compute at a power draw of around 100W, which works out to 58.3 GFLOPs/watt. The GTX 1070 delivers 6.1 TFLOPs at 130W, which works out to 47 GFLOPs/watt.

Right now the most efficient GPU in the world is the RX 480, followed in order by the GTX 1070, GTX 1080, Fury Nano, Fury X, and Fury.

The GTX 1070 is not 2 times faster in games; it will be more like 15-20%, depending on clocks. There will be 1400 MHz and 1622 MHz versions of the RX 480, which might come very close in performance to... the GTX 1080.

P.S. The GTX 1070 is at Titan X level of performance, and the WCCFTech benchmark directly compares the RX 480 to a Titan X. It is not 2 times faster...
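The arithmetic in that reply is easy to check. A minimal sketch, using the power-draw figures claimed in this thread (which remain estimates until reviews land):

```python
# Perf-per-watt check using the figures quoted in this thread.
# The power-draw numbers are claims/estimates, not bench measurements.

cards = {
    "RX 480": {"tflops": 5.833, "watts": 100},   # ~100W gaming draw, as claimed
    "GTX 1070": {"tflops": 6.1, "watts": 130},   # draw figure used in the reply
}

for name, c in cards.items():
    efficiency = c["tflops"] * 1000 / c["watts"]  # GFLOPs per watt
    print(f"{name}: {efficiency:.1f} GFLOPs/W")

# RX 480: 58.3 GFLOPs/W
# GTX 1070: 46.9 GFLOPs/W
```

Of course, FLOPs per watt is a theoretical metric; actual game frames per watt is what the reviews will settle.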
 
So, nearly double the raw perf for an extra 30-50% power. Sounds like the 1070 is a more efficient architecture, right?

Right now we don't really know how efficient Polaris is. We don't know its performance or power usage. Once the reviews hit next week, it will be much clearer where it stands.
 
What I'd love to know is: without a resistor mod for PCIe 2.0 speeds, what impact would a 1080 see running in a PCIe 1.1 slot?
 
What I'd love to know is: without a resistor mod for PCIe 2.0 speeds, what impact would a 1080 see running in a PCIe 1.1 slot?

There is NO resistor mod for Nvidia cards.

In OS X, the web driver enables PCIe 2.0 (it doesn't support the 1080 yet, but that's what the web driver does for existing cards).

In Windows, only an EFI firmware can enable PCIe 2.0, not a resistor mod.

The resistor mod is for AMD cards.
 
What I'd love to know is: without a resistor mod for PCIe 2.0 speeds, what impact would a 1080 see running in a PCIe 1.1 slot?
Not much impact at all, as it's still running x16. See my previous posts in this thread.
Framerates only see a significant dip when gaming in 4K, and only in certain games. Otherwise it's negligible: 5-10% max.
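For context, the bandwidth gap is simple to quantify: PCIe 1.1 runs at 2.5 GT/s per lane and PCIe 2.0 at 5.0 GT/s, both with 8b/10b encoding. A quick sketch:

```python
# Effective one-direction PCIe link bandwidth.
# PCIe 1.1: 2.5 GT/s per lane; PCIe 2.0: 5.0 GT/s per lane.
# Both generations use 8b/10b encoding (80% efficiency).

def link_bandwidth_gb_s(gt_per_s: float, lanes: int) -> float:
    """GB/s for an 8b/10b link: GT/s * 0.8 encoding efficiency / 8 bits per byte."""
    return gt_per_s * 0.8 / 8 * lanes

print(f"PCIe 1.1 x16: {link_bandwidth_gb_s(2.5, 16):.0f} GB/s")  # 4 GB/s
print(f"PCIe 2.0 x16: {link_bandwidth_gb_s(5.0, 16):.0f} GB/s")  # 8 GB/s
```

So a 1.1 slot at x16 still moves 4 GB/s, which is why the penalty mostly shows up in bandwidth-heavy scenarios like 4K.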
 
Not much impact at all, as it's still running x16. See my previous posts in this thread.
Framerates only see a significant dip when gaming in 4K, and only in certain games. Otherwise it's negligible: 5-10% max.

I only have a 27" ACD... this is so tempting now! But £620 is so much money...
 
I only have a 27" ACD... this is so tempting now! But £620 is so much money...
Same. I'm on 2560x1600 and this would be a great card at that res. I'm concerned about the many reports of coil whine, however; I value my silent Mac Pro.
It's also now possible to buy the 5K Dell monitor for less than a 1080. Tricky deciding between the two.
 
All models with a single 8-pin connector at present:
[Attached image: GTX 1080.png]
 
I've got a Sonnet Echo Express III coming Monday; I'm going to test a 1070 and a 1080 with a Mac Pro 2013, a Mac mini 2012, and an rMBP. The performance differences and the TB2 bottleneck will decide which one I keep.
 
Are there already Pascal drivers for OS X?
Only internally. There is some gossip that it was shown to Apple.

Very early support for Pascal and Volta first arrived in Windows driver version 358.66.

http://wccftech.com/nvidia-pascal-volta-gpus-supported-geforce-drivers-adds-support-vulkan-api/

Pascal support stayed in beta until the 368 driver builds sent to 1070/1080 reviewers.

In Sierra, the included Nvidia driver version 365 is almost as high as the current PC version, but Pascal support has been stripped out.

The current downloadable web driver for El Capitan is still based on the ancient 346 builds.
 
The clock speed is quite high though, so it could be an improvement over Maxwell, but Pascal has an even newer version of the 4th-gen delta color compression algorithms, which won't be supported by the web driver (stuck at 1st gen).

There are other newer features that also won't be available in the current state of the web drivers, such as PureVideo h.265 decode, NVENC h.265 encode, GPU Boost 3.0, and simultaneous multi-projection.

And then we still have an OS X with an old OpenGL and a Metal API that is still not useful. Apple might hold back on Vulkan support in favour of Metal.
I noticed that MSI's GPUs have overclocking built in. For example, the MSI GTX 960 Gaming 4G has three different modes: Silent, Gaming, and OC, each with its own base and boost clocks. Silent mode goes from 1127 MHz to 1178 MHz, Gaming mode from 1216 MHz to 1279 MHz, and OC mode from 1241 MHz to 1304 MHz. Is that overclocking part of Nvidia GPU Boost, or is it separate? If it's separate from Nvidia GPU Boost, would it work in OS X without the user having to do anything? Or maybe there's a way to make it work?
 
I noticed that MSI's GPUs have overclocking built in. For example, the MSI GTX 960 Gaming 4G has three different modes: Silent, Gaming, and OC, each with its own base and boost clocks. Silent mode goes from 1127 MHz to 1178 MHz, Gaming mode from 1216 MHz to 1279 MHz, and OC mode from 1241 MHz to 1304 MHz. Is that overclocking part of Nvidia GPU Boost, or is it separate? If it's separate from Nvidia GPU Boost, would it work in OS X without the user having to do anything? Or maybe there's a way to make it work?

Nvidia GPU Boost is a default feature of the graphics card that is enabled by the drivers, in Windows anyway. I don't know if it is part of the Mac drivers.

MSI Afterburner is an overclocking utility that works with the last four generations of Nvidia GPUs and is compatible with all vendors, not just MSI.

The only way to keep an overclock in OS X is to use a firmware with the OC written into it. There are people who share these online, and there are some you can download from TechPowerUp. Make sure you know your card is compatible before doing this. Even better to have a dual BIOS in case you mess up.

https://www.techpowerup.com/vgabios/
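As a side note, the mode clocks quoted in the question line up with Nvidia's GTX 960 reference clocks (1127 MHz base / 1178 MHz boost), so Silent mode is effectively stock. A quick sketch of the factory offsets:

```python
# Factory OC offsets for the MSI GTX 960 Gaming 4G modes quoted above,
# relative to Nvidia's GTX 960 reference clocks.

REF_BASE, REF_BOOST = 1127, 1178  # Nvidia reference base/boost, MHz

modes = {
    "Silent": (1127, 1178),
    "Gaming": (1216, 1279),
    "OC": (1241, 1304),
}

for mode, (base, boost) in modes.items():
    base_pct = (base / REF_BASE - 1) * 100
    boost_pct = (boost / REF_BOOST - 1) * 100
    print(f"{mode}: base +{base_pct:.1f}%, boost +{boost_pct:.1f}%")

# Silent: base +0.0%, boost +0.0%
# Gaming: base +7.9%, boost +8.6%
# OC: base +10.1%, boost +10.7%
```

Those offsets are separate from GPU Boost itself: GPU Boost adds its own opportunistic clock bumps on top of whichever base/boost pair the BIOS (or a tool like Afterburner) sets.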
 