Great shout @nixsky. Will deffo compare benches. At present my Mac Pro isn't doing much; it's a recent purchase for this project. I've installed Windows 10 on it, but I don't have a monitor I can use with it at the moment, so it's sitting unused beside me! It's got the following specs:

4,1>5,1
3.46GHz 6 Core (W3690)
32GB RAM
Nvidia GT120
640GB HDD

I'm going to buy an SSD to stick in it as well.
 
I'm speccing out a 3.33GHz 5,1. Is the 680 GTX still the best card for Mac gaming and Windows 10 gaming with the least amount of problems at $200?
 
I'm speccing out a 3.33GHz 5,1. Is the 680 GTX still the best card for Mac gaming and Windows 10 gaming with the least amount of problems at $200?

I think so. If you search "680 vs 7970" on the internet, you will find that they are very close in performance. However, the 2GB version became slower and slower after 2014 because 2GB of VRAM is simply not enough for today's games. In fact, even the 3GB on the 7970 is a serious limiting factor now. So, if I had to choose between these two now, I would go for the 680 4GB.
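For a sense of scale on the VRAM point, here is some back-of-envelope arithmetic (illustrative only, not from any review): the render targets themselves scale with resolution, and modern texture sets then add gigabytes on top of that, which is what actually exhausts a 2-3GB card.

```python
# Rough size of full-screen render targets at a given resolution.
# Assumes 4 bytes per pixel (RGBA8) and a triple-buffered swap chain --
# illustrative numbers only; real engines add G-buffers, shadow maps,
# and (above all) textures, which dominate actual VRAM use.
BYTES_PER_PIXEL = 4

def framebuffer_mb(width: int, height: int, buffers: int = 3) -> float:
    """Approximate MB used by `buffers` full-screen colour buffers."""
    return width * height * BYTES_PER_PIXEL * buffers / 2**20

print(round(framebuffer_mb(1920, 1080), 1))  # 23.7 -> 1080p
print(round(framebuffer_mb(3840, 2160), 1))  # 94.9 -> 4K, four times the pixels
```

The buffers themselves stay small; the point is that everything in the frame scales with resolution, so a fixed 2GB pool gets tighter every year as assets grow.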

Also, the 7970 has a significantly higher failure rate. So I will say the 680 4GB is the best choice (if you only consider OOTB cards).

And as a bonus, you can flash this card to get the boot screen.
 
Guys, a little update on Witcher 3 benchmarks. My rig (2x X5690 + 96GB RAM + 2x GTX 680 4GB in SLI) still plays strong at a stable 60fps in Novigrad, the biggest city in Witcher 3, at 1080p :)
I even changed the background characters setting to Uber without dropping fps.
I think the two CPUs are doing the work and are not bottlenecking the GPUs at all in this example.
I've read on some forums that W3 can bottleneck even an i7-6700 in Novigrad, but with my two CPUs that's not an issue.
I believe the massive core count helps here :)

So to sum it up: I think the GTX 1080 Ti will be a great gaming GPU in my current cMP config in the future (SLI support is crappy), but for sure it will be bottlenecked somewhat, especially at 1080p.
But who cares about that if I'll get a stable 60fps in the newest games ;)
 
Guys, a little update on Witcher 3 benchmarks. My rig (2x X5690 + 96GB RAM + 2x GTX 680 4GB in SLI) still plays strong at a stable 60fps in Novigrad, the biggest city in Witcher 3, at 1080p :)
I even changed the background characters setting to Uber without dropping fps.
I think the two CPUs are doing the work and are not bottlenecking the GPUs at all in this example.
I've read on some forums that W3 can bottleneck even an i7-6700 in Novigrad, but with my two CPUs that's not an issue.
I believe the massive core count helps here :)

So to sum it up: I think the GTX 1080 Ti will be a great gaming GPU in my current cMP config in the future (SLI support is crappy), but for sure it will be bottlenecked somewhat, especially at 1080p.
But who cares about that if I'll get a stable 60fps in the newest games ;)

I doubt Witcher 3 utilises more than 6 cores, but at least it proved that the CPU is fast enough to get 60FPS, and now it's down to the GPU to do all the work. Which also means a better GPU can deliver better graphics. Thanks!
 
Guys, a little update on Witcher 3 benchmarks. My rig (2x X5690 + 96GB RAM + 2x GTX 680 4GB in SLI) still plays strong at a stable 60fps in Novigrad, the biggest city in Witcher 3, at 1080p :)
I even changed the background characters setting to Uber without dropping fps.
I think the two CPUs are doing the work and are not bottlenecking the GPUs at all in this example.
I've read on some forums that W3 can bottleneck even an i7-6700 in Novigrad, but with my two CPUs that's not an issue.
I believe the massive core count helps here :)

So to sum it up: I think the GTX 1080 Ti will be a great gaming GPU in my current cMP config in the future (SLI support is crappy), but for sure it will be bottlenecked somewhat, especially at 1080p.
But who cares about that if I'll get a stable 60fps in the newest games ;)

Having just spent £300 on X5690s, this is reassuring! Thanks!
 
I doubt Witcher 3 utilises more than 6 cores, but at least it proved that the CPU is fast enough to get 60FPS, and now it's down to the GPU to do all the work. Which also means a better GPU can deliver better graphics. Thanks!

I will take a look at the core usage tonight and take a screenshot.
I play W3 on Uber settings, and there are only two options left that could make the graphics better:
- Nvidia HairWorks and...
- 4K :D
 
I think so. If you search "680 vs 7970" on the internet, you will find that they are very close in performance. However, the 2GB version became slower and slower after 2014 because 2GB of VRAM is simply not enough for today's games. In fact, even the 3GB on the 7970 is a serious limiting factor now. So, if I had to choose between these two now, I would go for the 680 4GB.

http://www.anandtech.com/bench/product/1722?vs=1719

http://techreport.com/review/25466/amd-radeon-r9-280x-and-270x-graphics-cards/4
(280X = 7970)

The 7970 is a better card, especially when you consider DX12 gaming in Windows.

The GTX 680 isn't fast enough to make use of its extra VRAM.

Also, the 7970 has a significantly higher failure rate. So I will say the 680 4GB is the best choice (if you only consider OOTB cards).

Wow.
Do you have a link for that claim? And I mean an actual link from a reputable website or a class action lawsuit.

And as a bonus, you can flash this card to get the boot screen.

Same with the 7970.
 
Here is a review of the 680 vs the 7970 three years after their release. It's quite clear that back in 2012 the 680 was generally faster, and that it has become slower and slower compared to the 7970. Considering that neither card's processing power changes over time, it's not hard to conclude that the VRAM size limitation makes the difference. Therefore, I consider a card with 4GB of VRAM the better option.

http://www.babeltechreviews.com/hd-7970-vs-gtx-680-2013-revisited/

Can you provide a link from a reputable website to show that the 680 is not fast enough to utilise 4GB of VRAM?

Again, considering that the 7970 is fast enough to utilise 3GB of VRAM, and that even my 7950 can be 3GB-VRAM-limited today, I don't think the 680 can never utilise that 4GB of VRAM because of its speed. And even if 4GB is too much, it can at least utilise the first 3GB of VRAM like the 7970 does, and perform accordingly (as in the comparison back in 2012).

On the other hand, here are the failure rate statistics:

https://linustechtips.com/main/topi...ts-french-but-i-translated-nearly-everything/

7970 7.24%
680 2.66%

Therefore, the failure rate of the 7970 is 172% higher than the 680's. It's no doubt "significantly higher".

The worst 7970 is the Sapphire Radeon HD 7970 OC Edition 3GB, with a 14.29% failure rate.
The worst 680 is the ASUS GTX680-DC2O-2GD5 2GB, with a 6.98% failure rate.

So the average failure rate of the 7970 is higher than that of the worst 680.
And the worst 7970 has a failure rate 105% higher than the worst 680's.

I will say that we should avoid that Sapphire Radeon HD 7970 OC Edition 3GB by all means; a ~15% failure rate is not something worth risking.
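The percentages above are plain ratio arithmetic; a quick sketch using the figures quoted from the statistics post:

```python
# Failure-rate figures as quoted above (single French e-tailer data --
# see the caveats raised later in the thread).
rates = {
    "HD 7970 (average)": 7.24,
    "GTX 680 (average)": 2.66,
    "Sapphire HD 7970 OC 3GB (worst 7970)": 14.29,
    "ASUS GTX680-DC2O-2GD5 (worst 680)": 6.98,
}

def percent_higher(a: float, b: float) -> float:
    """How much higher rate `a` is than rate `b`, in percent."""
    return (a / b - 1) * 100

print(round(percent_higher(rates["HD 7970 (average)"],
                           rates["GTX 680 (average)"])))                  # 172
print(round(percent_higher(rates["Sapphire HD 7970 OC 3GB (worst 7970)"],
                           rates["ASUS GTX680-DC2O-2GD5 (worst 680)"])))  # 105
```

So 7.24% is 2.72x the 2.66% rate, i.e. 172% higher; whether that is "significant" given the sample size is a separate question.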

Please don't get me wrong. I am not saying that the 7970 is a bad GPU. It's definitely the best OOTB choice we have on the AMD side (but I personally believe we should still avoid the Sapphire Radeon HD 7970 OC Edition 3GB). And in this case there is a good OOTB candidate on the Nvidia side as well: the 680 4GB, a direct competitor to the 7970 3GB. Considering that it has more VRAM and a lower failure rate, and that the question mentioned the "least amount of problems", I consider a GPU with a lower failure rate, more VRAM (less limiting), and similar performance to be the best answer to that particular question.
 
The difference in performance of AMD GPUs vs Nvidia GPUs over time has nothing to do with a higher amount of VRAM, but with software maturity (both applications and AMD's drivers) and the overall higher performance of AMD GPUs in the same price brackets.
 
The difference in performance of AMD GPUs vs Nvidia GPUs over time has nothing to do with a higher amount of VRAM, but with software maturity (both applications and AMD's drivers) and the overall higher performance of AMD GPUs in the same price brackets.

I don't agree with this point.

To check a driver's performance, we should compare the same game at the same settings with only the driver changed. In that review, there is zero hint of that.

And my 7950 has shown virtually zero improvement over time (I always use the up-to-date AMD driver), and I always use the same game (TR) to check performance after a driver upgrade.

Since the performance difference (between the 680 2GB and 7970 3GB) gets bigger and bigger as resolution increases, with the occasional sharp drop, I will say it points more to a hardware limitation (e.g. VRAM) than to software maturity.

This TechPowerUp review basically shows that 680 4GB > 7970 3GB > 680 2GB.

https://www.techpowerup.com/reviews/Point_Of_View/GeForce_GTX_680_TGT_Ultra_4_GB/28.html
 
I should also state that I game at 1440p without FSAA.

So is the GTX 680 4GB going to be generally faster than the AMD?

I do like that the 7970 has mini DisplayPorts, which is what I use.
 
Here is a review of the 680 vs the 7970 three years after their release. It's quite clear that back in 2012 the 680 was generally faster, and that it has become slower and slower compared to the 7970. Considering that neither card's processing power changes over time, it's not hard to conclude that the VRAM size limitation makes the difference. Therefore, I consider a card with 4GB of VRAM the better option.

http://www.babeltechreviews.com/hd-7970-vs-gtx-680-2013-revisited/

Babeltech!? I'm sorry, but that is a terrible reference. For example, if we had this discussion on a reputable website -- such as AnandTech, The Tech Report, TechPowerUp, or Ars Technica -- no one would ever bring up ABT in a GPU discussion. Everyone knows they have a large Nvidia bias.

Your ABT review contradicts the data provided by most reliable websites.

Can you provide a link from a reputable website to show that the 680 is not fast enough to utilise 4GB of VRAM?

Sure:
https://www.techpowerup.com/reviews/Point_Of_View/GeForce_GTX_680_TGT_Ultra_4_GB/28.html

The performance gains of this 4GB 680 model (at most ~5% at 2560x1600, a resolution at which the GTX 680 can't provide playable framerates even with 4GB of RAM) come from overclocking the GPU, not from the extra RAM. The same performance gains were simulated in the original reviews of the GTX 680 from reliable websites such as AnandTech and TR.

Again, considering that the 7970 is fast enough to utilise 3GB of VRAM, and that even my 7950 can be 3GB-VRAM-limited today, I don't think the 680 can never utilise that 4GB of VRAM because of its speed. And even if 4GB is too much, it can at least utilise the first 3GB of VRAM like the 7970 does, and perform accordingly (as in the comparison back in 2012).

Even the extra RAM on the 7970 is generally unhelpful. For example, the Tonga GPU (fitted with either 2GB or 4GB), which was basically a replacement for the Tahiti GPU in the 7970/280X, performs almost identically to Tahiti. Most benefits come from bandwidth increases (memory speed) or GPU clock speed increases.

On the other hand, here are the failure rate statistics:

https://linustechtips.com/main/topi...ts-french-but-i-translated-nearly-everything/

7970 7.24%
680 2.66%

I asked for a reliable website or a class action lawsuit -- that LTT post links data from a single French online PC component store. Hardly reliable. Furthermore, you didn't mention these points from the LTT post:

"We have to add that these statistics are limited to products sold by this e-vendor, and returns done specifically to said vendor, which is not always the case because people will sometimes return the product to the manufacturer, however this is a minority of the cases."

"The reported failure rates concern products sold between April 1st, 2012 and October 1st 2012, for returns created before April 2013"

"*Please note that obviously not all brands of particular components are noted either because of retailer availability, regional availability or sample sizes that are too small for this large French e-vendor*"

So we basically have a limited timeframe, with limited sample sizes, and limited model selection too. Hm...

Furthermore:

"Certain numbers are very strongly impacted by certain models, which is the case with the 7870s by Sapphire for example. With the 7970, if we exclude the problematic Sapphire model, we get 5.47%[...].

the rate of failure for 7870 lowers considerably, although it's still abnormally high, with sapphire cards still having the problems. In general, we see that GeForce models are more reliable according to this data, notably with an excellent ROF for the GTX 660."

Reading the original post, most AMD cards sold were Sapphire custom-board overclocked models. But what about the OEM reference-board 7970s and 7870s? Where are other prominent AMD board partners such as HIS? Were the GeForce models Nvidia reference-board designs?

Therefore, the failure rate of the 7970 is 172% higher than the 680's. It's no doubt "significantly higher".

The worst 7970 is the Sapphire Radeon HD 7970 OC Edition 3GB, with a 14.29% failure rate.
The worst 680 is the ASUS GTX680-DC2O-2GD5 2GB, with a 6.98% failure rate.

So the average failure rate of the 7970 is higher than that of the worst 680.
And the worst 7970 has a failure rate 105% higher than the worst 680's.

As discussed in the points above, this is dubious accounting, and a great example of the phrase: there are lies, damned lies, and statistics.

The 7970 failure rate (once excluding Sapphire's questionable custom overclock model) falls to 5.47% -- and even that statistic is questionable because it may have a small sample size.

So, actually, it's what, 5.47% for the 7970 versus 6.98% for the ASUS GTX 680? Also, if we ignore that ASUS model and look at the other GTX 680s, the GTX 680 failure rate is just 2.66%.

5.47% (7970) - 2.66% (GTX 680) = 2.81% difference

Hardly statistically significant. Furthermore, we don't even know sample sizes.

Since the performance difference (between the 680 2GB and 7970 3GB) gets bigger and bigger as resolution increases, with the occasional sharp drop, I will say it points more to a hardware limitation (e.g. VRAM) than to software maturity.

This has been proven incorrect many times by the aforementioned websites. Kepler is simply a rubbish GPU design for modern games. It was great for the DX10/DX11 generation, but it's horrible with DX12/Vulkan. For example, the GTX 780 Ti with 3GB has aged very poorly and sometimes struggles to match Tahiti (with 3GB), a significantly smaller and cheaper GPU design.

Finally, as I said, the RAM myth was debunked by looking at the Tonga card with only 2GB of RAM.

I'll leave with these performance summary charts from techpowerup:

https://www.techpowerup.com/reviews/AMD/RX_480/24.html

As you can see GK104 (GTX 680/GTX 770) and GK110 (GTX Titan/780 series) did not age well -- RAM had very little to do with it.
 
I don't agree with this point.

To check a driver's performance, we should compare the same game at the same settings with only the driver changed. In that review, there is zero hint of that.

And my 7950 has shown virtually zero improvement over time (I always use the up-to-date AMD driver), and I always use the same game (TR) to check performance after a driver upgrade.

Since the performance difference (between the 680 2GB and 7970 3GB) gets bigger and bigger as resolution increases, with the occasional sharp drop, I will say it points more to a hardware limitation (e.g. VRAM) than to software maturity.

This TechPowerUp review basically shows that 680 4GB > 7970 3GB > 680 2GB.

https://www.techpowerup.com/reviews/Point_Of_View/GeForce_GTX_680_TGT_Ultra_4_GB/28.html
Why are you using 2012 reviews in 2017?

Especially considering my point: over time, AMD GPUs become faster than their Nvidia competitors.


You do not have to agree with my point. Let's cut to the chase. GTX 770 4GB vs R9 280X 3GB:

[benchmark chart: s3_1920.png]

GTX 770 slower.

[benchmark chart: wd2_1920.png]

GTX 770 slower.

[benchmark chart: doom_1920_v.jpg]

GTX 770 massively slower than the HD 7970.

The GTX 770 uses the same core as the GTX 680, and these are 2016-2017 titles. I am actually staggered that nobody here says anything about the R9 380X, which should be slightly faster than the HD 7970 and should work flawlessly, because it is the same GPU as the R9 M395X in the iMac, and should work out of the box.

At the beginning, AMD GPUs are always slightly slower than their Nvidia counterparts. They start to pull ahead as the software (meaning drivers and applications) matures. No wonder you see no difference in performance if you test with a single application, which is also very dated.
 
Good responses. Sounds like there are other options for a Mac Pro 5,1 in 2017 besides the GTX 680 or the 7950 at a $200ish price point. I just want something that's plug-and-play for macOS Sierra, and I don't want to deal with driver issues when Apple decides to do an OS update. I also want full boot screen support.

Also, I would heavily prefer using the internal PSU.
 
Good responses. Sounds like there are other options for a Mac Pro 5,1 in 2017 besides the GTX 680 or the 7950 at a $200ish price point. I just want something that's plug-and-play for macOS Sierra, and I don't want to deal with driver issues when Apple decides to do an OS update. I also want full boot screen support.

Also, I would heavily prefer using the internal PSU.

I bought a GTX 980 Ti for £200 two weeks ago on eBay, which isn't much above $200 but has a massive performance uplift over a GTX 680 or 7970. Even if you wanted to be cautious and go for a GTX 980 because of the power draw, surely you could get one for just a shade above $200?

To me, spending $200 on either a GTX 680 or a 7970 seems like a massive waste of money unless being able to flash the card for a boot screen is non-negotiable.
 
Good responses. Sounds like there are other options for a Mac Pro 5,1 in 2017 besides the GTX 680 or the 7950 at a $200ish price point. I just want something that's plug-and-play for macOS Sierra, and I don't want to deal with driver issues when Apple decides to do an OS update. I also want full boot screen support.

Also, I would heavily prefer using the internal PSU.

The fact that you won't install drivers rules out a lot of good options and, since you want boot screens, leaves you with the GTX 680 or the HD 7950.

You could get a GT 120 plus an RX 460. The 460 is a modern card that works OOTB (just don't buy an XFX one), and the GT 120 would provide the boot screen you desire. However, the RX 460 isn't that fantastic... so you may as well get a GTX 680 or an HD 7950.

Fenders do indeed rule.
 
Hmm, I guess I'm a little confused. How about just buying a flashed 7970 on eBay and running with it? Doesn't macOS have drivers for that, no questions asked? The auction said full boot screen and driver support, etc.

Assuming you can get a flashed 7970 for $200-225 or a GTX 680 for $200ish, isn't there more benefit for modern games in going with the 7970?

The question also comes back to the 3.33GHz Westmere's limitation: you can only go so far with the GPU (even at 1440p) before you lose the benefit of a faster GPU. Is a 7970 about at that point?

Is there something I don't know about the 7970?
 
I would recommend a 1080 Ti. Best bang for the buck.

However, I would not recommend putting it in an old Mac Pro. I'd recommend building yourself a dual-boot Hackintosh/Windows box. It is not for non-technical people or the faint of heart, but there are many motherboards that work admirably well and easily.

My current machine is more powerful, by leaps and bounds, than anything Apple sells.

 
I got my X5690s in. The performance increase is amazing and I'm really happy.

Slight problem with my 1070, though... it needs 2x 8-pin connectors. Any ideas? Send it back and order another one?
 
I got my X5690s in. The performance increase is amazing and I'm really happy.

Slight problem with my 1070, though... it needs 2x 8-pin connectors. Any ideas? Send it back and order another one?

2x 8-pin doesn't mean that it can or will pull 375W. The power draw of the 1070 is so low that, even with an extreme OC, I am quite sure it won't cause any trouble.
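The connector arithmetic behind that point, as a sketch: per the PCIe spec the slot supplies up to 75W, a 6-pin plug 75W, and an 8-pin plug 150W, while the 1070's reference TDP is 150W, so the card sits far below what its connectors could deliver.

```python
# Maximum power a card may draw by PCIe spec, given its aux connectors.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150  # watts, per the PCIe spec

def max_spec_power(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Spec ceiling in watts: slot power plus each auxiliary plug's rating."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

GTX_1070_TDP = 150  # Nvidia's reference board power
print(max_spec_power(eight_pins=2))                 # 375 -> a ceiling, not the actual draw
print(max_spec_power(eight_pins=2) - GTX_1070_TDP)  # 225 W of headroom on paper
```

The second plug on factory-overclocked boards is headroom for overclocking, not a sign the card needs 375W.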
 
I just came home with a newly bought GTX 980 and it has 2x 8-pin connections.
It consumes almost half of what my late R9 280X did (1x 6-pin and 1x 8-pin). The second connector is there for overclockers.
 
2x 8-pin doesn't mean that it can or will pull 375W. The power draw of the 1070 is so low that, even with an extreme OC, I am quite sure it won't cause any trouble.

So are you guys saying I can simply connect one of the 6+2 plugs and leave the other one empty? Not to bite the hand that feeds me, but won't that damage the card?

All of the other EVGAs have only one 6+2 connector. It's just the FTW that seems to have two. Help!
 