Again, their focus is the iPad and iPhone, and yes, I'll agree they went big on the hours and money spent on this "new" Mac Pro. But compared to my machine, the Hackinbeast, their TrashCan Pro (TCP) is crap, and they should be schooling me with a better-performing Mac Pro, not the other way around. No logic in that either. That's what I'm sayin'. Feel me?...

Sadly, they have abandoned many of the Creative Pros who created part of their cool image. Last week I handled five Titan Blacks for one post house and two 980s for another. I actually drive to these places and pick up/deliver the cards, so I get to talk to the people who work there. Both of these places are sticking with 5,1s as long as they can and have been transitioning to Windows machines to keep the power they need.

An even larger place that is rather famous is upgrading 100 machines. They are sticking with Kepler cards, Apple having successfully scared them off anything that doesn't have drivers built into the OS. But that's another large, famous post facility running 4,1/5,1s that also didn't drink the Kool-Aid.

That said, I was on a shoot recently where the DIT on set had a 6,1. Handy for moving files from camera cards to HDs; not sure how the files got handled after that.

Just saw this over in the El Capitan section. Liked how he put it:

This is what frustrates me a great deal. How much wider does the gap in power, heat, and performance between nVidia and AMD have to get before Apple switches? AMD cards are HORRIBLE right now. They are huge power hogs, their TDPs are through the roof, and nVidia cards with half the power draw and half the TDP spank them. The excessive heat that AMD cards generate is an even worse problem when you consider the ultrathin computer casings into which Apple packs its components like sardines. Heat is incredibly damaging to computer components. From an engineering standpoint it seems appalling to build computers this way when nVidia offers such a superior alternative.

I have been posting in the iMac section. The AMD card in the 5K iMac routinely runs at 100-105°C. I have told them that running this machine without AppleCare is crazy, and that they should all unload them before the warranty runs out. Having the GPU radiating that much heat mere millimeters from that 5K panel is BEGGING for a yellow/brown cast over the GPU area in the future.

https://discussions.apple.com/thread/6641477

Apple has moved to a complete line of second-rate machines due to its addiction to AMD's bargain-bin GPUs. They are cheaper for a good reason.
 
How much wider does the gap in power, heat, and performance between nVidia and AMD have to get before Apple switches?
tell nvidia to open up cuda.

it's not necessarily an AMD vs nVidia scenario.. it's more an openCL vs CUDA war.

if the last couple years' worth of macs (mainly the mac pro) had nvidia, what would be the outcome?
say, for example, that in 2020 nvidia wants to charge apple more for the GPUs in the mac -- what kind of negotiating power would apple have at that point? could apple do in 2020 what it can do now with nvidia? (tell them to screw off with their little ploy)

---
but i don't think they're switching back anytime soon.. has the mac lineup ever been this nvidia-free before? (real question.. not as rhetorical as those other ones ; ) )
 

Apple is trying to make small, thin, light computers with long battery life.

Yet they are using GPUs that are proven to be slower and to run significantly hotter while drawing far more power. That means less battery life and, ultimately, shorter-lived computers in general.

Supporting the company that has monopolized the CPU market doesn't bother them even a little bit, but they don't want to play nice with Nvidia, so they sell their computers and customers down the river and use the hot, steaming pile of GPU that is AMD.

I have no doubt that AMD sold them 7970 parts and even agreed to the "FirePro" name, handing Apple a huge markup. AMD was (and is) desperate, so Apple has them by the bxxxs. Anyone wanna guess how long those 105°C iMacs are going to last? One part of the logic board regularly runs past boiling while the rest sits at different temps. Someone should make a little rack to hold bread on the back and call it the i-Toast.

If there were two HD manufacturers, and one had hot, power-hungry drives that were slower while the other had cooler, faster drives that used less power, would there be any doubt which one we would want in our Macs?

Why are we even having this discussion? I am glad AMD is around to keep Nvidia's nose to the grindstone, but their entire range of cards is completely inferior and as the quote I included says, they are a remarkably poor fit for Apple's design choices, yet they picked them.

The only reason I can imagine to defend slower, hotter, more power hungry cards is being an employee of AMD and/or Apple.
 

the discussion could happen because it's entirely relevant.. why aren't nvidia gpus in macs?
you guys discuss the consequences etc all the time but what are the actual reasons?

i get it that it's more fun to answer 'apple is stupid' and the like but come on.. what are the real reasons?
 
AMD is willing to bend to Apple and nVidia isn't. AMD needs the money nVidia doesn't.
 

Perhaps Apple built Metal together with AMD.. Apple has licensed IP from AMD, and the deal included the right to buy GPUs at a good price.

Btw, AMD is designing a new APU (similar to the PS4's) for a secret client that is not Microsoft or Sony.. it could be Nintendo, but it could also be Apple.

"We also began development of a new Semi-Custom design in the quarter. Like our other Semi-Custom designs, the details are customer confidential, but we are pleased with our progress continuing to expand our customer base in this important part of our business." - Lisa T. Su on Q2 2015 Results - Earnings Call Transcript

So Apple could create a new product.. maybe a combination of Apple TV and Mac Mini that could rival game consoles? Or perhaps a new Mac (the Mac)? Or something totally new...

14nm FinFET is just around the corner.. maybe it's an APU with HBM v1 or v2? A computer on one chip...
 
Why isn't nVidia in Macs? First of all, price: way higher than any AMD offering in the same segment. Secondly, AMD offers higher performance in OpenCL than Nvidia. I will not link to benchmarks again, no matter how hard the Nvidia PR machine runs on this forum. Maxwell GPUs may be faster in games, but not in OpenCL work.

Third is Metal. Metal is possibly derived from AMD's Mantle, and as we know, everyone got it apart from Nvidia; only they know why. Fourth: AMD FreeSync. Fifth: AMD is willing to work with Apple, Nvidia is not. The OS X drivers for El Capitan have been improved for GCN cards; AMD finally cracked them. The reasons go on, and on, and on...

P.S.
http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x-power-pump-efficiency,4215.html
Fury X at a -50% TDP setting: power consumption of the GPU decreases from 267W to 170W. FULL FIJI CHIP!

Average FPS drops by only 3 FPS at 4K resolution. That means the GPU itself was still clocked at over 1000 MHz at 170W.

A Fury Nano is possible.

8 TFLOPs from 170W of power consumption. If that is not power efficiency, I don't know what is.

P.S.2. That means it is possible to lock the full Fiji chip to 125W at 900 MHz, just as I said a few pages back.
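
For anyone who wants to sanity-check those TFLOPs figures: theoretical single-precision peak is just 2 ops per clock (FMA) × shader count × clock. A quick sketch, assuming Fiji's published 4096 stream processors; the clocks are the ones claimed above, not measurements:

```python
def peak_tflops(shaders, clock_mhz, ops_per_clock=2):
    """Theoretical single-precision peak: shaders * clock * 2 (fused multiply-add)."""
    return shaders * clock_mhz * 1e6 * ops_per_clock / 1e12

FIJI_SHADERS = 4096  # published shader count for the full Fiji chip (Fury X)

for mhz in (1050, 1000, 900, 850):
    print(f"Fiji @ {mhz} MHz: {peak_tflops(FIJI_SHADERS, mhz):.1f} TFLOPs")

# Fiji @ 1050 MHz: 8.6 TFLOPs   (stock Fury X)
# Fiji @ 1000 MHz: 8.2 TFLOPs   (roughly the "8 TFLOPs from 170W" claim)
# Fiji @ 900 MHz: 7.4 TFLOPs
# Fiji @ 850 MHz: 7.0 TFLOPs   (the "7 TFLOPs at 850 MHz" figure)
```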
 

Yay. If strangled down, de-volted, de-clocked, and thoroughly de-balled, you might squeeze the watered-down result into 125 Watts.

Yippity.

A GTX 960 at 120 Watts, at stock clocks. No clipping, cutting, or reducing needed.

And unlike the nMP, no excuses either.

So a throttled, choked, and castrated Fiji can squeeze its way down to 170 Watts?

Again, yippity. Still writing a check that the nMP can't cash, by 90+ Watts. Not gonna happen.

It's Nvidia, or AMD mobile GPUs, or mid-range AMD space heaters plucked down to spec.

Face it.

EDIT: Once again you have linked to an article that disproves your point.

From the "Sum it Up" section:

"At this point, all we can say is that AMD’s new graphics card catches up to the GeForce GTX 980 Ti’s power consumption at Full HD, but can’t keep up when it comes to performance. At UHD, its performance is competitive, but its power consumption is much higher compared to Nvidia’s graphics card."

Bravo.
 
And which GPU will be faster: Fury at 125W or a GTX 960? You have to be out of your mind if you think a GTX 960 will be faster than Fury even at 850 MHz and 125W of TDP.

What is the TFLOPs performance of a GTX 960 at 125W? 2.3 TFLOPs at best? 850 MHz gives you 7 TFLOPs on Fiji.

I will put this another way; you can resist it if you want. Getting the full Fiji chip to 170W of TDP is possible at a 1000 MHz core clock, and 900 MHz is possible at 125W. You have been given proof from experiments and you are still resisting it, yet you think a 120W GTX 960 is a better offering than Fiji at 125W. And one more thing: the nMP will always have two GPUs, which means a 250W thermal envelope.

Let's look at the worst-case scenario, where the full Fiji chip runs at an 850 MHz core clock. That gives 14 TFLOPs of compute power across both cards, almost 2.5 times the compute power of a Titan X in the same power envelope. And if you want to believe the thermals or the power supply will not handle it: you already provided the link. 250W across both GPUs is 250W of power drawn and emitted to the environment as heat. You can resist it however you want.

P.S. About that GTX 960: tell me, MVC, why did cutting the GTX 980 in half reduce the TDP and power consumed by only 45W, not 80W, even at the same GPU clocks? It should be -50% if you add up the whole design of the GPU (half the cores, half the memory bus). Why isn't it possible to cut power by 50% by cutting the GPU itself by 50%? And that goes for every single Maxwell GPU.

AMD GPUs can lose 50% of their power draw while giving up only 10% of performance.
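
To make the perf-per-watt comparison concrete, here's a rough sketch using published shader counts and clocks, plus the wattages argued in this thread; treat the downclocked Fiji rows as this post's scenario rather than shipping specs:

```python
def peak_tflops(shaders, clock_mhz):
    # Theoretical single-precision peak: 2 ops per clock (FMA) per shader.
    return shaders * clock_mhz * 1e6 * 2 / 1e12

# (shaders, clock MHz, watts) -- the Fiji entries use the downclocked
# figures claimed in this thread, not measured numbers.
cards = {
    "GTX 960 (stock)":          (1024, 1178, 120),
    "Titan X (stock)":          (3072, 1075, 250),
    "Fiji @ 850 MHz (claimed)": (4096,  850, 125),
    "2x Fiji @ 850 (claimed)":  (8192,  850, 250),
}

for name, (shaders, mhz, watts) in cards.items():
    tf = peak_tflops(shaders, mhz)
    print(f"{name:27s} {tf:4.1f} TFLOPs  {tf * 1000 / watts:5.1f} GFLOPs/W")
```

On paper, that is where the "almost 2.5 times a Titan X in the same 250W envelope" claim comes from; whether a real card holds those clocks at those wattages is exactly what is being argued here.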
 

Where do you get all of these numbers?

Nobody else has them.

You are making wild promises about cards that don't exist yet. Over and over again you yourself have linked to articles that show Fiji to be a power-slurping space heater that can't begin to keep up with Nvidia's Maxwell cards. I don't even have to make the links, just pull quotes from YOUR links.

The cards are crap; every article has tried really hard not to type it that way, but the summations always have to tap-dance.

AMD swung and missed. Apple picked the wrong horse.

We just have to wait and see which castrated version of what ends up in the 7,1. Or maybe someone from AMD would like to hand us more numbers from the "pie in the sky" project?

And again, if AMD could simply lower clocks, still have performance but get power draw down to 125 Watts, why are they allowing themselves to look like such fools in all of the reviews? (Including the ones you link to?)
 
Did you read that Tom's Hardware reduced the TDP by 50% and power consumption went from 267W to 170W? Did you read that link at all? Did you see that at 170W it maintained 95% of nominal performance? Did you read the links from a few pages back where a guy reduced the Fury X's TDP by 40% and it went down to 225W while maintaining 99% of nominal performance? Did you read the links where I showed people reducing Hawaii GPUs to 145W while maintaining 85-90% of nominal performance?

The answer is no, because otherwise you would not be asking the stupid question above.
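
For reference, taking the 267W / 170W / 95% figures above at face value, the perf-per-watt math is simple enough to check:

```python
# Figures claimed from the linked Tom's Hardware power-target experiment.
stock_watts, capped_watts = 267, 170
perf_retained = 0.95  # ~95% of stock performance at the reduced power target

power_ratio = capped_watts / stock_watts
print(f"Power cut by {1 - power_ratio:.0%}, performance kept at {perf_retained:.0%}")
print(f"Perf-per-watt improves by ~{perf_retained / power_ratio - 1:.0%}")
# Power cut by 36%, performance kept at 95%
# Perf-per-watt improves by ~49%
```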
 
AMD is willing to bend to Apple and nVidia isn't. AMD needs the money nVidia doesn't.

hmm.. not really sure if that's realistic.. nvidia was selling millions of gpus via macs.. i can't imagine them 'not needing the money' to the point where they don't care if they're in apple computers or not..
apple may not be their biggest outlet but still, losing hundreds of millions of dollars per year is not something i imagine nvidia being able to just brush off.

Scuttlebutt is that nVidia's price to supply for the nMP was in the vicinity of 2 grand more than AMD's per machine.

maybe.. but it's all of the machines.. mbp and imacs have been switched over to amd as well.. nvidia has been deleted from the entire mac line.

that said, if a mac pro were really $2000 more per unit simply based on whether or not apple went with nvidia instead of amd then apple certainly made the right choice.. the mac pro is overpriced as is.. adding another two grand is ridiculously overpriced, especially considering the two machines would run virtually identically from the user's point of view.
 
And for all the people thinking I'm an AMD fan: https://forums.macrumors.com/thread...rds-you-ever-had.1904750/page-2#post-21667073
Here you have a link. If I am biased, it is only toward the green brand.
Edit:
And again, if AMD could simply lower clocks, still have performance but get power draw down to 125 Watts, why are they allowing themselves to look like such fools in all of the reviews? (Including the ones you link to?)
The Fury Nano is a full Fiji chip with a 175W BIOS cap. It will clock itself up or down depending on the thermal and power-supply environment.

That means it can bounce dynamically between 900 and 1000 MHz without exceeding 175W of power consumption. There were already links in this thread showing the Fury Nano getting 97% of nominal Fury X performance at 157W. That was, however, a prerelease AMD slide with a 4K Unigine Valley (or similar) benchmark, so we need to wait for real reviews. But the power consumption looks promising.
 
Why isn't nVidia in Macs? First of all, price: way higher than any AMD offering in the same segment. Secondly, AMD offers higher performance in OpenCL than Nvidia. I will not link to benchmarks again, no matter how hard the Nvidia PR machine runs on this forum. Maxwell GPUs may be faster in games, but not in OpenCL work.
that's maybe the current take on it but it's really short-term analysis which could change on a quarterly basis. apple made the amd decision for the mac pro long ago.. probably in 2011 or 2012.

at that time, openCL (which was authored by apple) was maybe one or two years old whereas CUDA would have been more 'mature' at 4-5 years old.. the design/configuration of the mac pro seems based around gpgpu computing.

i personally believe cuda had way too much traction with developers at that time and was much easier to get working in applications than openCL..

if apple went with nvidia, developers would (more likely than not) have gone with cuda because it was more powerful and easier to use.

so we'd now have software for osx.. adobe, autodesk, the renderers, etc which were written with cuda instead of openCL since nvidia saw the future of gpgpu before anybody else and got the jump on creating the frameworks. (ie- smart on nvidia's part)..

unfortunately for us, nvidia is using cuda as a means to make/secure money instead of a means to improve computing as a whole..

i really do believe if apple allowed nvidia to continue supplying macs-- with this proprietary language thing going on.. nvidia would have secured their position inside of all future macs.. much of the software for os x would require nvidia gpus therefore apple would have to bend to nvidia at negotiations instead of the other way around.. a position i highly doubt apple wants to be in.. it's their computers and they want the upper hand when it comes to buying components to build them.

i also believe if CUDA were open from the get-go, the mac/nvidia landscape would look different right now.
 
It's the same story with GameWorks: GameWorks titles run much better on Nvidia hardware than on AMD. It's as simple as that.

But we are not talking about CUDA. Apple doesn't want to be bound to one supplier's hardware and one solution; they want suppliers bound to Apple's solutions. They want the best solutions for their ecosystem, and in Apple's mind the best solution is OpenCL. Apple went that route because of the efficiency and compute power of AMD GPUs. Nvidia currently supports at most OpenCL 1.2, whereas Intel and AMD already support version 2.0.
From the "Sum it Up" section:

"At this point, all we can say is that AMD’s new graphics card catches up to the GeForce GTX 980 Ti’s power consumption at Full HD, but can’t keep up when it comes to performance. At UHD, its performance is competitive, but its power consumption is much higher compared to Nvidia’s graphics card."

Bravo.
Well, when I talk about OpenCL performance, you talk about gaming performance. When I talk about downclocking the Fury X to 125W and getting 7 TFLOPs of compute power from a single GPU, you jump in with a 120W GTX 960 that has 2.3 TFLOPs of compute power. I'm not interested in gaming; for my gaming and my interests, a Fury (which will possibly end up being the D510 or whatever) is way more than I need. I'm interested in Final Cut Pro X and Photoshop performance on OS X. Final Cut Pro X is much faster on AMD cards thanks to Radeon OpenCL performance and because FCPX is optimized for OpenCL rather than for a specific GPU. Would I get a better option for Photoshop on OS X? Possibly. But again, dual Fury will be way more than I need, even for 4K editing and gaming.

P.S. Let's assume Apple puts the Fury X into the new Mac Pro and downclocks it to 850 MHz and 125W TDP. That is ultimately Fury X CrossFire in a 250W thermal envelope, the same as the GTX 980 Ti you quoted. Wouldn't Fury X in CrossFire wipe that GTX 980 Ti off the earth while using the same amount of power? That's what makes the difference.

And yes, I am an efficiency freak ;).
 
Also remember that apple needs a compute solution that works on all of its macs, not just those with discrete video cards. Intel does not support cuda, but it does support opencl. Apple didn't want to tie itself to nvidia's proprietary tech, and it needed something that works across all macs.

Additionally, amd is more willing to build custom cards like those found in the Mac Pro, and was willing to give Apple exclusive access to tonga (the r9 m295x).

It is possible that amd is faster at OpenGL, but I haven't seen anything conclusive on this. It does seem like amd has an edge in compute, especially when we are looking at Tahiti (d500/d700). This card was a monster in compute performance, and it makes sense why Apple built a computer around two of them.

Nvidia certainly hit a home run with maxwell. However, it is strictly a gaming and single-precision compute chip. It's not nearly the all-around compute chip that Tahiti is. Apple is favoring amd at the moment, but I doubt it lasts forever. Apple likes to pit its suppliers against each other, and it wouldn't surprise me to see maxwell in something like the iMac. However, I think the Mac Pro will always be compute-first, and whatever has the best performance in pro tools (or whatever else) in the given power envelope is what Apple will choose.
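
On the "works across all macs" point, it's easy to see for yourself: OpenCL enumerates the Intel CPU, the integrated GPU, and any discrete AMD or Nvidia GPU through the same API, and it reports the highest OpenCL version each device supports (the 1.2-vs-2.0 gap mentioned earlier). A minimal sketch with pyopencl, assuming it is installed:

```python
import pyopencl as cl

# List every OpenCL device the system exposes -- CPU, integrated GPU,
# and discrete GPU all show up through the same API.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"{platform.name}: {device.name} ({kind})")
        print(f"  {device.version}, {device.max_compute_units} compute units")
```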
 
hmm.. not really sure if that's realistic.. nvidia was selling millions of gpus via macs.. i can't imagine them 'not needing the money' to the point where they don't care if they're in apple computers or not..
apple may not be their biggest outlet but still, losing hundreds of millions of dollars per year is not something i imagine nvidia being able to just brush off.



maybe.. but it's all of the machines.. mbp and imacs have been switched over to amd as well.. nvidia has been deleted from the entire mac line.

that said, if a mac pro were really $2000 more per unit simply based on whether or not apple went with nvidia instead of amd then apple certainly made the right choice.. the mac pro is overpriced as is.. adding another two grand is ridiculously overpriced, especially considering the two machines would run virtually identically from the user's point of view.

But not enough to customize
 
I think the real discussion here is what you want/need your Mac for.
Some are more about gaming, or extreme performance just for the sake of it, a "mine is bigger than yours" kind of thing.
OK, fair enough, it's a way of seeing things. You're entitled to get the most out of what you pay for.
But this is not the Mac Pro way, nor was it ever, I'd say, nor will it ever be.
You get what Apple wants you to, or what better suits them, or what they envision it to be. And that's understandable too.
They want and need to control their closed ecosystem, like it or not.

Like you, koyoot, I've always been more of an nVidia fan, owning only their cards; I still have an 8800GT 512 pumping along.
Still, that doesn't make me go bashing AMD or their cards.
OK, their cards tend to draw more power in certain applications, performance can be lower, but they're good enough for me. I can live with those trade-offs.
Would it be nice to have the nVidia option? Well, of course. But over the years I've seen nVidia closing down on itself and AMD opening up. Is this bad? No, nVidia is entitled to have its own way too, much like Apple, and to profit from it; in the end that's what they work for, and with due credit. But their choices are reflected in their partnerships with others, Apple in this case.
Too bad for us, but that's the way it is in the business world, really.
I don't game much now, but others do and want every drop of performance out of their hardware. But the nMP is really not a gaming machine, or it's not supposed to be anyway.
Maybe with Metal Apple will make a gaming rig, just for those who keep asking for it. But don't hold your breath...
 
You get what Apple wants you to, or what better suits them, or what they envision it to be. And that's understandable too.
They want and need to control their closed ecosystem, like it or not.
No. Apple believes that OpenCL will give you more than any other solution out there. Maybe that is true, maybe not; it depends on the software. Again, I really would love to see the applications that use CUDA running on OpenCL, but that will not happen. An apples-to-apples comparison between the two platforms would answer a lot of questions.
 
I really would love to see the applications that use CUDA running on OpenCL, but that will not happen. An apples-to-apples comparison between the two platforms would answer a lot of questions.
Indigo renderer uses both cuda and openCL for the current gpu acceleration.. i can pick cuda or openCL to run on the gpu, or just limit openCL to the cpu..

[attached image: indigocurrent2.png]


that said, this implementation has been in the app for years now (at least 4-5) and they both give similar enhancements.. (the main bottleneck is still the cpu.. the gpu just assists it).

however, indigo devs are currently rewriting the program as a pure gpu renderer (you can follow the progress at their forum if you want.. the devs are pretty open about it and will answer questions etc.).. this pure gpu implementation is openCL only.. they ditched cuda.
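
fwiw, that cpu-or-gpu toggle is pretty natural in openCL, since the same kernel source builds for either device type.. a rough pyopencl sketch of the idea (the kernel here is just a made-up example to show the device selection, not anything from indigo):

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void scale(__global float *data, const float factor) {
    int i = get_global_id(0);
    data[i] *= factor;
}
"""

def run_on(device_type):
    # Pick the first device of the requested type (assumes one exists).
    devices = [d for p in cl.get_platforms()
                 for d in p.get_devices()
                 if d.type & device_type]
    ctx = cl.Context([devices[0]])
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL).build()

    host = np.arange(16, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=host)
    program.scale(queue, host.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, host, buf)
    return host

print(run_on(cl.device_type.GPU))  # same kernel source...
print(run_on(cl.device_type.CPU))  # ...on a different device type
```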
 
pure gpu implementation is openCL only.. they ditched cuda.


Also remember that apple needs a compute solution that works on all of its macs, not just those with discrete video cards. Intel does not support cuda, but it does support opencl. Apple didn't want to tie itself to nvidia's proprietary tech, and it needed something that works across all macs.

Neither of these comments is really relevant now that Nvidia has competitive OpenCL performance.
 
I was actually referring to the hardware used, not OpenCL.
Although the hardware does, to some extent, condition how software runs on it.
Choosing AMD, besides the other reasons mentioned, was surely related to Apple's vision of OpenCL as the platform of choice.
Maybe devs will see that Apple is serious about this and port the applications that aren't there yet. Or maybe they have just stopped following Apple altogether.
 
I think I know why overclocking the Fury X brings such a small performance gain, and why overall the GPU is, well, a bit of a disappointment.

The GPU's cores are not being fully used. If we look at the Windows 10 results for Ryse: Son of Rome at 1080p, it turns out the Fury X not only gained around 20% in performance but is now also faster than the GTX 980 Ti.
http://www.computerbase.de/2015-07/...ndigkeit/#diagramm-ryse-son-of-rome-1920-1080

If we take the reports that the Fury X is idling in some situations, and that GPU load never hits 100%, we come to a strange conclusion: if the cores were fully used, not only would overclocking scale better, but the differences in games would also be much bigger.

The R9 290X is 10 FPS slower than the R9 390X, which has only a 5% higher core clock. How much faster is the Fury X than the R9 390X at the same resolution? Exactly the same 10 FPS. If it has the same core clock, a higher pixel fillrate thanks to Tonga-generation technology, and far more cores, then something is really wrong here, and not with the GPU itself but with the games.
http://tpucdn.com/reviews/ASUS/R9_Fury_Strix/images/bf4_1920_1080.gif

How big is the difference between the R9 290X and 390X at 4K? 4 FPS. How big is the difference between the R9 390X and Fury X at the same resolution? 5 FPS. http://www.techpowerup.com/reviews/ASUS/R9_Fury_Strix/11.html Most of the additional cores are idling, which is why in games we see the underwhelming picture of underperformance against Nvidia solutions, and why we see a gigantic increase when there is no CPU bottleneck and the game really can use all of the cores.

To sum it up, here is a quote from a person who runs two Fury X cards in his computer: http://semiaccurate.com/forums/showpost.php?p=242097&postcount=1106

And I will not be completely surprised if the Fury X eventually ends up faster than the GTX 980 Ti. But that is only about gaming; in compute it is a completely different story.

Edit: This is even better. http://tpucdn.com/reviews/ASUS/R9_Fury_Strix/images/gta5_1920_1080.gif
I have no doubt now that there is something wrong with the games: the R9 390X is 2.5 FPS slower than the Fury, and the Fury X is only 3.9 FPS faster than the Fury. Not bad.
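
A quick way to see the point about idle shaders is to compare theoretical throughput ratios (from the published shader counts and clocks) against the small FPS deltas in those charts. A rough sketch, assuming the usual reference specs:

```python
# (stream processors, reference core clock MHz) -- published specs.
cards = {
    "R9 290X": (2816, 1000),
    "R9 390X": (2816, 1050),
    "R9 Fury": (3584, 1000),
    "Fury X":  (4096, 1050),
}

def relative_throughput(shaders, mhz):
    return shaders * mhz  # relative numbers are enough for a ratio

base = relative_throughput(*cards["R9 290X"])
for name, spec in cards.items():
    print(f"{name}: {relative_throughput(*spec) / base:.2f}x vs 290X (theoretical)")

# R9 290X: 1.00x vs 290X (theoretical)
# R9 390X: 1.05x vs 290X (theoretical)
# R9 Fury: 1.27x vs 290X (theoretical)
# Fury X: 1.53x vs 290X (theoretical)
```

On paper Fiji should be pulling away by 27-53%, yet the FPS gaps in those charts are only a few percent, which is the "most of the extra cores are idling" argument in a nutshell.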
 