I would not buy the RX 580. To raise the clock even a little, you have to feed it quite a bit more power.

The RX 480 can indeed already beat the 1060 in some games.

Anything that includes Rise of the Tomb Raider is useless, as that game does not seem to be optimized for AMD at all.

And 3 GiB of VRAM is not enough for some games at 1080p Ultra.

Right, as I said the 480 can beat the 1060 in a handful of games, but overall the 1060 wins more than it loses. That's why I said it's better on average. It's not really fair to exclude a game just because it runs poorly on AMD; by that logic I could exclude any game that runs better on AMD and claim that NVIDIA wins 100% of the time. Granted, if all you care about is that one specific game, then by all means buy the GPU that runs it best, but that is rarely the best course of action.
 
The RX 480 also beats the 1060, and they are priced similarly.

Look at the performance of the RX 480 vs. the GTX 970 and their respective launch prices.
It is called confirmation bias: people make up their minds about a product based on their perception of the brands.

You are saying that the RX 480 beats the GTX 1060 in performance while costing less. Asgo will tell you that is not true. Neither of you will ever reach a conclusion, even if you can bring valid arguments for your logic.
Confirmation bias. You cannot look at only one set of results and say that the GTX 1060 wins more, because that is simply not true, especially when we have seen reviews comparing the GTX 1060 and the RX 480 a few months after launch, to test future-proofing and the effect that mature drivers and software have on GPU performance.

Like I have said, it is confirmation bias. In 2016 and 2017 games the RX 480 consistently beats the GTX 1060. You will reiterate that in launch reviews the RX 480 was slower and consumed more power.
 
This game is an outlier; that's why it should be excluded from an analysis of the card itself. You're testing the programming, not the GPU.
 
None of the Nvidia GameWorks titles are optimized for AMD. GameWorks simply gimps the performance of AMD GPUs, and to some degree of Nvidia GPUs as well (who remembers a GTX 980 Ti being required to play Project Cars at 1080p and 60 Hz?).
 

By this logic we shouldn't include any cross-platform games in determining the best card, since they are optimized to run on an AMD-powered console.


Unless you are only going to compare theoretical performance like TFLOPS or memory bandwidth, it is literally impossible to test a computing device without testing the "programming."
 
Cross-platform games are not optimized for AMD on the PC. Games designed for consoles are often not optimized to run on anything other than the console.

If that were true, Unreal Engine games would run better on the AMD architecture than on Nvidia, and they do not, because Epic has always designed Unreal Engine for Nvidia in the first place (they have a partnership).

We can see the effects of this in the partnerships Nvidia has. Star Wars Battlefront was considered an AMD title, yet it ran perfectly on both vendors, because in reality it was not an AMD title. Mass Effect Andromeda is an Nvidia-optimized title, and it runs much better on Nvidia hardware than on AMD. And both games use THE SAME engine.
 

Every modern game engine has been tuned to work well on consoles. Why would developers strip out these enhancements when running these games on PC, if a significant percentage of their user base could make use of them? The fact that some games run better on Nvidia is just because Nvidia is pretty good at making GPUs.

Games can use the same engine but have different performance constraints. One game may have more geometry, another higher-resolution textures, another more shader effects. Each of these constraints could result in different performance depending on the chip architecture.

Nvidia GameWorks is just a program in which Nvidia gives developers access to its own library of effects and enhancements. It's obviously no coincidence that these are tuned to work well on Nvidia hardware, but it's hard to fault a developer for wanting to save time by using these effects.

There is nothing stopping AMD from doing the same thing, except that they don't have the same amount of money to spend. AMD is playing its own game, hoping that at some point its presence in the console space will make games run better on their hardware.
 
Because DX11 lacks some of the console optimizations, and the same goes for OpenGL. Take shader intrinsics, for example: available for both Nvidia and AMD hardware, they provide a huge boost to performance, yet they are exposed only in DX12 and Vulkan. There is a very good reason why games on consoles are so well optimized.

Sometimes you guys baffle me with your lack of knowledge of game development, and then you come here and try to push your agendas, whatever they are.

It's funny that you offer excuses, a massive number of them in your post, when you have no idea what you are talking about.

One more thing about GameWorks. No, it is IMPOSSIBLE for AMD to optimize its drivers and GPU performance for GameWorks titles; they will always behave much worse than Nvidia. How come? Because the GameWorks libraries are specifically designed to work with Nvidia GPUs. The same thing will happen with every other GPU vendor on earth: Intel, PowerVR, all of them will see their performance gimped on their own hardware.

Anybody with even a little clue about this will tell you the same. I'm absolutely baffled that we are arguing about this at all.
 

Hmm, I suppose I do have an agenda. I want Apple to build the best Macs they can, using the best components available. I am sorry you feel so defensive when trying to have an objective conversation about graphics cards.
 
Looking at your posts from the last few pages, it has absolutely nothing to do with Apple hardware, but with hardware in general.

What does Apple care about? Compute. Why did Apple decide to go with AMD? Because properly optimized software runs better on AMD hardware than on Nvidia. Why is Final Cut Pro X faster on AMD? Why is Blender with OpenCL currently faster on AMD hardware than on similarly priced Nvidia hardware using CUDA?

Why are people in this thread unable to put two and two together? Why the constantly illogical arguments, without looking at the broader perspective in the context of Apple computers?

Finally: why do you want to pay more for worse performance?
 
Much of this forum, and in particular this thread (especially given that the RX 480 has yet to appear in a Mac), speculates about future Apple products. Certainly discussing current and future hardware in general is relevant.
 

We were discussing gaming; now you are bringing up compute. I think AMD does have an edge in OpenCL and Final Cut Pro X if you are comparing two similarly sized chips, such as the D700 against, say, a GTX 980. However, there are many workloads that favor Nvidia, and Nvidia is the only manufacturer offering high-end and enthusiast options released in the last year. For instance, these benchmarks give us a taste of what Pascal could offer on the Mac in compute workloads.
 
Which of those workloads are available in the Apple ecosystem? Do you guys still believe that Apple has the same workloads as Windows?

What API will software use in the very near future? CUDA, when there is no Nvidia hardware on the Mac? Or Metal, which runs on everything? What will we see then? Will the situation not be the same as it is with Final Cut Pro X, or with Blender?
 

I'm not really sure what your point is here. There are many workloads that are very similar across Windows and Mac, for instance the Adobe suite. Nvidia's GPUs for the Mac support OpenCL, Metal and CUDA. Right now Nvidia has the fastest OpenCL GPUs available. They also have the fastest Metal GPUs.
 
How come those benchmarks contradict real-world usage?

https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL
For example, here is the link that was debated lately. How come AMD GPUs are faster in Final Cut Pro X while using OpenCL?

How come the benchmarks you use contradict real-world usage?
If you compare an 11 TFLOPS GPU with a 6 TFLOPS one, yes, the Nvidia GPU will be faster. But it will actually be interesting to see what happens when AMD comes up with GPUs of 11 TFLOPS or more.

Why are you guys looking at the past, and not at where Apple is going?
[Benchmark charts: amd-687.c1 vs. GTX 1080, and amd-687.c1 vs. GTX 1080 Ti]

And this is not the fastest AMD card.

P.S. The AMD GPU in question has a maximum of 9.8 TFLOPS, and it trades blows with an 11 TFLOPS monster.

There is an interesting comparison of video composition here.
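(For reference on how these peak numbers are derived: FP32 TFLOPS is just shader count × clock × 2 FLOPs per clock, one fused multiply-add per shader per cycle. Below is a minimal sketch of that arithmetic, using the publicly listed shader counts and boost clocks; the 687.C1 entry uses only the rumored figure discussed above, not a confirmed spec.)

```swift
import Foundation

// Peak FP32 TFLOPS = shaders * boost clock (GHz) * 2 FLOPs per clock (one FMA).
// These are theoretical ceilings only; real-world performance depends on the
// software, as this whole thread demonstrates.
struct GPU {
    let name: String
    let shaders: Int
    let boostGHz: Double
    var tflops: Double { return Double(shaders) * boostGHz * 2.0 / 1000.0 }
}

let cards = [
    GPU(name: "GTX 1060",         shaders: 1280, boostGHz: 1.708), // ~4.4 TFLOPS
    GPU(name: "RX 480",           shaders: 2304, boostGHz: 1.266), // ~5.8 TFLOPS
    GPU(name: "GTX 1080",         shaders: 2560, boostGHz: 1.733), // ~8.9 TFLOPS
    GPU(name: "GTX 1080 Ti",      shaders: 3584, boostGHz: 1.582), // ~11.3 TFLOPS
    GPU(name: "687.C1 (rumored)", shaders: 4096, boostGHz: 1.200), // ~9.8 TFLOPS, unconfirmed
]

for card in cards {
    print("\(card.name): \(String(format: "%.1f", card.tflops)) TFLOPS peak FP32")
}
```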
 

I'm not sure what you mean by "real world usage." This benchmark shows that Pascal is very fast at Blender in macOS.

I do think Final Cut Pro X has been tuned to perform well on AMD. Given that recent Nvidia GPUs do very well with OpenCL but their performance is still behind AMD's, I have to think there is some sort of fixed-function hardware on AMD that is giving it an advantage. If Apple ever shipped a Mac with Nvidia, it would be interesting to see whether Final Cut could use the fixed-function hardware on it.

I look at the data that is available from products that have been released and thoroughly tested independently of the manufacturer, not tests of a product that hasn't been released. Vega may be the fastest OpenCL/Metal GPU ever, but it's too early to tell. Currently it's Nvidia that is making the fastest GPUs.
 
Do you have a Mac with a Maxwell or Pascal GPU? Which will arrive sooner: the Vega architecture, a Mac with Nvidia hardware, or maybe a Mac with AMD hardware?

Look at the broader perspective.

On the other hand, I will not go so far as to say that Nvidia is locked out of Mac hardware for the foreseeable future. If they come up with software that works properly and meets Apple's quality requirements, with powerful hardware that meets Apple's requirements, and with pricing that meets Apple's requirements, we might see Nvidia Macs. The biggest problem is not pricing or hardware, but the software itself.
 

I think the frustration is that Apple's desktops stopped being updated at the same time that AMD has been absent from high-end GPUs. Given that recent Macs have only used AMD GPUs, part of Apple's problem may be their dependence on AMD. At this point you are right that Vega is close enough that Apple could wait it out, but a tube Mac Pro and/or an iMac could have shipped nine months ago with GTX 1080-derived GPUs and offered a huge jump in compute power over their predecessors.

We don't actually know the criteria Apple uses to choose its GPUs. It could be as simple as cost. It could be FCP performance. Maybe Metal, who knows. Maybe there is some sort of driver issue that Nvidia can't fix to Apple's satisfaction. But all of this is speculation, while every other platform has enjoyed the 11 TFLOPS of compute performance Nvidia has been delivering since last August.
 

Source? You're just speculating that it's due to software (quality, performance, something else). You have no idea why Apple has chosen to stick with AMD, and has worked so hard to optimize their software for the AMD GPUs. You keep citing Final Cut as an example, but just like AMD's contribution to Blender, have you considered that Apple worked with AMD to improve their drivers so that FCP ran really well on their GPUs, perhaps at the expense of NVIDIA and Intel?

It could very easily be a matter of pricing, and that AMD is willing to basically give away their GPUs for very little (or no) profit, and that NVIDIA thinks it has better GPUs and thus is not willing to get into a price war with inferior technology. You (and I, and everyone else here) will never know the real reason, so please stop citing your opinions as fact.
 
This.

AMD reported a first-quarter net loss of $73 million, or 8 cents a share, on sales of $984 million

AMD plunges more than 7%
 
From what I have discussed with people knowledgeable about software development, AMD GPU optimization does not gimp performance on any other vendor. If it did, cross-platform games would be gimped on Nvidia and Intel hardware, and that is not the case. One example is shader intrinsics, which gamers claimed were AMD exclusive; that is not true, they are available to Nvidia as well, Nvidia just has to optimize its drivers for them. Same for Intel. This should give you a bit of the picture.

This is the benefit of open-source software, which most forum experts neglect. Shader intrinsics were developed by AMD together with MS and Sony, but are available to everyone.

Nvidia's strength comes from gimping performance on other vendors, not from exposing its own hardware capabilities; GameWorks is the perfect example of this. AMD works the other way around: they design software to expose their own hardware capabilities without gimping anyone else's performance. Final Cut Pro X and Blender are examples of this. Why? Because AMD also benefits from strong software and from the value other vendors (Intel, Nvidia, PowerVR, etc.) add to it, and in the end the users benefit from it. That is why they pack so many features into the hardware even if the software is not yet ready or designed to use them.

I like their approach much more, for two reasons: software always matures, and it is easier to redesign the software than to waste money on hardware.
 
Just speculation here, but could it be that Apple simply wanted to reduce costs by dropping one GPU vendor from Macs while they build out the Metal API? Could that also be the reason why Apple has released so few Macs since 2014? The fewer new GPUs, the less need to write drivers, because it can be pretty expensive to write OpenGL, OpenCL and Metal drivers for every GPU there is and will be.

Once Metal is ready, Apple will stop making, buying, or maintaining new GPU-specific drivers for OpenGL and OpenCL. There will be just one driver, for Metal. OpenGL and OpenCL will work as a layer on top of Metal to keep compatibility with old apps, but they'll be considered legacy APIs from then on.
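To make the "one driver, one API" idea concrete, here is a minimal sketch (my own illustration, not anything Apple has published) of vendor-agnostic Metal code: the same query runs unchanged on whatever GPUs macOS exposes, whether AMD, Nvidia or Intel, as long as a Metal driver exists for them.

```swift
import Metal

// Enumerate every GPU that macOS exposes through the single Metal API.
// Nothing here is vendor-specific; the driver underneath could be AMD,
// Nvidia or Intel and this code would not change.
let devices = MTLCopyAllDevices()
guard !devices.isEmpty else {
    fatalError("No Metal-capable GPU found")
}

for device in devices {
    print("Metal device: \(device.name)")
    print("  low power: \(device.isLowPower), headless: \(device.isHeadless)")
    print("  recommended working set: \(device.recommendedMaxWorkingSetSize / 1_048_576) MiB")
}
```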

And when that happens, Nvidia could make a return.
 

I disagree with your first point about shader intrinsics. The AMD shader intrinsics are specific to their GPU architecture. For example, see:

http://www.frostbite.com/2017/03/4k-checkerboard-in-battlefield-1-and-mass-effect-andromeda/

You can't just use a GCN-optimized shader on NVIDIA, and yes, any NVIDIA-optimized shaders will fail to run on AMD in exactly the same way (unsupported intrinsics). These intrinsics harness instructions or modes that are not exposed by the higher-level APIs like DirectX or Vulkan (or Metal), but that does not make them generic across all GPUs.

Have you considered that GameWorks is just taking advantage of the NVIDIA shader intrinsics? Most game developers are already using the AMD intrinsics as part of their initial console port, so once again I find the way you're categorizing GameWorks as evil somewhat amusing. NVIDIA provides a set of prebaked effects in a library that the game developers can use instead of or in addition to their GCN-optimized console versions, and somehow this makes them evil or terrible?

It's also somewhat amusing that you claim AMD is the only vendor to add tons of hardware features to their GPUs, even if there is no software support for them yet. Have you seen the list of NVIDIA's OpenGL extensions? They've historically exposed every single thing their GPUs can do, at least on Windows and Linux where they control the entire driver stack.

In any case, it's clear that there is little point in discussing this stuff with you. Enjoy your Vega GPU when AMD finally releases it, and Apple finally provides driver support for it. I'm very happy with my Pascal GPU that is enabled by the NVIDIA web driver.
 