Ok, I have results for 1 x 1080, 1 x 1070 and 2 x 1070. Benchmark screenshots attached.
Both, please.

Hmm, no performance gain from the current web driver for Pascal... CPU or driver bottleneck under OS X? We will see.
The other bit of good news is that you can easily run two 1070s or 1080s off internal power (I now await many messages saying this isn't safe because each GPU requires 1 x 8-pin). As you can see in the LuxMark test, 2 x 1070s score over 30,000. The nMP with dual D700s scores just over 20,000 in LuxMark.
There is a performance gain for Pascal over Maxwell. At the moment it looks like the 1080 is the best card to choose, but if the driver improves or you need more VRAM, the Ti and Titan Xp could be an option.
Try running FurMark with two GTX 1080s.
GpuTest - Cross-Platform GPU Stress Test and OpenGL Benchmark for Windows, Linux and OS X | Geeks3D.com
I've just tested the 1080 Ti: Metal in Geekbench is the same as the Xp, and Heaven is the same as the Xp. In the LuxMark OpenCL test the Ti scores less than the Xp. No CUDA driver available yet, unfortunately. If anyone is interested I also have some 1080s and 1070s; let me know if you want the results. Screenshots attached.
Wait, there's no CUDA?!?!
This CUDA driver should work. Has anyone tried it? I have to wait until tonight to get my GTX 1080 up and running in my hack.
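If you want a quick sanity check that the driver actually landed before launching any CUDA apps, a few lines of Swift will do. Note the framework path below is my assumption about where NVIDIA's macOS installer puts it; adjust if yours differs:

import Foundation

// Sanity check after installing the CUDA driver.
// ASSUMPTION: NVIDIA's macOS package installs CUDA.framework at this path;
// change it if your installer puts the framework somewhere else.
let cudaFramework = "/Library/Frameworks/CUDA.framework"
if FileManager.default.fileExists(atPath: cudaFramework) {
    print("CUDA.framework found - driver appears to be installed.")
} else {
    print("CUDA.framework not found - install (or reinstall) the CUDA driver.")
}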
C'mon guys, are you really running Unigine Heaven at 1600x900??? That is going to be almost entirely CPU-limited on macOS. Why not run it at 4K, or even 2560x1600? That would give the GPU a chance to stretch its legs despite being held back by Apple's terrible OpenGL framework design. Or focus on Metal benchmarks, since those should run much, much better thanks to the low-overhead nature of the API.
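If you want to confirm which GPUs Metal actually sees before running Metal benchmarks, a minimal Swift sketch using the standard MTLCopyAllDevices() call will list every device the OS exposes (just a quick check, not a benchmark):

import Metal

// List every GPU macOS exposes through Metal, so you can confirm
// the Pascal card is visible before benchmarking it.
for device in MTLCopyAllDevices() {
    print("\(device.name) - low power: \(device.isLowPower), headless: \(device.isHeadless)")
}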
I read more about the bottleneck issue and now I understand. You are right, sir. All cards are bottlenecked somehow. If I'm correct, all that matters is reaching the full potential of the GPU (saturating the PCIe bus for pro apps) and a stable 60 fps in games. More demanding games or applications will use more of the card's available potential; 200+ fps only matters on high-refresh-rate monitors. So I'm going to buy the 1080 Ti; it will be the most future-proof GPU.
Right, this is why my FAQ says "do not use Cinebench as a GPU benchmark". At some point the CPU simply cannot keep up with the GPU. This happens very quickly when you have a slow CPU (e.g. in the cMP) or a very, very fast GPU (e.g. a high-end Pascal card). If you were to run Unigine Heaven/Valley at 4K resolution, the GPU would have far more work to do (since it's rendering more than 5x the number of pixels compared with 1600x900). You'd be able to see very clearly that the GPU is mostly idle if you ran the OpenGL Driver Monitor and enabled the "GPU Core Utilization" stat in Linear display mode while running a CPU-limited benchmark like low-res Unigine or Cinebench.
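For anyone wondering where the "more than 5x" comes from, it's just the pixel counts per frame; a quick back-of-the-envelope in Swift:

// Pixels per frame at each resolution.
let lowRes = 1_600 * 900      // 1,440,000 pixels
let uhd    = 3_840 * 2_160    // 8,294,400 pixels
print(Double(uhd) / Double(lowRes))   // ~5.76x more pixels at 4K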
Why are so many people doing 1600x900 windowed? Is that the default setting? A common standard for comparison?

Yes, it's the default.
I'm unfamiliar with that test, but I have run a permutation of it. Result attached.
Do you guys think a GTX 1050 Ti or GTX 1060 would be a worthwhile upgrade for me, $$$-wise, coming from an RX 460, if all I care about is performance in FCP X?
For FCP X, an AMD card usually works better (for the same cost).
So you're saying keep using the RX 460? I'm only asking because my system is not at all smooth editing 60p 28M (PS) 1080p video from a Sony NEX-6. I'm not sure if it's because I upgraded to a 1440p monitor, but I do notice FCP X not keeping up with me as I work on the video. It's workable, it's just not smooth all the time. Do you know what I mean?
I was just wondering if I need a little more "juice" in the GPU department to make it smooth again.
PS: My system is a mid-2010 Mac Pro, 3.33 GHz 6-core, 16 GB RAM, RX 460, latest macOS Sierra and FCP X. I also have an SSD as my scratch drive.