"Web Drivers?" Very likely never. But even if Nvidia enabled the device IDs, the driver will most likely stay in beta. The last generation never left beta and never had an official licensed release.
"Here's where to get the AMD Radeon Software Crimson Edition Display Driver version 16.30.2311 directly from ATI:"
AMD (ATI) supplies no drivers directly to the consumer.
Nvidia implemented G-Sync and solved latest issues with OpenCL in the newest web drivers for Sierra. In my opinion this is an indication that they continue developing drivers.
Which rather shows what drives people's mindshare about particular GPU brands on this forum. "Solved latest issues" is disingenuous to the members and readers. Long-standing issues haven't been resolved and complaints keep escalating on this forum and others. Just two days ago we were shown how many OpenCL errors and validation failures happen with Maxwell cards in Luxmark. This, and trying to sell G-Sync monitors, is not exactly a sign of progress, especially since G-Sync is pointless for most Mac Pro usage. You wouldn't use it for professional work, and most games are too crap to benefit.
"Solved latest issues" is disingenuous to the members and readers. Long standing issues haven't been resolves and complaints keep escalating on this forum and others. Just two days ago we were shown how many OpenCL errors and validation fails happen with Maxwells in Luxmark. This and trying to sell G-sync monitors is not exactly a sign of progress, especially since G-Sync is pointless for most Mac Pro usage. You wouldn't use it for professional work and most games are too crap to benefit.
"Solved latest issues" is disingenuous to the members and readers.
Which shows rather on what people drive their mindshare about particular GPU brands on this forum.
Even their iBooks app is screwed with the web drivers:
https://www.tonymacx86.com/threads/nvidia-web-driver-ibooks-bug.203229/
Nah, this is not even as spooky as the behavior I have witnessed with the Nvidia Web Drivers.
macOS Sierra 10.12.1, latest web drivers.
Games: Hearthstone, Heroes of the Storm.
Platform: MacBook Pro Mid 2012 with GT 650M.
GPU active while playing a game: Intel HD 4000.
What is happening? Artifacts in the game while playing and when doing anything interactive on the screen, e.g. placing the mouse over the opponent's portrait.
And again, this is while the Nvidia web driver is active in the system instead of the standard Apple drivers. What is funnier, the artifacts happen when the integrated GPU is active (the same thing happens when the discrete GPU is active).
In HotS I sometimes get graphical lag and terrible stuttering on the Nvidia web drivers.
When I disabled the Nvidia web drivers and went back to the Apple drivers, everything in HotS went back to normal and I got 59 FPS (locked) 99% of the time.
However, in Hearthstone the problem persisted. You know what cured it completely? Uninstalling the Nvidia web drivers from the computer altogether.
Now, everything with GPU behavior is completely fine.
Any thoughts? The only thing that comes to my mind is that people report better performance in World of Warcraft: Legion (Metal) on the standard Apple drivers than on the Nvidia web drivers, on exactly the same machine. But neither HotS nor Hearthstone uses Metal. And that doesn't explain why the Nvidia web drivers affect reliability (artifacts) on the integrated Intel GPU.
My thoughts are they have one college intern working in the Mac driver department, and every few months they get bored of sitting in a tiny closet all day with no support and just a cMP to test on, so they quit, and then that stupid job gets listed on the sites again, followed by people posting on forums: "Look, Nvidia is developing for Apple again. New shiny future!"
That's real weird that it can cross over.
Starting in iOS 8 and macOS 10.10, the system offers library validation as a policy for the dynamic libraries that a process links against. The policy is simple: A program may link against any library with the same team identifier in its code signature as the main executable, or with any Apple system library. Requests to link against other libraries are denied.
This is not NVIDIA's fault, for what it's worth.
https://developer.apple.com/library...s.html#//apple_ref/doc/uid/TP40005929-CH4-SW9
Xcode, iBooks and other apps appear to have "library validation" enabled, which means they can only link against Apple system libraries or libraries signed with their own team identifier.
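For what it's worth, whether a given app opts in to library validation can be read from its code signature. Below is a minimal Swift sketch (my own illustration, not from the thread) that pulls an app bundle's code-signing flags via the Security framework and tests the library-validation bit (0x2000, kSecCodeSignatureLibraryValidation). The app paths, and the assumption that the flags value is present in the basic signing-information dictionary, are mine.

```swift
import Foundation
import Security

// Sketch: read an app's code-signing flags and check the library-validation bit.
// Assumes kSecCodeInfoFlags is included in the basic signing-information dictionary.
func hasLibraryValidation(appPath: String) -> Bool? {
    var staticCode: SecStaticCode?
    let url = URL(fileURLWithPath: appPath) as CFURL
    guard SecStaticCodeCreateWithPath(url, SecCSFlags(), &staticCode) == errSecSuccess,
          let code = staticCode else { return nil }

    var info: CFDictionary?
    guard SecCodeCopySigningInformation(code, SecCSFlags(), &info) == errSecSuccess,
          let dict = info as? [String: Any],
          let flags = dict[kSecCodeInfoFlags as String] as? UInt32 else { return nil }

    // kSecCodeSignatureLibraryValidation == 0x2000
    return flags & 0x2000 != 0
}

// Example app paths (adjust for your system):
for app in ["/Applications/iBooks.app", "/Applications/Xcode.app"] {
    switch hasLibraryValidation(appPath: app) {
    case .some(true):  print("\(app): library validation enabled")
    case .some(false): print("\(app): library validation not set")
    case .none:        print("\(app): could not read code signature")
    }
}
```

If that flag is set, any library that is neither an Apple system library nor signed with the app's own team identifier will be refused, which matches the behaviour described in the Apple documentation quoted above.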
I didn't have time to go through your link yet. However, do you mean that since OS X 10.10 it has been a known issue that the Nvidia web driver can't work properly in some apps because of Apple's restriction?
If yes, then it doesn't matter whether it is Nvidia's fault. You stated the fact that these apps can't work properly with Nvidia's web driver, and there is no workaround yet. So, if we want to stay with Apple's OS, it may be better to avoid the Nvidia GPUs that require the web driver, just in case more and more apps stop working properly with it.
There are only three ways to handle a problem: accept it, change it, or leave it. We can't change it, and if we don't want to accept running the apps with glitches, we can only leave it: either leave the new Apple OS, or leave the Nvidia web driver.
In this case, I will choose to leave the Nvidia web driver. As you pointed out in another post, we should keep using the most up-to-date OS and web driver, because Nvidia tends to fix bugs only in the newest driver. Therefore, it's quite meaningless to leave the new Apple OS (since this is a Mac forum, I assume that means staying on an old version of OS X rather than going to something like Windows or Linux), because those known annoying bugs may then stay forever.
I've pretty much given up on OS X on my Mac Pro, configured it for Windows 10 only, and it works really well. If I want a new GPU, it'll just work, albeit I'll lose the EFI boot menu and possibly some performance because of CPU, RAM and PCIe negotiation issues, but it'll still be better.
If it doesn't work that well, I can always save up, buy a new system, use the new card and sell the Mac Pro. I'm not really tied to OS X any more.
https://forums.anandtech.com/thread...tle-dualshockers.2486734/page-3#post-38505214
Performance of Nvidia GPUs in the first true DX12 title:
A 4.4 TFLOPs GPU (GTX 1060) is on par with the 5.9 TFLOPs GTX 980 Ti, despite being exactly the same architecture. Ask yourself where that comes from. Is the GTX 1060 really that fast, or... ?
Or does real application/game performance have very little to do with theoretical metrics like TFLOPs? If the game or app were doing nothing but math computations, then the TFLOPs score would matter. If it's a more balanced workload that leverages other parts of the GPU like texturing, ROPs, etc., then raw TFLOPs is often not the bottleneck. This is also why the smaller 1060 can compete with, and often beat, the bigger RX 480.
Well, not really, and not in DX12. It was designed to expose the real GPU compute capabilities. Gaming performance should reflect the compute performance of the GPUs, unless they are constrained by other design factors (ROPs, memory bus, etc.). I suggest reading the Beyond3D forum a bit, and the developers who actually work on those games with those APIs.
Nvidia simply gimps the performance of the GTX 980 Ti in DX12 applications through its drivers. But you are free to believe Nvidia's marketing malarkey.
A 1280 CC design with a 192-bit memory bus and 48 ROPs active is faster than a 2816 CC design with a 384-bit memory bus and 96 ROPs.
It should be 25% slower.
I love how you come up with conspiracy theories like NVIDIA gimping GM200 performance rather than accepting the simple fact that gaming performance, even with DX12, is not solely controlled or limited by raw TFLOPs. Do I need to link all the Pascal architecture reviews for you to understand? There's a massive increase in GPU clock speeds and a large increase in memory speeds, so looking at raw shader core counts or memory bus widths doesn't give you anywhere near enough information to compare the two cards. There's more to a GPU than "compute performance". There are plenty of compute applications that do nothing but math calculations, and in those cases performance maps pretty well to the raw TFLOPs score. Games are usually a lot more complex than that, and leverage all parts of the GPU.
But no, sure, NVIDIA must be gimping their previous GPUs (and AMD must be gimping their new GPUs) for the 1060 to be performing as well as it does.
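As an aside, here is where those headline TFLOPs figures come from: theoretical FP32 throughput is simply 2 ops per FMA x shader cores x clock, which is also why the 1060's much higher boost clock closes most of the raw core-count gap against the 980 Ti. A small sketch of the arithmetic (my own illustration; the boost clocks are approximate reference values, and real cards often clock higher):

```swift
// Theoretical FP32 TFLOPs = 2 (FMA ops per core per cycle) x shader cores x clock in GHz / 1000.
struct Gpu {
    let name: String
    let cores: Int        // CUDA core count (official spec)
    let boostGHz: Double  // approximate reference boost clock

    var tflops: Double { 2.0 * Double(cores) * boostGHz / 1_000.0 }
}

let cards = [
    Gpu(name: "GTX 1060",   cores: 1280, boostGHz: 1.709),  // ~4.4 TFLOPs
    Gpu(name: "GTX 980 Ti", cores: 2816, boostGHz: 1.075),  // ~6 TFLOPs (the 5.9 figure above assumes a slightly lower clock)
]

for card in cards {
    print("\(card.name): \(String(format: "%.1f TFLOPs", card.tflops))")
}
```

That metric only captures shader math throughput; it says nothing about ROPs, memory bandwidth, geometry throughput or driver efficiency, which is the disagreement in this exchange.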
Well, everything would be terrific IF the consumer Pascal GPUs used the new architecture from the GP100 chip. But they do not. The consumer Pascal GPUs are the same Maxwell architecture, just on a 16 nm process. This is exactly the same architecture, just on a new node, and this is what I have pointed out more than once on this forum in saying that only GP100 is a truly new GPU from Nvidia.
So no, my friend, there is absolutely no conspiracy theory here.