AMD (ATI) supplies no drivers directly to the consumer for the Macintosh OS.

Although years ago, before being acquired by AMD, ATI did supply Mac drivers and software directly to the user.

Lou
 
Very likely never. But even if Nvidia enabled the device IDs, the driver would most likely stay in beta. The last generation never left beta and never had an official licensed release.

Nvidia implemented G-Sync and solved the latest OpenCL issues with the newest web drivers for Sierra, and released CUDA 8. In my opinion this is a strong indication that they are continuing to develop the drivers.
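For context on the "device IDs" point: on macOS a kext declares the PCI hardware it drives through IOPCIMatch / IOPCIPrimaryMatch entries in its Info.plist, so whether a new card is picked up at all comes down to whether its vendor/device ID pair appears there. A minimal sketch of how you could check, assuming a kext path like the one below (the exact bundle name and install location of the web driver's kexts are my guess, not something from this thread):

```python
#!/usr/bin/env python3
"""Rough sketch: list the PCI device IDs a macOS kext claims to support.

Assumption: the kext path below is a guess at where the web driver installs,
and that its IOKitPersonalities use IOPCIMatch / IOPCIPrimaryMatch strings
such as "0x139510de" (device 0x1395, vendor 0x10de for NVIDIA).
"""
import plistlib
from pathlib import Path

# Hypothetical install location; adjust for your system.
KEXT_PLIST = Path("/Library/Extensions/NVDAResmanWeb.kext/Contents/Info.plist")

def pci_matches(plist_path: Path) -> set[str]:
    """Collect every IOPCIMatch / IOPCIPrimaryMatch string in the kext's personalities."""
    with plist_path.open("rb") as f:
        info = plistlib.load(f)
    matches: set[str] = set()
    for personality in info.get("IOKitPersonalities", {}).values():
        for key in ("IOPCIMatch", "IOPCIPrimaryMatch"):
            value = personality.get(key)
            if value:
                matches.update(value.split())
    return matches

if __name__ == "__main__":
    if KEXT_PLIST.exists():
        for entry in sorted(pci_matches(KEXT_PLIST)):
            print(entry)  # e.g. 0x139510de -> device 0x1395, NVIDIA vendor 0x10de
    else:
        print(f"Kext plist not found at {KEXT_PLIST}; adjust the path.")
```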
 
"Solved latest issues" is disingenuous to the members and readers. Long standing issues haven't been resolves and complaints keep escalating on this forum and others. Just two days ago we were shown how many OpenCL errors and validation fails happen with Maxwells in Luxmark. This and trying to sell G-sync monitors is not exactly a sign of progress, especially since G-Sync is pointless for most Mac Pro usage. You wouldn't use it for professional work and most games are too crap to benefit.
 
"Solved latest issues" is disingenuous to the members and readers. Long standing issues haven't been resolves and complaints keep escalating on this forum and others. Just two days ago we were shown how many OpenCL errors and validation fails happen with Maxwells in Luxmark. This and trying to sell G-sync monitors is not exactly a sign of progress, especially since G-Sync is pointless for most Mac Pro usage. You wouldn't use it for professional work and most games are too crap to benefit.
Which rather shows what drives people's mindshare about particular GPU brands on this forum.
 
"Solved latest issues" is disingenuous to the members and readers. Long standing issues haven't been resolves and complaints keep escalating on this forum and others. Just two days ago we were shown how many OpenCL errors and validation fails happen with Maxwells in Luxmark. This and trying to sell G-sync monitors is not exactly a sign of progress, especially since G-Sync is pointless for most Mac Pro usage. You wouldn't use it for professional work and most games are too crap to benefit.

Those people weren't running on the latest drivers. After updating to 10.12 and running the latest drivers, the errors in Luxmark went away, and the score went up too. You don't get fixes if you stick with out of date drivers on old OSes.
 
"Solved latest issues" is disingenuous to the members and readers.

Your posts certainly fall into the disingenuous category.


Lou
 
Nah, that is not even the spookiest behavior I have witnessed with the Nvidia Web Drivers.

macOS Sierra 10.12.1, latest web drivers.

Games: Hearthstone, Heroes of the Storm.
Platform: MacBook Pro Mid 2012 with GT 650M.
GPU active while playing: Intel HD 4000.

What is happening? Artifacts in the game while playing and when doing anything interactive on the screen, e.g. placing the mouse over the opponent's portrait.

And again, this is while the Nvidia web driver is active in the system instead of the standard Apple drivers. What is funnier, the artifacts happen when the integrated GPU is active (the same thing happens when the discrete GPU is active).

In HotS I sometimes get graphical lag and terrible stuttering on the Nvidia web drivers.

When I disabled the Nvidia web drivers and went back to the Apple drivers, everything in HotS returned to normal and I got 59 FPS (locked) 99% of the time.

However, in Hearthstone the problem persisted. You know what cured it completely? Uninstalling the Nvidia web drivers from the computer altogether.

Now the GPU behavior is completely fine.

Any thoughts? The only thing that comes to my mind is that people report getting better performance in World of Warcraft: Legion (Metal) on the standard Apple drivers than on the Nvidia web drivers, on exactly the same machine. But neither HotS nor Hearthstone uses Metal.
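For what it's worth, when debugging this kind of thing it helps to confirm which driver is actually loaded. A rough sketch, assuming the web driver's kext bundle IDs contain "web" (they have in past releases, but the exact identifiers vary by version, so treat this as a heuristic rather than an authoritative check):

```python
#!/usr/bin/env python3
"""Quick check of whether the NVIDIA web driver or Apple's stock driver is active.

Sketch only: `kextstat` and `nvram` are standard macOS tools, but the exact
bundle identifiers of the web-driver kexts differ between releases, so the
"web" substring test below is a heuristic.
"""
import subprocess

def loaded_nvidia_kexts() -> list[str]:
    """Return bundle IDs of currently loaded kexts that look NVIDIA-related."""
    out = subprocess.run(["kextstat"], capture_output=True, text=True, check=True).stdout
    return [line.split()[5] for line in out.splitlines()[1:]
            if any(tag in line for tag in ("NVDA", "nvidia", "GeForce"))]

def web_driver_requested() -> bool:
    """The web driver toggles the nvda_drv NVRAM variable; absence suggests stock drivers."""
    result = subprocess.run(["nvram", "nvda_drv"], capture_output=True, text=True)
    return result.returncode == 0 and "1" in result.stdout

if __name__ == "__main__":
    print("nvda_drv set:", web_driver_requested())
    for bundle_id in loaded_nvidia_kexts():
        flavor = "web driver" if "web" in bundle_id.lower() else "stock/Apple"
        print(f"{bundle_id}  ({flavor})")
```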
 

My thoughts are that they have one college intern working in the Mac driver department. Every few months they get bored of sitting in a tiny closet all day with no support and just a cMP to test on, so they quit, and then that stupid job gets listed on the job sites again, followed by people posting on forums: 'Look, Nvidia is developing for Apple again. New shiny future!'
 
That doesn't explain why the Nvidia web drivers affect reliability (artifacts) on the integrated Intel GPU.
 
I mean, it's nice that Nvidia does anything at all for OS X, given the fact that their GPUs haven't been included in that ecosystem for years. So I give kudos there. Unfortunately the quirky issues continue to accumulate, and if anyone expects a major turnaround I think you're betting on the wrong horse.
 

This is not NVIDIA's fault, for what it's worth.

https://developer.apple.com/library...s.html#//apple_ref/doc/uid/TP40005929-CH4-SW9

Starting in iOS 8 and macOS 10.10, the system offers library validation as a policy for the dynamic libraries that a process links against. The policy is simple: A program may link against any library with the same team identifier in its code signature as the main executable, or with any Apple system library. Requests to link against other libraries are denied.

Xcode, iBooks and other apps appear to have "library validation" enabled, which means they can only link with Apple-signed libraries.
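If anyone wants to check a specific app, the code-signing flags are visible via the codesign tool; a binary signed with library validation shows a "library-validation" flag on its CodeDirectory line. A small sketch (output formatting varies a bit between macOS versions, so the string match is a heuristic):

```python
#!/usr/bin/env python3
"""Check whether a macOS app's main executable opts into library validation.

Sketch only: relies on the `codesign` CLI, which prints a line such as
"CodeDirectory ... flags=0x2000(library-validation) ..." for binaries signed
with that flag. codesign writes this diagnostic output to stderr.
"""
import subprocess
import sys

def has_library_validation(app_path: str) -> bool:
    """Return True if codesign reports the library-validation flag for app_path."""
    result = subprocess.run(
        ["codesign", "--display", "--verbose=4", app_path],
        capture_output=True, text=True,
    )
    return "library-validation" in result.stderr

if __name__ == "__main__":
    for app in sys.argv[1:] or ["/Applications/Xcode.app", "/Applications/iBooks.app"]:
        print(app, "->", "library validation" if has_library_validation(app) else "no flag found")
```

On a 10.12-era system you'd expect Xcode and iBooks to report the flag, per the above.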
 

I haven't had time to go through your link yet. However, do you mean that since OS X 10.10 it has been a known issue that the Nvidia web driver can't work properly with some apps because of Apple's restriction?

If yes, then it doesn't matter whether it is Nvidia's fault. You stated the fact that these apps can't work properly with Nvidia's web driver, and there is no workaround yet. So, if we want to stay with Apple's OS, it may be better to avoid Nvidia GPUs that require the web driver, just in case more and more apps stop working properly with it.

There are only three ways to handle a problem: accept it, change it, or leave it. Now, we can't change it. And if we don't want to accept that we must run the apps with glitches, we can only leave it: either leave Apple's new OS, or leave the Nvidia web driver.

In this case, I will choose to leave the Nvidia web driver. As you pointed out in another post, we should keep using the most up-to-date OS and web driver, because Nvidia tends to only fix bugs in the newest driver. Therefore, it's quite meaningless to leave Apple's new OS (since this is a Mac forum, I assume that means staying on an old OS X release rather than going to something like Windows / Linux), because those known annoying bugs may stay forever.
 

iBooks seems to have enabled library validation in 10.12, as this is the first I've heard that it wasn't rendering correctly with the web driver from NVIDIA. So, yes, if you need to run iBooks then don't use the web driver until Apple changes their policy to let the web driver work. If that's too much of a burden for you, then sure, go back to your 4+ year old GPU that works with the Apple drivers. NVIDIA is at least making an effort to make their new cards work under macOS. I don't use iBooks and have had no problems with my TITAN X under 10.12.

My point about older versions of macOS was directed at the folks who like to talk about web driver issues that have been fixed for months. NVIDIA usually only delivers fixes for the latest version of macOS.
 
I've pretty much given up on OS X on my Mac Pro, configured it for Windows 10 only, and it works really well. If I want to get a new GPU, it'll just work, albeit I'll lose the EFI boot menu, and possibly some performance because of CPU, RAM and PCIe negotiation issues, but it'll still be better.

If it doesn't work that well, can always save up, buy a new system, use the new card and sell the Mac Pro. Not really tied to OS X any more.

Shake hands, we have the same thought. I've been running Win7 with my GTX 1070 for a few months now, and it has worked really, really well.
 
https://forums.anandtech.com/thread...tle-dualshockers.2486734/page-3#post-38505214

Performance of Nvidia GPUs in first true DX12 title.

A 4.4 TFLOPs GPU (GTX 1060) is on par with the 5.9 TFLOPs GTX 980 Ti, despite being exactly the same architecture. Ask yourself where that comes from. Is the GTX 1060 really that fast, or... ?

Or, does real application/game performance have very little to do with theoretical metrics like TFLOPs? If the game/app were doing nothing but math computations, then the TFLOPs score would matter. If it's a more balanced workload that leverages other parts of the GPU like texturing, ROPs, etc., then raw TFLOPs is often not the bottleneck. This is also why the smaller 1060 can compete with and often beat the bigger RX 480.
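To put some rough numbers behind that, here's a quick back-of-the-envelope comparison of the usual theoretical metrics for the 1060 and the RX 480. The clocks and specs below are approximate reference-card figures from memory, so treat them as assumptions:

```python
# Back-of-the-envelope theoretical metrics; clocks/specs are approximate
# reference-card numbers and should be treated as assumptions.
cards = {
    #            shaders  boost_GHz  ROPs  bus_bits  mem_Gbps
    "GTX 1060": (1280,    1.71,      48,   192,      8.0),
    "RX 480":   (2304,    1.27,      32,   256,      8.0),
}

for name, (shaders, clk, rops, bus, gbps) in cards.items():
    tflops = 2 * shaders * clk / 1000   # FMA counts as 2 FLOPs per core per clock
    fill   = rops * clk                 # theoretical pixel fill rate, GPixels/s
    bw     = bus / 8 * gbps             # raw memory bandwidth, GB/s
    print(f"{name:9s}  {tflops:.1f} TFLOPs   {fill:5.1f} GPix/s   {bw:5.0f} GB/s")
```

By raw TFLOPs the RX 480 looks about 30% ahead, but the 1060 has roughly twice the theoretical pixel throughput; which metric matters depends entirely on where the workload's bottleneck is.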
 
Well, not really, and not in DX12. It was designed to expose the real compute capabilities of GPUs. Gaming performance should reflect the compute performance of the GPUs, unless they are constrained by other factors such as design (ROPs, memory bus, etc.). I suggest reading the Beyond3D forum a bit, and the developers who actually work on those games with those APIs.

Nvidia simply gimps the performance of the GTX 980 Ti in DX12 applications through its drivers. But you are free to believe Nvidia marketing malarkey.

A 1280 CC design with a 192-bit memory bus and 48 ROPs active is faster than a 2816 CC design with a 384-bit memory bus and 96 ROPs.

It should be about 25% slower.
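Spelling out the arithmetic behind that ~25% figure (the boost clocks are approximate and card-dependent, so this is only illustrative):

```python
# Theoretical FP32 throughput: 2 FLOPs (FMA) x shader count x clock.
# Boost clocks below are approximate assumptions, not measured values.
gtx_1060  = 2 * 1280 * 1.71e9 / 1e12   # ~4.4 TFLOPs
gtx_980ti = 2 * 2816 * 1.05e9 / 1e12   # ~5.9 TFLOPs

print(f"GTX 1060:   {gtx_1060:.1f} TFLOPs")
print(f"GTX 980 Ti: {gtx_980ti:.1f} TFLOPs")
print(f"ratio: {gtx_1060 / gtx_980ti:.2f}")   # ~0.74, i.e. roughly 25% less
```

That ratio is where the "about 25% slower" expectation comes from; the DX12 result under discussion shows the two cards roughly level instead.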
 

I love how you come up with conspiracy theories like NVIDIA gimping GM200 performance rather than accepting the simple fact that gaming performance, even with DX12, is not solely controlled or limited by raw TFLOPs. Do I need to link all the Pascal architecture reviews for you to understand? There's a massive increase in GPU clockspeeds, and a large increase in memory speeds. So, looking at raw shader core counts or memory bus widths doesn't give you anywhere near enough information to compare the two architectures. There's more to a GPU than "compute performance". There are plenty of compute applications that do nothing but math calculations, and in those cases performance maps pretty well to the raw TFLOPs score. Games are usually a lot more complex than that, and leverage all parts of the GPU.

But no, sure, NVIDIA must be gimping their previous GPUs (and AMD must be gimping their new GPUs) for the 1060 to be performing as well as it does.
 
Well, everything would be terrific IF consumer Pascal GPUs used the new architecture from the GP100 chip. But they do not. Consumer Pascal GPUs are the same Maxwell architecture, just on a 16 nm process.

First, let's look at the specifics of the Nvidia architectures:
[Image: Nvidia GPU architecture specifications chart]

Specifically, look at the CUDA cores/SM row of the chart: Maxwell has 128 cores per SM, Pascal has 64. Which means those 64 cores deliver the same amount of performance as 128 Maxwell cores, but that is only in GP100. Those 64 cores have exactly the same amount of resources (registers, thread count, etc.) as 128 Maxwell cores.

Let's look at how that looks in the diagram.
[Image: GP100 SM block diagram]

This is what each SM looks like: 64 cores, no problem.

Now let's look at how GP104 fares:
[Image: GP104 GPU block diagram]

128 cores in each SM. They do not have double the resources of the GP100 chip; the GPU has exactly the same layout as Maxwell GM200.

Proof?
[Image: GM107 SM block diagram, from a GTX 750 Ti review]

This is actually from GM107, but the same architecture is in GM200. As we can see: 128 CUDA cores in each SM.

Maybe you, asgorath, have not read the post in question that I linked above, but there was a quote from the AT review:
From AT's Pascal review: "In a by-the-numbers comparison then, Pascal does not bring any notable changes in throughput relative to Maxwell. CUDA cores, texture units, PolyMorph Engines, Raster Engines, and ROPs all have identical theoretical throughput-per-clock as compared to Maxwell. So on a clock-for-clock, unit-for-unit basis, Pascal is not any faster on paper."

This is exactly the same architecture, just on a new node. And this is what I have pointed out more than once on this forum when saying that only GP100 is a truly new GPU from Nvidia. The only difference between the SMs of Pascal and Maxwell is the number of ROPs per SM. Nothing more. They are both exactly the same, and that's why, clock for clock, they will perform exactly the same way.
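To put the per-SM comparison in one place, here are the commonly published per-SM figures for the three chips being discussed (pulled from the whitepapers/reviews as I remember them, so treat the exact numbers as assumptions):

```python
# Per-SM layout of the chips discussed above. Figures are the commonly
# published ones; treat them as assumptions if in doubt.
sm_layout = {
    #                   cores/SM  regfile_KB  shared_KB
    "GM200 (Maxwell)": (128,      256,        96),
    "GP100 (Pascal)":  ( 64,      256,        64),
    "GP104 (Pascal)":  (128,      256,        96),
}

for chip, (cores, regs_kb, smem_kb) in sm_layout.items():
    print(f"{chip:16s}  {cores:3d} cores/SM   "
          f"{regs_kb} KB registers ({regs_kb / cores:.0f} KB per core)   "
          f"{smem_kb} KB shared memory")
```

Per core, GP104's SM budget looks like Maxwell's, while GP100 doubles the register file available to each core, which is the distinction being drawn above.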

So no my friend, there is absolutely no conspiracy theory here. Nvidia is gimping the performance of GTX 980 Ti.

History repeats itself.
 
This is exactly the same architecture, just on a new node. And this is what I have pointed out more than once on this forum when saying that only GP100 is a truly new GPU from Nvidia.

So no my friend, there is absolutely no conspiracy theory here.

That's fine, keep ignoring my point.

If the game is limited by nothing but raw shader math horsepower, then TFLOPs will give an accurate representation of how well a given GPU will run the game. The game runs faster on the GTX 1060 than its raw TFLOPs suggest. My conclusion is that the game is not limited by raw TFLOPs. Your conclusion is that I'm an idiot and that NVIDIA and AMD are artificially limiting performance in this game in a grand conspiracy to make the GTX 1060 look better than it actually is. I wonder which is more likely to be correct?

Is stuff like the delta memory compression taken into consideration in your raw TFLOPs measurement?

http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/8

I don't think it is, and it's one of the major new things in the Pascal architecture.
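As a rough illustration of why that matters: NVIDIA's ballpark claim in that era was roughly a 20% effective-bandwidth gain from Pascal's fourth-generation delta color compression over Maxwell. Take the 20% as an assumption, since the real benefit is workload-dependent:

```python
# Raw vs. "effective" memory bandwidth once delta color compression is
# factored in. The 20% uplift is an assumed ballpark, not a measured value.
raw_bw_gbs       = 192 / 8 * 8.0   # GTX 1060: 192-bit bus at 8 Gbps = 192 GB/s
compression_gain = 1.20            # assumed average benefit of 4th-gen DCC

print(f"raw: {raw_bw_gbs:.0f} GB/s, effective: ~{raw_bw_gbs * compression_gain:.0f} GB/s")
```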

Oh look, there's a better implementation of asynchronous compute as well:

http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9

Let's just ignore all of the non-shader related improvements in Pascal, and the real benchmark data, and conclude that I don't know what I'm talking about.

Again, my fundamental point relates to the comparison between the 1060 and the RX 480. On paper, the RX 480 should destroy the 1060, but in nearly all game benchmarks the 1060 wins (and often wins by a significant margin). How do you explain this?
 