Not a nasty response. Just a rhetorical one to a response that doesn't offer any real solution.
It's a solution, it's just a radical one. It was also a mild attempt at humour, something which was clearly lost on you. If you want to try and qualify who has more of a right to be in the thread, I have a Mac Pro and a GTX 1080. You do not. Perhaps it's you that's in the wrong place.
 
Yes, I found no humor in the reply.

At no point did I say you are unqualified to be here. I just stated that if he/she wanted to go to Windows, he/she wouldn't be posting here.

Having owned three Mac Pros, I lurk around and help others who may have questions that I may have answers to and learn what I can from others. I may no longer have a Mac Pro, but I still run macOS almost exclusively.
 
  • Like
Reactions: TheStork and Zorn
The reply might be funnier if we weren't going on a year now with no movement on a web driver for OS X, and no indication of whether this is even anywhere on the roadmap.
 
  • Like
Reactions: TheStork
I feel that there is a negative indication from the Nvidia CEO. I suppose some people might interpret it differently, but I see absolutely nothing positive there.

Basically, Nvidia is too expensive compared to AMD; the decision comes down to profit margins. And Nvidia is not going to waste their time on Apple when Apple is in bed with AMD.

Nvidia is dominating on performance at this point and doing very well in PC sales. They don't need Apple. Apple does not want to spend money on their hardware, and the Apple user is the loser in all this.

At least the 980 Ti still has great performance. And by the time that looks poor, people should have moved off their 2012 Mac Pros...
 
Nvidia is dominating on performance at this point and doing very well in PC sales. They don't need Apple. Apple does not want to spend money on their hardware, and the Apple user is the loser in all this.

I'm not disagreeing, but what puzzles me is that Apple is quite willing to spend the dollars for some components:
- the screen in the 5K iMac is among the best available in the market (5K resolution, wide color, etc.)
- the SSD drive in the latest MacBook Pro is among the fastest, if not the fastest, in the laptop market
(https://forums.macrumors.com/threads/pcie-m-2-nvme-on-macpro.2030791/page-2#post-24307207)
 
I'm not disagreeing, but what puzzles me is that Apple is quite willing to spend the dollars for some components:
- the screen in the 5K iMac is among the best available in the market (5K resolution, wide color, etc.)
- the SSD drive in the latest MacBook Pro is among the fastest, if not the fastest, in the laptop market
(https://forums.macrumors.com/threads/pcie-m-2-nvme-on-macpro.2030791/page-2#post-24307207)
Well, that is because in theory GPUs with similar compute performance should perform equally in every scenario, regardless of brand. What makes the difference in compute is the software. CUDA puts almost no abstraction layer between the software and the hardware; the application works as close to the hardware as possible, and that is why it is so fast. OpenCL, or for that matter any other compute API, adds a layer of abstraction, which comes at the cost of optimization. It is much cheaper to develop a properly functioning, well-performing application with CUDA than it is with OpenCL or any other compute API. If there were a viable solution from anywhere else, we would be seeing it by now. In the end it does not matter, though, because developers are reluctant to spend money and time optimizing their software for other platforms and other APIs.
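To make the abstraction-layer point a bit more concrete, here is a minimal CUDA sketch (an illustration added for this discussion, not production code): the kernel sits right next to the host code, and a single launch line replaces the platform/device/context/queue/program setup that an equivalent OpenCL host program has to perform explicitly before it can run anything.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple SAXPY kernel: y = a*x + y. In CUDA the kernel lives next to the
// host code and is compiled ahead of time by nvcc.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the example short; explicit cudaMalloc/cudaMemcpy
    // would work just as well.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // One line to launch: no platform/device/context/queue/program objects.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```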

AMD hardware is better than you guys believe it is. The only thing it lacks is CUDA. And brand name.

Apple has its own compute API: Metal. Will it be as fast as CUDA? Well, it shows potential. The benefit is that when you optimize your application for the API rather than for specific hardware, equally performing GPUs should show no difference in overall performance, so the application would be completely brand agnostic.

But that requires optimized drivers. Nvidia does not give a **** when their hardware is excluded from the Mac ecosystem; that is why they will not optimize their drivers. They are working on Metal drivers, but... it looks like the path of least resistance for them: the lowest possible cost and effort. That is how they approach this. Why? Because they believe CUDA should be the go-to compute API on the Mac platform.


Nvidia is a brand. They have enormous recognition around the world. But their business practices often run counter to those of the companies that want to do business with them. One of those clashes resulted in Nvidia being phased out of the Apple ecosystem. Another resulted in Intel not renewing its IP licensing deal with Nvidia and going straight to AMD for the same thing.
 
Guys, I had a dream last night that I woke up this morning and NVIDIA had released an updated webdriver with Pascal support... Then I actually woke up and realized it was all fake. NOOOOOO!
 
  • Like
Reactions: JacobSyndeo
Well, that is because in theory GPUs with similar compute performance should perform equally in every scenario, regardless of brand. What makes the difference in compute is the software. CUDA puts almost no abstraction layer between the software and the hardware; the application works as close to the hardware as possible, and that is why it is so fast. OpenCL, or for that matter any other compute API, adds a layer of abstraction, which comes at the cost of optimization. It is much cheaper to develop a properly functioning, well-performing application with CUDA than it is with OpenCL or any other compute API. If there were a viable solution from anywhere else, we would be seeing it by now. In the end it does not matter, though, because developers are reluctant to spend money and time optimizing their software for other platforms and other APIs.

If your compute application is multiplying some numbers in a tight loop with no memory accesses, then sure, I can believe this. However, GPU performance is not as simple as raw TFLOPs, no matter how often you try and make the case that it is. Why else does a GTX 1060 compete with and often beat an RX 480 that has about 30% more raw horsepower?
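For reference, the "about 30% more raw horsepower" is just the theoretical peak figure (2 FLOPs per shader core per clock). A quick back-of-the-envelope check, using the published reference boost clocks and core counts, so treat the exact numbers as approximate:

```cuda
#include <cstdio>

// Theoretical peak single-precision throughput: shader cores x 2 FLOPs
// (fused multiply-add) x boost clock. Reference specs only; sustained
// clocks in practice will differ.
int main() {
    const double gtx1060 = 1280 * 2 * 1.708e9;  // ~4.4 TFLOPs
    const double rx480   = 2304 * 2 * 1.266e9;  // ~5.8 TFLOPs
    printf("GTX 1060: %.2f TFLOPs\n", gtx1060 / 1e12);
    printf("RX 480:   %.2f TFLOPs\n", rx480 / 1e12);
    printf("RX 480 advantage on paper: %.0f%%\n",
           (rx480 / gtx1060 - 1.0) * 100.0);  // ~33%, yet benchmarks often favor the 1060
    return 0;
}
```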

I'm going to ignore the rest because it's just speculation on your part with no actual facts to back it up. Have you written a Metal application and tested the relative performance of the NVIDIA drivers/GPUs? No? Then please stop saying that NVIDIA hasn't optimized their Metal drivers (and OpenGL/CL are dead and I'd bet that Apple told AMD/NVIDIA to stop working on it and focus on Metal instead).
 
If your compute application is multiplying some numbers in a tight loop with no memory accesses, then sure, I can believe this. However, GPU performance is not as simple as raw TFLOPs, no matter how often you try and make the case that it is. Why else does a GTX 1060 compete with and often beat an RX 480 that has about 30% more raw horsepower?

I'm going to ignore the rest because it's just speculation on your part with no actual facts to back it up. Have you written a Metal application and tested the relative performance of the NVIDIA drivers/GPUs? No? Then please stop saying that NVIDIA hasn't optimized their Metal drivers (and OpenGL/CL are dead and I'd bet that Apple told AMD/NVIDIA to stop working on it and focus on Metal instead).
Competes in what? Are you bringing up gaming as your point again?


Well, the RX 480 is about 10% slower than the GTX 1070 because it has about 10% lower compute performance. Those applications are not using CUDA and sit behind a layer of abstraction; that is why the overall performance is so close.

As for Metal, it's not my opinion. If you had read the other threads where I provided a lot of factual information before it was even possible to verify it, you would know why I am stating this the way I am. But of course, you are free not to believe my words.
 
If you had read the other threads where I provided a lot of factual information before it was even possible to verify it, you would know why I am stating this the way I am. But of course, you are free not to believe my words.

So... since apparently you have "insider information," do you have any word on Mac Pascal drivers?
 
So... since apparently you have "insider information," do you have any word on Mac Pascal drivers?
I think the answer hit you right in the face in the post on the subject ;).

If there is no Nvidia hardware in Macs, and therefore no CUDA, forget about proper Mac drivers. What they can support, they will support, but... like I have written, it is done with the least effort and the least possible investment.

One more thing. There are a few things in Metal that were put there deliberately to gimp Nvidia performance. I cannot write more about this.

Connect the dots with what JHH wrote about this.
 
Competes in what? Are you bringing up gaming as your point again?

Oh I'm sorry, I didn't realize this thread was only talking about compute and that nobody cares about gaming. Perhaps you should update the thread title to reflect what topics are relevant to be discussed in here.

I've said it before and I'll say it again now: at this point, I would not recommend anyone buy an NVIDIA GPU to run under macOS until NVIDIA releases a web driver that supports Pascal, or until Apple releases a product that uses a Pascal GPU. So, that probably means don't ever buy another NVIDIA GPU to run under macOS. It's sad that this is what it's come to, after so many years of NVIDIA enabling better GPUs via their web drivers.

You can keep talking smack about how terrible the NVIDIA drivers are or how NVIDIA is only interested in CUDA or whatever point you're trying to make, but at the end of the day it's all irrelevant. Apple is on the AMD train (probably because they're selling their GPUs for so little money) until they can replace it all with their A* chips. Apple clearly doesn't care about raw GPU performance, and will continue to provide lackluster upgrades until their A* chips can compete (or at least not be a huge step down). I'm no longer their target market, and as such I have moved on and am running Windows 10 on my Hackintoshes about 99.9% of the time these days.
 
Oh I'm sorry, I didn't realize this thread was only talking about compute and that nobody cares about gaming. Perhaps you should update the thread title to reflect what topics are relevant to be discussed in here.
Now you're moving the goalposts. Unless that is the only thing that "professionals" care about on this forum. So in this view Nvidia has only two things: gaming and CUDA. Everywhere else, their competitors offer better products. Nice to know.

I've said it before and I'll say it again now: at this point, I would not recommend anyone buy an NVIDIA GPU to run under macOS until NVIDIA releases a web driver that supports Pascal, or until Apple releases a product that uses a Pascal GPU. So, that probably means don't ever buy another NVIDIA GPU to run under macOS. It's sad that this is what it's come to, after so many years of NVIDIA enabling better GPUs via their web drivers.
I don't believe there is currently any Mac worth attention in any way, shape, or form apart from the 13-inch MacBook Pro. Although it is very, very expensive, which diminishes a lot of its value.

You can keep talking smack about how terrible the NVIDIA drivers are or how NVIDIA is only interested in CUDA or whatever point you're trying to make, but at the end of the day it's all irrelevant. Apple is on the AMD train (probably because they're selling their GPUs for so little money) until they can replace it all with their A* chips. Apple clearly doesn't care about raw GPU performance, and will continue to provide lackluster upgrades until their A* chips can compete (or at least not be a huge step down). I'm no longer their target market, and as such I have moved on and am running Windows 10 on my Hackintoshes about 99.9% of the time these days.
Where did I state that believing in CUDA is a bad thing? Where did I bash them for it? Or was that your perception? Secondly, if you can achieve the same thing with both brands, why would you waste your money on something more expensive just because it has a different brand? Do you not see that what Apple achieves with AMD it could also achieve with competing Nvidia parts?

Come back to me when you have something that actually undermines my point, rather than these weak arguments.

This forum is the funniest of them all. When people try to share information they have, they are accused of being fanboys, shills, etc.

You want to know what my point is? I do not care about the Mac anymore, because I switched to Windows. Do I like Nvidia, or do I hate them? I hate their business practices, which, put in the nicest way possible, are immoral. But that does not change the fact that I can still buy their hardware. Does that make me an AMD fanboy? Does what I wrote make me an AMD fanboy?

Only in the eyes of Nvidia "supporters," I guess.

I suggest setting aside your preconceptions about my post and reading it with an OPEN MIND.

P.S. Why do you, and other people for whom Nvidia is so important, read things into my posts that are not written in them?
 
Now you're moving the goalposts. Unless that is the only thing that "professionals" care about on this forum. So in this view Nvidia has only two things: gaming and CUDA. Everywhere else, their competitors offer better products. Nice to know.

I don't believe there is currently any Mac worth attention in any way, shape, or form apart from the 13-inch MacBook Pro. Although it is very, very expensive, which diminishes a lot of its value.

You continually argue that raw TFLOPs is the only metric that matters. I continually reply that real-world applications, even professional applications and not just games, are often limited by things other than raw TFLOPs and thus it's best to take such a comparison with a grain of salt. Yes, one can write a compute application that extracts the full potential of an AMD GPU and it will run poorly on an NVIDIA GPU. Conversely, you can write a compute application that extracts the full potential of an NVIDIA GPU and it will run poorly on an AMD GPU.

Where did I state that believing in CUDA is a bad thing? Where did I bash them for it? Or was that your perception? Secondly, if you can achieve the same thing with both brands, why would you waste your money on something more expensive just because it has a different brand? Do you not see that what Apple achieves with AMD it could also achieve with competing Nvidia parts?

Come back to me when you have something that actually undermines my point, rather than these weak arguments.

This forum is the funniest of them all. When people try to share information they have, they are accused of being fanboys, shills, etc.

You want to know what my point is? I do not care about the Mac anymore, because I switched to Windows. Do I like Nvidia, or do I hate them? I hate their business practices, which, put in the nicest way possible, are immoral. But that does not change the fact that I can still buy their hardware. Does that make me an AMD fanboy? Does what I wrote make me an AMD fanboy?

Only in the eyes of Nvidia "supporters," I guess.

I suggest setting aside your preconceptions about my post and reading it with an OPEN MIND.

Again, the point I've made countless times at this stage is that NVIDIA continues to offer vastly superior performance per watt on average than AMD. So, it's not like the AMD GPUs are equivalent to the NVIDIA ones. You can cherry-pick a single compute test where the RX 480 performs well, but it doesn't change the overall fact that the GTX 1060 beats it in general.

If all you did was share factual information, then I wouldn't feel the need to respond. Instead, you insist on sharing your heavily biased opinions phrased as facts, and that's where I draw the line. Unless you work for Apple or NVIDIA, you simply cannot know anything about their relationship, the state of the NVIDIA drivers, and so on. As such, things like "Thats why they will not optimize their drivers" or "Because they believe that CUDA should be go to compute API on Mac platform" are the reason why you get so much push-back when you post on this thread, because there is no evidence to suggest that either of those things is true.
 
  • Like
Reactions: JacobSyndeo
You continually argue that raw TFLOPs is the only metric that matters. I continually reply that real-world applications, even professional applications and not just games, are often limited by things other than raw TFLOPs and thus it's best to take such a comparison with a grain of salt. Yes, one can write a compute application that extracts the full potential of an AMD GPU and it will run poorly on an NVIDIA GPU. Conversely, you can write a compute application that extracts the full potential of an NVIDIA GPU and it will run poorly on an AMD GPU.
It's not so simple. In theory, both GPU architectures are pretty similar in how they achieve what they do. But the details that differentiate the two architectures are what matter, and they are what makes it impossible to optimize in a universal way for both companies; that's why software vendors take a "middle-ground" approach. They program applications in a way that does not gimp performance on one vendor or the other. For example, optimizing fully for Nvidia would make the software perform worse on AMD hardware than it should, even at 100% utilization, because of the way Nvidia hardware executes instructions. It's too complex a thing to dumb down, so I am going to leave it here.
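As one small, concrete example of such a middle-ground choice (a sketch added for illustration, not anything from the thread): Nvidia GPUs schedule threads in 32-wide warps while AMD's GCN parts use 64-wide wavefronts, so a launch configuration sized only for one can leave lanes idle on the other. Picking a work-group/block size that is a multiple of 64 keeps both architectures fully occupied.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(int n, float a, float *data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    // 32 threads per block maps to exactly one Nvidia warp, but the same
    // work-group size ported to a GCN GPU fills only half of a 64-wide
    // wavefront. 256 is a multiple of both 32 and 64, so it is a common
    // "middle ground" that leaves no lanes idle on either vendor.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(n, 2.0f, data);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);  // expect 2.0
    cudaFree(data);
    return 0;
}
```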

The second factor to look at is that Nvidia hardware is slightly easier to utilize fully without gimping performance on the competitor; however, this backfires: AMD GPUs end up not being fully utilized. So far I have not seen applications that can utilize 100% of AMD hardware, apart from... console games. Oh, and maybe Final Cut Pro X. Is there anything else? No. Unfortunately.

The third factor is that AMD GPUs are simply harder to utilize fully, but there is another side to that: full utilization and hardware-specific features do not gimp performance on the competitor; they just exploit the hardware's capabilities to the fullest. I was supposed to avoid gaming examples, but from a development and factual point of view I have to point them out: Gaming Evolved titles. They work perfectly on AMD hardware and perfectly on Nvidia hardware, and they show what AMD hardware can really do, without gimping performance on the counterpart.

It's that simple.

Last factor: so you admit that the power of a GPU is only exploited through software, and that it is up to developer competence to exploit it. If developers are not doing that, isn't it silly to blame the hardware company for it, or to pump up your ego by using one brand over another?

Best part: I was actually writing about this factor in the post of mine that started this; maybe you missed it...?

Again, the point I've made countless times at this stage is that NVIDIA continues to offer vastly superior performance per watt on average than AMD. So, it's not like the AMD GPUs are equivalent to the NVIDIA ones. You can cherry-pick a single compute test where the RX 480 performs well, but it doesn't change the overall fact that the GTX 1060 beats it in general.

If all you did was share factual information, then I wouldn't feel the need to respond. Instead, you insist on sharing your heavily biased opinions phrased as facts, and that's where I draw the line. Unless you work for Apple or NVIDIA, you simply cannot know anything about their relationship, the state of the NVIDIA drivers, and so on. As such, things like "Thats why they will not optimize their drivers" or "Because they believe that CUDA should be go to compute API on Mac platform" are the reason why you get so much push-back when you post on this thread, because there is no evidence to suggest that either of those things is true.
On performance per watt, I would look no further than the Radeon Pro 460: a 35 W GPU competing with 50 W (GTX 1050 Mobile) and 60 W (GTX 1050 Ti) GPUs from Nvidia. Who has the better performance per watt if the Radeon Pro is 5% behind the GTX 1050 and 15% behind the GTX 1050 Ti, but uses about 40% less power?
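Taking those figures at face value (they are the numbers quoted above, not independently verified), the performance-per-watt arithmetic works out roughly like this:

```cuda
#include <cstdio>

// Relative performance per watt = (relative performance) / (power draw),
// using the figures quoted above: Radeon Pro 460 at 35 W, GTX 1050 Mobile
// at 50 W, GTX 1050 Ti at 60 W, with the Radeon 5% and 15% behind them.
int main() {
    const double radeon    = 1.00 / 35.0;          // Radeon Pro 460 as baseline
    const double gtx1050   = (1.00 / 0.95) / 50.0; // 5% faster, 50 W
    const double gtx1050ti = (1.00 / 0.85) / 60.0; // 15% faster, 60 W
    printf("Radeon Pro 460 perf/W advantage vs GTX 1050:    %.0f%%\n",
           (radeon / gtx1050 - 1.0) * 100.0);      // ~36%
    printf("Radeon Pro 460 perf/W advantage vs GTX 1050 Ti: %.0f%%\n",
           (radeon / gtx1050ti - 1.0) * 100.0);    // ~46%
    return 0;
}
```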

You take the GTX 1060 and the RX 480 as examples, in gaming. Have we seen comparisons of compute applications on both GPUs? Judging by the performance difference between the GTX 1070 and the RX 480, and the amount of power they burn, I would say the RX 480, being faster in compute than the GTX 1060, would have similar performance per watt. And the RX 470, using a similar amount of power and offering similar compute performance, would also be about equal to the GTX 1060.

Who is spreading FUD then? Me, or you with your cherry-picked gaming scenarios? Is gaming really what the pros on this forum care about? Or is it compute performance in real-world applications, because that is what you make your living from?

I suggest watching that video comparing the GTX 1070 and the RX 480 in compute. The two GPUs are within 10% of each other, and one costs much less. But performance per watt, in that particular example, is on Nvidia's side.

P.S. I wonder how much faster than the GTX 1050 Ti a Radeon Pro WX 5100 would be, consuming 75 W of power but offering around 50% more compute horsepower. We have already seen the difference between the 5.7 TFLOPs RX 480 and the 6.5 TFLOPs GTX 1070, have we not? It's about 10%. Oh yes, gaming...

P.S.2: It's funny that my post about Nvidia's situation on the Mac has suddenly been spun into AMD vs. Nvidia.
 
  • Like
Reactions: jblagden
One more thing. There are a few things in Metal that were put there deliberately to gimp Nvidia performance. I cannot write more about this.

Why would Apple do that? If Apple is the only one supporting Metal and changing those "few things" is non-trivial, then Apple has effectively chosen not to keep its options open between Nvidia & AMD, but has tied up its own hands and committed to AMD for the foreseeable future.

I was looking briefly at the computational side of Metal, in particular the AI aspects. Even with a dual GPU, particularly if you can't upgrade it later, Apple is really not in the game of training AI at all. But Metal could be used to run inference with networks trained in TensorFlow, whose GPU support, to my knowledge, requires CUDA:
https://developer.apple.com/library.../doc/uid/TP40017385-Intro-DontLinkElementID_2
 
Why would Apple do that? If Apple is the only one supporting Metal and changing those "few things" is non-trivial, then Apple has effectively chosen not to keep its options open between Nvidia & AMD, but has tied up its own hands and committed to AMD for the foreseeable future.

I was looking briefly at the computational side of Metal, in particular the AI aspects. Even with a dual GPU, particularly if you can't upgrade it later, Apple is really not in the game of training AI at all. But Metal could be used to run inference with networks trained in TensorFlow, whose GPU support, to my knowledge, requires CUDA:
https://developer.apple.com/library.../doc/uid/TP40017385-Intro-DontLinkElementID_2
What I mean is that there are features in Metal that are not possible on Nvidia hardware.

That is all I can say about this.

And yes, Nvidia is not coming to the Mac for the foreseeable future.

"...How you treat others, you will be treated the same way..."
 