But this is a Mac forum.... and we need Pascal drivers for Mac or none of this matters to us....
Don't get me wrong, this info is great. But we can't use Pascal cards without drivers.
You can, just not with macOS
> If you are interested only in Nvidia hardware - no, there is not any reason.

Then there's no reason to be on this forum at all, is there?
> Then there's no reason to be on this forum at all, is there?

Is there a reason for such a nasty response? I have a Mac Pro - I just don't run macOS on it any more.
> Not a nasty response. Just a rhetorical one to a response that doesn't offer any real solution.

It's a solution, it's just a radical one. It was also a mild attempt at humour, something which was clearly lost on you. If you want to try to qualify who has more of a right to be in this thread: I have a Mac Pro and a GTX 1080. You do not. Perhaps it's you that's in the wrong place.
> ...indication of whether this is even anywhere on the road map
I feel that there is a negative indication from the Nvidia CEO. I suppose some people might interpret it differently, but I see absolutely nothing positive there.
Nvidia is dominating on performance at this point and doing very well in PC sales. They don't need Apple. Apple does not want to spend money on their hardware, and the Apple user is the loser in all this.
> Well, that is because in theory GPUs with similar compute performance, regardless of brand, should perform equally in every scenario. What makes the difference in compute is the software. CUDA puts close to zero abstraction between the software and the hardware; the application works as close to the hardware as possible - that's why it's so fast. OpenCL, or for that matter any other compute API, builds in a layer of abstraction, which gets in the way of optimization. It's much cheaper to develop a properly functioning, well-performing application with CUDA than with OpenCL or any other compute API. If there were a viable solution from anywhere else, we would be seeing it by now. In the end it doesn't matter much anyway, because developers are reluctant to waste money and time optimizing their software for other platforms and other APIs.

I'm not disagreeing, but what puzzles me is that Apple is quite willing to spend the dollars for some components:
- the screen in the 5K iMac is among the best available in the market (5K resolution, wide color, etc.)
- the SSD drive in the latest MacBook Pro is among the fastest, if not the fastest, in the laptop market
(https://forums.macrumors.com/threads/pcie-m-2-nvme-on-macpro.2030791/page-2#post-24307207)
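The abstraction gap described in the quoted post is easiest to see in code. Below is a rough sketch (illustrative only: no error handling, and the buffer and launch details are simplified) of the same trivial kernel in CUDA versus OpenCL. In CUDA the kernel is ordinary C++ compiled alongside the host code; in OpenCL the host has to assemble the platform/device/context/queue/program pipeline itself before anything runs.

[CODE]
// ---- CUDA (nvcc): kernel plus launch, and that's essentially it ----
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

void runVecAddCuda(float* d_a, float* d_b, float* d_c, int n) {
    // d_a/d_b/d_c already allocated and filled via cudaMalloc/cudaMemcpy
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
}

// ---- OpenCL: the same kernel, shipped as a string, plus host setup ----
#include <CL/cl.h>

static const char* src =
    "__kernel void vecAdd(__global const float* a, __global const float* b,\n"
    "                     __global float* c, int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";

void runVecAddOpenCL(const float* a, const float* b, float* c, int n) {
    cl_int err;
    cl_platform_id platform; clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;     clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx     = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);
    cl_program prog    = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k        = clCreateKernel(prog, "vecAdd", &err);
    size_t bytes = n * sizeof(float);
    cl_mem d_a = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, (void*)a, &err);
    cl_mem d_b = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, (void*)b, &err);
    cl_mem d_c = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &d_a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &d_b);
    clSetKernelArg(k, 2, sizeof(cl_mem), &d_c);
    clSetKernelArg(k, 3, sizeof(int),    &n);
    size_t global = (size_t)n;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, d_c, CL_TRUE, 0, bytes, c, 0, NULL, NULL);
}
[/CODE]

Neither version is tuned; the point is only where the boilerplate, and with it the optimization burden, sits.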
> In what does it compete? Are you again bringing gaming as your point?

If your compute application is multiplying some numbers in a tight loop with no memory accesses, then sure, I can believe this. However, GPU performance is not as simple as raw TFLOPs, no matter how often you try to make the case that it is. Why else does a GTX 1060 compete with, and often beat, an RX 480 that has about 30% more raw horsepower?
I'm going to ignore the rest because it's just speculation on your part with no actual facts to back it up. Have you written a Metal application and tested the relative performance of the NVIDIA drivers/GPUs? No? Then please stop saying that NVIDIA hasn't optimized their Metal drivers (and OpenGL/CL are dead; I'd bet that Apple told AMD/NVIDIA to stop working on them and focus on Metal instead).
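For reference, the "about 30% more raw horsepower" figure matches the commonly cited FP32 peaks of the two cards:

\[
\frac{\text{RX 480 peak}}{\text{GTX 1060 peak}} \approx \frac{5.8\ \text{TFLOPs}}{4.4\ \text{TFLOPs}} \approx 1.32
\]

and yet the two trade blows in real workloads, which is exactly the argument being made here.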
If you read the other threads, where I provided a lot of factual information before it was even possible to verify it, you would know why I am stating this the way I am. But of course, you are free not to believe my words.
> I think you have it right in your face in the post on the subject.

So... since apparently you have "insider information," do you have any word on Mac Pascal drivers?
> Now you're moving the goalposts. Unless that is the only thing that "professionals" care about on this forum. So Nvidia, in this view, has only two things: gaming and CUDA. Everywhere else, their competitors offer better products. Nice to know.

Oh, I'm sorry, I didn't realize this thread was only talking about compute and that nobody cares about gaming. Perhaps you should update the thread title to reflect what topics are relevant for discussion in here.
> I don't believe there is any Mac currently that is worth attention in any way, shape, or form, apart from the 13-inch MacBook Pro. Although it is very, very expensive, which diminishes a lot of its value.

I've said it before and I'll say it again now: at this point, I would not recommend anyone buy an NVIDIA GPU to run under macOS until NVIDIA releases a web driver that supports Pascal, or until Apple releases a product that uses a Pascal GPU. So, that probably means don't ever buy another NVIDIA GPU to run under macOS. It's sad that this is what it's come to, after so many years of NVIDIA enabling better GPUs via their web drivers.
> Where did I state that believing in CUDA is a bad thing? Where did I bash them for it? Or was that your perception? Secondly, if you can achieve the same thing with both brands, why would you waste your money on something more expensive just because it has a different brand? Do you not see that what Apple achieves with AMD, they could achieve with competing Nvidia parts?

You can keep talking smack about how terrible the NVIDIA drivers are or how NVIDIA is only interested in CUDA or whatever point you're trying to make, but at the end of the day it's all irrelevant. Apple is on the AMD train (probably because they're selling their GPUs for so little money) until they can replace it all with their A* chips. Apple clearly doesn't care about raw GPU performance, and will continue to provide lackluster upgrades until their A* chips can compete (or at least not be a huge step down). I'm no longer their target market, and as such I have moved on and am running Windows 10 on my Hackintoshes about 99.9% of the time these days.
Come back to me when you have something that can actually diminish my point, rather than these weak arguments.

This forum is the funniest of them all. When people try to share information they have, they are accused of being fanboys, shills, etc.

You want to know what my point is? I do not care about the Mac anymore, because I switched to Windows. Do I like Nvidia, or do I hate them? I hate their business practices, which, put in the nicest way possible, are immoral. But that does not change the fact that I can still buy their hardware. Does that make me an AMD fanboy? Does what I wrote make me an AMD fanboy?

Only in the eyes of Nvidia "supporters," I guess.

I suggest pulling back from preconceptions about my post and reading it with an OPEN MIND.
> It's not so simple. In theory, both GPU architectures are pretty similar in how they achieve what they do. But the details that differentiate the two architectures are what matter, and they are what makes it impossible to optimize in a universal way for both companies. That's why software vendors take a "middle-ground" approach: they write their applications so as not to gimp performance on one vendor or the other. For example, optimizing fully for Nvidia would make software perform worse than it should on AMD hardware, even at 100% utilization, because of the way Nvidia hardware executes instructions. Too complex a thing to dumb down, so I am going to leave it here.

You continually argue that raw TFLOPs is the only metric that matters. I continually reply that real-world applications - even professional applications, not just games - are often limited by things other than raw TFLOPs, and thus it's best to take such a comparison with a grain of salt. Yes, one can write a compute application that extracts the full potential of an AMD GPU, and it will run poorly on an NVIDIA GPU. Conversely, you can write a compute application that extracts the full potential of an NVIDIA GPU, and it will run poorly on an AMD GPU.
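The "middle-ground" trade-off in the quoted post can be illustrated mechanically. GCN hardware schedules 64-wide wavefronts while current NVIDIA hardware schedules 32-wide warps, so tuned constants such as workgroup sizes often want to differ per vendor. A hypothetical sketch in OpenCL C - the macro names, build flags, and numbers are made up for illustration:

[CODE]
// TUNE_AMD / TUNE_NVIDIA are hypothetical macros the host would pass as
// build options ("-D TUNE_AMD", etc.) after querying the device vendor.
// The values are placeholders; the point is that every tuned constant
// now needs separate per-vendor benchmarking, or a compromise default.
#if defined(TUNE_AMD)
  #define WG 256           // e.g. four 64-wide GCN wavefronts
#elif defined(TUNE_NVIDIA)
  #define WG 128           // e.g. four 32-wide NVIDIA warps
#else
  #define WG 64            // "middle-ground" default for both vendors
#endif

__attribute__((reqd_work_group_size(WG, 1, 1)))
__kernel void scale(__global float* data, float s, int n) {
    int i = get_global_id(0);
    if (i < n) data[i] *= s;
}
[/CODE]

Shipping only the default is cheap but leaves performance on the table for one vendor or the other; shipping per-vendor variants is exactly the extra cost being described.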
> With that performance per watt, I would look no further than the Radeon Pro 460: a 35 W GPU competing with 50 W (GTX 1050 Mobile) and 60 W (GTX 1050 Ti) GPUs from Nvidia. Who has the better performance per watt, if the Radeon Pro is 5% behind the GTX 1050 and 15% behind the GTX 1050 Ti, but uses 40% less power?

Again, the point I've made countless times at this stage is that NVIDIA continues to offer vastly superior performance per watt, on average, to AMD. So it's not as if the AMD GPUs are equivalent to the NVIDIA ones. You can cherry-pick a single compute test where the RX 480 performs well, but it doesn't change the overall fact that the GTX 1060 beats it in general.
If all you did was share factual information, then I wouldn't feel the need to respond. Instead, you insist on sharing your heavily biased opinions phrased as facts, and that's where I draw the line. Unless you work for Apple or NVIDIA, you simply cannot know anything about their relationship, the state of the NVIDIA drivers, and so on. As such, things like "Thats why they will not optimize their drivers" or "Because they believe that CUDA should be go to compute API on Mac platform" are the reason why you get so much push-back when you post on this thread, because there is no evidence to suggest that either of those things is true.
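For what it's worth, taking the figures in the performance-per-watt quote above at face value, the arithmetic can be done directly by dividing relative performance by board power:

\[
\text{vs. GTX 1050:}\quad \frac{0.95 / 35\,\mathrm{W}}{1.00 / 50\,\mathrm{W}} \approx 1.36
\qquad
\text{vs. GTX 1050 Ti:}\quad \frac{0.85 / 35\,\mathrm{W}}{1.00 / 60\,\mathrm{W}} \approx 1.46
\]

On those numbers the Radeon Pro 460 comes out roughly 35-45% ahead per watt; the real disagreement is whether those particular figures generalize across workloads.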
One more thing: there are a few things in Metal that are there deliberately to gimp Nvidia performance. I cannot write more about this.
> What I mean is that there are features in Metal that are not possible on Nvidia hardware.

Why would Apple do that? If Apple is the only one supporting Metal, and changing those "few things" is non-trivial, then Apple has effectively chosen not to keep its options open between Nvidia and AMD; it has tied its own hands and committed to AMD for the foreseeable future.
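For context on what is actually being argued over: a Metal compute kernel is written in the C++-based Metal Shading Language, and the portable source looks the same for every GPU. A minimal sketch (host-side pipeline setup omitted); any vendor-specific behaviour lives below this surface, in the driver's compiler and scheduler:

[CODE]
#include <metal_stdlib>
using namespace metal;

// Minimal Metal compute kernel (saxpy: y = a*x + y). The MSL source is
// vendor-neutral; how the driver compiles and schedules it is not.
kernel void saxpy(device const float* x [[buffer(0)]],
                  device float*       y [[buffer(1)]],
                  constant float&     a [[buffer(2)]],
                  uint id [[thread_position_in_grid]])
{
    y[id] = a * x[id] + y[id];
}
[/CODE]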
I was looking briefly at the computational side of Metal, in particular the AI aspects. Even with dual GPUs, especially if you can't upgrade them later, Apple is really not in the game of training AI at all. But Metal could be used to perform tasks with networks already trained with TensorFlow, which to my knowledge only runs on CUDA:
https://developer.apple.com/library.../doc/uid/TP40017385-Intro-DontLinkElementID_2
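To make that split concrete: training would stay on CUDA, but a forward pass is just arithmetic, so a layer of an already-trained network can run as a Metal compute kernel. A hypothetical sketch - the kernel name, buffer layout, and the idea of exporting TensorFlow weights to raw float buffers are assumptions for illustration, not an established pipeline:

[CODE]
#include <metal_stdlib>
using namespace metal;

// Hypothetical inference-only kernel: one dense layer, y = relu(W*x + b),
// with W and b exported from a TensorFlow-trained model into plain float
// buffers. One thread computes one output element.
kernel void dense_relu(device const float* W    [[buffer(0)]], // rows*cols, row-major
                       device const float* x    [[buffer(1)]], // cols
                       device const float* bias [[buffer(2)]], // rows
                       device float*       y    [[buffer(3)]], // rows
                       constant uint&      cols [[buffer(4)]],
                       uint row [[thread_position_in_grid]])
{
    float acc = bias[row];
    for (uint c = 0; c < cols; ++c) {
        acc += W[row * cols + c] * x[c];
    }
    y[row] = max(acc, 0.0f); // ReLU
}
[/CODE]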