What I mean is that there are features in Metal that are not possible on Nvidia hardware.

Doing things that give you maximal performance but are only possible with one supplier is quite different from intentionally hurting performance on another supplier's hardware.


And yes, Nvidia is not coming to the Mac for the foreseeable future.

Looks like it's time to move on from the Mac then. I'm not endorsing Nvidia's business practices (I don't really know much about them), but at this point a lot of computational work is done in CUDA and benefits from multiple GPUs.
 
It's up to you. Because there is currently no Mac worth the money Apple is asking, I moved to Windows. And the more time I spend with Windows 10, the less I want to go back to the Mac. Yes, it's different, but the overall experience is not annoying the way it was back when I originally moved to the Mac.

So anyway, good luck.
 

Thanks. I'll probably go to Linux in a slow transition, keeping my (old) Mac and a (new) Linux computer side by side until I find replacements for my current programs and get comfortable enough to use Linux exclusively.
 
I'll take it. Even if its only purpose is to inform a PC purchase.

But we'd appreciate it if he mentioned whether the drivers are available for Mac OS X. Even if I already know, I still want confirmation that my knowledge is current, so that I don't assume the drivers only recently became available.
 
  • Like
Reactions: aaronhead14
It's not that simple. In theory, both GPU architectures are pretty similar in how they achieve what they do. But the details that differentiate the two architectures are what matter, and they make it impossible to optimize in a universal way for both companies; that's why software vendors take a "middle-ground" approach. They program their applications so as not to gimp performance on one vendor or the other. For example, optimizing fully for Nvidia would make software perform worse on AMD hardware than it should, even at 100% utilization, because of the way Nvidia hardware executes instructions. It's too complex a thing to dumb down, so I'm going to leave it at that.

The second factor to look at is that Nvidia hardware is slightly easier to fully utilize without gimping performance on its competitor; however, this has a downside: AMD GPUs end up not fully utilized. So far I have not seen applications that can utilize 100% of AMD hardware, apart from... console games. Oh, and maybe Final Cut Pro X. Is there anything else? No, unfortunately.

The third factor is that AMD GPUs are simply harder to fully utilize, but there is another side to that: full utilization and hardware-specific features do not gimp performance on the competitor; they just exploit the hardware's capabilities to the fullest. I meant to avoid gaming examples, but from a development and factual point of view I have to point it out: Gaming Evolved titles work perfectly on AMD hardware and perfectly on Nvidia hardware, and they show what AMD hardware can really do without gimping performance on the counterpart.

It's that simple.
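
To make the "middle-ground" idea concrete, here is a minimal sketch of how a vendor-neutral application might pick per-vendor tuning parameters at runtime instead of hard-coding one vendor's sweet spot. It assumes pyopencl is installed, and the work-group sizes are placeholders for illustration, not real tuning data:

import pyopencl as cl

def pick_work_group_size(device):
    # The "best" work-group size depends on the kernel, driver, and GPU;
    # these numbers are placeholders, not real tuning data.
    vendor = device.vendor.upper()
    if "NVIDIA" in vendor:
        preferred = 128   # hypothetical sweet spot for one architecture
    elif "ADVANCED MICRO DEVICES" in vendor or "AMD" in vendor:
        preferred = 256   # hypothetical sweet spot for the other
    else:
        preferred = 64    # conservative fallback for anything else
    return min(preferred, device.max_work_group_size)

if __name__ == "__main__":
    for platform in cl.get_platforms():
        try:
            gpus = platform.get_devices(device_type=cl.device_type.GPU)
        except cl.RuntimeError:
            continue  # this platform exposes no GPU devices
        for device in gpus:
            print(device.name, "->", pick_work_group_size(device))

The point is just that a single code path can query the device and adapt, rather than being tuned for one vendor and left to limp along on the other.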

The last factor: so you admit that the power of a GPU is only exploited through software, and that it is up to developer competence to exploit it? If developers are not doing that, isn't it silly to blame the hardware company for it, or to pump up your ego by using one brand over another?

Best part: I actually wrote about this factor in the post that started this discussion; maybe you missed it?


On performance per watt, I would look no further than the Radeon Pro 460: a 35W GPU competing with Nvidia's 50W (GTX 1050 Mobile) and 60W (GTX 1050 Ti) GPUs. Who has the better performance per watt if the Radeon Pro is 5% behind the GTX 1050 and 15% behind the GTX 1050 Ti, but uses 40% less power?
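
To spell out that arithmetic, here is a rough sketch in Python. The relative-performance figures are the ones claimed above (GTX 1050 Mobile as the baseline), not independent benchmarks:

# Relative performance (GTX 1050 Mobile = 1.00) and board power in watts,
# as claimed above; treat them as rough inputs, not measurements.
gpus = {
    "Radeon Pro 460":  {"perf": 0.95, "watts": 35},          # "5% behind GTX 1050"
    "GTX 1050 Mobile": {"perf": 1.00, "watts": 50},
    "GTX 1050 Ti":     {"perf": 0.95 / 0.85, "watts": 60},   # Pro 460 is "15% behind" it
}

baseline = gpus["GTX 1050 Mobile"]["perf"] / gpus["GTX 1050 Mobile"]["watts"]
for name, d in gpus.items():
    ppw = d["perf"] / d["watts"]
    print("{:<16} {:.4f} perf/W ({:+.0%} vs GTX 1050 Mobile)".format(name, ppw, ppw / baseline - 1))

On those claimed figures, the Radeon Pro 460 comes out roughly a third ahead of the GTX 1050 Mobile in performance per watt, and further ahead of the 1050 Ti.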

You take the GTX 1060 and RX 480 as examples, in gaming. Have we seen comparisons of both GPUs in compute applications? Based on the performance difference between the GTX 1070 and RX 480, and the amount of power they burn, I would say that the RX 480, being faster in compute than the GTX 1060, would have similar performance per watt. And the RX 470, using a similar amount of power and offering similar compute performance, would also be about equal to the GTX 1060.

Who is spreading FUD then? Me, or you, with your cherry-picked gaming scenarios? Is gaming really what pros on this forum care about? Or is it compute performance in real-world applications, because that is what you make your living from?

I suggest watching that video comparing the GTX 1070 and RX 480 in compute. Both GPUs are within 10% of each other. One costs much less. But the performance per watt in this particular example is on Nvidia's side.

P.S. I wonder how much faster than the GTX 1050 Ti a Radeon Pro WX 5100 would be, consuming 75W of power but offering around 50% more compute horsepower. We have already seen the difference between the 5.7 TFLOPs RX 480 and the 6.5 TFLOPs GTX 1070, haven't we? It's 10%. Oh yes, gaming...

P.S.2: It's funny that my post about the situation of Nvidia on the Mac has suddenly been spun into AMD vs. Nvidia.

I agree 100%. The GPU brand that's best for you really depends on what you're doing. Generally, Nvidia GPUs are best for gaming and AMD GPUs are best for creative work like video editing, though there are exceptions. Some professionals use Nvidia GPUs for scientific work that requires a lot of processing power, and Adobe makes good use of CUDA in its apps.
 

The dominance of the NVIDIA Quadro brand for professional users would suggest otherwise, at least for Windows. Under macOS, sure, AMD is the obvious choice because Apple appears to be all-in on AMD for the time being.
 
  • Like
Reactions: jblagden
Nvidia did a great job at the time by releasing CUDA. That is what they built their mindshare and their professional brand upon. There wasn't anything like it in 2005-2006 that gave such a boost to the performance of professional applications.

Everything is about software performance these days. Apart from CUDA, I have not seen to this day any software that fully exploits the performance of the hardware, apart from FCPX.

The current state of software is simple. Either you invest tons of money, time, and effort in developing and optimizing the software for the hardware, or you simply take CUDA, drop it into your software, and put your time, money, and effort into developing the best experience in your professional application. In essence, you do not give a s*** about optimizing the software yourself this way.
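
As an illustration of how little code the "just take CUDA" route can involve when you lean on an off-the-shelf toolkit, here is a minimal sketch using Numba's CUDA support (it assumes the numba package and an Nvidia GPU with the CUDA toolkit installed; it is a generic example, not any particular vendor's code):

import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # One GPU thread per element; the CUDA toolchain handles the scheduling.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1000000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# NumPy arrays are copied to and from the GPU automatically.
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)

print(out[:4])

The flip side, of course, is that a path like this only runs on Nvidia hardware.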

What would you pick if you were a software developer?
 

You simply have to choose the best hardware for what you are doing. If you are editing in FCP or are otherwise tied to the Mac, you choose AMD. If you are doing machine learning, right now the best choice is Nvidia. If you are working in Adobe's software, you are probably better off with Nvidia.

Nobody cares about "exploiting the full performance from the hardware"; they just want the hardware that works best for the task they are doing. If an employee at a company tells their boss, "I need X to do my job," then that's what they usually get.
 
About that machine learning point: Nvidia wanted to lock the entire industry into their ecosystem with CUDA, but THANKFULLY they were not able to do so.

Most of the software is open source, or OpenCL, which seems weird at first glance, but when you dig into the reasons it is not so weird.

As for hardware, well, there is no competition in machine learning right now, but that is changing: AMD has announced the Radeon Instinct MI GPUs, and most of the software is open source.

As for what you have written, I could not agree more. In fact, that was the point of my posts.
 

I did some research, and I found that there's a long list of programs that use CUDA: http://www.nvidia.com/object/gpu-applications.html?All
Here's a shorter list of well-known programs and programs from well-known companies that use CUDA:

Apple Final Cut
Adobe Photoshop
Adobe After Effects
Adobe Premiere Pro
Adobe SpeedGrade
AutoDesk AutoCAD
AutoDesk AutoCAD Design Suite
AutoDesk 3ds Max
AutoDesk Flame Premium
AutoDesk Maya
AutoDesk Moldflow
AutoDesk Motion Builder
AutoDesk MudBox
AutoDesk Smoke
Avid Media Composer
Avid Motion Graphics
Blackmagic DaVinci Resolve
 
  • Like
Reactions: aaronhead14
Final Cut absolutely does not use CUDA; that list appears to just be about "GPU acceleration," which is broader than CUDA.
 
  • Like
Reactions: itdk92

Nvidia created CUDA because there was no framework for doing GPU computing at its inception. CUDA predates OpenCL.

If you want to do GPU-accelerated machine learning using Google's TensorFlow, you can only do it on Nvidia. There is a push to get these tools onto an open-source platform, but Nvidia's tools are simply better and more widely supported, so it's a work in progress. This is basically the case for a lot of machine learning methods.

I don't doubt that AMD can make some useful hardware here, but they have a lot of catching up to do on the software side.
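
For anyone who wants to see that dependency for themselves, here is a sketch with a 1.x-era TensorFlow (assuming the CUDA-enabled build): it will only list and use a GPU when Nvidia's CUDA/cuDNN stack is present, and on anything else it falls back to the CPU, as the device-placement log will show.

import tensorflow as tf
from tensorflow.python.client import device_lib

# A CUDA build of TensorFlow lists '/gpu:0' alongside the CPU; a CPU-only
# build, or a machine without an Nvidia GPU, lists only the CPU device.
print([d.name for d in device_lib.list_local_devices()])

# Pin a small computation to the GPU, falling back to the CPU if it is absent.
with tf.device('/gpu:0'):
    c = tf.matmul(tf.random_normal([512, 512]), tf.random_normal([512, 512]))

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(c)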
 
Agreed.
 
Rumors have the 1080 Ti coming out in March. This fits Nvidia's tradition of putting out their own hardware right before an AMD launch. It will be interesting to see how cut down it is, based on how fast they think Vega will be.
 
  • Like
Reactions: jblagden
https://videocardz.com/66140/is-nvidia-geforce-gtx-1080-ti-launching-soon

Edit: Also: Architecting an Energy-Efficient DRAM System for GPUs
https://nilcsutah.github.io/pubs/hpca17.pdf
This paper proposes an energy-efficient, high-throughput DRAM architecture for GPUs and throughput processors. In these systems, requests from thousands of concurrent threads compete for a limited number of DRAM row buffers. As a result, only a fraction of the data fetched into a row buffer is used, leading to significant energy overheads. Our proposed DRAM architecture exploits the hierarchical organization of a DRAM bank to reduce the minimum row activation granularity. To avoid significant incremental area with this approach, we must partition the DRAM datapath into a number of semi-independent subchannels. These narrow subchannels increase data toggling energy which we mitigate using a static data reordering scheme designed to lower the toggle rate. This design has 35% lower energy consumption than a die-stacked DRAM with 2.6% area overhead. The resulting architecture, when augmented with an improved memory access protocol, can support parallel operations across the semi-independent subchannels, thereby improving system performance by 13% on average for a range of workloads.
 

NVidia developed CUDA prior to the existence of OpenCL. Early versions of OpenCL were also not very stable across different hardware. I don't see how you can claim that NVidia tried to lock in the entire industry when no one else showed up for several years.
 
  • Like
Reactions: jblagden
Machine learning exploded quite a bit later than when Nvidia developed CUDA. And their efforts to lock the industry into the CUDA ecosystem date from a few years ago (2-3 years), not from the beginning of CUDA (2005).
 

Maybe I misinterpreted you. What efforts were you referring to then? They have been writing libraries specific to CUDA for a while. They may have tried to work with OpenCL for a little while, but it's not a very good framework. A lot of these companies tend to give away the software in order to sell hardware. Intel does the same to a degree with their shift toward community licensing of their MKL library.
 
CUDA is a proprietary API. Precisely because it is not available to anyone else, pushing software to include CUDA is an effort to lock the industry into this API, and therefore into Nvidia hardware.
Even if CUDA is good, there are at least two open-source initiatives that are better.
 

Can you name both of them?
 