He might be comparing Vega 64 ($500) vs Titan X ($3,000)? For 30 cards that would be $15k vs $90k.

Edit: Never mind, he said $150k vs $900k, so I'm not sure what's being compared here. Or it's getting too late.
Oh, I misread that. Aiden is buying 30 GPUs, not 300. My mistake. It should be $90,000 vs $15,000 (30 × $3,000 vs 30 × $500).
 
Which is why you are ultimately wasting money. Once you port your CUDA code using HIP to any other platform, you are good to go with it - everywhere.
You are too invested in your ATI cheerleading to actually try to understand the problem.

Suppose that I port my Pascal CUDA 9 code through your tools to run on ATI Polaris GPUs, then hand-fix the breakage and hand-optimize for what the tools can't do.

What do I do when CUDA 9.2 arrives with better Volta support? The issue isn't my CUDA code alone, but my CUDA code plus cuFFT, cuDNN, and all of the other CUDA libraries I've pulled in.
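To make that concrete - a rough, made-up sketch (not my actual code; the sizes are placeholders): a trivial CUDA 9-era routine with one cuFFT call, annotated with roughly what a hipify-style pass would rename things to. The exact output depends on the tool version.

[CODE]
// Illustrative sketch only: CUDA 9-era source, with comments showing the
// approximate renames a hipify-style translation would apply.
#include <cuda_runtime.h>   // -> hip/hip_runtime.h
#include <cufft.h>          // -> hipfft.h

__global__ void scale(cufftComplex* d, float s, int n)  // cufftComplex -> hipfftComplex
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { d[i].x *= s; d[i].y *= s; }
}

void forward_fft(cufftComplex* d_signal, int n)
{
    cufftHandle plan;                               // -> hipfftHandle
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);            // -> hipfftPlan1d(..., HIPFFT_C2C, 1)
    cufftExecC2C(plan, d_signal, d_signal,
                 CUFFT_FORWARD);                    // -> hipfftExecC2C(..., HIPFFT_FORWARD)

    // <<<...>>> launches become hipLaunchKernelGGL(...) in older hipify output.
    scale<<<(n + 255) / 256, 256>>>(d_signal, 1.0f / n, n);

    cufftDestroy(plan);                             // -> hipfftDestroy
}
[/CODE]

Every one of those renamed library calls is now a hipFFT dependency - the translated source stops tracking cuFFT updates the moment the conversion is done.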

You say "you are good to go with it - everywhere" - but can I really move it back to CUDA?

If I have Nvidia cards, then updating the CUDA libraries to 9.2 updates all of my code for Volta. Obviously, some major new Volta features will require some tweaks to exploit - but many of the lower level improvements will be incorporated into the libraries and the Volta hardware features will be exploited without source changes (although some source tweaks for Volta might make things even faster).
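To be concrete - a hedged sketch using CUDA 9-era cuBLAS (the function names are the real cuBLAS API; the setup and sizes are placeholders): the same cublasGemmEx call that runs on Pascal can be routed through Volta's Tensor Cores by an updated library once you opt in to tensor-op math, with no changes to the surrounding code.

[CODE]
// Sketch: FP16 GEMM with FP32 accumulation, CUDA 9-era cuBLAS.
// C (m x n, FP32) = A (m x k, FP16) * B (k x n, FP16), column-major.
#include <cublas_v2.h>
#include <cuda_fp16.h>

void gemm_fp16(cublasHandle_t handle,
               const __half* A, const __half* B, float* C,
               int m, int n, int k)
{
    const float alpha = 1.0f, beta = 0.0f;

    // Opt in to tensor-op math; on GPUs without Tensor Cores the library
    // should simply fall back to its regular kernels.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 m, n, k,
                 &alpha,
                 A, CUDA_R_16F, m,
                 B, CUDA_R_16F, k,
                 &beta,
                 C, CUDA_R_32F, m,
                 CUDA_R_32F,            // accumulate in FP32
                 CUBLAS_GEMM_DEFAULT);  // let the library pick the kernel
}
[/CODE]

The library makes the hardware decision, which is exactly why updating cuBLAS and cuDNN buys Volta support without touching my source.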

How do I add Volta support to my bastardized translated code? Do I go back to the original Pascal CUDA code, and redo all the pain of translating and fixing? And will that even add Volta support, or will the translating tools choke when they see Volta APIs? (And lord knows how to merge feature updates in the translated code back to the original.)

If a translator tool creates a fork of the original source code for whatever reason - that's a serious reason to avoid the translator tool.
 
you are good to go with it - everywhere
This doesn't mean anything if you can't go back.
Oh, I misread that. Aiden is buying 30 GPUs, not 300. My mistake. It should be $90,000 vs $15,000 (30 × $3,000 vs 30 × $500).
$150K vs $90K vs $15K is all rounding error.

Over $500K I have to fill out an extra justification form and get a VP's signoff. Not a problem - my direct manager is a VP.

Koyoot - just drop the price per GPU as an argument - you'll lose the argument for anything but the amateur users who have a single system with a single GPU.... (I have a Quadro in my home PC that I bought on my own...)
 
Which is why you are ultimately wasting money. Once you port your CUDA code using HIP to any other platform, you are good to go with it - everywhere. AI and ML are very well supported on the AMD platform, and Vega 64 offers 95% of the performance of the GV100 chip at 1/6th the price. Buy 30 GPUs at once and you get the difference ($900,000 vs $150,000).

At least read about the functionality on ROCm and AMD platform.

AI and ML make full use of the Tensor Cores on the Volta chip, and I wasn't aware that Vega 64 could do 95% of the 110 TFLOPS (or whatever the number is) of the TITAN V. Isn't Vega 64 more like 14 TFLOPS for this kind of thing?

On paper, AMD does have impressive raw compute numbers. In practice, AMD has very little market share in this space. Perhaps the correct conclusion you should be drawing is that raw theoretical performance has nothing to do with how successful NVIDIA has been? As others have indicated, the huge amount of support you can get on the CUDA side, whether it's working directly with NVIDIA developers or the wide range of highly optimized libraries, seems to be the reason why NVIDIA has basically won this fight already.
 
SGI was the graphics leader with their expensive proprietary stuff. It did not end well.
 
This doesn't mean anything if you can't go back.

Koyoot - just drop the price per GPU as an argument - you'll lose the argument for anything but the amateur users who have a single system with a single GPU.... (I have a Quadro in my home PC that I bought on my own...)
If you are doing this kind of stuff - what the f*** do you need Apple for?

This is not the platform for you. It boggles my mind how long you guys can grumble on this forum about how bad Apple is for your needs without doing anything about it.

The choice is simple: either you adapt your code for OpenCL or Metal, or you move to another platform. As simple as that.
 
pocl is looking for OSX backend maintainers. Apple tools should not be needed going forward.
 
If you are doing this kind of stuff - what the f*** do you need Apple for?

This is not the platform for you. It boggles my mind how long you guys can grumble on this forum about how bad Apple is for your needs without doing anything about it.

The choice is simple: either you adapt your code for OpenCL or Metal, or you move to another platform. As simple as that.

CUDA runs on macOS just fine, though? I know quite a few people who develop their CUDA code on a Mac laptop and then run it in their datacenter, etc. This would be much easier if Apple actually shipped a modern laptop with an NVIDIA GPU in it.
 
Well... there are no nVidia drivers for Mojave yet, and there is no information on whether nVidia intends to support Mojave at all. For now, nVidia has even removed the macOS drivers (for newer cards) for High Sierra and earlier from its website.
 
Well... there are no nVidia drivers for Mojave yet, and there is no information on whether nVidia intends to support Mojave at all. For now, nVidia has even removed the macOS drivers (for newer cards) for High Sierra and earlier from its website.

CUDA will be supported in Mojave:

https://devtalk.nvidia.com/default/topic/1042279/cuda-10-and-macos-10-14/

The recently released macOS 10.14 (Mojave) is not supported by CUDA 10. Developers may not be able to use Xcode 10 to build GPU applications with CUDA 10 or run applications with the CUDA 10 driver. CUDA 10 supports macOS 10.13.6 and Xcode 9.4. For CUDA developers who are on macOS 10.13, it is recommended to not upgrade to Mojave. Support for Mojave will be added in a future release of CUDA.

So if you're running on a system that is supported by the drivers included with the OS, it will work. If NVIDIA is going to support 10.14 with CUDA, then I think it's a fair assumption that they will release a web driver for it as well.
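For anyone wondering what they actually have installed, here's a quick sketch (just the standard CUDA runtime API calls, not anything from the linked post) that prints the CUDA version the installed driver supports next to the runtime the binary was built against - which is exactly the mismatch the Mojave note above is about.

[CODE]
// Illustrative sketch: compare driver-supported CUDA version vs. the
// runtime version this binary was built against.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);    // e.g. 10000 for CUDA 10.0
    cudaRuntimeGetVersion(&runtimeVer);
    std::printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
                driverVer / 1000, (driverVer % 1000) / 10,
                runtimeVer / 1000, (runtimeVer % 1000) / 10);
    return 0;
}
[/CODE]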
 
Thanks for the information - the post is rather new (18 hours old at the time of this writing), so I was not aware of nVidia's plans.

Good to hear, though. I'm currently on High Sierra on my dev machine, waiting for the Mojave web drivers.
 
Posted 18 hours ago - many thanks for linking it. Excellent that they will continue to support it.

The usual workflow is exactly as mentioned: prototype and test things locally, then deploy to the target system. If the new Mac Pro brings back PCIe and the other systems get eGPU support, that's fine. I'd like to see an NVIDIA GPU in the next MBP, but I could live with eGPU support.
 
Posted 18 hours ago - many thanks for linking it. Excellent that they will continue to support it.

The usual workflow is exactly as mentioned: prototype and test things locally, then deploy to the target system. If the new Mac Pro brings back PCIe and the other systems get eGPU support, that's fine. I'd like to see an NVIDIA GPU in the next MBP, but I could live with eGPU support.

Interesting, I just did a Google search for "CUDA on macOS Mojave" or something and that popped right up. Good timing I guess!
 
So wait a minute... Xcode 9.4 supported CUDA, but Xcode 10 doesn't? That means Apple ripped CUDA support out of Xcode on purpose. Additionally, Apple also deliberately disabled NVIDIA eGPU support. Furthermore, NVIDIA Web Drivers for Mojave have yet to materialize. It sounds like NVIDIA is dead to Apple.

I really don't give a crap about the whole NVIDIA vs AMD battle, I just want to see the damn boot screen when I start my computer so I can use FileVault.
 
@nbritton NVIDIA is dead to Apple ;)
Apple has not shipped a Mac with one of their cards in years, and the NVIDIA drivers were from NVIDIA, not Apple (and I always assumed it was to piss Apple off :D)

It's still super early days for macOS 10.14.
It's still super early days for RTX cards.
RTX is a jump from older GPUs, so wait and see.
 
So wait a minute... Xcode 9.4 supported CUDA, but Xcode 10 doesn't? That means Apple ripped CUDA support out of Xcode on purpose. Additionally, Apple also deliberately disabled NVIDIA eGPU support. Furthermore, NVIDIA Web Drivers for Mojave have yet to materialize. It sounds like NVIDIA is dead to Apple.

I really don't give a crap about the whole NVIDIA vs AMD battle, I just want to see the damn boot screen when I start my computer so I can use FileVault.

CUDA has always been an add-on to Xcode from Nvidia. Apple has never built it in or done CUDA themselves.

Apple can't rip out what they never bundled in.
 
CUDA has always been an add-on to Xcode from Nvidia. Apple has never built it in or done CUDA themselves.

Apple can't rip out what they never bundled in.


If Apple charges a premium for their products (the "Apple Tax"), don't you think consumers should have more choice? The Mac Pro, even though it has a contemporary artistic shape, is still a tower like a traditional Windows desktop. It uses basically the same setup and principles found in a Windows/Linux system: a medium-sized tower, removable memory and CPU, a decent-sized motherboard, plenty of PCI slots, a beefy PSU, etc. So there really isn't an excuse not to offer support for the leading graphics manufacturer, nVidia, other than playing the market in their favor.
 
If Apple charges a premium for their products (the "Apple Tax"), don't you think consumers should have more choice?

Almost every company that has a ‘luxury’ product lineup has less choice and more emphasis on style and quality.

It’s the companies with cheap products that offer more options.

The less well known and lower quality the company, the more options you get. For example, if you want a laptop with super mega SLI graphics and three M.2 slots, you buy some chunky plastic laptop from Eurocom.
 
Almost every company that has a ‘luxury’ product lineup has less choice and more emphasis on style and quality.

That's certainly debatable. Back in my late-'90s/early-to-mid-2000s yesteryears I used to build desktops with others as an enthusiast. Sure, ATi (since acquired by AMD) had some good cards, such as the 8000 and 9000 series; I had a 9500 Pro and a 9800 Pro that I really liked. But overall nVidia always had better quality and stability, especially when it came to their drivers. Remember the Detonator drivers, when you'd see double-digit performance increases? ATi always lacked in that department and still does to this day ("fine wine"), especially with their historic Catalyst drivers. A new driver would fix one problem but at the same time create another with a different application, which led to consumers desperately using beta and hacked-up 3rd-party drivers to get all their programs to work properly. Overall the drivers from nVidia were far superior to ATi's offerings. Also, today AMD cannot compete with nVidia's high-end offerings. So it's fair to say that, quality- and driver-wise, nVidia has the better offerings.
 