
Jim-H
macrumors member, original poster
Dec 31, 2013
Would like to know what options are available to run CUDA on an external TB enclosure. Only concerned about using the GPUs for compute, not to drive a display.

Ideally, I'd like to connect a few nVidia Titan Black GPUs to the nMP for use with the Octane render engine in Cinema 4D, TFD, etc. Are there any options available now, anything coming out soon?
 

I don't know of any x16 external PCIe enclosure, since TB can't go that high. I believe the max is x4, which would cripple the transfer rate of your Titan GPU.

And besides, considering the price of a TB enclosure, it would be more economical to buy a cheap AMD APU or i3 PC with an x16 slot for your Titan and use it as an autonomous render node.
 

The answer here is a lot of "it depends".

If the card is spending most of its time rendering, and not as much time transferring, x4 PCIe speed won't make much of a difference.

If the card is continuously streaming data, you're going to have a problem.

My guess with a 3D render engine is that the amount of data you're streaming is medium to low compared to the actual max bandwidth of the card, but the amount of computation being done is high. External Titans might not actually be so bad, especially considering that if you are running a 4-way SLI system, you're running them at x4 anyway.
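
If you want to take the guesswork out of it, it's easy enough to just measure what the link actually delivers. A rough sketch along these lines (standard CUDA runtime calls only; the 256 MB buffer size and file name are arbitrary) times a pinned host-to-device copy and prints the effective transfer rate, so you can compare the enclosure's x4 link against a native x16 slot:

// measure_h2d.cu -- rough sketch: time a pinned host-to-device copy and
// report the effective transfer rate of whatever PCIe/TB path the GPU sits on.
// Build with: nvcc measure_h2d.cu -o measure_h2d
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256UL * 1024 * 1024;   // 256 MB test buffer (arbitrary)
    void *host = NULL, *dev = NULL;
    cudaMallocHost(&host, bytes);               // pinned host memory
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // warm-up copy

    cudaEventRecord(start, 0);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // timed copy
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host-to-device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}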
 
The OP asked about CUDA, which is compute, not render. CUDA isn't available on OS X AFAIK, so you'd have to dual boot into Windows, and then you'd have to rely on Thunderbolt-aware Windows drivers. I don't know the state of that, but it's not a question here unless CUDA libraries are available on OS X and they are Thunderbolt aware (I think the answer is no).

Seems like beating a dead horse at any rate.
 
You can install nVidia's CUDA drivers on OS X. CUDA works on OS X.
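
Easy enough to verify, too. A minimal device-enumeration sketch like the one below (nothing beyond the standard CUDA runtime; compile with nvcc) lists whatever NVIDIA GPUs the CUDA driver can see; the nMP's AMD FirePros simply won't show up, since CUDA only enumerates NVIDIA hardware:

// list_devices.cu -- minimal sketch: enumerate the CUDA-capable GPUs the driver can see.
// Build with: nvcc list_devices.cu -o list_devices
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // No NVIDIA GPU, no CUDA driver, or a driver/toolkit mismatch.
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices found: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  [%d] %s, %.1f GB, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / 1073741824.0,
               prop.major, prop.minor);
    }
    return 0;
}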
 
The answer here is a lot of "it depends".

If the card is spending most of its time rendering, and not as much time transferring, x4 PCIe speed won't make much of a difference.

If the card is continuously streaming data, you're going to have a problem.

My guess with a 3D render engine is that the amount of data you're streaming is medium to low compared to the actual max bandwidth of the card, but the amount of computation being done is high. External Titans might not actually be so bad, especially considering that if you are running a 4-way SLI system, you're running them at x4 anyway.

It will still cost less and be more practical to use a cheap dedicated render node and distribute the job. And for some tasks, like video encoding, you'll need to move those frames in and out of your external TB enclosure constantly, vs. just copying part of the file to an external node and letting it run independently. Even rendering benefits from this, and it's the reason why external node rendering is supported in the major 3D packages. Even Blender does it.
 
It will still cost less and be more practical to use a cheap dedicated render node and distribute the job.

Even if I use my PC workstation with a couple of good compute cards in it as a GPU render node, I still would like the ability to get fast previews using CUDA on the nMP. This doesn't sound like it will be cost effective.
 
I don't know the answer to what is asked here, but take a look at the "Sonnet Technologies Echo Express III-D Thunderbolt 2 Expansion Chassis"... It has one x8 mechanical (x8 electrical) PCIe 2.0, one x16 mechanical (x8 electrical) PCIe 2.0, and one x8 mechanical (x4 electrical) PCIe 2.0 connection, and it connects to the nMP using TB2.
 

PCIe 3.0 x16 is 32gb/s. TB2 is 20gb/s max, and this depends on what else you have plugged into the channel.

That box also costs $979.00.

For $979.00 I can build 2 to 3 rendering nodes with AMD APU-based CPUs, each with a PCIe 3.0 x16 slot. You can fit your video card of choice in them and have them render while you're doing other stuff on your workstation.
 
PCIe 3.0 x16 is 15.75 GB/s. TB2 is 2 GB/s max, and this depends on what else you have plugged into the channel.

That box also costs $979.00.

For $979.00 I can build 2 to 3 rendering nodes with AMD APU-based CPUs, each with a PCIe 3.0 x16 slot. You can fit your video card of choice in them and have them render while you're doing other stuff on your workstation.

Fixed. ;)
 

PCIe 3.0 is 7.88 Gbps per lane - so 16 lanes is 126.08 Gbps.

http://en.wikipedia.org/wiki/PCI_Express#PCI_Express_3.x
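
For what it's worth, a rough back-of-the-envelope reconciliation of the figures in the last few posts, using the published encoding overheads:

PCIe 2.0: 5 GT/s x (8b/10b) = 4 Gbps per lane, so x4 = 16 Gbps = 2 GB/s (what a TB2 chassis slot effectively presents)
PCIe 3.0: 8 GT/s x (128b/130b) = 7.88 Gbps per lane, so x16 = 126.08 Gbps = 15.76 GB/s each direction
Thunderbolt 2: 20 Gbps = 2.5 GB/s raw, shared with everything else on the chain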
 
One of the most important and distinctive benefits of Octane Render is its ability to render interactive previews while you work, AS GOOD AS THE FINAL RENDER, in real time. Depending of course on your GPU horsepower, which is why the OP is asking about connecting directly to the nMP.

I use this software on a hackintosh with the latest build of Mavericks, so we know there are of course CUDA drivers for OS X that support these cards.

Just to make it clear for me - was flowrider joking that just installing the CUDA drivers won't magically give the AMD GPUs CUDA capabilities? Or do you know for a fact that you can not run BOTH the inbuilt AMD Dxx GPUs AND external nvidia cards concurrently?? That is still the unanswered question for me!

Not to mention having a series of other PCs rigged up severely damages the ‘small footprint’ factor of the nMP ;) You could have all this power and more within one PC (or hackintosh) box.
 
Or do you know for a fact that you can not run BOTH the inbuilt AMD Dxx GPUs AND external nvidia cards concurrently??

When I first got my Mac Pro 5,1, it came with the 5770 base AMD card in it. I dropped an nVidia Q4000 that I'd already purchased some time prior into it, just to see what would happen. The two co-existed just fine, even with the CUDA drivers pushing the Q4000 for Premiere Pro rendering.

I have no idea if OS X can access GPUs via TBolt (yet?). But if/when it can, there should be no reason why the CUDA driver won't work just fine.
 

Awesome thanks Jason!!!
 
The OP asked about CUDA, which is compute, not render. CUDA isn't available on OS X AFAIK, so you'd have to dual boot into Windows, and then you'd have to rely on Thunderbolt-aware Windows drivers. I don't know the state of that, but it's not a question here unless CUDA libraries are available on OS X and they are Thunderbolt aware (I think the answer is no).

Seems like beating a dead horse at any rate.

CUDA/OpenCL can be used for rendering. That's one of the primary use cases for pros.

It can do the ray-tracing sort of rendering that OpenGL can't.

Also, CUDA totally is available under OS X. I've done CUDA app development in OS X. So unless that was all a fever dream, it definitely works in OS X.
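
For anyone curious what that looks like, the OS X toolchain is the same nvcc workflow as on Linux/Windows. A minimal kernel along these lines is enough to check that the toolchain works end to end (file and variable names here are just for illustration):

// saxpy.cu -- minimal sketch of a CUDA kernel plus the host boilerplate around it.
// Build with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// y = a*x + y, one thread per element
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);   // 3*1 + 2 = 5

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}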
 
One of the most important and distinctive benefits of Octane Render is its ability to render interactive previews while you work, AS GOOD AS THE FINAL RENDER, in real time. Depending of course on your GPU horsepower, which is why the OP is asking about connecting directly to the nMP.

I use this software on a hackintosh with the latest build of Mavericks, so we know there are of course CUDA drivers for OS X that support these cards.

Just to make it clear for me - was flowrider joking that just installing the CUDA drivers won't magically give the AMD GPUs CUDA capabilities? Or do you know for a fact that you can not run BOTH the inbuilt AMD Dxx GPUs AND external nvidia cards concurrently?? That is still the unanswered question for me!

Not to mention having a series of other PCs rigged up severely damages the ‘small footprint’ factor of the nMP ;) You could have all this power and more within one PC (or hackintosh) box.

They don't need to be near your desk, you know... :D

But my main beef with that expansion box is the castrated bandwidth, which in my opinion won't be enough to give you real-time rendering, or even something close to it, in your previews.
 
Would like to know what options are available to run CUDA on an external TB enclosure.

a cheap PC with a good nvidia card, then IP over thunderbolt?

depends on your program, whether or not its network nodes work cross-platform, and if the non-cuda master can drive cuda on a node..

sounded like it might work up until the last bit.. i'm thinking, as of right now, you're s.o.l ;)
 

Octane with Cinema 4D will net/team render cross platform, not sure about IP over thunderbolt though. Your render machines will need thunderbolt of course, which would limit your custom build options.
 

yeah, the connection should work soon:

Thunderbolt Networking with PC -- MacRumors blog -- Apr7

to do it via thunderbolt, you'd pretty much have to use a PC, since i'm assuming the OP would like to choose his card..

doing it without thunderbolt, you could go oMP---nMP

that said, i still don't think it would work.. it definitely won't work with indigo, because i just tried it between my macPro (master) and macbookPro (slave), which has an nvidia 330 in it with a cuda driver..

with that setup, i can still only choose openCL for gpu acceleration whereas if i start the render on my laptop, i can choose openCL or Cuda..
..........

regardless, that's just fun&games/trying to find any sort of workaround whatsoever ;)

there are basically only 3 real solutions, none of which are implemented: nvidia lets cuda run on other cards, some sort of gpu expansion chassis via thunderbolt that actually works, or apple/nvidia collaborate on an nMP nvidia gpu.
 