
--AG--

macrumors member, Original poster
Dec 20, 2012
Hi

Any chance it will be possible to run a Tesla card in an external PCIe box? I'm currently using a Quadro K5000 (for Mac) in my old Mac Pro and writing CUDA code for my projects. The downside of the new Pro, for me, is the use of ATI cards and hence no CUDA. An external PCIe box over Thunderbolt would be practical, since I could potentially use it with both my laptop and my desktop.
 

I don't see why not.

I don't know what the performance will be like, but it shouldn't be horrible.

That said, since you're coding CUDA, I'd consider switching to OpenCL.
 

There are already Thunderbolt-to-PCIe adapters that you can buy; I believe they're around $100. People have already used them on their MacBook Airs for gaming. Don't forget that you'll need a PSU to power the card, so it ends up as a messy box with wires all over the place. Maybe a more elegant all-in-one solution will come out, especially with the Mac Pro pushing external upgrades.
 
.... Would be nice if any Nvidia card would work in such a setup/box, e.g. a K20 or K40. I don't need the graphics outputs; I'm just doing calculations on the GPU.

Calculations with what data, and how long between data-set updates? You'll have about 10% of the throughput those cards have in their normal environment of an x16 PCIe 3.0 slot.

That is one reason why vendors are not stumbling over themselves to get this enabled. Corner-case usages may fit, but generally it is going to have problems. Throw in the performance drop in several cases, plus the $1,000 price on the external box, and the price/performance has issues. Again, not doing much to drive up demand quickly.

Thunderbolt is designed far more to move the output of GPUs, not the input.
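As a rough sanity check on that ~10% figure (assuming the enclosure tunnels a PCIe 2.0 x4 link over Thunderbolt, which is what first-generation boxes do), the back-of-envelope arithmetic looks like this:

```python
# Back-of-envelope bandwidth comparison using theoretical peak rates.
# PCIe 3.0: ~0.985 GB/s per lane (128b/130b encoding).
# PCIe 2.0: ~0.5 GB/s per lane (8b/10b encoding).

pcie3_x16 = 16 * 0.985      # ~15.8 GB/s, a Tesla card's normal slot
pcie2_x4_via_tb = 4 * 0.5   # ~2.0 GB/s tunneled over Thunderbolt

ratio = pcie2_x4_via_tb / pcie3_x16
print(f"Thunderbolt link: {pcie2_x4_via_tb:.1f} GB/s "
      f"({ratio:.0%} of a PCIe 3.0 x16 slot)")
```

That gives roughly 13% at theoretical peak; real-world protocol overhead pulls the effective number lower, which is about where the "10%" figure lands.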
 

Correct. In addition, the graphics drivers are not Thunderbolt-enabled; OWC has a page on this. So basically, don't expect good graphics cards in a TB enclosure, ever: the bandwidth is too low, and the drivers aren't there and probably never will be.
 
Correct. In addition the graphics drivers are not Thunderbolt enabled.

I think the implied notion was that if you just skipped the graphics drivers, you could at least get this jumpstarted as a CUDA-only device. But the "Thunderbolt enabling" is basically about the PCIe driver being hot-plug capable, which the CUDA driver, like the graphics pipeline, normally isn't. So losing the graphics issue doesn't really remove the root-cause impediment.

There is probably some kludge/hack out there that someone could get to "happen to work", but you're not likely to see this on any Sonnet-certified configuration list any time soon.
 

I don't need much data transfer between CPU and GPU. I perform simulations on the GPU with local variables and only collect statistics from the generated data back to the CPU. I basically just need horsepower in a box, so I'm not too worried about the transfer rate of Thunderbolt.
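That access pattern — generate lots of data on the device, ship back only a summary — is exactly the case where link bandwidth stops mattering. A minimal sketch of the idea (in NumPy as a stand-in for the actual CUDA kernels; the sample size is made up for illustration):

```python
import numpy as np

def simulate_on_gpu(n_samples: int, rng: np.random.Generator):
    """Stand-in for a CUDA kernel: the full sample array lives
    'on the device' and is never copied across the bus."""
    samples = rng.normal(size=n_samples)   # device-local data
    # Reduce on-device; only these few scalars cross the (slow) link.
    return samples.size, samples.mean(), samples.var()

rng = np.random.default_rng(42)
n, mean, var = simulate_on_gpu(10_000_000, rng)

generated_bytes = n * 8    # float64 samples kept on the device
transferred_bytes = 3 * 8  # count, mean, variance sent back
print(f"generated {generated_bytes:,} bytes on-device, "
      f"transferred {transferred_bytes} bytes back")
```

With tens of millions of samples reduced to a handful of statistics, even a ~1 GB/s Thunderbolt link spends effectively no time on transfers; launch latency over the tunnel would be the only overhead worth measuring.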
 

I don't think that is going to be the mainstream demographic. The question is whether there is enough demand from the remaining subset to get the work done.

If Nvidia is willing to jump into the custom GPU card market with Apple (i.e., sell Apple just the GPU chip itself and license the "Pro" Quadro brand, drivers, and reference designs), they are probably going to be far more interested in getting their GPU inside whatever "bake-off" happens for the next Mac Pro design. Likewise, in the general PC market it is just easier for them to tell folks to go buy a "box with slots" that works with normal drivers.

If Nvidia isn't going to participate in Mac Pro design bake-offs, and Thunderbolt adoption picks up speed and breadth, then the drivers will eventually arrive. Right now, though, I suspect it is something they just don't see much return on investment in.

Right now Nvidia is still partially living off of the "free" money they are getting from Intel on IP licensing. I think they are probably in the second camp: when the broad PC Thunderbolt market gets big enough they'll bother, but until then they're not pressed. One of the major points of hooking customers on a proprietary language was/is to dictate the hardware terms under which it runs. That is there just as much, if not more, to ease Nvidia's burdens than users'.
 