Hi guys,
I have been thinking about a 780 running headless, purely to speed up processing of a couple of CUDA-enabled filters that I use regularly.
I am limited as to whom I can purchase hardware from, so buying a flashed 780 from MVC, for example, isn't an option.
One of my two most frequently used filters only runs in Windows. I run it in Boot Camp using my Mac-edition 680 (thus a 5 GT/s link speed) as the CUDA device.
If I were to run a headless 780 as a second card (I am aware of the power issues), how much will the 2.5 GT/s link speed impact CUDA compute performance in Boot Camp? Am I right in thinking that, because people run multiple cards in an expansion chassis connected by a single x16 HBA, this throughput is less important in GPU computing situations? I've sketched a quick way to test this below.
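For what it's worth, here's a minimal host-to-device bandwidth sketch I could run in each slot to see what the link actually costs (my own rough test, nothing to do with the filters themselves; the 256 MB buffer size, repeat count, and device index are just placeholders). Build with nvcc and pass the device index on the command line:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main(int argc, char** argv)
{
    // Device index to test: 0 by default, pass 1 on the command line for a second card.
    int dev = (argc > 1) ? atoi(argv[1]) : 0;
    cudaSetDevice(dev);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("Device %d: %s\n", dev, prop.name);

    // 256 MB test buffer -- arbitrary; adjust to match what the filter actually moves per frame.
    const size_t bytes = 256u * 1024u * 1024u;

    // Pinned host memory gives the best-case rate over the PCIe link.
    void* h_buf = nullptr;
    void* d_buf = nullptr;
    cudaMallocHost(&h_buf, bytes);
    cudaMalloc(&d_buf, bytes);

    // One warm-up copy so lazy driver initialisation doesn't skew the timing.
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int reps = 10;
    cudaEventRecord(start);
    for (int i = 0; i < reps; ++i)
        cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    double gb_per_s = (double)bytes * reps / (ms / 1000.0) / 1e9;
    printf("Host-to-device: %.2f GB/s\n", gb_per_s);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}

My thinking is that if the copy time measured this way is small next to the time the filter kernels actually run, the slower link shouldn't hurt much; if the filters stream a lot of frames back and forth, it might.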
Thanks for your help guys.