Hi,
I was wondering what the double precision performance of an NVIDIA GTX Titan is under OSX. As posted in several places (e.g., by Tutor in post #564, https://forums.macrumors.com/threads/1333421/), you have to enable the CUDA double precision option in the NVIDIA Control Panel under Windows. Doing so increases the FP64 performance from ~230 Gflops to ~1500 Gflops.
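For a sanity check on those numbers, here is a rough back-of-the-envelope sketch (plain CUDA C, written just for this post): peak FP64 = SMs x FP64 units per SMX x 2 FLOP per FMA x clock. The 64-units-per-SMX figure is my assumption for GK110, since the driver does not report it; with the double precision switch off, the chip runs FP64 at only a fraction of this rate.

/* Rough FP64 peak estimate: SMs * FP64 units per SMX * 2 FLOP/cycle (FMA) * clock.
   The 64-units-per-SMX figure is an assumption for GK110 (Titan) in full-rate mode;
   the CUDA runtime does not report the number of FP64 units. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    const int fp64UnitsPerSM = 64;            /* assumption: GK110 SMX, full-rate FP64 */
    double clockGHz = prop.clockRate / 1.0e6; /* clockRate is reported in kHz */
    double peakGflops = prop.multiProcessorCount * fp64UnitsPerSM * 2.0 * clockGHz;

    printf("%s: %d SMs @ %.3f GHz -> ~%.0f Gflops FP64 peak (full-rate assumption)\n",
           prop.name, prop.multiProcessorCount, clockGHz, peakGflops);
    return 0;
}

For 14 SMXs at ~0.84 GHz this works out to roughly 1500 Gflops, which matches the figure above for the enabled case.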
I need double precision accuracy for the work I do. ~230 Gflops is by no means bad, but it is only about twice as fast as my CPU and not really worth $1000; ~1500 Gflops, on the other hand, now we are talking.
So it would be a big help to me if somebody who has a Titan installed in his Mac could post a screenshot of CUDA-Z under OSX, or even better, run testing_dgemm, which is part of the NVIDIA CUDA drivers.
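If testing_dgemm isn't handy, a minimal cuBLAS timing sketch along the lines below (a standalone program, not the actual testing_dgemm) should give a comparable FP64 Gflops number. It should build with something like: nvcc dgemm_bench.cu -lcublas -o dgemm_bench

/* Minimal cuBLAS DGEMM throughput sketch (FP64). */
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int n = 4096;                       /* square matrices; adjust to fit GPU memory */
    const double alpha = 1.0, beta = 0.0;
    size_t bytes = (size_t)n * n * sizeof(double);

    double *dA, *dB, *dC;
    cudaMalloc((void**)&dA, bytes);
    cudaMalloc((void**)&dB, bytes);
    cudaMalloc((void**)&dC, bytes);
    cudaMemset(dA, 0, bytes);                 /* contents don't matter for a throughput test */
    cudaMemset(dB, 0, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);

    /* warm-up call so one-time setup cost is not timed */
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int reps = 10;
    cudaEventRecord(start, 0);
    for (int i = 0; i < reps; ++i)
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA, n, dB, n, &beta, dC, n);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    /* DGEMM performs ~2*n^3 floating-point operations per call */
    double gflops = 2.0 * n * n * n * reps / (ms * 1e-3) / 1e9;
    printf("DGEMM n=%d: %.1f ms total, %.1f Gflops (FP64)\n", n, ms, gflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return 0;
}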
A big thanks in advance!
Michiel.