Hello,
I'm not sure, but OctaneRender uses single precision. That's what I've read all over the Octane forums.
But since I do not speak English very well, I could be wrong.
Have you ever compared the performance of the Titan with and without double precision?
I don't believe it's an all-or-nothing matter, but I do find that with the latest version of OctaneRender, higher SP performance is generally faster, and when complemented by higher memory speed, much faster. See also my posts #s 863, 866 and 868 here:
https://forums.macrumors.com/threads/1333421/ . Think of the difference as DP vs. SP biasing. No Nvidia CUDA card completely lacks either DP or SP functions, I cannot turn DP functionality completely off, and even if I could, I don't think I'd like what I'd get. Also:
a regular*/ GTX Titan has 2688 CUDA cores clocked at 875 MHz,
a regular GTX 780 has 2304 CUDA cores clocked at 863 MHz,
a regular GTX 680 has 1536 CUDA cores clocked at 1006 MHz, and
a regular GTX 580 has only 512 CUDA cores, but clocked at 1544 MHz.
[ http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units ]. Barefeats has tested the performance of a number of CUDA cards on the OctaneRender Benchmark Scene [ http://www.barefeats.com/gputitan.html ]. Among the cards mentioned above that he tested, here's how they landed (there's a rough back-of-envelope sketch after the results):
Titan (then the fastest - 95 sec.),
GTX 780 (then 2nd fastest - 110 sec.),
GTX 580C (C = Classified - it's clocked higher than a regular GTX 580, but they have the same number of cores; note that he did not test a regular GTX 580) (then 3rd fastest - 160 sec.),
a GTX 680C (overclocked - then fifth fastest - 176 sec.) and
a GTX 680 (regular - then sixth fastest - 189 sec.).
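Just as a rough back-of-envelope illustration (this is my own crude proxy, not real FLOPS math, and it ignores architectural differences like Fermi vs. Kepler and memory bandwidth), here's a tiny Python sketch that multiplies each card's CUDA core count by its core clock and lines that up against the Barefeats times:

# Crude proxy: CUDA cores x core clock, set against the Barefeats times quoted above.
# This is only a ballpark illustration, not how OctaneRender performance is computed.
cards = {
    # name: (CUDA cores, core clock in MHz, Barefeats benchmark time in seconds or None)
    "GTX Titan": (2688,  875,  95),
    "GTX 780":   (2304,  863, 110),
    "GTX 680":   (1536, 1006, 189),
    "GTX 580":   ( 512, 1544, None),  # Barefeats only tested the 580 Classified (160 sec.)
}

for name, (cores, clock_mhz, seconds) in cards.items():
    proxy = cores * clock_mhz / 1000.0          # "core-GHz", a crude throughput stand-in
    result = f"{seconds} sec." if seconds else "not tested"
    print(f"{name:10s} cores x clock = {proxy:7.1f}  |  benchmark: {result}")

The ordering of that proxy (Titan > 780 > 680 > 580) roughly tracks the ordering of the render times, which is my point about SP core count mattering more than raw clock.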
Prior to November 7, 2013, but after the GTX Titan's arrival, the Titan was the gold standard for CUDA-capable cards in OctaneRender. I coined the term "TE," which stands for Titan Equivalency [ see post # 865 here:
https://forums.macrumors.com/threads/1333421/ ]. Basically, it's how fast a CUDA GPU (or set of GPUs) renders that OctaneRender Benchmark Scene relative to a Titan. A regular Titan has a TE = 1 [ 95/95 = 1 ]; a regular GTX 780 earned a TE = .86 [ 95/110 = .86 ], a regular GTX 580C earned a TE = .59 [ 95/160 = .59 ], a regular GTX 680C earned a TE = .54 [ 95/176 = .54 ] and a regular GTX 680 earned a TE = .50 [ 95/189 = .50 ]. This is useful to me in evaluating the performance I can expect from my CUDA rigs because OctaneRender scales perfectly linearly as GPUs are added (sketched below). For example, if you have two regular GTX 680s in the same system and allocate both of them to rendering that OctaneRender Benchmark Scene, you'll get the performance that I get with one regular Titan [ .5x2 = 1 ]. So if you use four regular 680s, you'll get the performance of two regular Titans; and if you have two regular GTX 780s in the same system and allocate both of them to rendering that Benchmark Scene, you'll get the performance that I get with 1.72 regular Titans [ .86x2 = 1.72 - I know it's not good to cut up your GTXs ].
*/ Regular = a reference design w/o any user-applied clock tweaking.
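And here's a quick Python sketch of that TE arithmetic, assuming the perfectly linear multi-GPU scaling I described (the card names and times are just the Barefeats figures quoted earlier):

# TE (Titan Equivalency) = Titan's benchmark time divided by the card's benchmark time.
TITAN_TIME = 95.0  # seconds, regular GTX Titan on the OctaneRender Benchmark Scene

bench_times = {
    "GTX Titan": 95.0,
    "GTX 780":  110.0,
    "GTX 580C": 160.0,
    "GTX 680C": 176.0,
    "GTX 680":  189.0,
}

te = {card: round(TITAN_TIME / t, 2) for card, t in bench_times.items()}
print(te)  # {'GTX Titan': 1.0, 'GTX 780': 0.86, 'GTX 580C': 0.59, 'GTX 680C': 0.54, 'GTX 680': 0.5}

# With perfectly linear scaling, a rig's TE is just the sum of its cards' TEs.
def rig_te(*rig):
    return sum(te[card] for card in rig)

print(rig_te("GTX 680", "GTX 680"))  # 1.0  -> two regular 680s ~= one regular Titan
print(rig_te("GTX 780", "GTX 780"))  # 1.72 -> two regular 780s ~= 1.72 regular Titans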
Sorry for this little diversion, but back to your point: I suggest that core speed matters (but Nvidia faces the same kinds of constraints that Intel does - more cores require more power, generate more heat, and necessitate more cooling), memory speed matters, the effectiveness of core and memory cooling matters, the number of cores and amount of memory matter, and the number/ratio of DP and SP functions matter. But the balance of these things does change, and even their current state may change.
Here's part of what I posted on March 13, 2013 here [ post # 512
https://forums.macrumors.com/threads/1333421/ ]:
Which cards to buy?
The way Octane has been coded to handle CUDA is exemplified by the following statement from one of the FAQs on their website: "If you are interested in purchasing a new graphics card to use with Octane Render, the Geforce GTX570 or GTX 580 currently have the best Performance to Price ratio. The latest generation of Nvidia GPUs (Kepler) is supported, but currently works slower than their Fermi equivalents. We are still optimizing the performance of Octane on the Kepler GPUs. The GeForce line is higher clocked and renders faster than Quadro and Tesla GPUs, but the latter GPUs often have more memory. A powerful multi-core CPU is not required as Octane does not use the CPU for rendering, but a faster CPU will improve the scene voxelizing speed." Moreover, Octane's manual states:
OctaneRender™ runs best on Fermi (e.g. GTX 480, GTX 580, GTX 590) and Kepler (e.g. GTX 680, GTX 690) GPUs, but also supports older CUDA enabled GPU models. GeForce cards are fast and cost effective, but have less VRAM than Quadro and Tesla cards. OctaneRender scales perfectly in a multi GPU configuration and can use different types of Nvidia cards at once e.g. a GeForce GTX 260 combined with a Quadro 6000. The official list of NVIDIA CUDA enabled products is located at https://developer.nvidia.com/object/cuda-gpus. {Emphasis added.}
Then later, on May 10, 2013, I posted here [ post # 629
https://forums.macrumors.com/threads/1333421/ ]:
It now seems that Otoy has rewritten the Octane renderer to take better advantage of the higher single-precision floating point peak performance of the Kepler cards, for here is what the user manual now says: "We recommend to use GPUs based on the Kepler architecture as these cards have more memory and consume less power than Fermi GPUs, but are just as fast with OctaneRender™." (Compare to post # 512, above.) So the scramble to find top-end Fermi cards for Octane should now end, and those who own GTX 600 series cards and want to use Octane can now rejoice.
So that's why I suggest you consider it to be the biasing of the software to better utilize SP core functions; DP core functions still matter - just not nearly as much as they used to, unless you're rendering with older versions of the software. But we may see the balance shift as the software matures and other functions get added, changed or shelved.
Finally, don't forget the Titanator, aka the GTX 780 Ti, which will wear many robes affecting its power display [ see, e.g., posts #s 863, 866 and 868 here:
https://forums.macrumors.com/threads/1333421/ ]. That looks to be the best card for OctaneRender for now.