This is changing within Adobe; they're embracing OpenCL for some of their Suite applications. The first was Photoshop, which started using OpenCL acceleration for certain tasks in CS5.5, I think, or maybe CS6. One of the revisions of Premiere Pro CS6 made OpenCL available for certain AMD/ATI processors that were found on iMacs.
Thanks for putting that information out here.
I don't invest in the stock of either Nvidia or ATI. I have 4 overclocked GTX 680s and 1 overclocked GTX 690. I also have GTX 295s, 480s, 580s, and Titans, as well as a slew of ATI cards in the 49xx, 59xx, 69xx, and most recently the 79xx families. ATI cards cannot take advantage of CUDA, and if ATI tried to do so, Nvidia would drag ATI into court, because CUDA is proprietary and Nvidia owns it, whereas OpenCL is, well, open to all. That's why Nvidia cards take advantage of both technologies, but ATI can take advantage of only OpenCL. But if you asked me which technology I'd want to reign supreme, it would, of course, be the open one, not the proprietary one.
GTX 6xx cards have much lower double precision floating point compute capability than similarly numbered GTX 5xx cards (e.g., 680 vs. 580, or 670 vs. 570), but higher single precision floating point compute capability. The Titan is in a class all of its own, being probably the best possible amalgam of the GTX 580, GTX 680, and Tesla K20, all wrapped up in one. So an application written to take advantage of greater single precision capability favors the GTX 6xx cards, and an application written to take advantage of greater double precision capability favors the GTX 5xx cards. I don't expect any application to remain static, so there will likely be change.
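If you want to see where your own cards land, here's a minimal sketch of mine (my own illustration, not anything from Nvidia's docs) that queries each CUDA device and estimates theoretical peaks from the usual published figures: 2 FLOPs per core per clock (fused multiply-add), with GeForce double precision at roughly 1/8 of single precision on Fermi and roughly 1/24 on GK104 Kepler. The cores-per-SM table only covers Fermi and Kepler, so extend it for anything newer.

[CODE]
// peak_flops.cu - back-of-envelope peak FLOPS per device (a sketch, not a benchmark)
#include <cstdio>
#include <cuda_runtime.h>

// CUDA cores per SM by compute capability (Fermi and Kepler only; extend as needed)
static int coresPerSM(int major, int minor) {
    if (major == 2) return (minor == 0) ? 32 : 48;  // Fermi: GF100/GF110 vs GF104/GF114
    if (major == 3) return 192;                      // Kepler: GK104/GK110
    return 0;                                        // unknown architecture
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        int cores = coresPerSM(p.major, p.minor) * p.multiProcessorCount;
        // 2 FLOPs per core per clock; clockRate is reported in kHz
        double spGflops = 2.0 * cores * (p.clockRate / 1e6);
        // GeForce DP ratio: ~1/8 of SP on Fermi, ~1/24 on GK104 Kepler.
        // (A GK110 Titan can run DP at 1/3 of SP when enabled in the driver,
        //  so this estimate understates it.)
        double dpRatio = (p.major == 2) ? 1.0 / 8.0 : 1.0 / 24.0;
        printf("%s (cc %d.%d): ~%.0f SP GFLOPS, ~%.0f DP GFLOPS peak\n",
               p.name, p.major, p.minor, spGflops, spGflops * dpRatio);
    }
    return 0;
}
[/CODE]

Build it with nvcc (e.g., nvcc peak_flops.cu -o peak_flops) and run it. Keep in mind the printout is a theoretical upper bound, not a benchmark result.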
When CS[6.5|7|Next|Whatever-the-Hell-They're-calling-it] comes out in May, it's supposed to have far broader support for OpenCL acceleration on all of the AMD processors.
Thanks for putting that information out here. I hope that this Adobe release comes out on time.
I've read that same thing here a few times, but it doesn't appear to be true. Over on the Adobe forums, a couple of guys have devised a fairly impressive benchmark for Premiere Pro. The 6xx-line cards all perform significantly better than the 5xx cards do. I haven't looked into the specifics, but it's generally accepted that the 6-series cards are the ones to get for CUDA processing with Adobe apps.
First, does the 6xx cards' advantage over the 5xx cards in Premiere spring from what you pointed out above, namely: "One of the revisions of Premiere Pro CS6 made OpenCL available for certain AMD/ATI processors that were found on iMacs"? Remember that Nvidia cards do both CUDA and OpenCL. In other words, you could be seeing the 6xx's advantage because their higher single precision floating point peak expresses itself as better OpenCL performance.
Secondly, I always try to pick the right tool for the job at hand. I don't use a hammer where a screw needs to be set. If a card is optimized for single precision applications and has a higher single precision floating point peak than another card, it will most likely finish a single precision application's computations faster. The Tesla K10 is such a card; the Tesla K20 is designed to be the leader in double precision applications. Nvidia recommends the Tesla K10 (the high-end single precision wonder) for, among other things, signal, image, and video processing and video analytics. And that's what I have deployed my GTX 680s and 690 for, because that is what they do best.

My statement which you quoted earlier: "The problem for ATI is that there are fewer applications that take advantage of OpenCL (like Lux 3d Render does) than there are applications that take advantage of CUDA (like certain apps in Adobe CS and the vast majority of 3d apps)," was intended to show the breadth of CUDA's applicability, not as an assessment that the GTX 6xx series performs worse at CS6 chores than the GTX 5xx series, especially since CS6 also takes advantage of OpenGL. Along that same line, I have dedicated my 3 ATI 5970s to video production tasks because they excel at that.

I doubt that many Mac users have the patience to get more than one high end video card installed in their systems, or care to go through what it takes to install an additional high end card, so the issue when upgrading the GPU in a Mac Pro is usually which one high end card to get. If one of them asked me, "Should I get a GTX 6xx or a GTX 5xx, because I can't afford that Titan?", I'd ask them, "What do you intend to use it for?" If they responded, "Only for video production," then I'd say, "Go with the 6xx series." However, if they responded with "Animation/3d and video production," "Animation/3d," "Computational physics," "Biochemistry simulations," "Computational finance," or other double precision applications, then I'd respond, "Go with the 5xx series, because it's known for better CUDA double precision floating point peak performance than the GTX 6xx line."
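To put rough numbers behind that advice (from published specs, so treat them as ballpark): a GTX 580 has 512 cores at a 1544 MHz shader clock, so about 2 x 512 x 1.544 = ~1,581 SP GFLOPS peak, and at Fermi GeForce's 1/8 DP ratio about 198 DP GFLOPS. A GTX 680 has 1536 cores at 1006 MHz, so about 2 x 1536 x 1.006 = ~3,090 SP GFLOPS, but at GK104's 1/24 DP ratio only about 129 DP GFLOPS. Nearly twice the single precision peak, yet roughly two-thirds the double precision peak, which is exactly why the answer changes with the workload.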
Again, thanks jas. I know that at least one person is reading what I write. You have shown me that I could have been more precise by using "greater double precision floating point peak performance" instead of just the words "compute ability," and you made me flesh out some important distinctions. So I'll make this correction: "The GTX 5xx line is known for better CUDA double precision floating point peak performance than the GTX 6xx line. See post #590, below."
P.S. I'll update this with some pics to show you the numbers supporting what I'm talking about.