I find it a bit difficult to discuss things with you when you just cherry-pick parts of my post and ignore the others. Please, if you already go to the trouble of quoting my posts, comment on the whole of the post and not on some out-of-context sentences.
You yourself only quote certain parts; I still responded to your entire post as a whole.
First, I explicitly mentioned that particular workflows can be one-sided and thus cater to one GPU architecture or another.
Second, the 780 Ti performs only barely better than the m295x in FCPX a) because FCPX does not utilise the GPU to its full potential (which I have also mentioned before) and maybe also partly b) because Nvidia Fermi is not that good at heavy computation workloads; b) is also clearly seen in LuxMark. I can assure you that one can easily write an OpenCL program in which the 780 Ti will wipe the floor with the m295x.
You're confusing Kepler, which the GTX 6xx and 7xx series are, with Fermi, which is the GTX 4xx and 5xx series.
Can you please show me an OpenCL application where the 780 Ti wipes the floor with a similarly performing OpenCL AMD card?
It would be very interesting to see.
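For reference, the kind of head-to-head that could back up such a claim would be a compute-bound OpenCL kernel timed on both cards. Here is a rough sketch using the pyopencl bindings; the kernel, problem size, and iteration count are purely illustrative, and a synthetic kernel still isn't the same thing as a real application.

```python
import time

import numpy as np
import pyopencl as cl

# Illustrative compute-bound kernel: a long chain of dependent multiply-adds,
# so run time is dominated by ALU throughput rather than memory bandwidth.
KERNEL_SRC = """
__kernel void fma_loop(__global float *data, const int iters) {
    int gid = get_global_id(0);
    float x = data[gid];
    for (int i = 0; i < iters; ++i) {
        x = x * 1.000001f + 0.000001f;
    }
    data[gid] = x;
}
"""

def time_device(device, n=1 << 20, iters=20000):
    ctx = cl.Context([device])
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL_SRC).build()

    host = np.random.rand(n).astype(np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=host)

    # One warm-up launch, then a timed launch.
    prog.fma_loop(queue, (n,), None, buf, np.int32(iters))
    queue.finish()
    start = time.time()
    prog.fma_loop(queue, (n,), None, buf, np.int32(iters))
    queue.finish()
    return time.time() - start

# Time the kernel on every GPU the OpenCL runtime exposes.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        if device.type & cl.device_type.GPU:
            print("%-40s %.3f s" % (device.name, time_device(device)))
```

Run on both machines, that would at least show raw compute throughput; it still wouldn't say anything about how either card behaves in FCPX or Mari.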
No, I was talking about CAD workflows only (again, see my post). Scientific computation, or any other nontrivial use of the GPU as a parallel processor, is more difficult to assess, because GPUs with different architectures have different performance characteristics in different areas (which I have also explicitly said before).
Yes, I saw your post, and as I pointed out before, the same 780 Ti, and even the Titan, were horrible at Maya and Lightworks. Whether or not the software is incompetently made matters little; it just means those GPUs are not the right ones for the job. I never stated that something is wrong with them.
I completely understand the architectural, and even software, differences. It's why I am adamant that a card performing exceptionally well in a gaming test does not mean that same card is suitable for a different workflow.
Not to mention that my initial point still holds: there is a BIG difference between 'performance' (what can the GPU do and how good is it at it?) and 'performance' (how will the GPU perform in application X?). You seem very set on lumping these things together.
I lump them together because they're one and the same.
'performance' (what can the GPU do and how good is it at it?)
What a GPU can do is relative to what you do with it, which is exactly what your second definition asks:
'performance' (how will the GPU perform in application X?)
If a GPU is very good at Mari, which is an industry-leading 3D painting application, it shows that 'that' GPU is good at 3D painting.
If a GPU is particularly good at DirectX gaming, that has no bearing on whether it'll be good at Mari. The same applies in reverse as well.
It's why since the beginning I've been adamant that testing should focus on the actual workload, rather than running DirectX gaming benchmarks and simply concluding from them that GPU X is better than GPU Y. That's not always the case, and it varies from application to application.
You've already stated the same thing:
GPUs with different architectures have different performance characteristics in different areas
Why is it so hard for you to accept that testing for specific use cases is better overall than simply relying on gaming? Not just gaming, but gaming within Windows using DirectX, which has no bearing on OS X applications since DirectX does not run on that operating system.
The iMac, which this entire thread is about, is a prime example of that. It's very good at FCPX, Compressor, Motion, Mari, and the entire Adobe suite, which uses OpenCL in OS X.
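Since DirectX doesn't exist on OS X, all of that GPU compute goes through OpenCL. A quick way to see what OpenCL actually exposes on a given Mac is something like the following pyopencl sketch; on a Retina iMac the Apple platform should list the R9 M295X alongside the CPU.

```python
import pyopencl as cl

# List every OpenCL platform and device the OS exposes, with a couple of
# basic capability figures for each device.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  %s | %d compute units | %d MHz"
              % (device.name, device.max_compute_units,
                 device.max_clock_frequency))
```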
I do know that the m295x doesn't hold a candle to a card like the 780 Ti in gaming, but just because the GTX is significantly better at gaming does not mean it's significantly better in all applications or uses.