If this were correct, which it basically is not, gaming cards should fly in this test. They don't. How come?
Because for this test you need optimized, signed professional drivers (Radeon Pro and Quadro/Tesla grade) to get the highest performance. Neither the Titan Xp nor the Vega FE has them, which I pointed out and which you omitted. That is why you see the Quadro P2000, a cut-down GTX 1060 with professional drivers, outpace the GTX Titan Xp, which has three times the compute performance.
You have claimed to be professionals. How come you do not know this, or keep forgetting it?
I guess I shouldn't be surprised at your inability to actually read and understand my posts, so let me try to break it down for you.
I said "Many of those SPECviewperf tests". Specifically MANY, not ALL. The results listed exactly one subtest where the Quadro P2000 beat the Titan Xp, namely the "sw-03" test. I'm not familiar with the details of that test, but again, SPECviewperf measures graphics performance for workstation applications. It even says it right on their website:
https://www.spec.org/gwpg/gpc.static/vp12.1info.html
The SPECviewperf 12 benchmark is the worldwide standard for measuring graphics performance based on professional applications. The benchmark measures the 3D graphics performance of systems running under the OpenGL and Direct X application programming interfaces. The benchmark’s workloads, called viewsets, represent graphics content and behavior from actual applications.
Notice how many times they mention graphics in that one paragraph? They did not mention compute a single time, if I'm reading it correctly. So, once again: graphics tests are rarely limited by the raw TFLOPs of the compute units on the GPU, and are often limited by other factors.
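To make that concrete, here's a toy bottleneck model. Every number in it is made up purely for illustration (none of these are real benchmark or spec-sheet figures): frame time is set by the slowest stage, so a card with 4x the TFLOPs can still lose when the workload is geometry-bound.

```python
# Illustrative bottleneck model (invented round numbers, not real data):
# frame time = max over pipeline stages, so extra TFLOPs don't help once
# another stage is the limiter.

def frame_time(flops_needed, tflops, vertices, verts_per_sec):
    """Frame time in seconds: the slower of the compute and geometry stages."""
    compute_time = flops_needed / (tflops * 1e12)
    geometry_time = vertices / verts_per_sec
    return max(compute_time, geometry_time)

# Hypothetical geometry-heavy frame: little shading work, lots of vertices.
work = dict(flops_needed=2e9, vertices=20_000_000)

# Card A: modest TFLOPs but high geometry throughput (Quadro-like).
t_a = frame_time(**work, tflops=3.0, verts_per_sec=1.5e9)
# Card B: 4x the TFLOPs but lower geometry throughput (GeForce-like).
t_b = frame_time(**work, tflops=12.0, verts_per_sec=0.8e9)

print(f"A: {1 / t_a:.0f} fps, B: {1 / t_b:.0f} fps")  # A: 75 fps, B: 40 fps
```

With these made-up inputs the weaker-on-paper card wins, which is exactly the sort of result a compute-only comparison can't explain.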
NVIDIA's Quadro cards are known to have higher geometry throughput than their gaming cards, so for them it's not just about driver differences. That is, the Quadro cards are effectively different GPU hardware (or more precisely, I believe the gaming cards have the Quadro-specific features disabled in hardware). So yes, optimized workstation drivers that have been signed and blessed for use with the various workstation applications are important, but the performance results listed are not solely based on that. It's more that the application vendors are telling their customers "version XXX.XX of the driver is known to work well with our application".
sw-03 appears to be a very geometry-heavy test:
https://www.spec.org/gwpg/gpc.static/sw03.html
The sw-03 viewset was created from traces of Dassault Systemes’ SolidWorks 2013 SP1 application. Models used in the viewset range in size from 2.1 to 21 million vertices.
Again, this could very easily be explained by the better geometry throughput of NVIDIA's Quadro hardware, or perhaps by workstation-specific optimizations that are only enabled for Quadro cards. I did not claim one way or the other, simply that these tests are generally not limited by raw compute TFLOPs.
Finally, when did I claim I was a professional? A professional what, exactly?
So, in summary, I still maintain that SPECviewperf is not a compute test and thus is unlikely to be limited by raw TFLOPs, despite your insistence that this is the only metric that matters. There are many other things that can be a bottleneck, especially for graphics tests/workloads.