You keep moving the goal posts here. First, graphics workloads are professional. Apple is advertising the iMac Pro for VR, featuring a Vega graphics chip. There are many other "professional" workloads that stress graphics performance.
Second, a Vega FE/RX Vega 64 with a 295 W TDP delivering GTX 1080-level graphics performance is pathetic efficiency-wise. Sure, a downclocked chip may improve that slightly. But even if there were zero performance loss going from the 295 W RX Vega 64 to the 210 W Vega Nano, at the same performance a 210 W card is still less efficient than the 180 W GTX 1080.
But the graphics workloads in professional applications run as compute kernels, not geometry kernels.
I'm sometimes dumbfounded by the level of knowledge on this forum.
CUDA is purely compute kernels, and OpenCL is purely compute kernels. How can you not know this and still believe that professional applications run on geometry kernels?
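To make the distinction concrete, here is a minimal sketch of a compute kernel in CUDA (saxpy is just my own illustrative example, not taken from any particular pro app): it is plain data-parallel C running on the shader cores, with no vertex, geometry, or rasterization stage involved anywhere.

```
#include <cstdio>
#include <cuda_runtime.h>

// A compute kernel: pure data-parallel arithmetic, one element per thread.
// There is no geometry pipeline here at all -- this is the kind of work
// (scaled up massively) that pro apps dispatch through CUDA or OpenCL.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // y = a*x + y
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);   // unified memory keeps the example short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);    // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```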
Have you seen a professional application that works on DirectX?
The 295 W GPU is on the level of a GTX 1080 in gaming, at least according to AMD and with the current state of the drivers, and it will still be faster than a Titan Xp in compute-oriented applications, which I think is what matters most to professionals.
AMD doesn't have a de facto coding language that most apps use, à la CUDA. They've tried many things, and DX12/Vulkan/Metal seem to be bearing some fruit on the gaming side, but not on the pro app side. CUDA is just so much easier to approach than OpenCL ever was.
So this is definitely a software problem. AMD can't help it as long as CUDA is closed and remains the de facto language for Windows and Unix pro apps.
This could turn around on the Mac platform, because Apple showed Nvidia the finger three years ago. But on Windows, AMD won't have a good recipe for success until they get decent CUDA support. And that won't happen any time soon, I suppose.
Metal 2 and some gaming consoles are AMD's last resort, because there the software is not the obstacle. On the Windows side, all they can do is overclock the chip.
You are confusing compute kernels with geometry kernels.
There is NOTHING in CUDA that would stop it from running on AMD GPUs, because it is hardware-agnostic compute kernel code, just like OpenCL. But there is no CUDA compiler for AMD, apart from HIP.
Nvidia has the benefit of CUDA being heavily optimized for their architecture by Nvidia engineers. With AMD, OpenCL requires you to do a lot of the optimization work yourself if you are a software developer. It's easier for developers to buy CUDA-capable graphics cards, use CUDA, and save money and time.
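A rough sketch of what I mean about HIP, written as CUDA with the HIP equivalents noted in comments (the kernel and names here are just my illustration; the hipify tools perform essentially this mechanical renaming, and HIP also accepts the <<<>>> launch syntax): nothing in the device code itself is tied to Nvidia hardware.

```
#include <cstdio>
#include <cuda_runtime.h>                               // HIP: <hip/hip_runtime.h>

__global__ void scale(int n, float a, float* data)      // __global__ and the built-in
{                                                        // thread indices are identical in HIP
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= a;
}

int main()
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev;
    cudaMalloc(&dev, n * sizeof(float));                 // -> hipMalloc
    cudaMemcpy(dev, host, n * sizeof(float),
               cudaMemcpyHostToDevice);                  // -> hipMemcpy / hipMemcpyHostToDevice

    scale<<<(n + 255) / 256, 256>>>(n, 3.0f, dev);       // -> hipLaunchKernelGGL(scale, ...) or the same <<<>>> syntax

    cudaMemcpy(host, dev, n * sizeof(float),
               cudaMemcpyDeviceToHost);                  // -> hipMemcpy / hipMemcpyDeviceToHost
    cudaFree(dev);                                       // -> hipFree

    printf("host[0] = %f\n", host[0]);                   // expect 3.0
    return 0;
}
```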
AidenShaw said:
You should apply to be the next Trump White House communications director. Your belief in alternative facts makes you a shoo-in.
Yes, obviously. At least I know how GPUs work, and what affects performance. Maybe you should learn a thing or two about them also?
You said the same thing when, based purely on a high-level architecture analysis of the Ryzen CPUs, I claimed they would be at Haswell/Broadwell levels of IPC. You claimed that I believed in alternative facts and was doing AMD's PR. It turned out I was correct. What makes you believe I will not be correct this time?