There has been no point since 2012 in which AMD has offered consistently better performance across a wide variety of professional apps than Nvidia.
...on macOS. The "woeful" performance of Nvidia seems to be specific to Apple software.
These are absolutely ridiculous statements to make.
AMD graphics have no problem battling their Nvidia counterparts in GPU compute benchmarks.
If you're going to compare CUDA-optimised benchmarks to unoptimised OpenCL benchmarks to argue your point, then of course the Nvidia solutions are going to look great. But that's about as fair as using Mantle-optimised gaming benchmarks to make Nvidia graphics look bad.
And no; the GK/GM/GP104 chips' poor OpenCL performance is not due to macOS. I have no idea where this meme originated. OpenCL performance, and thus general GPU compute performance, is poor on those chips because Nvidia strips out a huge chunk of the FP64-capable silicon to make more room for FP32 and FP16 units. FP64 (double-precision floating point) is essential for scientific and general-purpose compute, while FP32/FP16 matters more for gaming performance (along with things like pixel fill rate, triangle setup, tessellation performance and so on).
Conversely, all AMD GPUs are equipped with a generous (arguably excessive) helping of FP64 ALUs. That obviously means sacrificing some FP32 and FP16 performance (i.e. gaming performance), but it also means any of their GPUs can be turned into a GPU compute accelerator easily and cheaply.
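If you want to see what that silicon split means in practice, here's a rough CUDA sketch. To be clear, this is purely my own illustration (the kernel, iteration count and buffer sizes are arbitrary, and timing code is left out): the same arithmetic runs first in FP32 and then in FP64, and on the GeForce-class x04 chips, where FP64 throughput is roughly 1/24 to 1/32 of FP32, the double-precision launch takes dramatically longer, while a part with full-rate FP64 hardware closes most of that gap.

```cuda
// fp_ratio.cu -- hypothetical illustration, not from any real benchmark suite.
// Build with: nvcc -O2 fp_ratio.cu -o fp_ratio
#include <cuda_runtime.h>

// Same arithmetic in FP32 and FP64. The long chain of dependent multiply-adds
// keeps the kernel ALU-bound rather than memory-bound, so the runtime gap
// between the two launches mostly reflects how much FP64 silicon the chip has.
template <typename T>
__global__ void fma_chain(int n, T* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    T acc = static_cast<T>(i);
    for (int k = 0; k < 4096; ++k)
        acc = acc * static_cast<T>(1.000001) + static_cast<T>(0.5);
    out[i] = acc;
}

int main() {
    const int n = 1 << 22;
    float*  out_f = nullptr;
    double* out_d = nullptr;
    cudaMalloc(&out_f, n * sizeof(float));
    cudaMalloc(&out_d, n * sizeof(double));

    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);

    fma_chain<<<grid, block>>>(n, out_f);   // FP32: full rate on GeForce parts
    fma_chain<<<grid, block>>>(n, out_d);   // FP64: heavily rate-limited on GK/GM/GP104
    cudaDeviceSynchronize();

    cudaFree(out_f);
    cudaFree(out_d);
    return 0;
}
```

Wrap each launch in cudaEvent_t timers (or run it under nvprof) and the ratio between the two tells you how seriously a given chip takes double precision.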
Why do you think AMD GPUs are so popular for crypto-currency mining?
...aside from their "Pro" computer being built on gaming cards. You can't criticise Nvidia's options as being for "G4merz!" as if Apple is offering "workstation" hardware as an alternative. They're not. The choice is fast Nvidia gaming GPUs, or slow AMD gaming GPUs.
Except that the AMD GPUs are plenty fast for OpenCL compute benchmarks.
The only significant deficit AMD GPUs have is in gaming potential.
But again, this is Apple -- they do not care about gaming performance.
Yes, but they also don't win on actual getting-work-done benchmarks, either. That's why the 2013 Mac Pro has been such an unmitigated failure.
To your first point: not true. To your second point: well, that's up for debate.
If AMD GPUs were so horrible, then Apple would not have gone with AMD for the 2016 MacBook Pro. AMD GPUs have proven themselves to have fantastic perf/W when it comes to GPU compute (read: productivity), so I think we can reject the hypothesis that the FirePro GPUs were responsible for the MP6,1's low popularity.
I would bet that the MP6,1's challenges lay more in its high price, lack of upgradability, dependence on Thunderbolt 2, lack of CPU power, and high production costs.
Apple tried only supporting the technologies they wanted to support, making expensive, targeted FCPX appliances, and telling everyone who didn't fit that narrow product to go buy a Windows workstation. That was the 2013 strategy. Unfortunately for Apple, Nvidia is better at making GPUs than Apple is at making Pro hardware and software.
Here are a few other things that aren't of critical importance to enough of the Pro market that you could fund development of a workstation by fixating on them:
- Small desk footprint
- Dead silence when running outside of a sound-insulated cabinet
- Low power draw
- OpenCL
- macOS
- Final Cut
Wow... Just wow...
Here are a few things that are of critical importance to enough of the Pro market that you can fund development of a workstation by prioritising them:
- Onboard, bulk data storage.
- The ability to swap out and replace the GPU every 8-12 months, without replacing any other part of the machine.
- Multiple Nvidia GPUs
- CUDA
It turns out there aren't enough FCPX users to fund the development of a specialist workstation, and the rest of the pro app world is sufficiently Nvidia-based that it's not going to go all-in on OpenCL unless OpenCL performs better on Nvidia hardware than CUDA does.
You clearly don't know Apple.
Apple will never support, nor have they ever supported, another company's proprietary compute standard. Apple is well known for sometimes adopting new hardware standards while they're still in their infancy (3.5" floppy disks, USB, Thunderbolt...), but Apple has always eventually tried to push open standards in both hardware and software.
The day they support CUDA is the day they support Flash.
You're suggesting Apple just let the machine sit idle for 4 (or 5, 6, 7?) years because...? Maybe the 100% thermal failure rate of the D700 indicates that there's something more to the story than "on paper this should fit into the thermal constraints".
100% thermal failure rate? C'mon man, now that's just hyperbole.
If you love CUDA and Nvidia so much, you know what you can do? Just buy a PC and install Windows on it.

Problem solved. You've already stated that pros don't need macOS or OpenCL, so I honestly think you would be a whole lot happier if you just bought a GTX 1080 Ti and played your DX12, Nvidia Gameworks and Nvidia Hairworks-riddled games on your Nvidia G-Sync-powered $999 TN-panel monitor.
I honestly sometimes scratch my head as to why some people around here fall for the Nvidia PR spin. I mean -- OK sure, I was being a bit facetious with my cheap shots above -- but I used to run Nvidia hardware too, and I have no issue recommending Nvidia to friends who want the best gaming performance regardless of cost. But the notion that, because Nvidia dominates gaming benchmarks on Windows/DX12, they must absolutely dominate everywhere else is just silly.