It's not so simple. In theory, both GPU architectures are pretty similar in what they achieve. But the details that differentiate the two architectures are what matter, and they are what make it impossible to optimize in a universal way for both companies. That's why software vendors take a "middle-ground" approach: they write applications so they don't gimp performance on one vendor or the other. For example, optimizing fully for Nvidia would make software perform worse on AMD hardware than it should, even at 100% utilization, because of the way Nvidia hardware executes instructions. That's too complex a topic to dumb down, so I'll leave it at that.
The second factor to look at is that Nvidia hardware is slightly easier to fully utilize without gimping performance on the competitor. However, there is a downside: AMD GPUs end up not being fully utilized. So far I have not seen applications that can utilize 100% of AMD hardware, apart from... console games. Oh, and maybe Final Cut Pro X. Is there anything else? No. Unfortunately.
The third factor is that AMD GPUs are simply harder to fully utilize, but there is a flip side: full utilization and hardware-specific features do not gimp performance on the competitor; they just exploit the hardware's capabilities to the fullest. I was supposed to avoid gaming examples, but from a development and factual point of view I have to point it out: Gaming Evolved titles. They work perfectly on AMD hardware and perfectly on Nvidia hardware, and they show what AMD hardware really can do, without gimping performance on the counterpart.
It's that simple.
Last factor. So you admit that the power of a GPU is only exploited by software, and that it's up to developer competence to exploit it. If developers are not doing that, isn't it stupid to blame the hardware company for it, or to pump up your ego by using one brand over another?
Best part: I actually wrote about this factor in the post that started this, maybe you missed it...?
As for performance per watt, I would look no further than the Radeon Pro 460: a 35W GPU competing with 50W (GTX 1050 Mobile) and 60W (GTX 1050 Ti) GPUs from Nvidia. Who has the better performance per watt, if the Radeon Pro is 5% behind the GTX 1050 and 15% behind the GTX 1050 Ti, but uses 40% less power?
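To make the arithmetic explicit, here is a minimal sketch of that comparison. It simply takes the relative performance and power figures quoted above at face value (they come from this post, not from a benchmark run), normalizes to the Radeon Pro 460, and divides by board power:

[CODE]
# Rough performance-per-watt comparison using the figures quoted above:
# the Radeon Pro 460 is assumed to be 5% behind a GTX 1050 and 15% behind
# a GTX 1050 Ti, at 35W vs 50W vs 60W. These are the post's assumptions,
# not measured benchmark results.

gpus = {
    # name: (relative performance vs Radeon Pro 460, power in watts)
    "Radeon Pro 460": (1.00,        35),
    "GTX 1050":       (1.00 / 0.95, 50),   # ~5.3% faster than the Pro 460
    "GTX 1050 Ti":    (1.00 / 0.85, 60),   # ~17.6% faster than the Pro 460
}

for name, (perf, watts) in gpus.items():
    print(f"{name:15} perf/W = {perf / watts:.4f}")
[/CODE]

With those inputs, the Radeon Pro 460 comes out roughly 35% ahead of the GTX 1050 and roughly 45% ahead of the GTX 1050 Ti in performance per watt.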
You take the GTX 1060 and the RX 480 as examples, in gaming. Have we seen comparisons of both GPUs in compute applications? Judging by the performance difference between the GTX 1070 and the RX 480, and the amount of fuel each burns, I would say that the RX 480, being faster in compute than the GTX 1060, would have similar performance per watt. And the RX 470, drawing a similar amount of power and offering similar compute performance, would also be on par with the GTX 1060.
Who is spreading FUD then? Me, or you, with your cherry-picked gaming scenarios? Is gaming really what the Pros on this forum care about? Or is it compute performance in real-world applications, because that is what they earn their living from?
I suggest watching that video comparing the GTX 1070 and the RX 480 in compute. Both GPUs are within 10% of each other. One costs much less. But performance per watt, in this particular example, is on Nvidia's side.
P.S. I wonder how much faster than the GTX 1050 Ti a Radeon Pro WX 5100 would be, consuming 75W of power but offering around 50% more compute horsepower. We have already seen the difference between the 5.7 TFLOPs RX 480 and the 6.5 TFLOPs GTX 1070, have we not? It's 10%. Oh yes, gaming...
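As a quick sanity check on those two cards, here is the raw-spec gap computed from nothing but the TFLOPs figures quoted above, assuming for the sake of argument that compute performance scaled linearly with raw throughput:

[CODE]
# Raw throughput gap between the two cards, using the figures quoted above.
rx_480_tflops   = 5.7
gtx_1070_tflops = 6.5

raw_gap = gtx_1070_tflops / rx_480_tflops - 1
print(f"GTX 1070 raw TFLOPs advantage over RX 480: {raw_gap:.1%}")   # ~14%

# The compute comparison cited above showed a gap of about 10%, i.e. a bit
# smaller than the raw TFLOPs difference alone would predict.
[/CODE]

So the observed ~10% gap is actually a little smaller than the ~14% raw TFLOPs difference would suggest.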
P.S.2 It's funny that my post about the situation of Nvidia on the Mac has suddenly been spun into AMD vs. Nvidia.