Not only that, but I wasn't aware that AMD's Vega cards can't keep all of their shaders busy at once. No wonder they underperform compared to Nvidia cards. At least, that's the case according to this guy; I'm not sure if it's true. Makes you wonder whether the 580 is unable to use all of its shaders either:
https://www.reddit.com/r/hardware/comments/98c0ey/what_makes_a_good_gpu/
"
middle_twix:
Depends on the architecture. For instance, a 1080 Ti has 3584 CUDA cores and a Vega 64 has 4096, but the 1080 Ti outperforms the Vega, even at the same clocks.
Drivers and features of the silicon have a lot to do with this. Vega is a much bigger chip than the 1080 Ti but underperforms it, partly because so much of Vega's transistor budget is taken up by features such as RPM (Rapid Packed Math) and primitive shader discard. Those features aren't used by many games; they were big technologies AMD was betting on that never took off. One thing Nvidia has going for them is how widely adopted their architectures are, so most engines and non-open-source APIs are coded with a longer, CUDA-based pipeline in mind. Their cards also have more ROPs per CU, which is why their geometry engines are much, much more advanced than AMD's GCN parts; GCN traditionally has many more CUs than Nvidia, but its pipelines are shorter and much smaller. This is also why AMD cards are better at compute: any parallelized task can take advantage of the numerous SPs. And it's why it's hard to code games for AMD cards: you can't keep all the pipelines busy without heavy optimization.
Vega, for instance, can't keep all of its stream processors filled. A 1080 Ti can use all of its CUDA cores in a game, while a Vega might only use 3800-ish of its cores. This is also why the slightly cut-down flagship AMD cards usually perform almost as well as the full card. It has been a thing since the 7950 and 7970: a 7950 performs within 10% of a 7970, even though it has roughly 250 fewer stream processors (1792 vs. 2048).
That is a current-gen example. For gaming, a long pipeline and many ROPs make a card "good". Driver support plus game implementation is also very important. On paper, a Vega 64 should destroy a 1080 Ti. Absolutely slaughter it. But in real performance, because of the lack of optimization, the GCN scheduler, and all of the underused features that have plagued GCN for a long time, the 1080 Ti slaughters it.
Off topic, but this is also why AMD cards are often referred to as "fine wine": there is a lot of room for driver optimization, and over the years the AMD driver team has been able to squeeze more and more performance out of their GCN cards.
So if we take brands and specific architectures out of this, judging whether a GPU is good or not entails many things, for many different use cases.
For gaming, a more advanced geometry engine (a low SP- or CUDA-core-to-ROP ratio) and high clock speeds help. Good game-engine support is also a major deciding factor. If games were optimized for more SPs and TMUs, then of course you'd go with that trend, since it would net you more performance.
For compute, look for lots of CUDA cores/SPs. Memory bandwidth is also very important here, especially for AI and ray tracing.
I would look more into the architecture of the GPU itself than its raw specs. In Kepler vs. Tahiti, Tahiti won on paper and in performance. Tahiti had about 500 more cores than Kepler and about the same clock-speed potential. Driver support was also pretty good on both cards, and Tahiti didn't have many special features eating up the CUs, so across the board at launch they were neck and neck. Seven years later, Tahiti smokes it. Vega vs. Pascal: on paper Vega murders it, but in practice Pascal wins, for the reasons I stated above.
Hope this is what you were looking for. I had to condense it a bit but hopefully that got my point across lol."
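
To put some rough numbers on a couple of those claims, here's a quick back-of-the-envelope script. All the specs in it are the commonly cited launch figures for these cards (core counts, ROPs, boost clocks, memory bandwidth) that I've filled in myself, not numbers from the comment, so treat it as a sketch:

    # Rough numbers behind the quoted claims. Specs are commonly cited launch
    # figures; they are my own assumption, not taken from the comment itself.
    cards = {
        # name: (shader_cores, rops, boost_clock_ghz, mem_bandwidth_gb_s)
        "GTX 1080 Ti": (3584, 88, 1.582, 484),
        "RX Vega 64":  (4096, 64, 1.546, 484),
    }

    for name, (cores, rops, clock_ghz, bandwidth) in cards.items():
        # Theoretical FP32 throughput: 2 ops per core per clock (one fused multiply-add).
        tflops = 2 * cores * clock_ghz / 1000
        # Peak pixel fill rate: 1 pixel per ROP per clock.
        gpix_s = rops * clock_ghz
        # Shader-to-ROP ratio: higher means more shaders feeding fewer ROPs.
        ratio = cores / rops
        print(f"{name}: {tflops:.1f} TFLOPS, {gpix_s:.0f} GPix/s, "
              f"{ratio:.0f} shaders/ROP, {bandwidth} GB/s")

    # The commenter's utilization estimate: Vega keeps only ~3800 of 4096 SPs busy.
    print(f"Vega 64 utilization estimate: {3800 / 4096:.0%}")

That prints Vega 64 ahead on paper FP32 (about 12.7 vs. 11.3 TFLOPS) but well behind on pixel fill rate (about 99 vs. 139 GPix/s), with 64 shaders per ROP against the 1080 Ti's roughly 41, which lines up with his "wins on paper, loses in games" story.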
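And here's a toy model of the "can't keep all the stream processors filled" point: if the software only exposes so much independent work at once, everything past that sits idle. Again, this is just my own illustration of the idea, not anything from the comment, and real GPUs are far messier than this:

    # Toy model of shader utilization: achieved throughput is capped by how
    # much independent work the workload exposes at once. My illustration only.
    def achieved_tflops(peak_tflops, exposed_parallelism, total_sps):
        """Scale peak throughput by the fraction of SPs the workload can fill."""
        utilization = min(exposed_parallelism, total_sps) / total_sps
        return peak_tflops * utilization

    # A massively parallel compute job fills all 4096 SPs and gets the full peak...
    print(achieved_tflops(12.7, 1_000_000, 4096))  # 12.7
    # ...while a game pass keeping only ~3800 SPs busy leaves the rest idle.
    print(achieved_tflops(12.7, 3800, 4096))       # ~11.8

In this model a slightly cut-down card runs closer to full occupancy while the flagship idles, so the on-paper shader gap shrinks in real games, which is exactly his 7950-vs-7970 point.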