I might buy an ex-mining card if the price is rock bottom. Not much to complain about if it lasts a couple months.
Same here, now that ETH is slowly but surely showing signs of tapering off in the coming months.
 
The hardware is not necessarily different, but certain features might be enabled by drivers and firmware.

Okay, we can argue about the semantics of what's going on, but my fundamental point still stands: Quadro hardware is different from GeForce hardware, which is why a GP106-level Quadro will destroy a GP102-level GeForce in some workstation tests. That's why there's a whole separate Quadro product line that NVIDIA can charge a hell of a lot more money for.
 
And enabling a feature for engineering applications might result in a performance hit for creative programs and games.
 
It's not just drivers, though. Here's a comparison of GeForce vs Quadro hardware features from NVIDIA:

http://nvidia.custhelp.com/app/answers/detail/a_id/37/~/difference-between-geforce-and-quadro-gpus

For example:

- Antialiased points and lines.
- Logic operations.
- Clip regions/planes.
- Two-sided lighting.

AA lines and two-sided lighting are very common in workstation modelling apps like Solidworks. So, if AA lines and two-sided lighting are 100x slower on a GeForce card, that would easily explain why a GP106-level Quadro card would beat a GP102-based Titan Xp (note that I don't know the exact delta, just that there is a real hardware difference). Quadro cards are absolutely not simply GeForce cards with a different driver.

I'm sure there are driver-level differences where the Quadro cards get certain optimizations targeted at the workstation applications while GeForce cards do not, but if you'd done any research on this you'd know that there are real hardware-level differences as well. All of this is in line with my original statement that SPECviewperf is not a compute test that is limited by raw TFLOPs.
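For anyone unfamiliar with what those features actually are: they're classic fixed-function OpenGL state that CAD viewports lean on heavily. A rough sketch of what a workstation app might enable (illustrative only, using standard OpenGL calls; not taken from NVIDIA's doc):

```cpp
// Illustrative only: legacy OpenGL state commonly used by CAD viewports.
// These are the feature classes NVIDIA's FAQ lists as differing between
// GeForce and Quadro.
#include <GL/gl.h>

void enableWorkstationStyleState() {
    // Antialiased points and lines
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_POINT_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

    // Logic operations on the framebuffer (e.g. XOR rubber-banding)
    glEnable(GL_COLOR_LOGIC_OP);
    glLogicOp(GL_XOR);

    // A user clip plane/region
    static const double eq[4] = { 0.0, 1.0, 0.0, 0.0 };
    glClipPlane(GL_CLIP_PLANE0, eq);
    glEnable(GL_CLIP_PLANE0);

    // Two-sided lighting for open or sectioned solids
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
}
```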
None of those differences are at the hardware level; they are (at least partially) software. I am speechless.

Those are hardware features that are disabled on consumer-grade GPUs; the drivers, together with the GPU's different device name, enable them. The silicon in both GPUs is the same, but the drivers are completely different, and it is the drivers that activate those features.

Those are exactly the driver-level differences that separate consumer-grade drivers from signed professional drivers.

Which was the point from the beginning.

P.S. You can try to "hack" the Quadro drivers to run on consumer GPUs, and you will get those features enabled.
 

I'm confused -- you claim they're all software features, and then suggest that the hardware acceleration for two-sided lighting is only enabled by the Quadro drivers? Doesn't that imply that there is actually a difference on the hardware side of things? I don't really understand what you're arguing about at this point, I stand by my original assertion that SPECviewperf is not a compute test that is limited by raw TFLOPs.
 
My point was that in the SPECviewperf tests we see a difference in GPU performance because of the professional drivers. You looked only at the TFLOPs comparison between the GPUs and forgot about the rest of the post:

Vega does not have signed, professional drivers like the Quadro/Radeon Pro Duo has.

How come, then, is the Quadro P5000 faster than a 12.78 TFLOPs GPU despite having only a little over 9 TFLOPs of compute power?

DRIVERS. How can you not factor in something like this?

I was writing from the start about drivers affecting performance; that is why the Quadro P5000, with 9.2 TFLOPs, is faster in these tests than the Titan Xp, which has 12.8 TFLOPs.
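For reference, those TFLOPs figures are just peak FP32 throughput, i.e. two operations per FMA per shader per clock (round numbers below; actual boost clocks vary):

\[
\text{FP32 TFLOPS} \;=\; \frac{2 \times N_{\text{shaders}} \times f_{\text{clock}}\,[\text{GHz}]}{1000},
\qquad \text{e.g. } \frac{2 \times 3840 \times 1.6}{1000} \approx 12.3 \text{ for a Titan Xp-class chip.}
\]

That number says nothing about how fast fixed-function features like AA lines or two-sided lighting run, which is what the argument is about.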
 
http://www.pcgameshardware.de/Vega-...ase-AMD-Radeon-Frontier-Edition-1232684/2/#a5

As I posted earlier, the GPU is able to run rock stable at 1.075 V and 1.6 GHz, as PCGH claims. Typical overvolting by AMD.

Power consumption is the same as it was at 1.2 V and 1.4 GHz (around 280 W).

 
My point was that in the SPECviewperf tests we see a difference in GPU performance because of the professional drivers. You looked only at the TFLOPs comparison between the GPUs and forgot about the rest of the post:

I was writing from the start about drivers affecting performance; that is why the Quadro P5000, with 9.2 TFLOPs, is faster in these tests than the Titan Xp, which has 12.8 TFLOPs.

But the reason why a Quadro P5000 is faster than a Titan Xp is different hardware that only Quadros have enabled. If you want to believe that this is because of signed drivers, okay, I can't convince you otherwise. A quick web search suggests that much of this actually comes from hardware differences at the board level, i.e. the Quadro cards enable (at least some/many of) these features at the hardware level. But, as usual, I'm tired of arguing in circles with you, so I'll leave it at that. The underlying point here is that I find it very hard to believe that AMD can fix their problems with a driver update.
 
I am not sure Vega FE needs to improve performance that much.

RX Vega could be quite a bit faster for consumer use, with execution paths not targeting engineering applications.
 

Vega was supposed to beat 1080 Ti though, right? They are miles behind right now, and I'm not holding my breath for them to "fix" this with a driver update.
 
That is negative hype. Vega FE beats Titan Xp at engineering applications, so I can very well see the gaming performance being brought down because of that configuration.
 
But the reason why a Quadro P5000 is faster than a Titan Xp is different hardware that only Quadros have enabled. If you want to believe that this is because of signed drivers, okay, I can't convince you otherwise. A quick web search suggests that much of this actually comes from hardware differences at the board level, i.e. the Quadro cards enable (at least some/many of) these features at the hardware level. But, as usual, I'm tired of arguing in circles with you, so I'll leave it at that. The underlying point here is that I find it very hard to believe that AMD can fix their problems with a driver update.
The design is the same, unless you want to believe that Nvidia flushed $250,000,000 down the toilet developing two separate GPUs with the same basic specs.

If you hack the Quadro drivers to run with a GTX 1080, you will get the same features enabled, because that's how it works. It is the same for AMD and their FirePro/Radeon Pro line; you have to do a similar hack with Radeon GPUs.

All in all, it's the drivers that enable those features for applications.
That is negative hype. Vega FE beats Titan Xp at engineering applications, so I can very well see the gaming performance being brought down because of that configuration.
In its current state Vega behaves just like Fiji, but with lower effective memory bandwidth (303 GB/s vs 373 GB/s), hence the per-clock decrease against Fiji in games.
 
But the reason why a Quadro P5000 is faster than a Titan Xp is different hardware that only Quadros have enabled. If you want to believe that this is because of signed drivers, okay, I can't convince you otherwise. A quick web search suggests that much of this actually comes from hardware differences at the board level, i.e. the Quadro cards enable (at least some/many of) these features at the hardware level. But, as usual, I'm tired of arguing in circles with you, so I'll leave it at that. The underlying point here is that I find it very hard to believe that AMD can fix their problems with a driver update.

There's little reason to believe that there's an actual hardware difference (besides ECC VRAM on some GPUs). It wouldn't make much sense to develop custom silicon for GPUs sold in such low quantities.

In the past there have been different ways to use the Quadro drivers on GeForce cards, e.g. by modifying the driver or by altering the hard-straps on the GPU board (thus changing the device ID), which notably increases performance in professional CAD applications.
I know this worked for many GPUs up to the Kepler generation; I haven't found similar hacks for Maxwell/Pascal yet.
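To illustrate the mechanism (purely hypothetical logic and made-up IDs, not actual NVIDIA driver code): the driver keys the feature set off the PCI device ID, which is why a strap or driver mod that changes the ID flips the whole workstation path set at once.

```cpp
// Hypothetical sketch of device-ID gating; the IDs, names and table are
// invented for illustration and do not come from any real driver.
#include <cstdint>
#include <set>

struct GpuCaps {
    bool acceleratedAALines;
    bool acceleratedTwoSidedLighting;
    bool certifiedCadProfiles;
};

// Pretend table of device IDs the driver treats as "workstation".
static const std::set<uint16_t> kWorkstationIds = { 0x1A2B, 0x1C3D };

GpuCaps capsForDevice(uint16_t pciDeviceId) {
    const bool workstation = kWorkstationIds.count(pciDeviceId) != 0;
    // Same silicon either way; only the reported ID, and therefore this
    // branch, differs between the consumer and the professional card.
    return { workstation, workstation, workstation };
}
```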
 

Right, it's the same silicon in general, but as you indicated, different straps on the board turn on different paths in the hardware for Quadro. That's all I'm talking about. Anti-aliased lines and two-sided lighting run much, much faster in hardware on Quadro cards, and I do not consider that a driver/software feature. There are also obviously driver software optimizations for workstation apps in the Quadro drivers, but I would consider that a different thing.
 
A positive Vega FE review! ;)


Now it's clear to me how that cooling system on the iMac Pro is going to work. Can't wait to see the fancy Jony Ive video with him talking about the great new design of the iMac Pro, where he has to keep getting up to refill it with liquid nitrogen.
 
A new Bristol Ridge video from the usual source.

Rain in the second half.


It would seem that the Bristol Ridge drivers are still not optimized. GCN2 vs GCN3.

 
Yep, it appears that the drivers are not ready(?) for Bristol Ridge, or simply nobody at AMD cared about them.
 
I've read a few howlers in this thread regarding mining. Just want to dispel some myths.

- Yes, Ethereum mining is dead for new players. Even a rig with one GPU will likely not make its money back because the difficulty is too high. It's more affordable to simply buy ETH from an exchange.

- Mining cards are undervolted and underclocked on the GPU because all that horsepower isn't used for mining. The memory is overclocked and has to be low latency in the first place.

The heat produced by a mining card is 15-20°C less than when using gaming settings.
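The rough physics, for anyone curious: dynamic power scales roughly linearly with clock and with the square of voltage, so a modest underclock plus undervolt cuts core power substantially (illustrative ballpark figures, not measurements from any specific card):

\[
\frac{P_{\text{mining}}}{P_{\text{gaming}}} \;\approx\; \frac{f_{\text{mining}}}{f_{\text{gaming}}} \times \left(\frac{V_{\text{mining}}}{V_{\text{gaming}}}\right)^{2},
\qquad \text{e.g. } 0.85 \times 0.9^{2} \approx 0.69 .
\]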

So there shouldn't be an issue buying an RX 580 that has been used for mining for a few months, unless the card had a fault in the first place.

Radeons are being dumped quite hard right now. You can pick up used ones on CEX cheaper than on eBay. Keep an eye on the following link and you'll see them appear.

There are a couple of 580s and a 480 at the time of writing.

https://uk.m.webuy.com/search?&sortOn=sellprice_desc&stext=Radeon&section=&is=1

They ship internationally.
 
Vega was supposed to beat 1080 Ti though, right? They are miles behind right now, and I'm not holding my breath for them to "fix" this with a driver update.

No. RX Vega was supposed to beat the 1080. There was no 1080 Ti when the demos were making the rounds.
 

You sure about that? Pretty sure koyoot linked this back in April:

http://wccftech.com/amd-radeon-rx-vega-performance-gtx-1080-ti-titan-xp/

According to Don, the Radeon RX Vega performance compared to the likes of NVIDIA’s GeForce GTX 1080 Ti and the Titan Xp looks really nice.

Unless you're suggesting "really nice" means "we lose by 30%" I guess?
 