Long enough... People who are into buying 3rd-party GPUs tend to replace them every generation or two, which means the card only has to work for a year or two, maybe 3 at most, before the user moves on to the next shiny thing.
I don't know; during the last craze, card vendors got flooded with RMAs, and some of the mining-oriented boards are being sold with 180-day warranties.
 
Unless they've bought the cheapest of the cheap, those fans are rated for longer than a few months of usage. Will some fail before their time? Yes. Just like when you buy a used car, you never really know how it was driven or what the internals look like. But the majority of people who buy used cars don't have any problems, and it's the same with video cards. If the deal is good and you don't mind the risk, then why not.
 
I might buy an ex-mining card if the price is rock bottom. Not much to complain about if it lasts a couple months.

It would be nice to be able to get proper replacement fans for a good price.
 
Aren't those mainly for reference cards?

I haven't checked, as I'm not into watercooling myself. I was just saying that this solution does exist.

More importantly, what if you don't want watercooling?

Then you take your chances with the fans that come with the card I guess.

Even more importantly, are you going to spend $$$ making a dubious card work?

Unless the card has overheated, like Aiden said, it is probably still perfectly fine. The weak point, which you yourself brought up, was the fans, which may be failing, which is why I gave you a possible solution.
 
It does not seem like a workable solution for the majority of people interested in dumped cards.
 
The one I linked was a 3-fan design, not a blower.
As I said previously, those who buy ex-mining cards do so knowing full well the risks.
That has nothing to do with it. I looked at the specifications for a 2-fan model, and you have to read the small print, where it says it was designed with reference cards in mind.
 

So? Buy a reference model then, or not; it's your choice. I'm only telling you that the possibility exists. You don't seem to be willing to manage the risk of buying an ex-mining card, so I wonder what point you're trying to make...
 
The reference models were likely more stressed by heat during mining.

The point is: is there a good solution for fixing broken custom GPU coolers?
 
Or, like I've mentioned many times in the past, these (and plenty of other) tests are not solely limited by raw compute horsepower (i.e. TFLOPs). Many of those SPECviewperf tests render models with 10s of millions of triangles, and should be considered more of a graphics test than a raw compute test.
If this were correct, which it basically is not, gaming cards should fly in this test. They don't. How come?

Because for this test you need optimized, signed professional drivers, of Radeon Pro and Quadro/Tesla grade, to get the highest performance. Neither the Titan Xp nor the Vega FE has them, which I have pointed out and which you omitted. That is why you see the Quadro P2000, which is a cut-down GTX 1060 with professional drivers, outpace the GTX Titan Xp, which has roughly 3 times higher compute performance.
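
As a rough back-of-the-envelope check on the compute gap (a sketch only; the boost clocks below are approximate reference figures, not measured values):

```python
# Theoretical FP32 throughput: CUDA cores * 2 ops/clock (FMA) * clock speed.
# Boost clocks are approximate reference figures (assumption, not measured).
def fp32_tflops(cuda_cores, boost_clock_ghz):
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

quadro_p2000 = fp32_tflops(1024, 1.48)   # ~3.0 TFLOPs
titan_xp     = fp32_tflops(3840, 1.58)   # ~12.1 TFLOPs

print(f"Quadro P2000: {quadro_p2000:.1f} TFLOPs")
print(f"Titan Xp:     {titan_xp:.1f} TFLOPs")
print(f"Ratio:        {titan_xp / quadro_p2000:.1f}x")   # ~4x with these clock assumptions
```

However you count it, the Titan Xp has several times the raw compute, yet it still loses this subtest.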

You have claimed to be professionals. How come you do not know this, or forget about it?
 
I guess I shouldn't be surprised at your inability to actually read and understand my posts, so let me try and break it down for you.

I said "Many of those SPECviewperf tests". Specifically MANY, not ALL. The results listed exactly one subtest where the Quadro P2000 beat the Titan Xp, namely the "sw-03" test. I'm not familiar with the details of that test, but again, SPECviewperf measures graphics performance for workstation applications. It even says it right on their website:

https://www.spec.org/gwpg/gpc.static/vp12.1info.html

The SPECviewperf 12 benchmark is the worldwide standard for measuring graphics performance based on professional applications. The benchmark measures the 3D graphics performance of systems running under the OpenGL and Direct X application programming interfaces. The benchmark’s workloads, called viewsets, represent graphics content and behavior from actual applications.

Notice how many times they mention graphics in that one paragraph? They did not mention compute a single time, if I'm reading it correctly. So, once again, graphics tests are rarely limited by the raw TFLOPs of the computing units on the GPU, and are often limited by other factors.

NVIDIA's Quadro cards are known to have higher geometry throughput than their gaming cards, so for them it's not just about driver differences. That is, the Quadro cards are actually using different GPU hardware (or specifically, I believe the gaming cards have the Quadro special stuff disabled in hardware). So yes, while optimized workstation drivers that have been signed and blessed for use with the various workstation applications are important, the performance results listed are not solely based on that (it's more that the application vendors are telling their customers "version XXX.XX of the driver is known to work well with our application").

sw-03 appears to be a very geometry-heavy test:

https://www.spec.org/gwpg/gpc.static/sw03.html

The sw-03 viewset was created from traces of Dassault Systemes’ SolidWorks 2013 SP1 application. Models used in the viewset range in size from 2.1 to 21 million vertices.

Again, this could very easily be explained by the better geometry throughput of the Quadro hardware for NVIDIA, or perhaps by special workstation optimizations that are only enabled for Quadro cards. I did not claim one way or the other, simply that these tests are generally not limited by raw compute TFLOPs.

Finally, when did I claim I was a professional? A professional what, exactly?

So, in summary, I still maintain that SPECviewperf is not a compute test and thus is unlikely to be limited by raw TFLOPs, despite your insistence that this is the only metric that matters. There are many other things that can be a bottleneck, especially for graphics tests/workloads.
 
I will sum up your post with one question to you.

If SPECviewperf is testing graphics, how come a cut-down GTX 1060 is faster than the GTX Titan Xp in this test? A 1024-CUDA-core GPU, faster than a 3840-CUDA-core GPU.

Drivers. Professional drivers are making the difference here. That was the point from the beginning, not TFLOPs performance, which is what you attached your point of view to.

Back to topic:
https://www.3dcenter.org/news/der-release-fahrplan-der-herstellerdesigns-zur-radeon-rx-vega
https://www.3dcenter.org/news/weitere-deaktivierte-features-im-aktuellen-vega-treiber

Basically, what this means is: AMD's Vega FE launched without any of its new features activated, running at 50% of its effective memory bandwidth, and without proper voltage regulation in the BIOS; there is no memory voltage control in the BIOS at all.
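
For a rough sense of what "50% of effective memory bandwidth" means in absolute numbers (a sketch; the HBM2 data rate below is the commonly quoted figure for Vega FE and should be treated as approximate):

```python
# Theoretical peak bandwidth = (bus width in bits / 8) * effective data rate.
# Vega FE: 2 HBM2 stacks on a 2048-bit bus at ~1.89 GT/s effective (approximate figure).
bus_width_bits = 2048
data_rate_gtps = 1.89

peak_gb_s = bus_width_bits / 8 * data_rate_gtps   # ~484 GB/s theoretical
effective_gb_s = peak_gb_s * 0.5                  # the ~50% figure claimed above

print(f"Theoretical peak: {peak_gb_s:.0f} GB/s")
print(f"At ~50% efficiency: {effective_gb_s:.0f} GB/s")
```

If those articles are right, that is roughly 240 GB/s of usable bandwidth, closer to a midrange GDDR5 card than to what HBM2 should deliver.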

What the hell was AMD thinking?

This is the most unready product to actually launch that I have ever witnessed.
 
An alternative scenario is that this is basically as ready as Vega is going to get, and it is simply a poor product. I am getting more skeptical that somehow Vega RX is going to right all of Vega FE's wrongs; the poor efficiency and graphics performance are simply what Vega is.
 
Well, per-clock compute performance versus Fiji is higher: AMD changed the L2 cache size, the register file sizes, and the way the GPU handles them, which was always the problem in GCN, so we should see improvements in graphics performance as well.

What we see in graphics performance is a decrease per clock versus Fiji.

That is why I am closer to believing that something is wrong with the card, and that it should not be launched because it is not ready yet. But I guess we will see in August.
 
I will sum up your post with one question to you.

If SPECviewperf is testing graphics, how come a cut-down GTX 1060 is faster than the GTX Titan Xp in this test? A 1024-CUDA-core GPU, faster than a 3840-CUDA-core GPU.

Drivers. Professional drivers are making the difference here. That was the point from the beginning, not TFLOPs performance, which is what you attached your point of view to.

It's not just drivers though, here's a comparison of GeForce vs Quadro hardware features from NVIDIA:

http://nvidia.custhelp.com/app/answers/detail/a_id/37/~/difference-between-geforce-and-quadro-gpus

For example:

- Antialiased points and lines.
- Logic operations.
- Clip regions/planes.
- Two-sided lighting.

AA lines and two-sided lighting are very common in workstation modelling apps like Solidworks. So, if AA lines and two-sided lighting are 100x slower on a GeForce card, that would easily explain why a GP106-level Quadro card would beat a GP102-based Titan Xp (note that I don't know the exact delta, just that there is a real hardware difference). Quadro cards are absolutely not simply GeForce cards with a different driver.
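
To put some toy numbers behind that (entirely hypothetical figures, just to illustrate how a per-feature penalty can flip the ranking):

```python
# Toy model: frame time = ordinary rasterization work + AA-line / two-sided-lighting work.
# All numbers are hypothetical and for illustration only.
def frame_time_ms(base_ms, special_ms, special_penalty):
    # special_penalty: how many times slower the card handles the workstation features
    return base_ms + special_ms * special_penalty

# Assume 30% of the frame is AA lines / two-sided lighting on a card that handles them well.
base, special = 7.0, 3.0                                   # ms on a hypothetical Quadro-class card
quadro_like = frame_time_ms(base, special, 1)              # no feature penalty
titan_like  = frame_time_ms(base / 4, special / 4, 100)    # 4x the raw speed, 100x feature penalty

print(f"Quadro-like card: {quadro_like:.1f} ms/frame")     # 10.0 ms
print(f"Titan-like card:  {titan_like:.1f} ms/frame")      # 76.8 ms, despite 4x raw throughput
```

With numbers like these, the nominally much faster card loses badly, which is the point: one slow path in the graphics pipeline can dominate the whole result.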

I'm sure there are driver-level differences where the Quadro cards get certain optimizations targeted at the workstation applications while GeForce cards do not, but if you'd done any research on this you'd know that there are real hardware level differences as well. All of this is in line with my original statement that SPECviewperf is not a compute test that is limited by raw TFLOPs.
 
The hardware is not necessarily different, but certain features might be enabled by drivers and firmware.
 