It is not up to me; ask the reviewers for that. How about adding the Titan or Titan Black?
I have had Kepler, Maxwell, and Pascal cards. I'm not going to blind myself to the fact that Nvidia's per-clock performance is inefficient and that they are trying to play a megahertz war against AMD.
It's good that they can raise clock speeds and reduce power consumption, but let's not lose sight of the fact that this horse was beaten to death during the Intel vs AMD megahertz war, when Intel tried the same gimmick with the Pentium 4: very high clock speeds but **** clock-for-clock performance against their own older generations.
Also, the GTX 1080 is not slower than the previous generation in every GPU rendering benchmark:
https://www.computerbase.de/2016-06...5/#abschnitt_gpucomputing_und_videowiedergabe
I am not seeing any cases here where the GTX 1080 loses to any other Nvidia GPU.
CPUs are very different from GPUs. Generally, CPUs focus on single-threaded tasks while GPUs are massively parallel. Both AMD and Nvidia increased their clock speeds relative to the previous generation while also improving efficiency (especially Nvidia). Given today's market for mobile products, I doubt either Nvidia or AMD has lost sight of the fact that their GPUs need to scale down to small power envelopes.
If you took a GTX 1070 and clocked it at 1 GHz, it would be no more powerful than a two-year-old budget card with a similar clock speed and a similar or better power envelope.
The current Pascal cards are basically budget cards that Nvidia was able to run at high clock speeds because of the 16 nm process.
I have no shame admitting that as a 1070 owner; I just have to deal with it. Nvidia's "progress" should be credited to the fabrication plants more than to Nvidia itself.
That's simply not true. The GTX 1070 and 1080 are the most efficient consumer GPUs that exist. Even if you scaled down the clocks to make one a 100 W card, say, it would beat all previous 100 W cards like the GTX 960.
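As a rough sanity check on the "scale it down to 100 W" idea: dynamic power goes roughly as frequency times voltage squared, and voltage tends to track frequency on the same silicon, so power falls off much faster than clock speed. This cube-law rule of thumb is my own back-of-envelope assumption for illustration, not a measured Pascal voltage/frequency curve:

```python
# Back-of-envelope dynamic power scaling: P ~ f * V^2, and on the same
# chip V tends to scale roughly with f, so P ~ f^3.
# The cube law and the 150 W / 1683 MHz reference point are assumptions
# for illustration, not measured board power.
def scaled_power(base_power_w, base_clock_mhz, target_clock_mhz):
    """Estimate power at a new clock using the P ~ f^3 rule of thumb."""
    return base_power_w * (target_clock_mhz / base_clock_mhz) ** 3

# e.g. a nominal 150 W GTX 1070 downclocked from ~1683 MHz to ~1400 MHz:
print(round(scaled_power(150, 1683, 1400)))  # roughly 86 W
```

By this crude estimate, a modest downclock already lands Pascal near the 100 W envelope while keeping most of its throughput, which is the crux of the efficiency argument.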
You are simply wrong. A GTX 1070 clocked at 1 GHz would sit between a GTX 970 and a GTX 980.
It's 100% true.
A 1070 running at 1 GHz would not be significantly better than a 770 or 960 running at the same clock speed. Dozens of benchmarks in reviews show this simply by looking at the performance scaling.
The only time when this does not hold true is when more VRAM is needed.
I think that GPU computing has less of an issue with pipeline bubbles compared to general-purpose CPUs.
You are simply wrong. A GTX 1070 clocked at 1 GHz would be between a GTX 970 and a GTX 980.
It would also provide 3.8 TFLOPs of compute power.
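That 3.8 TFLOPs figure falls out of the usual peak-FP32 formula: shader count times two operations per cycle (one fused multiply-add) times clock. A minimal sketch, using the GTX 1070's published 1920 CUDA cores and the hypothetical 1 GHz downclock discussed above:

```python
# Peak single-precision throughput: shaders * 2 ops/cycle (FMA) * clock.
# 1920 shaders is the published GTX 1070 core count; the 1 GHz clock is
# the hypothetical downclock from the discussion above.
def tflops(shaders, clock_ghz, ops_per_cycle=2):
    """Peak FP32 TFLOPs: each shader retires one FMA (2 ops) per cycle."""
    return shaders * ops_per_cycle * clock_ghz / 1000.0

print(tflops(1920, 1.0))  # GTX 1070 at 1 GHz -> 3.84 TFLOPs
```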
GTX 1080: Freq: 1733 MHz, FPS: 100.3, FPS/MHz = 0.058
GTX 1070: Freq: 1683 MHz, FPS: 82.1, FPS/MHz = 0.049
GTX 960: Freq: 1228 MHz, FPS: 32.9, FPS/MHz = 0.027
GTX 770: Freq: 1130 MHz, FPS: 33.2, FPS/MHz = 0.029
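The per-clock figures above are just the quoted FPS divided by the quoted boost clock; anyone can re-derive them:

```python
# Recomputing the FPS-per-MHz figures above from the same quoted
# clock (MHz) and FPS numbers: card -> (clock_mhz, fps).
cards = {
    "GTX 1080": (1733, 100.3),
    "GTX 1070": (1683, 82.1),
    "GTX 960":  (1228, 32.9),
    "GTX 770":  (1130, 33.2),
}
for name, (mhz, fps) in cards.items():
    print(f"{name}: {fps / mhz:.3f} FPS/MHz")
```

Note that these ratios compare cards at their very different stock clocks; they don't by themselves show how any card would scale if actually downclocked.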
I'd just like to be able to afford an RX 480 (the equivalent of $330 for the 4 GB version) or a 1080 (the equivalent of $1000 for the 8 GB version (http://kakaku.com/pc/videocard/itemlist.aspx?pdf_Spec103=420)) here in Tokyo without requiring a second mortgage, and one that would ACTUALLY WORK under macOS in a cMP. At the moment, nearly all of this thread (in a Mac forum, remember) is pure speculation based on Windows. What we need is cards for Macs, or at least drivers and EFI ROMs.
Of course I exaggerate a little, because the 1070 is on average more like 120% better than a 960. So at the same clock speed the 1070 would be a little better, but certainly nothing to write home about.
What are you getting at...? So the new 10x0 cards by Nvidia are not only faster per clock but, more importantly, in absolute terms they are way more powerful than anything out there.
FPS/clock is a useless metric anyway; it doesn't matter. I think what you should be looking at is FPS/W, and based on what I've read so far it seems the GTX 1060 is actually going to be better by that metric than the RX 480. So AMD is once again losing in pretty much every measurable metric. Which, from a customer's point of view, is a real shame, because with no real competition NVIDIA will over time also start showing slow performance increases, like Intel.
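To make the FPS/W point concrete, here is a sketch using the FPS numbers quoted earlier in the thread together with the nominal reference-card TDPs. The TDP figures are my assumption (board-power ratings, not measured draw), so treat the ratios as rough:

```python
# FPS per watt, using FPS numbers quoted earlier in the thread and
# nominal reference TDPs. TDP is an assumption standing in for real
# measured power draw.
tdp_w = {"GTX 1080": 180, "GTX 1070": 150, "GTX 960": 120, "GTX 770": 230}
fps   = {"GTX 1080": 100.3, "GTX 1070": 82.1, "GTX 960": 32.9, "GTX 770": 33.2}
for name in fps:
    print(f"{name}: {fps[name] / tdp_w[name]:.2f} FPS/W")
```

Even on these rough numbers the Pascal cards come out roughly twice as efficient per watt as the GTX 960 and several times better than the GTX 770, which is the opposite conclusion from the FPS/MHz comparison.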
You do realise you are on an Apple forum, right? This forum hates proprietary corporate standards because they result in monopolies, less user choice, reduced performance, and lockdowns on user options, and eventually the corporations want your user data so you can be tied to the CUDA-powered car/house/robot, etc.
Regarding OpenCL:
"Throughout our entire suite of tests, the GeForce GTX 1080 outpaced all of the other GPUs we tested, in every application / game and at every resolution, with the sole exception being the OpenCL-based benchmark, LuxMark."
Read more at http://hothardware.com/reviews/nvidia-geforce-gtx-1080-pascal-gpu-review?page=9#45rLSzGeMQkCMCoT.99
So you find the absolute worst possible scenario for the 1080 and draw your conclusions based on that?
There is strong word that Vega 10 will be offered in variants: cut down, full die, and... Nano.
The Nano is good, but it's still 28 nm and puts out a lot of heat. A 14 nm Nano with HBM2, now that would be very interesting.
And again, arguing about which is better is somewhat immaterial if we can't even use the Nano, or anything recent from AMD for that matter. This may change by the time Sierra comes out, but for now the best we have is Maxwell.
This whole thread is about the RX 480, a card that can't currently be used properly in OS X.