
jimj740

macrumors regular
Original poster
First: this is definitely a case of first-world problems; the card is very nice. It's just that my expectations were wildly off, leading to disappointment...

Background: I have a 2009 Mac Pro flashed to 5,1 firmware and upgraded to two Xeon X5680 3.33 GHz processors and 1333 MHz RAM.

I first upgraded my GT120 video card to a Sapphire 7950 OC, which I flashed and performed the resistor mod on to enable 5 GT/s operation. This card was very nice and performed quite well: clocked at 925 MHz with 3 GB of VRAM, and priced at $280. In retrospect I probably should have kept it...

I had heard from various reviews that the GTX 680 was a faster card, and although rendering isn't part of my workflow, I like to render as a hobby, so I was really interested in using CUDA with Blender/Cycles...

I found an EVGA GTX 680 4 GB card on Craigslist, new in box, for $300, so I jumped on it. I flashed this too, and tweaked it to the SuperClocked specs of 1084/1150 MHz with the 1552 MHz VRAM setting.

In every benchmark I ran on my machine, this card is SLOWER than the 7950, usually by about 5-10%. No problem, at least I get CUDA, right? Then comes the real disappointment:

My CPUs are too good! With 12 cores/24 threads, the CPU render time on the Mike Pan BMW benchmark was about 50 seconds... and my shiny new video card only brought it down to 45 seconds.
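
(For anyone who wants to reproduce the comparison, here's a minimal sketch of how I'd time CPU vs. GPU renders from the command line with Blender's Python API. The BMW1M-MikePan.blend filename is just a placeholder for wherever you keep the benchmark scene, and the 2.6x-era API names may differ in other versions.)

```python
# render_compare.py - rough sketch for timing Cycles on CPU vs. GPU.
# Run as: blender -b BMW1M-MikePan.blend -P render_compare.py
# (the .blend path is a placeholder for your copy of the BMW scene)
import time
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
for device in ('CPU', 'GPU'):
    scene.cycles.device = device  # 'GPU' uses CUDA when the driver exposes it
    start = time.time()
    bpy.ops.render.render(write_still=False)
    print('%s render took %.1f s' % (device, time.time() - start))
```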

Interestingly, the 680 seems to benchmark better in Windows, so perhaps it is a case of poor drivers for OS X.

So why did I bother posting this? Basically just to share my experience that there is little difference between the GTX 680 SC and the 7950 OC, and to point out that CUDA may not have practical advantages over a 12-core/24-thread processor setup.

I'd be interested in the experiences of others with respect to CUDA performance...

-JimJ
 

Not sure if you have read this thread about the GTX 680; it might give you more insight that could help: https://forums.macrumors.com/threads/1572945/
 
I have the exact same setup as you and have run benchmarks with a 680, dual 680s, and a Titan. I only notice real-world differences in Octane and After Effects.

What do you do with your MP?
 

I am a software developer, and use the machine mostly for multi-processor programming and virtualization/simulations.

My hobbies are photography and rendering, so premium video cards are definitely on the play side as opposed to the work side...

I had no experience with CUDA acceleration on the render side, and I have basically given up on AMD ever fixing their OpenCL tools so that Blender/Cycles will work with them - hence my desire for the CUDA card.
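
(Side note: if anyone wants to check which compute API their Blender build actually exposes, something like this in Blender's Python console should show it. I'm going from memory on the 2.6x attribute names, so treat them as approximate.)

```python
# Quick check of the compute backend Blender exposes (2.6x-era API;
# the attribute names may differ in newer versions).
import bpy
sysprefs = bpy.context.user_preferences.system
print(sysprefs.compute_device_type)  # e.g. 'NONE', 'CUDA', or 'OPENCL'
print(sysprefs.compute_device)       # the specific device selected
```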

So if I understand you correctly, the Titan card made a noticeable improvement for the Octane render engine. Most reviews peg that card at roughly twice the performance of a 680, for about $1,000. A bit expensive for my hobby...

-JimJ
 

Yeah, two 680s and one Titan yielded almost identical results. I need the other slots, though.
 
I'm using Blender Cycles only as a CUDA and CPU benchmark, and I've noticed that render results vary between Blender versions. For example, my OC'd 680 2 GB renders the Mike Pan scene in 1:10 under 2.61 and in 1:14 under 2.69. With CPUs it's the opposite: on a W3680, 1:44 under 2.61 and 1:33 under 2.69.
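
(Expressed as percentages, that's roughly a 6% GPU slowdown and a 10% CPU speedup between those two versions; a trivial sketch using my times above:)

```python
# Percent change between Blender versions, using the times quoted above.
def pct(old_s, new_s):
    return 100.0 * (new_s - old_s) / old_s

print('GPU 2.61 -> 2.69: %+.1f%%' % pct(70, 74))   # 1:10 -> 1:14, about +5.7%
print('CPU 2.61 -> 2.69: %+.1f%%' % pct(104, 93))  # 1:44 -> 1:33, about -10.6%
```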

Regarding other benchmarks, such as Heaven 4.0 and Valley, my OC'd 7950 outperforms this 680 by more than 5%, but only on Medium and without AA. When I turn on 8x AA and set High or Ultra, the 680 shows its advantage.
The 7950 is clocked at 1175/1500, the 680 at 1150/1625.
 

My specs are similar to yours, only with dual 680s. I render with Bunkspeed Shot, which has three modes for rendering: CPU, GPU, or Hybrid. Hybrid uses both CPU and GPU (CUDA) simultaneously for some smokin' fast renders.
 
The GTX 680 is for games. GK110 (GTX 780, Titan) is for what you want. The GTX 680 is about as fast as a GTX 470 in CUDA and compute, so it is not surprising that it did not elevate you very far beyond the 7950, if at all. Play some games in Windows and that GTX 680 will blow away the Radeon. Sounds like games are not your thing, though; I'd return it for a card that does not hobble its compute capability.
 
...to share my experience that there is little difference between the GTX 680 SC and the 7950 OC, and to point out that CUDA may not have practical advantages over a 12-core/24-thread processor setup.

There is a huge practical advantage to CUDA (and GPGPU in general): the price. You got almost exactly the same rendering time from a $300 GTX 680 as from $2,000+ worth of dual-CPU motherboard and 12 fast cores. And as people have already pointed out, the 680 has crippled CUDA!
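
(Back-of-the-envelope, taking the thread's numbers at face value: the ~$2,000 CPU figure and the 50 s / 45 s BMW times are rough, but the cost per unit of rendering speed works out something like this.)

```python
# Rough cost-per-performance comparison using the numbers in this thread.
# Speed is taken as 1/render_time, so dollars per unit of speed is
# cost * time; lower is better.
setups = [('dual X5680 CPUs', 2000.0, 50.0),  # rough cost, BMW seconds
          ('GTX 680', 300.0, 45.0)]
for name, cost, secs in setups:
    print('%-16s $%4.0f, %2.0f s -> $%6.0f per unit of speed'
          % (name, cost, secs, cost * secs))
```

By that measure the $300 card delivers roughly seven times the rendering speed per dollar.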
 

Check out posts #898 and #899 here: https://forums.macrumors.com/threads/1333421/ . A $730 EVGA 780 Ti SC ACX has a Titan Equivalency ("TE"; see post #865 in the same thread: https://forums.macrumors.com/threads/1333421/ , measured with OctaneRender builds after version 1.0) of 1.32, and is thus equivalent to 1.32 factory Titans for ($730/$1,000 =) 73% of the cost of a reference-design ("RD") Titan. But beware: there are currently OpenCL problems with the Ti's GK110B on Macs running Mavericks.

FYI:
1 GTX 680C has a TE of 0.54, meaning two GTX 680Cs are 1.08x as fast as one Titan.
1 GTX 580C has a TE of 0.594, meaning two GTX 580Cs are 1.188x as fast as one Titan.
1 GTX 680 RD has a TE of 0.50, meaning two GTX 680 RDs are 1.0x as fast as one Titan, i.e., just equal to it.
1 GTX 480C has a TE of 0.55, meaning two GTX 480Cs are 1.1x as fast as one Titan.
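
(The TE figures are linear multipliers, so the doubling arithmetic above, plus the cost-per-Titan comparison, can be sanity-checked in a few lines:)

```python
# Titan Equivalency (TE): a card's OctaneRender speed relative to one
# factory Titan. TEs add linearly across multiple cards.
te = {'GTX 680C': 0.54, 'GTX 580C': 0.594,
      'GTX 680 RD': 0.50, 'GTX 480C': 0.55, '780 Ti SC ACX': 1.32}
for card, factor in te.items():
    print('2x %-13s = %.3f Titan-equivalents' % (card, 2 * factor))
# Cost per Titan-equivalent: the $730 780 Ti vs. a $1,000 reference Titan.
print('780 Ti: $%.0f/TE   Titan: $%.0f/TE'
      % (730 / te['780 Ti SC ACX'], 1000 / 1.0))
```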
 