... . On the CPU side I am a little cautious, because I have seen things like the Anandtech article where he managed to fry the CPU board along with the processor, and I have a '09 Mac Pro as he did, so I am a little worried about the same thing happening to mine. If someone can link me to a detailed tutorial with either video or pictures, it would be really helpful. ... .
I have four Mac Pros. One is a 2010 look- and act-alike: when I acquired it, it was a refurbished 2.26 GHz dual 2009. While Anand was waiting for his well-done CPU board to be replaced, I successfully installed two X5580s in mine. Since then I have installed all manner of 56xx pairs and singles in it (after applying the EFI hack) for various reasons. I don't have pics or video, and I don't remember where you can find pics or videos by others on this forum, but I do recall seeing pics posted by others who later did the CPU swap and discussed it here. In 2009 I helped gugucom, among others, with the CPU swap, and as I recall he posted some pics on this forum.
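By the way, before pulling any heatsinks it's worth recording exactly what the machine reports now, so you can confirm afterward that the swap (and, later, the EFI hack) took. Here's a minimal sketch using only stock OS X command-line tools, wrapped in Python purely for convenience; the file name check_mac_pro.py is just my own placeholder:

    # check_mac_pro.py - quick sanity check before/after a CPU swap.
    # Uses only stock OS X tools (sysctl, system_profiler).
    import subprocess

    # Exact CPU model string, e.g. "Intel(R) Xeon(R) CPU X5580 @ 3.20GHz"
    cpu = subprocess.check_output(["sysctl", "-n", "machdep.cpu.brand_string"])
    print("CPU:", cpu.strip().decode())

    # Model identifier and Boot ROM version (the EFI hack is what makes a
    # MacPro4,1 report as a MacPro5,1 and unlocks the 56xx series)
    hw = subprocess.check_output(["system_profiler", "SPHardwareDataType"])
    for line in hw.splitlines():
        if b"Model Identifier" in line or b"Boot ROM" in line:
            print(line.strip().decode())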
Also, would a memory upgrade be of any benefit to me? I currently have 16 GB of 1066 MHz memory in my Mac Pro (8 x 2 GB).
Yes, especially for 3D and video apps, and even more so if you go the Octane route, because Octane taxes only the GPU. So while the GTX 580 is taxed to the max rendering a 3D project (especially if you still use your GT 120, which you can also use for increased interactivity while setting up your 3D scenes; see the Octane videos to get a better idea of what I'm referring to), you could simultaneously be rendering video with, e.g., Premiere CS6 on the CPU. For such work you'll never regret having maximum memory. Should you also upgrade the CPU to the 56xx series (after applying the EFI hack), your system can also take advantage of the faster 1333 MHz memory.
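If you want to double-check what's installed before buying, the same trick works for memory; this sketch (again, the file name is my own placeholder) just filters system_profiler's memory report down to the per-slot size and speed lines:

    # check_memory.py - list installed DIMM sizes and speeds on OS X.
    import subprocess

    mem = subprocess.check_output(["system_profiler", "SPMemoryDataType"])
    for raw in mem.splitlines():
        line = raw.strip().decode()
        # system_profiler reports one "Size:"/"Speed:" pair per DIMM slot
        if line.startswith("Size:") or line.startswith("Speed:"):
            print(line)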
In terms of the GPU, the first person to reply to my post noted that the Quadro line of Nvidia cards is more stable than the GeForce line, so wouldn't the 580 suffer the same issues he noted?
Since I disagree with the first premise, i.e., that the Quadro line of Nvidia cards is more stable than the GTX line, you should easily be able to discern my answer to the projection regarding the GTX 580. What can make any video card unstable is overtaxing it, denying it the power it craves, or overclocking it to the point that thermal issues arise.
Also, you noted the requirement for outside power on a 580/680. In relation to that, would I be able to get that additional power without installing something outside the case?
That was in reference to one specific card that had lots of RAM as far as GTXs go, and it had already been "touched" by MacVideoCards. That a card is a GTX 580, 680, or whatever really says nothing about what you need to fully hook it up; there are so many varieties that it can make one's head spin. You should be able to find one with just two 6-pin power connectors, but it probably won't be one of the faster ones. Those require more power, so they may need one 6-pin and one 8-pin connector, or, as in the case of mine, one 6-pin and two 8-pin connectors. But keep in mind that I now build my own systems, so that doesn't pose a problem for me.
Or would solutions like that lead to increased heat and noise, which I should avoid?
One with common sense are you. You got it. As a general rule, more speed -> more heat -> more cooling needed -> more noise, unless you're building your own and pick the quietest fast fans or go with H2O cooling or something more exotic. In the end, though, this is a personal/subjective matter.
Also, my displays are currently two Apple Cinema Displays, a 24-inch and a 27-inch. How would I run these two through a card like a 580, which seems to have only one DisplayPort? I remember seeing adapters from dual DVI, but I remember them being fairly expensive.
Yes, adapters can be expensive; but keep in mind that you may be able to find a GTX 580, already hacked, that has two DisplayPorts, or you may choose to use another or an additional video card for interactivity while setting up the 3D scenes and/or for CPU-based rendering of 3D or video. I use GT 640s and ATI 4890s in this way.
Also, since the 680 is available with 1 GB more memory and seems to have a higher memory clock speed, shouldn't it be faster even in 3D applications?
If you choose to explore the Octane route, contact Otoy and get the latest info about how they're faring in optimizing it for the GTX 6xx series (and if you don't mind, please update all of us).
Also, I noticed someone had posted on these forums that OS X requires some sort of hack to see more than 2 GB of memory on a video card. Is that still the case, and if so, how do I get around it?
Yes. But don't rush it. Get with (i.e., PM) MacVideoCards when time allows. He knows his stuff and can point you in the right direction. There is also a thread on this forum where he walked others step by step through the process of making that hack, but I cannot find it at the moment.
MacVideoCards seems to be down for the holidays, so I will look him up after the holidays. Since I am trying to teach myself, I hopefully won't hit performance-related issues for a couple of weeks.
Obviously a wise choice for a wise one.
Lastly, you mention Octane as a renderer. Are there any other renderers that benefit from a fast GPU, or would a fast GPU be ideal for most of them? ... .
In addition to the native renderer usually found in and specific to a particular 3d app, there are, among others:
3delight,
AIR,
Arion Render,
FinalRender Stage 2,
Fryrender,
Indigo Renderer,
LuxRender,
Maxwell Render,
Mental Ray / Iray,
Octane Render,
Pixar's RenderMan and
VRay.
Here is where Google can really help you out.
Some may not take any advantage of the GPU. Some are hybrids that use both the GPU and the CPU simultaneously, and then there's Octane, which relies completely on the GPU. To date, it's my understanding that Octane is the fastest of them all for that one reason. Moreover, some of the hybrids give you the advantage of being able to render much larger scenes than your video card may allow on its own, but they come with the downside that, since they're also using the CPU, you cannot multitask as much because the render is now taxing the CPU too.
An interesting free multi-platform 3D app, Blender [ http://www.blender.org ], has a built-in renderer (Cycles) that takes advantage of CUDA (and, to a much lesser extent, ATI Stream cores) [ http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/GPU_Rendering ]. It isn't anywhere near as fast as Octane, but then the same applies to all of the others I've listed above.
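If you want to try Cycles on the GPU without even opening the UI, here's a minimal sketch of a setup script; the file names are my own placeholders, and the preference attribute names assume the Blender 2.6x Python API linked above, so check the docs for your version:

    # gpu_render.py - point Cycles at the GPU before a headless render.
    # Assumes the Blender 2.6x Python API; attribute names may differ
    # in other versions.
    import bpy

    prefs = bpy.context.user_preferences.system
    prefs.compute_device_type = 'CUDA'        # expose CUDA devices to Cycles
    bpy.context.scene.cycles.device = 'GPU'   # render on the GPU, not the CPU

You'd run it headless with something like: blender -b scene.blend --python gpu_render.py -f 1 (i.e., render frame 1 of scene.blend in the background).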
N.B. Keep in mind that you can use multiple GPUs with multiple renderers.