... . I am interested in a new build. I can also update my 2,1 at some point, but initially, I think I'd like to throw my money into some very fast CPUs.
If you're interested in a new build, then my recommendations above still stand.
I use Luxology's Modo for rendering. The company has publicly elected to avoid developing for GPU rendering (at least for the foreseeable future), so all rendering tasks rely on CPU power. Of course, I would expect to budget $500-600 for a fast graphics card, and CUDA for other apps would be good to have.
What Luxology hasn't done yet regarding GPU rendering has already been done by others, such as Otoy with their Octane software and its free, community-developed exporter plugins for Autodesk 3D Studio Max, Autodesk Maya, Autodesk Softimage XSI, Blender, Maxon Cinema 4D, SketchUp and, most importantly for you, Modo for Luxology users [ See
http://render.otoy.com/forum/viewtopic.php?f=34&t=22098 and
http://render.otoy.com/features.php ; click each term in the black panel on the left at the top of the page for more info]. That's why I would still recommend going the CUDA route first.
A second issue I deal with is that Modo's network rendering is buggy and inefficient. A slave machine of equal power only contributes about 50% of its potential power during rendering. So, I will benefit from having as many CPU cores and threads as possible under one hood. Having said that, how does your recommendation stand?
Generally it's true that having many CPU cores and CPU threads under one hood is a good thing, but not so much so if you go the Octane route: "Octane Render uses the untapped muscle of the modern GPU compared to traditional, CPU based engines. With current GPU technology, Octane Render can produce final images 10 to 50 times faster than CPU unbiased render engines, or even more with multiple GPUs (depending on the GPU(s) used)." With Octane, it's more important to have many GPU cores and GPU threads, preferably, but not necessarily, under one hood. So my recommendations above still stand.
Would there even be a 4-socket system that could run OS X? From what I have read, that means the E5-4600 series? Pretty pricey.
Generally, an E5-4600 series system will run Linux (the fastest OS for the four-socket systems {and, importantly, the Xeon Phi runs its own version of Linux installed on the card}) and Windows Server (the priciest version of Windows), but not OS X yet. So, all in all, the 4-socket systems are usually much more expensive than going the CUDA route first.
What about a dual E5-2687W system? Is that faster than what you recommended?
I also have an E5-2687W system. My E5-2687W system is slower than the EVGA SR-2 under OS X, but faster than most other EVGA SR-2s under Windows. It's just much more expensive for the 5-20% increase in performance over tweaked X5680s that you'd see. However, please note that my comparisons of relative speed pit non-tweaked E5-2687Ws against tweaked X5680s. So the good news is that the E5-2687Ws are fast untweaked, but the bad news is that for their much higher price they can be tweaked only a very small amount (1-4%). In fact, for that $1.8k price differential (between dual X5680s and dual E5-2687Ws), you could purchase two GTX 580s and an external chassis from B&H Photo Video to house another dual-slot PCI-E card [
http://www.bhphotovideo.com/c/produ...A211A_NB_3_SLOT_PCIe_EXPANSION_ENC_EXC34.html ] and still pocket some change. Also, remember that CUDA is currently being used by Adobe and others in the creative app market to accelerate compute-intensive functions, and with apps like Octane, CUDA cards scale linearly.
If I did end up one day with a Phi, would I need to plan now to provide sufficient power in the power supply?
Unlike for CUDA cards, the Xeon Phi literature says, "There is no auxiliary 2x4 or 2x3 power connector on the card." So the answer to your question is included in my recommendation above, namely: get a 1200 watt or greater power supply to power the motherboard, and get an FSP Group Booster X5 450W Independent/Supplementary SLI Certified CrossFire Ready Active PFC Dedicated Multi GPU Power supply to power {or at least help power} your video cards. That relieves the main PSU of that chore so that it can better support the power requirements of the CPUs, memory, drives and PCI-e cards, like the Xeon Phi, that don't take extra power through separate PCI-e power plugs.
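To make the power math concrete, here is a rough budget sketch in C. The wattage figures are approximate TDPs I'm assuming for illustration (roughly 244W per GTX 580, up to 75W supplied by each PCI-e slot, 130W per X5680, and a loose 150W allowance for the board, memory and drives); check your actual parts' specs before you size anything.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only -- check your parts' real specs. */
        const int cpu_tdp_w       = 130;  /* approx. TDP of one X5680 */
        const int num_cpus        = 2;
        const int gpu_tdp_w       = 244;  /* approx. TDP of one GTX 580 */
        const int num_gpus        = 2;
        const int slot_power_w    = 75;   /* max a PCI-e x16 slot itself supplies */
        const int board_mem_hdd_w = 150;  /* rough allowance for board, RAM, drives */

        /* The Booster X5 feeds the GPUs' auxiliary 6/8-pin plugs; each GPU
           still pulls up to ~75 W through its slot from the main PSU. */
        int gpu_aux_draw  = num_gpus * (gpu_tdp_w - slot_power_w);
        int main_psu_draw = num_cpus * cpu_tdp_w
                          + num_gpus * slot_power_w
                          + board_mem_hdd_w;

        printf("GPU auxiliary draw (Booster X5): ~%d W of 450 W\n", gpu_aux_draw);
        printf("Everything else (main PSU):      ~%d W of 1200 W\n", main_psu_draw);
        printf("Main PSU headroom left for a Phi or other slot-powered cards: ~%d W\n",
               1200 - main_psu_draw);
        return 0;
    }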
My main point might be better made with some questions: How much would it cost you to buy 10 to 50 additional CPUs and put them to work in a rendering environment? Could you do so for less than the price of an EVGA GTX 690 ($1K) or a GTX 580 (<$700 for the most expensive ones that I've seen in the last 4 months)? Don't forget the space, thermal and electrical requirements and the cost of electricity for all of those systems. Cost is why I recommend going the CUDA/Octane route with an EVGA SR-2 build (or an EVGA SRX build if you want to use Sandy Bridge [or, a little later, Ivy Bridge] chips
*/) until Xeon Phi can stand steadily on its own legs and not lean excessively against the side of your jacket holding your wallet. I'll freely admit it, "I'm cheap."
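Just to put rough numbers on those questions, here is a back-of-the-envelope sketch in C. The 10x-50x figures come straight from Otoy's claim quoted above, and the per-node price is a made-up placeholder (I'm assuming $1,500 for a cheap render node purely for illustration); swap in your own quotes before drawing conclusions.

    #include <stdio.h>

    int main(void) {
        /* Otoy's quoted speed-up range for Octane on one GPU vs. a CPU engine. */
        const int speedup_low = 10, speedup_high = 50;

        /* Prices mentioned above: GTX 690 ~ $1,000; GTX 580 < $700. */
        const double gtx690_cost = 1000.0, gtx580_cost = 700.0;

        /* Hypothetical cost of one extra CPU render node, ignoring space,
           cooling and the electric bill. Adjust to taste. */
        const double cpu_node_cost = 1500.0;

        printf("CPU nodes needed to match one GPU: %d to %d\n", speedup_low, speedup_high);
        printf("Cost of those nodes: $%.0f to $%.0f\n",
               speedup_low * cpu_node_cost, speedup_high * cpu_node_cost);
        printf("Cost of one GTX 690: $%.0f; one GTX 580: under $%.0f\n",
               gtx690_cost, gtx580_cost);
        return 0;
    }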
*/ "Developers ... [have] one of several routes [to program the Xeon Phi]:
[1] Using pragmas to augment existing codes so they offload work from the host processor to the Intel Xeon Phi coprocessor(s)[;]
[2] Recompiling source code to run directly on coprocessor as a separate many-core Linux SMP compute node[;]
[3] Accessing the coprocessor as an accelerator through optimized libraries such as the Intel MKL (Math Kernel Library)[; or]
[4] Using each coprocessor as a node in an MPI cluster or, alternatively, as a device containing a cluster of MPI nodes." [
http://www.drdobbs.com/parallel/programming-intels-xeon-phi-a-jumpstart/240144160?pgno=1 ]
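For a taste of what route [1] looks like in practice, here is a minimal offload sketch in C using Intel's compiler pragmas. It assumes Intel's Composer XE compiler (icc) with MIC support installed; the function and array names are mine, made up for illustration, not from Modo, Octane or any other app.

    #include <stdio.h>

    #define N 1024

    /* Mark the function so the Intel compiler also builds a coprocessor version. */
    __attribute__((target(mic)))
    void scale(float *data, int n, float factor) {
        for (int i = 0; i < n; i++)
            data[i] *= factor;
    }

    int main(void) {
        static float data[N];
        for (int i = 0; i < N; i++)
            data[i] = (float)i;

        /* Offload the call to the first Xeon Phi card; "inout" copies the
           array to the card before the call and back afterwards. */
        #pragma offload target(mic:0) inout(data:length(N))
        scale(data, N, 2.0f);

        printf("data[10] = %f\n", data[10]);
        return 0;
    }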
Since the Xeon Phi has its own Linux OS, it can be programmed to behave as a separate system, in which case it shouldn't matter whether your CPUs are Sandy/Ivy Bridge or Westmeres/Nehalems. However, if your app developer programs the Xeon Phi to act as a coprocessor, then you might be better off with an EVGA SRX (which uses the E5 chips) for better compatibility.