OctaneRender performance of the EVGA GTX 480 SC/1.5G, revisited.

Here's how my memory-tweaked EVGA GTX 480 SC/1.5Gs perform in OctaneRender 1.20, earning a TE of .613 per card. So two tweaked EVGA GTX 480 SC/1.5Gs are equivalent in Octane rendering performance to 1.226 reference design (RD) Titans, and all four of them are equivalent to about 2.45 RD Titans.
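In case the arithmetic is useful to anyone, here's a minimal sketch of how I derive those numbers, assuming the 95-second Bare Feats Titan run as the TE baseline (see the TE footnote further down the thread) and OctaneRender's near-linear scaling across identical cards:

Code:
# Back-of-envelope Titan Equivalency (TE) arithmetic as used in this thread.
# Baseline: the 95 s OctaneRender benchmark time Bare Feats measured for one
# reference design (RD) Titan.

TITAN_BASELINE_SECS = 95.0

def titan_equivalency(render_secs: float) -> float:
    """TE = baseline Titan time divided by this rig's render time."""
    return TITAN_BASELINE_SECS / render_secs

# Example: a rig that renders the benchmark in 75 s (the 480 duo + GT 640
# rig described later in the thread) has a TE of 95 / 75, about 1.27.
print(f"75 s rig: TE {titan_equivalency(75.0):.2f}")

# Per-card TE scales roughly linearly across identical cards, so:
per_card_te = 0.613            # one tweaked GTX 480 SC/1.5G
for n_cards in (1, 2, 4):
    print(f"{n_cards} x GTX 480 SC/1.5G ~ {n_cards * per_card_te:.3f} RD Titans")
# -> 0.613, 1.226, 2.452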
 

Attachments: OR1.20_1xGTX480Capture.JPG, OR1.20_2xGTX480Capture.JPG
Tutor, I'm still pondering my options for maximizing my CPU-related performance right now and would like to hear your opinion on this one:

For a workstation-only rig I'd prefer something along the lines of the E5 10-12 core systems (for Maxwell Render); since I need good single-core performance as well (Adobe CS), I'm leaning towards 10 cores because my wallet says so!

Anyway, because I could use a new 'real' working laptop as well (my MBA doesn't qualify as that), I was thinking about getting a refurb 2.4/2.7 early-'12 rMBP (quad-core) alongside something like the 4930K (could even try overclocking a wee bit :D). I could then use the rMBP as a render node if need be (I actually doubt that, tbh). This would save me some real bucks and add lots of flexibility/mobility to the setup. The question is whether the 1-gig 650M video card would cripple my possibilities... I think the 16 gigs of RAM are quite sufficient. Do you think that's a good plan for an amateur enthusiast?
 

It seems like a reasonably good plan to me. I've purchased a lot of my deskside and portable Macs as refurbs and have never regretted any of those purchases. A 1-gig 650M video card in a laptop is better than any of mine, so I can't conceive of it being crippling for an enthusiast system.
 
Thanks! Not going to pretend that adding a laptop as a node is something I look forward to, but since I doubt I'll actually need it anyway, it's still a good option to have, I guess. My licence includes 5 nodes, so I may end up adding my MBA just as well. No, just kidding. I'd always thought the VRAM of a node has to be able to fit the whole scene; that's why I got curious whether 1 gig is sufficient or not.
 
I'm looking into upgrading the processor in my Mac Pro 5,1 (currently using the 2.8 quad) and was looking at replacing it with an i7. Somewhere on the forum it appears that some users replaced their CPU with an i7-980X. It looks like that's the last set of CPUs that use the LGA 1366 socket; is that right?

If I want to use a newer six-core CPU would I have to also change the socket? Is that even possible?

Basically, my machine isn't really used for work anymore, so I'm okay moving away from the XEONs and finding something that's faster and more affordable.

Forgive me if this is common knowledge, it's not my area of expertise!
 
An i7 will work. They don't support ECC RAM, however, so it's possible you'd have to swap that out. I've heard that some people haven't had to, though.
 
I actually swapped out the ECC RAM a while back. But I'm just not sure if my only choice is the i7-980X; it's a nearly four-year-old CPU as far as I can tell. I'd prefer to get something newer.
 
You can't use anything newer, but there are four hex-core Core i7s that will fit: the 970, 980, 980X and 990X. Speed is not that different, so you may want to go with the first deal you find (I went for a 980). Note that RAM will run at 1066 MHz with those Core i7s (not 1333 MHz as with some Xeons).
 
If I want to use a newer six-core CPU would I have to also change the socket? Is that even possible?

The socket is an integrated part of the motherboard; you can't change it on any Mac or PC. As newer CPUs are released, so are their compatible motherboards. Unfortunately, with Apple you get what you're given, and there's no scope for swapping things out, as each model is a very specifically designed system.
 
With the aid of Tiamo, those of us with early Mac Pros got a new lease on life.

Apple knows full well that GPU parallel processing power is growing much faster than CPU parallel processing power, and at lower cost. Why did Apple give the nMP two GPUs and not (or in addition to) two CPUs? And why has Apple prematurely pronounced our early Mac Pros, i.e., the 2006 and 2007 Mac Pros, DOA for Mavericks, even for those of us who had purchased the top end of each line (as I had done)?

I had hackintoshed my three 2007 Mac Pros until Tiamo [ https://forums.macrumors.com/threads/1598176/ ] gave us what Apple could have given us: an easy way to run Mavericks on our early Mac Pros. Sure, Mavericks doesn't increase the speed of our 2006/2007 hardware, but we may be able to upgrade that hardware to better suit our needs, and what about new applications (or older ones) that will be optimized for Mavericks? Here's my analysis of one reason why this is important, and I've included an example of how one of my 2007 Mac Pro zombies performs in OctaneRender under Mavericks.

Nvidia claims that a Tesla K20(X) possesses the compute power of ten E5-2687W V1 single-CPU systems. Assuming Nvidia is correct in that comparison, a GTX Titan has as many, and faster, SP cores, and when unlocked it has as many (and, when tweaked, faster) DP cores. The DP cores made more of a difference to 3d rendering in OctaneRender version 1.0 than in later versions, which Otoy has optimized to run better on the Kepler GPUs (more SP units, but fewer/less powerful DP units, than Fermi). Bare Feats ran an OctaneRender test with a Titan, and that Titan rendered the OctaneRender benchmark scene in 95 secs (1 min. 35 secs.). One of my DOA 2007 Mac Pros, which has a GTX 580C/3G and a GTX 680/4G, renders that same benchmark scene in 79 secs (1 min. 19 secs.), i.e., as fast as 1.20 Titans (95 / 79 = 1.2025). So excuse me if I find it hard to figure out why I shouldn't be allowed to run Mavericks on a top-of-the-line 2007 system that I can give, at a minimum, the parallel compute power of 12 E5-2687W V1 single-CPU systems (10 x 1.2 = 12); and if I put two of my Titans in one of my 2007 Mac Pros, it would yield the parallel compute power of at least 20 E5-2687W V1 single-CPU systems. Moreover, compare how my 2007 Mac Pro 2,1 scores in Geekbench 3 vs. a 2008 Mac Pro 3,1 with the same number of processors running at the same 3.0 GHz speed [ http://browser.primatelabs.com/geekbench3/compare/201294?baseline=254478 ]. Maybe somebody else, other than Apple, can better explain to me why I should retire my 2007 Mac Pros, or leave them on Lion as the base for future software applications, when every 2008 Mac Pro gets an open invitation to upgrade to Mavericks.
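To make that arithmetic explicit, here's a back-of-envelope sketch; it assumes Nvidia's ten-E5-2687W claim for the K20(X) carries over to a Titan-class card, as argued above:

Code:
# Rough equivalence arithmetic for the paragraph above. Assumptions taken from
# the thread: the Bare Feats Titan renders the OctaneRender benchmark in 95 s,
# and Nvidia's claim that a K20(X) equals ten E5-2687W V1 single-CPU systems
# is taken to hold for a Titan-class card as well.

BAREFEATS_TITAN_SECS = 95
E5_SYSTEMS_PER_TITAN = 10                 # per Nvidia's K20(X) claim

mp2007_secs = 79                          # GTX 580C/3G + GTX 680/4G, 2007 Mac Pro
titan_equiv = BAREFEATS_TITAN_SECS / mp2007_secs      # ~1.20
e5_equiv = titan_equiv * E5_SYSTEMS_PER_TITAN         # ~12 E5-2687W V1 systems

print(f"TE ~ {titan_equiv:.2f}, i.e., roughly {e5_equiv:.0f} E5-2687W V1 systems")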

BTW - I've unhackintoshed all three of my 2007 Mac Pros and did the brain dead easy Mavericks upgrade Tiamo has revealed.
 

Attachments: MP2007GTX680_580AboutScreen.png, MP2007GTX680_580.png, MP2007GTX680_580onOctane.png
How might the D300 and D700 GPU cards in the nMP perform as OCL rendering cards?

If you're wondering how cards that identify themselves in Luxmark as D300 and D700 perform in LuxRender, then visit this site:

http://www.luxrender.net/luxmark/ .

Under "Search" on the left hand side of the screen, click on "Results Search," then in the Search Results window that pops up, scroll down to Apple - AMD Radeon HD - FirePro D300 Compute Engine [20 units @ 1100MHz] and click on it to view the results; and next scroll down to Apple - AMD Radeon HD - FirePro D700 Compute Engine [32 units @ 1200MHz] and click on it to view the results. There you'll find that thse D300s score an average of 1,553 points on the Sala render benchmark and that those D700s score an average of 19,357 on the LuxBall HDR render and an average of 2,462 points on the Sala render benchmark. You can then make comparisons with other similar GPUs.
 
Apple knows full well that GPU parallel processing power is growing much faster than CPU parallel processing power, and at lower cost. Why did Apple give the nMP two GPUs and not (or in addition to) two CPUs? And why has Apple prematurely pronounced our early Mac Pros, i.e., the 2006 and 2007 Mac Pros, DOA for Mavericks, even for those of us who had purchased the top end of each line? ...

I think Apple simply went for the simplest deprecation strategy, cutting off anything with a non-64-bit CPU (Core Duo), but unfortunately that swept up EFI32 machines too. Understandable from a processor standpoint for the low-end models, but the unfortunate casualty of this process was the MP 1,1-2,1, which, as you show, is still up to doing a job.
 
A Cheap OctaneRender CUDA Rig For Small Jobs, Tinkering or Occasional 3d App Use

A GTX 780 Ti SC ACX Octane Rendering Equivalent For Old Mac Pros For Less & Without OCL Issues

Take a 2007 Mac Pro (or a 2006 Mac Pro with the EFI hack [ see post #1 in this thread - https://forums.macrumors.com/threads/1333421/ ] to convert the 2006 Mac Pro into a 2007 Mac Pro; this also allows adding somewhat faster CPUs), get Mavericks, and install it as Tiamo recommends [ https://forums.macrumors.com/threads/1598176/ ]. Get two <$200 EVGA B-stock [ http://www.evga.com/products/ProductList.aspx?type=8 ] GTX 480/1.5Gs, a new or B-stock <$100 EVGA GT 640/4G, and an FSP 450W Booster [ http://www.ebay.com/itm/FSP-Booster...-/231094146472?pt=PCA_UPS&hash=item35ce48d5a8 ]. The EVGA GT 640 is a great card to drive your display while setting up your scene, and it helps to improve responsiveness if you de-select it in OctaneRender's preferences while you're setting up or otherwise tweaking your scene. If you shop wisely, you can get all of this hardware for about $600, which is less than the current price of an EVGA GTX 780 Ti SC ACX (currently, Mac users running Mavericks with the 780 Ti cards, as well as recent Titans, which also have the GK110B chip, are experiencing OCL issues). Install the Booster (to power the two GTX 480s) in the DVD bay, above or preferably below your DVD drive (I've removed my DVD drive and installed 2 HDs there for a 5-drive RAID 0). Then get a seat of OctaneRender for 359 € (currently about $492 US) for your favorite 3d application [ http://render.otoy.com/shop/ ].

As shown in post # 898, above, an EVGA 780 Ti SC ACX has a Titan Equivalency ("TE") of 1.32 (95 sec. for Bare Feats' Titan / 72 sec. = 1.3194, or about 1.32). Using the GTX 480 duo and a GT 640, you get a TE of 95 / 75 sec. = 1.2667, or about 1.27 (just slightly less), and will save yourself at least $200 (roughly $130 more for the EVGA 780 Ti SC ACX itself, plus about $100 for an FSP Booster if one is purchased for the GTX 780 Ti SC ACX alone). Also, as is the case with the 780 Ti, this setup renders faster than a one-Titan rig and costs less than a Titan ($1k base).
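For anyone weighing the two routes, here's a rough cost-per-Titan-equivalent comparison; the dollar figures are just the thread's approximate 2014 prices (about $600 for the 480-duo bundle including the booster, and roughly $730 for the 780 Ti SC ACX plus about $100 for its own booster), so treat them as ballpark only:

Code:
# Rough cost-per-Titan-equivalent comparison using the approximate prices
# quoted above (2014 ballpark figures, not current prices).

options = {
    # name: (approx. cost in USD for the GPUs + booster, Titan Equivalency)
    "2x GTX 480/1.5G + GT 640 + FSP booster": (600, 1.27),
    "GTX 780 Ti SC ACX + FSP booster":        (830, 1.32),
}

for name, (cost, te) in options.items():
    print(f"{name}: TE {te:.2f}, about ${cost / te:,.0f} per Titan equivalent")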

Caveat: If you’re into rendering large scenes and plan to have just one CUDA rig, you should probably go for GTX cards with a larger frame buffer than the GTX 480/1.5G.
 

Attachments: MP2007dualGTX480s_GT640.png
Thanks for the information on the 1,1 Mac Pro. Do you think a Titan has limitations in a 1,1 Mac Pro for Octane rendering? Could you do a little test? Thank you.
 
Short Answer: Not just because the Mac Pro is a 2006 (1,1 version).

Long Answer: The original Titans (GK110A) are just a little slower in OctaneRender on all Mac Pros in OS X because you can't tweak the memory as you can in Windows to improve performance; but comparing the Titans at factory settings, the original Titans are just as fast on all Mac Pros as they are in Windows if you're measuring performance with OctaneRender post-version 1.0. Version 1 and the betas relied more heavily on double precision floating point ("DP") prowess than later versions of OctaneRender, which are optimized for Kepler GPUs (high single precision ("SP"), but low DP, prowess). Newer Titans now have the GK110B (the same chip as the GTX 780 Ti). I suspect that is because the Titans will soon have 2880 CUDA cores enabled, just as the 780 Tis already have, to keep the Titan brand competitive. Titans currently can have their greater DP prowess unleashed with the Nvidia Control Panel in Windows, and they have that 6G frame buffer. But for now, Titans have fewer CUDA cores enabled than the 780 Tis. The GK110B currently causes OCL problems in Mac Pros running Mavericks, but CUDA performance isn't affected. The surprising aspect of the scope of this problem, however, is that apps which you might not have expected to be partially dependent on OCL (like Preview) are affected negatively. So unless you're getting an original Titan, the current Titans have the same caveat as the GTX 780 Tis. But the 780 Tis currently are about 30% faster than the Titans with either the GK110A or GK110B. Moreover, the 780 Tis cost about $300 less, for the reference design and Super Clocked ("SC") models, than their Titan comparators.
 
Do you think one of those FSP X5 Booster optical bay PSUs could power a 780Ti or Titan in a 5,1 Mac Pro? Those look very interesting. Never been a fan of the external PSU hack.
 
Short answer: Absolutely yes.

Long answer: It could simultaneously power up to two of those cards, whether two Titans, two 780 Tis, or one 780 Ti plus one Titan, if you've got the slot space for a pair - identical or mixed [one of the two cards could also be a Radeon 7970 GHz Ed. or R9 280(X) or 290(X) if you really like mixing it up, because while Nvidia has CUDA and AMD doesn't, AMD's rendition of OCL kicks Nvidia's rendition of OCL to the curb]. I'm powering 2 GTX 480 SCs in one of my 2007 Mac Pros, and a GTX 480 SC plus a 580 SC in another, with the FSP X5 Booster. The significance of this is that those older GTXs each gobble more power than newer cards like the Titan and the 780s, even in their SC renditions. But see the current caveat regarding GPUs with the GK110B chip in the last few posts, i.e., 915 and 917.
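For what it's worth, here's the rough power-budget reasoning behind that answer as I'd sketch it; the assumptions are mine (about 250 W board power per Titan/780 Ti, up to 75 W of that drawn through the PCIe slot from the Mac Pro's own supply, with the booster feeding only the 6-pin/8-pin leads), not measurements:

Code:
# Rough power-budget check for running two GK110 cards off a 450 W booster.
# Assumptions (mine, not measured): ~250 W board power per Titan/780 Ti, with
# up to 75 W of that supplied through the PCIe slot by the Mac Pro itself, so
# the booster only has to cover the 6-pin/8-pin auxiliary connectors.

BOOSTER_WATTS = 450
SLOT_WATTS = 75            # PCIe slot limit per card
CARD_BOARD_WATTS = 250     # typical Titan / 780 Ti board power

cards = 2
aux_load = cards * (CARD_BOARD_WATTS - SLOT_WATTS)    # 2 x 175 = 350 W
headroom = BOOSTER_WATTS - aux_load                   # 100 W

print(f"Aux-connector load for {cards} cards: ~{aux_load} W "
      f"({headroom} W headroom on a {BOOSTER_WATTS} W booster)")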
 
Maximizing GPU CUDA 3d Compute Performance By GPU Selection

GPU Performance Review

I. My CUDA GPUs’ Titan Equivalency*/ (TE) from highest to lowest (fastest OctaneRender**/ Benchmark V1.20 score to lowest):

1) EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G) = TE of 1.319
2) EVGA GTX 690 / 4G = TE of 1.202
3) EVGA GTX Titan SC / 6G = TE of 1.185
4) EVGA GTX 590C = TE of 1.13

Titan that Bare Feats tested = TE of 1.0

5) EVGA GTX 480 SC / 1.5G = TE of .613
6) EVGA GTX 580 Classified (C) / 3G = TE of .594
7) Galaxy 680 / 4G = TE of .593
8) BFG Tech GTX 295 = As indicated below, this card's compute capability is deprecated with the latest version of OctaneRender, so I use it for ACS and Blender.
9) EVGA GT 640 / 4G = TE of .097
I use the EVGA GT 640 / 4G for video output (as a 4G frame buffer) and for interactivity only, when building and tweaking 3d scenes.

II. Processing Power GFLOPS***/

1) EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G) = 5,046/210
2) EVGA GTX 690 / 4G = 2× 2810.88 = 5,621.76
3) EVGA GTX Titan SC / 6G = 4,500/1,300-1,500
4) EVGA GTX 590C /3G = 2,488.3
5) EVGA GTX 480 SC / 1.5G = 1,344.96
6) EVGA GTX 580 Classified (C) / 3G = 1,581.1
7) Galaxy 680 / 4G = 3,090.43
8) BFG Tech GTX 295 = 1,788.480
9) EVGA GT 640 / 4G = 729.6

III. CUDA Compute Capability

CUDA compute capability (CCC), along with differences in GPU memory amounts, can affect what the renderer can actually do. The compute capabilities required for Octane features are listed below (a small lookup sketch follows the card list):

Compute Capability Limitations
1) CCC of 1.0: Octane Render Version 1.20+ not supported.
2) CCC of 1.1 or lower: no PMC kernel or matte material (shadow capture)
3) CCC of 1.1 -> 1.3: no PMC or matte material (shadow capture)
4) CCC of 2.0 and 2.1: no limitations
5) CCC of 3.0 and 3.5: no limitations

Compute Capability Of My Cards
1) EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G) = CCC of 3.5
2) EVGA GTX 690 / 4G = CCC of 3.0
3) EVGA GTX Titan SC / 6G = CCC of 3.5
4) EVGA GTX 590 Classified (C) / 3G = CCC of 2.0
5) EVGA GTX 480 SC / 1.5G = CCC of 2.0
6) EVGA GTX 580 C / 3G = CCC of 2.0
7) Galaxy 680 / 4G = CCC of 3.0
8) BFG Tech GTX 295 = CCC of 1.3
9) EVGA GT 640 / 4G = CCC of 3.5
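Purely as an illustration of how I cross-reference the two lists above, here's a minimal lookup sketch; the card names and feature notes are transcribed from this post, and nothing here actually queries the driver:

Code:
# Minimal lookup combining the two lists above: card -> CCC, and CCC -> Octane
# feature notes. Values are transcribed from this post, not queried from CUDA.

CARD_CCC = {
    "EVGA GTX 780 Ti SC ACX / 3G": 3.5,
    "EVGA GTX 690 / 4G": 3.0,
    "EVGA GTX Titan SC / 6G": 3.5,
    "EVGA GTX 590 Classified / 3G": 2.0,
    "EVGA GTX 480 SC / 1.5G": 2.0,
    "EVGA GTX 580 C / 3G": 2.0,
    "Galaxy 680 / 4G": 3.0,
    "BFG Tech GTX 295": 1.3,
    "EVGA GT 640 / 4G": 3.5,
}

def octane_notes(ccc: float) -> str:
    if ccc < 1.1:
        return "OctaneRender 1.20+ not supported"
    if ccc <= 1.3:
        return "no PMC kernel, no matte material (shadow capture)"
    return "no limitations"

for card, ccc in CARD_CCC.items():
    print(f"{card}: CCC {ccc} -> {octane_notes(ccc)}")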


*/ I use TE to compare how GPUs perform relative to the Titan that Bare Feats tested here: http://barefeats.com/gputitan.html . See post # 865, above [ https://forums.macrumors.com/showthread.php?p=18267072&coined#post18267072 ]. For example, my EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G), with a TE of 1.319, is 1.319 times as fast as the Titan that Bare Feats tested, but my EVGA GTX 480 SC / 1.5G, with a TE of .613, is only about 60% as fast as that Titan. Because OctaneRender scales essentially linearly across GPUs, two EVGA GTX 480 SC / 1.5Gs with a TE of .613 each would be 1.226 (or 2 x .613) times as fast as the Titan that Bare Feats tested.


**/ OctaneRender [ http://render.otoy.com/features.php ] has plugin support presently for:
ArchiCAD
Blender
Daz Studio
Lightwave
Poser
Rhino
3ds Max
AutoCAD
Cinema4D
Inventor
Maya
Revit and
Softimage;
and will soon have plugin support for:
SketchUp (in development)
Modo (in development) and
Carrara (in development). Otherwise, it imports .obj files and there are free community developed exporter scripts for:
Autodesk 3D Studio Max
Autodesk Maya
Autodesk Softimage XSI
Blender
Maxon Cinema 4D
Sketchup and
Modo.


***/ These figures come from Wikipedia [ http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units ], and it's my understanding that they reflect the performance of reference design GPUs. Almost all of my GPUs are non-reference designs, so their performance exceeds these figures. Moreover, only for recent GPUs does Wikipedia set forth both single precision and double precision floating point peak performance. The larger, first figure (when two are given) reflects single precision and the smaller, second figure reflects double precision.

Furthermore, since Mac users may find it difficult to tweak their video cards, I recommend that, if you seek peak performance, you purchase a card with a well-binned GPU that is superclocked at purchase. It's my experience that EVGA does this better than its competitors. It's also my observation that the performance differential is almost always a lot greater than the difference in cost. Finally, it's my observation that higher memory speeds have a significant impact on OctaneRender performance.

In Mavericks, GTX Titans (newer production) and GTX 780 Tis with the GK110B processor currently cause applications that call OCL to crash.
 
Short answer: Absolutely yes.

Long answer: It could simultaneously power up to two of those cards, whether two Titans, two 780 Tis, or one 780 Ti plus one Titan, if you've got the slot space for a pair...
Huh. I thought the 780Ti/Titan were 250W cards and the X5 is a 450W PSU. It could drive both?

Also, I was wondering, since you have experience with Octane: if I add a 3GB 780 to my current 4GB 670, how would Octane allocate that memory? I know it would limit the 670 to 3GB, but if I'm using the 670 as my display card, some of that 4GB is being used by OS X. So, in theory, could my 3D app be using 1GB of VRAM and still leave both cards 3GB of free memory for scene rendering? Does that make any sense?
 
Do you think one of those FSP X5 Booster optical bay PSUs could power a 780Ti or Titan in a 5,1 Mac Pro? Those look very interesting. Never been a fan of the external PSU hack.

Absolutely - two of them.

----------

Huh. I thought the 780Ti/Titan were 250W cards and the X5 is a 450W PSU. It could drive both?

Also, I was wondering, since you have experience with Octane: if I add a 3GB 780 to my current 4GB 670, how would Octane allocate that memory? I know it would limit the 670 to 3GB, but if I'm using the 670 as my display card, some of that 4GB is being used by OS X. So, in theory, could my 3D app be using 1GB of VRAM and still leave both cards 3GB of free memory for scene rendering? Does that make any sense?

I understand your question clearly, and the answer is: yes, but it might slow the performance of the 670 just a little.
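To spell out my reading of how Octane budgets memory across mismatched cards (this is my understanding, not Otoy documentation): the scene has to fit on each GPU selected for rendering, so the card with the least free VRAM sets the budget. A toy sketch with invented overhead figures:

Code:
# Toy bookkeeping for mismatched-VRAM CUDA rendering, on the assumption that
# the scene must fit on every GPU selected for rendering, so the smallest pool
# of *free* VRAM sets the scene budget. Overhead figures below are invented.

cards = {
    # name: (total VRAM in GB, VRAM already used for display / OS / 3d app)
    "GTX 670/4G (display card)": (4.0, 1.0),
    "GTX 780/3G (render only)":  (3.0, 0.1),
}

free_per_card = {name: total - used for name, (total, used) in cards.items()}
scene_budget = min(free_per_card.values())

for name, free in free_per_card.items():
    print(f"{name}: ~{free:.1f} GB free")
print(f"Usable scene budget ~ {scene_budget:.1f} GB (the smallest free pool)")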
 
Latest Benchmarks for AlphaCanisLupus1

Using Cinebench 15, my <$9k 32-core WolfPackPrime1 (Supermicro X9QR7-TF+/X9QRi-F+), with 4 used E5-4650 QBEDs, scored 3,791, and using Geekbench 3 it scored 71,010 and 71,367. The Geekbench 3 score is over 10,000 points higher than its Geekbench 2 score of 58,027. I've modified this server through its BIOS settings to emulate overclocking via the CPU Power Management Configuration settings, although there are no direct clock frequency controls. Before these modifications, my Cinebench 15 scores were under 3,600, with most of them in the high 3,400 to 3,500 range.

Using Geekbench 3, my <$9k 32-core WolfPackPrime0 (Supermicro X9QR7-TF+/X9QRi-F+) scored 71,112 and 71,691.

As I've stated before, I use Cinebench and Geekbench while tweaking my systems to achieve maximum CPU performance.
 
Prices have risen since the holidays, but there are options.

What kind of performance can you get for a self-build costing under $5K or $8.6K?

You can buy:
1) a used CPU like the $500 E5-4650 QBED [ http://www.ebay.com/itm/ES-Intel-Xe...2-7GHZ-20MB-QBED-ES-LGA2011-CPU-/131085370574 ];
2) a motherboard like the $251 GIGABYTE GA-X79-UP4 LGA2011/ Intel X79/ DDR3/ 4-Way CrossFireX&4-Way SLI/ SATA3&USB3.0/A&GbE/ ATX Motherboard [ http://www.superbiiz.com/detail.php?name=MB-X79-UP4 ];
3) a computer case like the $150 NZXT SWITCH 810 No Power Supply ATX Hybrid Full Tower Case (Gunmetal) [ http://www.superbiiz.com/detail.php?name=CA-SW810G ];
4) a PSU like the $288 LEPA G Series G1600-MA 1600W 80 PLUS Gold ATX12V v2.3 & EPS12V v2.92 Power Supply [ http://www.superbiiz.com/detail.php?name=PS-G1600MA ];
5) CPU cooler like the $90 CORSAIR Hydro Series H80i High Performance Water/Liquid CPU Cooler [ http://www.newegg.com/Product/Product.aspx?Item=N82E16835181031 ];
6) a set of memory like the $940 CORSAIR Dominator Platinum 64GB (8 x 8GB) 240-Pin DDR3 SDRAM DDR3 2133 Desktop Memory Model CMD64GX3M8A2133C9 [ http://www.newegg.com/Product/Produc...82E16820233363 ];
7) a couple of GPUs like two $720 XFX Triple D FX-799A-6NF9 Radeon HD 7990 6GB 384-bit x2 GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card [ http://www.newegg.com/Product/Product.aspx?Item=N82E16814150694 ], or 2 GTX 780 Ti's SC ACX for about the same price; and
8) a PCI-e SSD like a $1,129.99 960GB OWC Mercury Accelsior E2 PCI Express High-Performance SSD with eSATA Expansion Ports [ http://eshop.macsales.com/item/Other World Computing/SSDPHWE2R960/ ].

Total thus far is (500 + 251 + 150 + 288 + 90 + 940 + 1440 + 1,130 =) $4,789 (tallied in the quick script below) for
(a) a liquid-cooled, single-CPU Sandy Bridge system, but on an Ivy Bridge motherboard, with an 8-core CPU {that you can clock at about 2.90 GHz base (from 2.7) / 3.76 GHz Turbo Boost (from 3.5)};
(b) four GPU cores, each with a 3-gig frame buffer, all yielding a total of 16.4 Tflops SP and 3.788 Tflops DP for the OCL fans, or you could go the all-Nvidia CUDA/OCL route;
(c) 64 gigs of 2,133 MHz RAM; and
(d) 960 gigs of storage, spec'd at a peak of about 756 MB/s reads and 673 MB/s writes.
Buy your OS(es), mouse and keyboard of choice and whatever other internal storage you need.
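Here's the quick tally script referenced above, using the prices as quoted in the list (the GPU line is the two-card total):

Code:
# Quick tally of the sub-$5K build priced above (figures as quoted in the list).
parts = {
    "E5-4650 QBED (used)": 500,
    "GIGABYTE GA-X79-UP4": 251,
    "NZXT Switch 810 case": 150,
    "LEPA G1600-MA 1600 W PSU": 288,
    "Corsair H80i cooler": 90,
    "64GB Corsair Dominator Platinum 2133": 940,
    "2x XFX HD 7990 6GB (or 2x GTX 780 Ti SC ACX)": 1440,
    "960GB OWC Mercury Accelsior E2": 1130,
}
total = sum(parts.values())
print(f"Total: ${total:,}")    # -> Total: $4,789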

Now, if you had another $3.6K to spend on the system, you could buy a Sharp 32” PN-K321-4K Ultra HD LED monitor from here [ http://store.apple.com/us/product/HD971LL/A/sharp-32-pn-k321-4k-ultra-hd-led-monitor?fnode=53 ] or from wherever else the price suits you better.

Other resources: (1) http://rampagedev.wordpress.com/2014/01/05/updated-x79-dmg-released/ and (2) http://rampagedev.wordpress.com/dsdt-downloads/gigabyte-x79/ .
 