Are the 4600 series ES Xeons the same ones you're using in your WolfPackPrime0? Absolute bargain for building a 4x CPU system!

Had no idea about your 'emulated' underclocking technique with these systems. How effective is it compared to the 5600 series Xeons?

I was also waiting for you to pounce onto the GB3 stage - and what an entrance haha! 71,691 is massive! :eek:

I know there are other systems out there better than mine but so far I'm still holding on to the top 12 core score, for both OSX and Windows :)
 
Are the 4600 series ES Xeons the same ones you're using in your WolfPackPrime0? Absolute bargain for building a 4x CPU system!

They (E5-4650 QBED ES Xeons) are faster than the final released version (mine Turbo Boost up to 3.5 GHz, rather than 3.3 GHz) and are the thinkers for both WolfPackPrime0 and WolfPackPrime1. How else could I have built each of these 32-core systems for about $8K?

Had no idea about your 'emulated' underclocking technique with these systems. How effective is it compared to the 5600 series Xeons?

It's not nearly as precise to under- or over-clock a server via its BIOS power management features, and there are much greater limitations, but as Clint Eastwood (aka Harry Callahan) might have said: one's got to know the limitations of the tools one is given and use them as best as is possible. It's tweaking by manipulating watt-related parameters, as opposed to tweaking by adjusting the frequencies/steppings themselves. I call underclocking via this method "grossing under through the backdoor" and overclocking via this method "grossing up through the backdoor." Specifically, I do it through the parameters for controlling "Energy Performance Bias," using "Long Duration Power Limit," "Long Duration Maintained" and "Short Duration Power Limit." To underclock, I leave "Long Duration Maintained" at the default of 0 (thus not forcing the CPUs to run at or near base for any particular time), I manipulate the "Long Duration Power Limit" wattage downward from the "Factory Long Duration Power Limit" - which for WolfPackPrime0 and its mate WolfPackPrime1 is, at factory, 130 watts (the CPUs' TDP) - to keep the cores cooler and ready them for turbo sooner, and I manipulate the "Short Duration Power Limit" ever so slightly higher than the factory setting of a 1.2x differential (the "Recommended Short Duration Power"). This last adjustment provides a greater wattage power window for the turbo stage. I measure the effect of each tweak in Geekbench and Cinebench to achieve optimal settings. If you ever add a server to your systems, I'll help you to tweak it.
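For illustration only, here's a minimal sketch (in Python, with made-up wattage values) of the arithmetic behind that tweak under one reading of it: the factory short-duration limit sits at 1.2x the long-duration limit (130 W TDP -> 156 W), and the "grossing under" adjustment lowers the sustained wattage while nudging the short-duration factor slightly above 1.2 to widen the turbo window. The 110 W / 1.25 figures below are hypothetical examples, not my actual settings.

Code:
# Hypothetical illustration of the power-limit "backdoor" tweak described above.
# The 110 W / 1.25 values are made-up examples; the real settings live in the BIOS.

TDP_WATTS = 130.0             # E5-4650 TDP = factory "Long Duration Power Limit"
FACTORY_SHORT_FACTOR = 1.2    # factory "Recommended Short Duration Power" ratio

def power_window(long_limit_w, short_factor):
    """Return (long limit, short limit, turbo headroom) in watts."""
    short_limit_w = long_limit_w * short_factor
    return long_limit_w, short_limit_w, short_limit_w - long_limit_w

# Factory: 130 W sustained, 156 W short-duration burst, 26 W of turbo headroom.
print(power_window(TDP_WATTS, FACTORY_SHORT_FACTOR))   # (130.0, 156.0, 26.0)

# "Grossing under": drop the long-duration limit below TDP to keep the cores
# cooler, and raise the short-duration factor ever so slightly to widen the
# turbo window.
print(power_window(110.0, 1.25))                        # (110.0, 137.5, 27.5)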

I was also waiting for you to pounce onto the GB3 stage - and what an entrance haha! 71,691 is massive! :eek:

I decided to end last year (2013) by providing glimpses of the power of WolfPackPrime0 and WolfPackPrime1 under updated benchmarks. I haven't yet tested with GB3 either (a) WolfPack1 (EVGA-SR2 2x5680: CB11.5 - 24.7; 10/15/2011 GB2 - 40,100 OSX [ http://browser.primatelabs.com/geekbench2/500630 ]; GB2 - 35,322 Win [ http://browser.primatelabs.com/geekbench2/639653 ]) or (b) WolfPack2 (EVGA-SR2 2x5680: CB11.5 - 24.6; 10/15/2011 GB2 - 40,051 OSX [ http://browser.primatelabs.com/geekbench2/500492 - no owner attribution was attached]).



I know there are other systems out there better than mine but so far I'm still holding on to the top 12 core score, for both OSX and Windows :)

I thought that "HeavyMetal" was you. Congratulations! May you and your family have a New Year that provides health and prosperity beyond measure. Your friend always.:)
 
Thanks for thinking of me.

http://www.evga.com/articles/00813/#Kingpin - new EVGA 780 Ti Classified for you, Tutor!

However, EVGA's calling it "KIngpIn" implies that it's for Kings. The "FIeldpIn," when it drops, would appear to be more appropriate for me, for I've been fishing and hunting about eBay lately, gathering up relatively cheap, old GPUs for Octane Render. See my post no. 920, above. There're hidden surprises there - would you have guessed that performance-based order in toto? Now, guess how they'd rank by current purchase price, or by Titan Equivalency (TE) per Denar. Newer doesn't always mean better performance per Denar, especially when one has special-needs cases, like I do. That's my Denar's worth*/. So the Cheap Bad Man must remain true to his moniker and acquire, at least, some cheap "Bad" GPUs for the time being, or he'll be threshing and winnowing the film grain farther and for longer down that old and dusty farm road.

*/ At this moment one Macedonian Denar equals 0.022 US Dollars, i.e., just a little more than 2 cents.
 
For GPU computing needs I like systems with lots of PCIe slots.

Here’s how I gauge the CUDA performance of my GPUs per dollar spent, from lowest cost per TE to highest:

1) EVGA GTX 480 SC /1.5Gig (G) = TE of .613; Price = ~$200 (on Ebay) [ for each $1000 spent or with 5 of them you’d get a TE of 3.065 { 1000 / 200 = 5; 5 * 0.613 = 3.065 }];
2) EVGA GTX 590 Classified (C) / 1.5G per GPU = TE of 1.13; Price = ~ $400 (on Ebay) [ for each $1000 spent or with 2.5 of them you’d get a TE of 2.825 { 1000 / 400 = 2.5; 2.5 * 1.13 = 2.825 }];
3) EVGA GTX 580C / 3G = TE of .594; Price = ~ $300 (on Ebay) [ for each $1000 spent or with 3.33 of them you’d get a TE of 1.980 { 1000 / 300 = 3.333; 3.333 * 0.594 = 1.980 }];
4) EVGA GTX 780 Ti Superclock (SC) ACX / 3G = TE of 1.319; Price = $730 from EVGA [ for each $1000 spent or with 1.37 of them you’d get a TE of 1.807 { 1000 / 730 = 1.37; 1.37 * 1.319 = 1.807 }];
5) Galaxy 680 / 4G = TE of .593; Price = ~ $460 for comparable GPU at NewEgg [ for each $1000 spent or with 2.174 of them you’d get a TE of 1.289 { 1000 / 460 = 2.174; 2.174 * 0.593 = 1.289 }];
6) EVGA GTX 690 / 2G per GPU = TE of 1.202; Price = $1,000 from EVGA [ for each $1000 spent or with 1.0 of them you’d get a TE of 1.202 { 1000 / 1000 = 1.0; 1.0 * 1.202 = 1.202 }];
7) EVGA GTX Titan SC / 6G = TE of 1.185; Price = $1,020 from EVGA [ for each $1000 spent or with 0.98 of them you’d get a TE of 1.162 { 1000 / 1020 = 0.980; 0.980 * 1.185 = 1.162 }]; and
8) Titan Reference Design that Bare Feats tested = TE of 1.0; Price= $1,000 from EVGA [ for each $1000 spent or with 1.0 of them you’d get a TE of 1.0 { 1000 / 1000 = 1.0; 1.0 * 1.0 = 1.0 }].
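For anyone who wants to redo the ranking above with their own prices, here's a minimal sketch of the same arithmetic (TE per $1000 spent = (1000 / price) * TE per card); the TE values and approximate prices are the ones quoted in the list, and the sort simply reproduces the order shown.

Code:
# Sketch of the TE-per-$1000 ranking above: (1000 / price) * TE.
# TE values and approximate prices are the figures quoted in the list.
cards = [
    ("EVGA GTX 480 SC / 1.5G",         0.613,  200),
    ("EVGA GTX 590 Classified / 1.5G", 1.13,   400),
    ("EVGA GTX 580C / 3G",             0.594,  300),
    ("EVGA GTX 780 Ti SC ACX / 3G",    1.319,  730),
    ("Galaxy GTX 680 / 4G",            0.593,  460),
    ("EVGA GTX 690 / 2G per GPU",      1.202, 1000),
    ("EVGA GTX Titan SC / 6G",         1.185, 1020),
    ("Titan reference design",         1.0,   1000),
]

for name, te, price in sorted(cards, key=lambda c: (1000 / c[2]) * c[1], reverse=True):
    print(f"{name:32s}  TE per $1000 spent: {(1000 / price) * te:.3f}")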

Obviously, other factors will come into play. These can include your budget, the number of available double-wide PCIe slots that you have, and your needs, which can be influenced by business demands (such as typical project size) and your work style (such as the number and size of textures). Your clientele may also have a bearing on which card(s) appear(s) to be the best value for your money.

Additionally, this February we could see the introduction of the GeForce GTX 790, which may feature 10GB of memory, with dual 320-bit interfaces and 2496 CUDA cores per GPU [and cards like the GTX 860 - based on the mid-range GM107 GPU] [ http://videocardz.com/48610/nvidia-maxwell-details-revealed-ces-2014 ].

Lastly, there will soon be more credible portable GPU computing options. According to Videocardz, in February NVIDIA will be releasing its next mobile flagship - the GeForce GTX 880M, using the GK104, with 1536 CUDA cores and 8GB of memory [ http://videocardz.com/48633/nvidia-geforce-gtx-880m-rebranded-gtx-780m-8gb-memory ]. For those more interested in better portable OpenCL compute performance, AMD officially introduced its R7 M200 series at CES 2014, though currently such mobile graphics cards are only available in certain systems, including Alienware, Clevo, Lenovo and MSI [ http://videocardz.com/48667/amd-radeon-r9r7r5-m200-series-officially-announced ].
 
Can the GPU fold space and thereby fold time or what's up with the new signature?

What's the meaning of "... a total GTX Titan RD Octane Rendering TE of >55" in my signature?

The basic import of that statement is that the 19 systems referenced in my signature have a total Octane rendering power of more than 55 reference design (RD) GTX Titans. The RD Titan that I've chosen to use as a reference point for performance comparison is the one that barefeats used here: http://barefeats.com/gputitan.html. The import of having a TE of greater than 55 is that each GTX Titan is attributed with having a rendering/compute capability equal to, at least, 10 single-CPU E5-2687W V1 computer systems. To have a total TE of greater than 55 would equate to having a rendering/compute capability of approximately 550 (55 x 10) single-CPU E5-2687W V1 systems. So here's proof that GPUs can fold space by decreasing the space needed to contain that much rendering power, and proof that the GPU thereby folds or, at least, decreases rendering time. Additionally, those 19 systems have about 190 (mostly tweaked) CPU cores among them, for additional far-out rendering capability - all contained within just 19 computer cases. Rendering can take place separately, but also simultaneously, on the GPUs vs. the CPUs. That's big-time space and time warps. Accordingly,

Stephen Hawking,
Your science fiction [ http://www.hawking.org.uk/space-and-time-warps.html ] is no longer fictional.
Tutor
 
However, EVGA's calling it "KIngpIn" implies that it's for Kings. The "FIeldpIn," when it drops, would appear to be more appropriate for me, for I've been fishing and hunting about eBay lately, gathering up relatively cheap, old GPUs for Octane Render. See my post no. 920, above. There're hidden surprises there - would you have guessed that performance-based order in toto? Now, guess how they'd rank by current purchase price, or by Titan Equivalency (TE) per Denar. Newer doesn't always mean better performance per Denar, especially when one has special-needs cases, like I do. That's my Denar's worth*/. So the Cheap Bad Man must remain true to his moniker and acquire, at least, some cheap "Bad" GPUs for the time being, or he'll be threshing and winnowing the film grain farther and for longer down that old and dusty farm road.

*/ At this moment one Macedonian Denar equals 0.022 US Dollars, i.e., just a little more than 2 cents.

I thought Kingpin was a person: http://www.geforce.com/whats-new/articles/vince-kingpin-lucido
 
New cards to consider. The new 790 is supposed to have 4992 CUDA cores.

http://www.tomshardware.com/news/gtx-titan-black-edition-790,25839.html

... with 5 gigs of vram supporting each of the two graphics processors. To me, it's the more phenomenal of the next two high-end GTX cards for CUDA 3D rendering. The Titan Black Edition appears to be the same as the GTX 780 Ti, but with 6 vs. 3 gigs of vram and a much, much larger double precision floating point peak performance.
 
... with 5 gigs of vram supporting each of the two graphics processors. To me, it's the more phenomenal of the next two high-end GTX cards for CUDA 3D rendering. The Titan Black Edition appears to be the same as the GTX 780 Ti, but with 6 vs. 3 gigs of vram and a much, much larger double precision floating point peak performance.
Actually, the 790 will be 3GB per GPU for a total of 6GB. I like that the new Titan Black has more DP performance, but how much will that affect Octane? It was rewritten to better take advantage of Kepler chips and single-precision performance. How much DP work is still going on? Also, with cards like the 690 and soon 790, how does Octane see the RAM on these cards, as the individual size or combined size?
 
Unless you're rendering very large scenes, the GTX 790 may be The Card For Rendering.

Actually, the 790 will be 3GB per GPU for a total of 6GB.

I hope that the other players don't follow Asus's lead. I hope that they go with 5GB per graphics processor instead of 3GB each.

I like that the new Titan Black has more DP performance, but how much will that affect Octane?

Some definitely, but I question whether in Octane that will be enough to hold back the GTX 790 from achieving what the GTX 690 has achieved - and that is superior OctaneRender performance over the oGTX Titan.

It [Octane] was rewritten to better take advantage of Kepler chips and single-precision performance. How much DP work is still going on?

See prior response - except that the projected core difference between the Titan Black Edition and the GTX 790 will be even greater. But we must keep in mind that the code re-write only devalued the importance of greater DP performance; it didn't and can't entirely do away with it.

Here's how my GPUs line up in OctaneRender prowess relative to the old (o) reference design (RD) Titan:
1) EVGA GTX 780 Ti Superclock (SC) ACX / 3gig (G) = TE of 1.319 (2880 Kepler cores)
2) EVGA GTX 690 / 4G = TE of 1.202 (3072 Kepler cores)
3) EVGA GTX Titan SC / 6G = TE of 1.185 (2688 Kepler cores)
4) EVGA GTX 590C = TE of 1.13 (1024 Fermi cores)

Titan that Bare Feats tested = TE of 1.0 (slower clocks than mine)

5) EVGA GTX 480 SC / 1.5G = TE of .613 (480 Fermi cores)
6) EVGA GTX 580 Classified (C) / 3G = TE of .594 (512 Fermi cores)
7) Galaxy 680 / 4G = TE of .593 (1,536 Kepler cores)

Core, memory and bus speeds affect performance, as do core count and core kind. The amount of vram affects the size of scenes that can be handled, but doesn't have much of an impact on the speed at which they'll be handled. The rule of thumb has been that a Fermi core = ~ 3x a Kepler core. But not even all Fermi cores are equal (compare the Octane render performance of the GTX 480 with that of the GTX 580 - this is probably due to Nvidia's starting to fear that the GTX line was beginning, or might begin, to encroach too heavily on Tesla sales). The supremacy of Fermi cores is shown by the (post-Octane-rewrite) performance ranking, above.

The GTX 790 is projected to have 2x 2496 or 4992 cores. The nTitan will have the same number of cores as the GTX 780 Ti - 2880 cores. Greater speeds account for the GTX 780 Ti's small lead over the GTX 690. The 384 core lead that the GTX 690 has over the oGTX Titan is why the GTX 690 renders a little faster. With the 2,112 core lead that the GTX 790 will have over the GTX Titan Black Edition and GTX 780 Ti, the GTX Titan Black Edition and GTX 780 Ti will consume copious amounts of GTX 790 dust because no readily achievable clock speed (or in the case of the new Titan - DP) differences can save them from this fate. This leads me to believe that the GTX 790 will be the highest priced GTX card to date. That's why Videocardz shows its anticipated price as $999+. Or the GTX 790 could drive down the price of all other contenders.
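A rough way to see why raw core counts favor the projected GTX 790 is to normalize everything to "Kepler-equivalent" cores using the ~3x rule of thumb above (clocks, memory and bus speeds ignored). This is only a sketch of that back-of-the-envelope reasoning, not a benchmark; note that it puts the GTX 590 (1024 Fermi cores) roughly on par with the GTX 690 (3072 Kepler cores), which squares with their measured TEs of 1.13 and 1.202.

Code:
# Back-of-the-envelope comparison using the rule of thumb above:
# one Fermi core is worth roughly three Kepler cores (clock speeds ignored).
FERMI_TO_KEPLER = 3.0

cards = [
    ("GTX 790 (projected)",       4992, "kepler"),
    ("GTX 690",                   3072, "kepler"),
    ("GTX 780 Ti / Titan Black",  2880, "kepler"),
    ("GTX Titan",                 2688, "kepler"),
    ("GTX 590",                   1024, "fermi"),
    ("GTX 580",                    512, "fermi"),
]

for name, cores, kind in cards:
    kepler_equiv = cores * (FERMI_TO_KEPLER if kind == "fermi" else 1.0)
    print(f"{name:26s} ~{kepler_equiv:.0f} Kepler-equivalent cores")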

Also, with cards like the 690 and soon 790, how does Octane see the RAM on these cards, as the individual size or combined size?

Octane sees the ram on my GTX 690 as that which is allocated to one of the graphics GPUs, i.e., 2GB. Also, when you put different-spec cards in the same system, Octane reads only the vram buffer size of the one with the least amount of ram. So when I put my GTX 690 with my GTX Titans (w/6GB), Octane sees only 2GB. With the Asus GTX 790, Octane will see only 3GB into which to load the scene, unless you put, say, a GTX 480 in the mix; then it would see, at most, only 1.5GB. Unlike with vram, all cores, even those of dual-GPU cards, get counted and used.
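As a minimal sketch of that behavior (assuming, per the above, that the usable scene buffer is capped by the smallest per-GPU vram in the mix while every GPU's cores still get counted), with the GPU names and sizes taken from the examples above:

Code:
# Sketch of the behavior described above: the scene must fit in the smallest
# per-GPU vram buffer present, but the cores of every GPU get counted and used.
gpus = [
    ("GTX Titan",       6.0, 2688),   # 6GB
    ("GTX 690, GPU 1",  2.0, 1536),   # 2GB visible per GPU of the dual card
    ("GTX 690, GPU 2",  2.0, 1536),
]

usable_vram_gb = min(vram for _, vram, _ in gpus)    # capped by the smallest buffer
total_cores    = sum(cores for _, _, cores in gpus)  # all cores still contribute

print(f"Scene must fit in {usable_vram_gb:.1f} GB; {total_cores} CUDA cores available")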
 
The GTX 790 is projected to have 2x 2496 or 4992 cores. The nTitan will have the same number of cores as the GTX 780 Ti - 2880 cores. Greater speeds account for the GTX 780 Ti's small lead over the GTX 690. The 384 core lead that the GTX 690 has over the oGTX Titan is why the GTX 690 renders a little faster. With the 2,112 core lead that the GTX 790 will have over the GTX Titan Black Edition and GTX 780 Ti, the GTX Titan Black Edition and GTX 780 Ti will consume copious amounts of GTX 790 dust because no readily achievable clock speed (or in the case of the new Titan - DP) differences can save them from this fate. This leads me to believe that the GTX 790 will be the highest priced GTX card to date. That's why Videocardz shows its anticipated price as $999+. Or the GTX 790 could drive down the price of all other contenders.

As someone who has yet to buy Octane and who is still in the "Which card to buy?" phase, do you often run into problems with 3 gigs of memory for a scene vs. 6 gigs for a Titan? It seems to me the new 790 with all those cores is a no-brainer for speed, but I'm always worried about not having enough memory for a scene.
 
As someone who has yet to buy Octane and who is still in the "Which card to buy?" phase, do you often run into problems with 3 gigs of memory for a scene vs. 6 gigs for a Titan? It seems to me the new 790 with all those cores is a no-brainer for speed, but I'm always worried about not having enough memory for a scene.

I've rarely run into vram-related GPU issues. However, I've got cards with from 1.5GB to 6GB that I use for various rendering chores. What's your ultimate aim? Is this for work or play? What type of computer system will you be using, and what's the OS? If it's for work, a lot has to do with the kinds of projects you hope to take on. Will you be compositing, and to what extent? 3GB is sufficient for most. But if you intend to own only one rendering GPU for work, it's better to get a GTX Titan because it's the most flexible. If it's going in a Mac Pro that runs Mavericks, keep in mind that GPUs with the GK110B processors (all GTX 780 Tis and newer GTX Titans [even pre-Titan Black Edition]) currently crash/freeze when the system/application makes OpenCL calls. If you intend to eventually own more than one GPU, and you're just beginning to build rendering capacity, then I recommend that you start with a 3-4 GB GTX GPU if you're going to put it in an oMP. Currently, I've got two double-wide GPUs (GTX 590/1.5GBs) in each of my three MacPro2,1s and dual Galaxy GTX 680/4GBs in my MacPro5,1 (extra power supplied by FSP BoosterX5s in each of them). My other GPUs are in self-built systems that hold 3x, 4x and 8x double-wide GPUs.
 
I've rarely run into vram-related GPU issues. However, I've got cards with from 1.5GB to 6GB that I use for various rendering chores. What's your ultimate aim? Is this for work or play? What type of computer system will you be using, and what's the OS? If it's for work, a lot has to do with the kinds of projects you hope to take on. Will you be compositing, and to what extent? 3GB is sufficient for most. But if you intend to own only one rendering GPU for work, it's better to get a GTX Titan because it's the most flexible. If it's going in a Mac Pro that runs Mavericks, keep in mind that GPUs with the GK110B processors (all GTX 780 Tis and newer GTX Titans [even pre-Titan Black Edition]) currently crash/freeze when the system/application makes OpenCL calls. If you intend to eventually own more than one GPU, and you're just beginning to build rendering capacity, then I recommend that you start with a 3-4 GB GTX GPU if you're going to put it in an oMP. Currently, I've got two double-wide GPUs (GTX 590/1.5GBs) in each of my three MacPro2,1s and dual Galaxy GTX 680/4GBs in my MacPro5,1 (extra power supplied by FSP BoosterX5s in each of them). My other GPUs are in self-built systems that hold 3x, 4x and 8x double-wide GPUs.

I'm currently in the process of building an oMP. AE and C4D workflow for motion graphics. I wanted to incorporate Octane into the mix. 1 GPU, possibly 2 later down the road. I had planned to build a render farm, but Octane just seems to simplify everything while being all the more powerful.
 
I'm currently in the process of building an oMP. AE and C4D workflow for motion graphics. I wanted to incorporate Octane into the mix. 1 GPU, possibly 2 later down the road. I had planned to build a render farm, but Octane just seems to simplify everything while being all the more powerful.

Given where you are in the process, what you intend to do, and what system you have, I'd recommend that you either wait a month or two to see if the next Mavericks update or Nvidia web drivers for OS X resolve the Mavericks/GK110B OpenCL issue, or that you try to secure an early GTX Titan with the GK110A chip. Also, keep in mind (1) that you may be able to get (e.g., on eBay) a used early GTX Titan that is in good shape, (2) that you can put up to two Titans in an oMP (using an FSP BoosterX5) and still have one single-wide PCIe slot left, and (3) that you can get one or more NetStor 4 double-wide-PCIe-slotted external chassis from BHPhotoVideo for ~ $2,100 ea. [ http://www.bhphotovideo.com/c/product/980261-REG/dynapower_usa_na255a_xgpu_netstor_6_slot_pcie.html ].


P.S. - see this recent thread that may open up new GK110B potential - https://forums.macrumors.com/threads/1703274/ .
 
Given where you are in the process, what you intend to do, and what system you have, I'd recommend that you either wait a month or two to see if the next Mavericks update or Nvidia web drivers for OS X resolve the Mavericks/GK110B OpenCL issue, or that you try to secure an early GTX Titan with the GK110A chip. Also, keep in mind (1) that you may be able to get (e.g., on eBay) a used early GTX Titan that is in good shape, (2) that you can put up to two Titans in an oMP (using an FSP BoosterX5) and still have one single-wide PCIe slot left, and (3) that you can get one or more NetStor 4 double-wide-PCIe-slotted external chassis from BHPhotoVideo for ~ $2,100 ea. [ http://www.bhphotovideo.com/c/product/980261-REG/dynapower_usa_na255a_xgpu_netstor_6_slot_pcie.html ].

I'm a freelance motion designer and I run 2 Titans with Octane. It's pretty great. Sounds like the driver is indeed coming out. I ordered a second 780Ti an hour ago to maybe swap in if it works out. Stoked!
 
Other than OctaneRender, are there other credible GPU rendering applications?

Yes. Very soon, I intend to add coverage of some other GPU rendering applications that hold the promise of maximizing CPU/GPU related performance. I'll begin with two recent additions to my renderfarm: Thea Render [ http://www.thearender.com/cms/index.php/news/promotional-offer.html ] and RedShift [ https://www.redshift3d.com/products/redshift/ ]. That coverage will also include how they stack up against one another.
 
I'm a freelance motion designer and I run 2 Titans with Octane. It's pretty great. Sounds like the driver is indeed coming out. I ordered a second 780Ti an hour ago to maybe swap in if it works out. Stoked!

Seems like you have a system very close to what I want to build and use for the same tasks. If you had to build it over again, would you change anything?

----------

Given where you are in the process, what you intend to do, and what system you have, I'd recommend that you either wait a month or two to see if the next Mavericks update or Nvidia web drivers for OS X resolve the Mavericks/GK110B OpenCL issue, or that you try to secure an early GTX Titan with the GK110A chip. Also, keep in mind (1) that you may be able to get (e.g., on eBay) a used early GTX Titan that is in good shape, (2) that you can put up to two Titans in an oMP (using an FSP BoosterX5) and still have one single-wide PCIe slot left, and (3) that you can get one or more NetStor 4 double-wide-PCIe-slotted external chassis from BHPhotoVideo for ~ $2,100 ea. [ http://www.bhphotovideo.com/c/product/980261-REG/dynapower_usa_na255a_xgpu_netstor_6_slot_pcie.html ].


P.S. - see this recent thread that may open up new GK110B potential - https://forums.macrumors.com/threads/1703274/ .

Yeah, it won't kill me to wait another month or so to see how these new cards pan out. Has anyone ever tried to run 3-4 GPUs using riser cards and ribbon cables going outside the case, a la bitcoin miners? Seems like it would solve the space and heat issues, albeit in an ugly and cumbersome package. Perhaps dust would be a problem.
 
Seems like you have a system very close to what I want to build and use for the same tasks. If you had to build it over again, would you change anything?

----------



Yeah, it won't kill me to wait another month or so to see how these new cards pan out. Has anyone ever tried to run 3-4 GPUs using riser cards and ribbon cables going outside the case, a la bitcoin miners? Seems like it would solve the space and heat issues, albeit in an ugly and cumbersome package. Perhaps dust would be a problem.

Perhaps, and so might power drain on those PCIe slots be a new problem (that could also cause internal heat issues, at the least). However, I'm not aware of anyone having tried it, successfully or unsuccessfully.
 
Hello

I am looking for a medium power workstation for studies and desktop virtualization. Maybe a 6-core or dual 4-core Xeon.

In the first message of this thread it is said that an X79 chipset will give ECC with OS X. Is there any way to verify that? Maybe by seeing a log of corrected single-bit errors from within the OS?

I'm currently running an Asus P8C WS + E3-1245v2 with GT640 graphics and I'm not happy, since the machine gets stuck or crashes after a couple of days, with both 10.8.x and 10.9.x. Memory is 4+4 GB ECC.

I wish someone could point me to a motherboard + CPU combo that would have at least ECC and fully functioning SpeedStep, and then maybe sleep, shutdown and restart. I am good at following instructions, but I'm incapable of figuring out DSDT and such stuff all by myself.
 