I might never have thought about treating those GPUs separately but for your inquiry.

Sorry, haven't been around for a while, mind's been elsewhere. Glad I could help you get the neurones firing this time.

Good test, would be really interested to see how the 690 with this setup compares to dual titans, i.e., whether having effectively two cards in one slot is better than two cards in two slots.
 
Did you find any benchmarks regarding scaling? I looked at the same two CPUs as well, but they are so darn expensive. If prices come down quite a bit at some point, adding a second 2643 later on could be much cheaper than a single 2687 upfront..

I'd imagine they'd scale similarly to the results here.

In the UK a single 2687W is £1,500 and the 2643 is £1,100. I'm building a water-cooled system, so I don't really want to come back later and add a second CPU. Still, the upfront cost is £800 difference, the cost of a second high-end GPU.

My current thinking is that you'd be paying 50% more for a 0.2 GHz boost, with per-core results falling away, for an overall 5% gain in general performance. I'm not the best person to ask when it comes to hardware, though; hopefully someone with more knowledge will correct me if I'm wrong.
 
Also dual 2643s will give you a staggering 12 cores at 3.8GHz decimating a 2697 at 2.7GHz. I can't see myself losing sleep over going with the 2643, especially when I'm lying on a beach somewhere thanks to the money I saved.

EDIT: I got that a bit wrong - the 2697 runs a max of 3.5GHz, so not quite decimation, more of a slap. Still, that's more processing power than the fastest "Pro".
 
Also dual 2643s will give you a staggering 12 cores at 3.8GHz decimating a 2697 at 2.7GHz. I can't see myself losing sleep over going with the 2643, especially when I'm lying on a beach somewhere thanks to the money I saved.

EDIT: I got that a bit wrong - the 2697 runs a max of 3.5GHz so not quite decimation, more of a slap. Still that's more processing power than the fastest "Pro".

You mean dual 2643 v2s will give you 12 cores at 3.5GHz? Not sure how many cores can run at 3.8, though. Only one? Heh, I'm totally lost with these things.. I start to inform myself, and the more I read the less I know.

You going for dual 2643s then, or a single proc? I'm really not sure I'd realistically upgrade to a dual CPU later on, so maybe going with a little tweaked (lol) 4930K may be the smart thing for me.. *sigh* back to the drawing board.


I'm tending toward 2x 2650s if dual proc. 16 cores at 2.7 with a boost of 3.4 doesn't sound too bad on paper. Would be under €2,000 for both of 'em. Hmm..
 
Sorry, haven't been around for a while, mind's been elsewhere. Glad I could help you get the neurones firing this time.

Good test, would be really interested to see how the 690 with this setup compares to dual titans, i.e., whether having effectively two cards in one slot is better than two cards in two slots.

Here's a PDF that highlights some of the more important differences and similarities of CUDA GPUs. It shows you which GTX card is a sibling of which Tesla card, gives some information about the compute performance of each card, and notes which features aid in that assessment. It is still a work in progress, so don't be alarmed by the question marks or blanks.

It shows that the GTX 690 is really two GTX 680s in one package and that the Tesla K10 and the GTX 690 are siblings. While a Titan is almost twice as fast at rendering in OctaneRender as one regular, reference-design (i.e., plain vanilla = RD), non-overclocked GTX 680, the word "almost" takes on added significance. Note my post no. 865, above. That Titan Equivalency (TE) helps to better explain some of what is going on. A GTX 680 has a TE of .503 because, using Barefeats' testing, the Titan took 95 secs to render the scene and the GTX 680 took 189 secs [95 / 189 = 0.503].

Since that TE is greater than 50%, no matter how small the margin, a GPU with two GTX 680s in one package has the potential to beat a Titan, and if you tweak the GTX 690, that advantage over a regular, reference-design Titan only becomes magnified, as it did in the case of post no. 845, above. My tweaked GTX 690 earns a TE of 1.2 [1 min 19 sec = 79 sec; 95 / 79 = 1.2025, or 1.20 for ease]. Thus a tweaked GTX 690 is 1.2x as fast at rendering as a non-tweaked, reference-design (RD) Titan.

Of course, two Titans in the same system would surely beat one GTX 690, but that's not fair, for the two Titans are double-teaming the one GTX 690 - currently the price of one RD GTX Titan is the same as one RD GTX 690. To make things fair, put two GTX 690s up against two GTX Titans - the two GTX 690s will win. Also, as the PDF shows, of all the out-of-the-box GTX cards analyzed there, the GTX 690 has the potential for the highest performance.

If I had this information and had digested it when it came time for me to populate my eight double-wide-slotted Tyan server, you'd be seeing 8xGTX690, instead of 8xTitan, leading off in my signature, below. But hopefully, as we live we learn. The GPU card I'd most prefer to purchase next would be a GTX 790 Ti - a package with dual GTX 780 Ti's. I'd rather have eight of those in my Tyan. And I keep dreaming.
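The Titan Equivalency arithmetic above can be sketched in a few lines of Python. This is only an illustration of the ratio described in the post; the function name and constant are mine, and the render times (95, 189, and 79 seconds) are the Barefeats figures quoted above.

```python
# Titan Equivalency (TE) as used above: the ratio of a reference-design
# Titan's OctaneRender benchmark time to the candidate card's time.

TITAN_REFERENCE_SECS = 95  # reference-design Titan render time (Barefeats)

def titan_equivalency(render_secs, titan_secs=TITAN_REFERENCE_SECS):
    """TE > 1.0 means the card out-renders a reference Titan."""
    return titan_secs / render_secs

print(round(titan_equivalency(189), 3))  # GTX 680 -> 0.503
print(round(titan_equivalency(79), 2))   # tweaked GTX 690 -> 1.2
```

Any TE above 0.5 per GPU means a two-in-one card built from that GPU can, in principle, beat a single Titan, which is the post's argument for the 690.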
 

Attachments

  • Nvidia'sBaker'sDozen.pdf (61.2 KB)
I'd imagine they'd scale similarly to the results here.

In the UK a single 2687W is £1,500 and the 2643 is £1,100. I'm building a water-cooled system, so I don't really want to come back later and add a second CPU. Still, the upfront cost is £800 difference, the cost of a second high-end GPU.

My current thinking is that you'd be paying 50% more for a 0.2 GHz boost, with per-core results falling away, for an overall 5% gain in general performance. I'm not the best person to ask when it comes to hardware, though; hopefully someone with more knowledge will correct me if I'm wrong.

Before I chime in with recommendations, (1) what's the budget, (2) what are the most important applications you have in mind and what's the application priority ranking, (3) are we talking about V1 Sandy Bridge or V2 Ivy Bridge Xeon(s), and (4) which motherboard do you have in mind, or how many GPUs and/or CPUs do you need? In the dual CPU category, the Supermicro DAX line is the best if you want to mildly overclock those Xeons, but the downside of the DAX is that it doesn't offer the space for lots of x16 GPUs. That's the rub if what matters most to you requires many GPUs. Twietee seems to need many CPU cores if Maxwell Render is his most important application, but Adobe PS/Illustrator will flourish with fewer, higher-clocked cores. So there may have to be trade-offs/compromises in his case.
 
Let your needs and those of your applications define your system(s).

Thanks for the ***** menu! (No insult intended! :D)

Question: I'm about to build my first small rig myself and mostly wonder about the CPUs. I'm using Maxwell Render paired with Rhino 3D for modelling and the usual Adobe PS/Illustrator combo - not professional work as one may call it, since I don't earn any money from it, but it is directly connected with my professional work, fwiw. Throw in CAD and a quick game here and there, and that's about my usage scenario.

Problem is, I really have no idea whether a 4930K is better, or maybe an E5-2643 v2 on a dual board which I could upgrade in a year or so to 2x 6-core.. and on and on..
I really can't find a lot of info about the hyperthreading. How does an E5-2650 v2, with a 2.6 base clock and a turbo frequency of 3.4, perform over a longer period (say 3-4 hours) in single-threaded programs? Does it hold the turbo permanently?

Part of my problem is that most sites/forums I visit/found talk about BF4 optimized rigs or are far above my level.

Can you help me out, Tutor? :D Hope I explained my problem understandably..

Before I chime in with recommendations, (1) what's the budget and (2) what's the application priority ranking? In the dual CPU category, the Supermicro DAX line is the best if you want to mildly overclock those Xeons, but the downside of the DAX is that it doesn't offer the space for lots of x16 GPUs. That's the rub if what matters most to you requires many GPUs. Another downside of getting a dual CPU motherboard (mobo) is that some functionality is lost when you don't populate both CPU slots, e.g., some ports or PCIe slots may not be active. At first blush, you seem to need many CPU cores if Maxwell Render is your most important application, but Adobe PS/Illustrator will flourish with fewer, higher-clocked cores on a system using i7s like the 4930K. So there may have to be trade-offs/compromises depending on how you prioritize each of your uses.

----------

I thank you for sharing this, my wallet does not ;) Really struggling to decide between the 2643 and 2687, whether the 2687 is worth 50% more £.

Providing the information I requested in my last RFI to you (post 881) should lessen the struggle.
 
habitullence was first, so I'll gladly let you tutor him first. :D

But one thing for starters: my current PC (E8400@3GHz) has a CPU Mark of 2169 (LOL). Can I assume that a stock 4930K, with 13092, is six times faster in CPU-based tasks, i.e., Maxwell?
 
Did you find any benchmarks regarding scaling? I looked at the same two CPUs as well, but they are so darn expensive. If prices come down quite a bit at some point, adding a second 2643 later on could be much cheaper than a single 2687 upfront..

Here's the CPU Skinny on Maxwell Render [ http://support.nextlimit.com/display/knfaq/System+Specifications+FAQ ]:
"1. Which is the best hardware configuration for using Maxwell? ...
Each hardware component is in fact crucial in a particular part of the 3D process:
The CPU is the most important component for fast renders. Get the fastest processors you can afford; plus, with network rendering, Maxwell's render speed scales almost linearly when adding more computers to contribute to the render process. At least a quad-core CPU is recommended.
RAM is where the scene information is stored during the render process. If your scene has complex geometry, huge textures, set to render at high resolution, or has the MultiLight feature enabled (which increases the amount of RAM needed to store all the emitters information separately) then you may need a computer with a lot of RAM (4GB and more). A usual configuration for a rendering / video compositing computer is 12-16 GB of RAM.
The graphics card is not involved in the rendering process. It is only involved in the openGL camera navigation when you are creating your scenes. A gaming card will be sufficient for most tasks, unless you plan to work with scenes containing many thousands of objects and/or require antialiased viewports which are much better handled by professional graphics cards. Their added advantage is that their drivers have been certified to work properly with different CAD / 3D applications such as SolidWorks, Rhino, Maya, 3dMax etc. It is also recommended to have a card with large on-board memory (1GB or more) if you plan to work with many large resolution textures and want to view them in the openGL viewports."

So a dual-processor build, using a $517 (US) Supermicro DAX [ http://www.superbiiz.com/detail.php?name=MB-X9DAXF ], would not be overkill, and since Maxwell does not use the GPU for rendering, the limited number of x16 slots would not be much of a loss; plus you can overclock it by about 1.075x. But how important is that in the scheme of your other computer uses?

Consider getting two of these: the Xeon E5-2643 v2 (Ivy Bridge), 6 cores / 12 threads each, running at a 3.5 GHz base w/o overclocking, with a turbo boost (TB) to 3.8 GHz w/o overclocking (25 MB cache and a TDP of 130W) [ http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon E5-2643 v2.html ]. With the DAX motherboard you may be able to get both of them to overclock by 1.075x, giving you 12 cores/24 threads running at 3.76 GHz base (which PS and Ill. would love also) and a turbo boost to 4.085 GHz. The v2s retail for about $1,552 (US) each. If the budget is lower, you could opt for V1s - they're $855 (US) each, quads (not hexes), with a base of 3.3 GHz and a TB of 3.5 GHz, which would be about 3.55 base and a TB of 3.76 when DAX-aided.
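As a rough sanity check of the overclock projections above, here is a minimal Python sketch, assuming the flat ~1.075x multiplier the DAX board is said to allow (the function and constant names are mine; the clock figures are the post's):

```python
# Projecting clocks under the mild ~1.075x overclock attributed to the
# Supermicro DAX board. Purely illustrative arithmetic.

DAX_MULTIPLIER = 1.075

def overclocked_ghz(base_ghz, mult=DAX_MULTIPLIER):
    """Projected clock (GHz) after the mild board-level overclock."""
    return base_ghz * mult

# E5-2643 v2: 3.5 GHz base / 3.8 GHz turbo
print(round(overclocked_ghz(3.5), 2))  # ~3.76 GHz base
print(round(overclocked_ghz(3.8), 3))  # ~4.085 GHz turbo
```

The same two lines reproduce the V1 projections as well (3.3 and 3.5 GHz in, roughly 3.55 and 3.76 GHz out).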
 
Wow, that link must have slipped through! My work's rather minimalistic, so lighting and materials are the most important factors, leaving very little 'power hungry' work for PS to do. I think the most demanding single-threaded stuff would be something like BF4, which I couldn't care less about.
The v2 octo at 2.7 GHz up to 3.4 GHz could be a good choice, no? I'm not really keen on paying more than €900 for a CPU. Is the board you linked immediately usable with Ivy-E? I tended towards the Asus ones because of the BIOS flashback thingy - but iirc they are single proc only. I'm mobile right now, so will have to have a closer look later.

Thanks a lot!
 
habitullence was first, so I'll gladly let you tutor him first. :D

But one thing for starters: my current PC (E8400@3GHz) has a CPU Mark of 2169 (LOL). Can I assume that a stock 4930K, with 13092, is six times faster in CPU-based tasks, i.e., Maxwell?

I wouldn't make that assumption of six X, but rest assured it would be like the difference between night and day. Moreover, it could even be more than six X. But to be sure, that specific application would have to be tested using each CPU, unless there's a Maxwell Render benchmark (a CPU test), as is the case with OctaneRender (a GPU test).
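For what a benchmark-score comparison is worth, the quoted CPU Mark figures can be divided directly. A hedged sketch (scores are those quoted in the thread, and a benchmark ratio is only a loose proxy for any one application, which is why the "six X" is not guaranteed):

```python
# Rough speedup estimate from the PassMark CPU Mark scores quoted above.

E8400_SCORE = 2169     # current PC
I7_4930K_SCORE = 13092 # prospective build

speedup = I7_4930K_SCORE / E8400_SCORE
print(round(speedup, 2))  # -> 6.04
```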
 
... .
The v2 octo 2.7ghz up to 3.4ghz could be a good choice, no? I'm not really keen about paying more than 900€ for a CPU.

With the board that I linked to, that would be 2.7 GHz x 1.075 -> base ≈ 2.9 GHz, and 3.4 GHz x 1.075 -> turbo ≈ 3.66 GHz, all octified (and I'm assuming twice w/ dual CPUs).


Is the board you linked immediately usable with IvyEs?

Yes, but it's for dual CPU setups. If you're talking about a single Xeon, this is what I use, for $229 (US): the GIGABYTE GA-X79-UP4 (LGA2011 / Intel X79 / DDR3 / 4-Way CrossFireX & 4-Way SLI / SATA3 & USB3.0 / A&GbE / ATX) [ https://www.superbiiz.com/detail.php?name=MB-X79-UP4 ]. And yes, you can overclock with it too.
 
What's a bin boost?

You mean dual 2643v2 will give you 12 cores at 3.5GHz? Not sure how many cores can run at 3.8 though. ... .

At the highest Turbo bin, you'd be getting a king's ransom if 2 cores of a recent multi-core Intel CPU ran at that speed; the more usual case is that just one core per CPU runs at the highest bin while the others are popping in at or about base and base + 1 bin, unless you keep your system (and particularly the CPU(s)) very cool - then the chances increase of getting the maximum number of cores allowed to participate at a Turbo Boost (TB) bin level.

BTW - what's a bin? That gets back to my post #871, above: for Nehalems and Westmeres it's 133 MHz base-clock intervals, and for Sandy Bridges and Ivy Bridges it's 100 MHz base-clock intervals. E.g., for a Westmere like the X5680, here's what CPU-World states, in part: frequency (base) 3333 MHz; turbo frequency = 3600 MHz (1 or 2 cores), 3467 MHz (3 or more cores). That means it runs at 3.33 GHz (3333 MHz) at base; one bin boost would be 3.33 + .133 = 3.46 GHz, where three or more cores can run, and a second bin boost of 3.46 + .133 = 3.59 GHz, where one or two cores can run. Rounding accounts for the slight value differences. If you're using GHz values, then for a Sandy or Ivy Bridge you add .100 for a bin boost. So a 3.2 GHz chip that goes up one bin goes from 3.2 to 3.2 + .100 = 3.3 GHz. For most of us the one-hundred-MHz bin interval is easier to deal with. Also, the wider the TB range, the more bin levels are involved, but as bin stages go higher, fewer cores get to participate at the higher level. That's why keeping your system(s)/CPU(s) cool allows them to run faster.
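The bin arithmetic above can be sketched as follows. This is only an illustration of the intervals described (the names are mine; the X5680 figures are from CPU-World as quoted):

```python
# A "bin" is one base-clock step: 133 MHz for Nehalem/Westmere,
# 100 MHz for Sandy Bridge/Ivy Bridge.

BIN_MHZ = {"westmere": 133, "ivy_bridge": 100}

def turbo_clock(base_mhz, bins, arch):
    """Clock (MHz) after `bins` Turbo Boost steps for the given family."""
    return base_mhz + bins * BIN_MHZ[arch]

# X5680 (Westmere), 3333 MHz base:
print(turbo_clock(3333, 1, "westmere"))   # 3466 -> ~3.46 GHz (3+ cores)
print(turbo_clock(3333, 2, "westmere"))   # 3599 -> ~3.59 GHz (1-2 cores)
# Ivy Bridge 3.2 GHz chip, one bin up:
print(turbo_clock(3200, 1, "ivy_bridge")) # 3300 -> 3.3 GHz
```

How many cores actually reach a given bin depends on thermals, which is the cooling point made above.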
 
Here's a PDF that highlights some of the more important differences and similarities of CUDA GPUs.

Thanks, I'll take a look tonight.

Of course, two Titans in the same system would surely beat one GTX 690, but that's not fair.

I was thinking of it that way since the 690 is a two-in-one; maybe I misunderstood.

If I had this information and had digested it when it came time for me to populate my eight double-wide-slotted Tyan server, you'd be seeing 8xGTX690, instead of 8xTitan, leading off in my signature, below. But hopefully, as we live we learn.

Yes we do; sorry if I led you down this path. When this happens to me, I remind myself it's not the tools but how we use them that really matters.

The GPU card I'd most prefer to purchase next would be a GTX 790 Ti - a package with dual GTX 780 Ti's. I'd rather have eight of those in my Tyan. And I keep dreaming.

What's the 790 Ti?

I'm still waiting for availability of the Xeon 2643 v2 here, hopefully in the next month. The only big unknown for me is the GPU.
 
Before I chime in with recommendations, (1) what's the budget, (2) what are the most important applications you have in mind and what's the application priority ranking, (3) are we talking about V1 Sandy Bridge or V2 Ivy Bridge Xeon(s), and (4) which motherboard do you have in mind, or how many GPUs and/or CPUs do you need? In the dual CPU category, the Supermicro DAX line is the best if you want to mildly overclock those Xeons, but the downside of the DAX is that it doesn't offer the space for lots of x16 GPUs. That's the rub if what matters most to you requires many GPUs. Twietee seems to need many CPU cores if Maxwell Render is his most important application, but Adobe PS/Illustrator will flourish with fewer, higher-clocked cores. So there may have to be trade-offs/compromises in his case.

We've already been down this road, maybe you banged your head ;)

Just in case we missed something before: 1) Budget's running at about $10k - UK's expensive! 2) Programming, photography, music, video and 3D in that order. 3) v2. 4) Have one GA-7PESH3 sitting in my studio begging for more expensive high-end component friends. Going with Gigabyte for hackintosh compatibility.

Like I said, build's all planned out from our previous discussions and research, unless you think I should change something. Just need to figure out the GPU - 780Ti, 790Ti, Titan Ultra, could use your invaluable input on this.

----------

At the highest Turbo bin, you'd be getting a king's ransom if 2 cores of a recent multi-core Intel CPU ran at that speed; the more usual case is that just one core per CPU runs at the highest bin while the others are popping in at or about base and base + 1 bin, unless you keep your system (and particularly the CPU(s)) very cool - then the chances increase of getting the maximum number of cores allowed to participate at a Turbo Boost (TB) bin level.

This is pushing me more to the 2643 over the 2687W because of its higher base clock. Also, I'm planning to water-cool the system; hopefully this will give me more cores at max turbo. Would I be right in that assumption?
 
... .
I was thinking as the 690 is a two-in-one, maybe I misunderstood.
... .

You're right that the 690 is a two-in-one, but it's the "one" part that I focus on because PCIe slot availability is at a premium.

What's the 790 Ti?

It's the mythical two 780 Ti chips-in-one package that I'm hoping Nvidia announces next.
 
We've already been down this road, maybe you banged your head ;)

My bad - I do feel a little disoriented.

Just in case we missed something before: 1) Budget's running at about $10k - UK's expensive! 2) Programming, photography, music, video and 3D in that order. 3) v2. 4) Have one GA-7PESH3 sitting in my studio begging for more expensive high-end component friends. Going with Gigabyte for hackintosh compatibility.

Like I said, build's all planned out from our previous discussions and research, unless you think I should change something. Just need to figure out the GPU - 780Ti, 790Ti, Titan Ultra, could use your invaluable input on this.

We can only get what's been released (780Ti) and credibly hope for what's been announced (nothing else so far, unless you want to plunge into the Tesla stratosphere) - unless you want to join my dreamers' club, where hope springs eternal that there'll be a 790Ti and a Titan Ultra to choose from before next summer. Interestingly, though, Asus may soon announce a lesser two-chip card, the ASUS ROG MARS 760 [ http://videocardz.com/48019/asus-rog-mars-760-pictured ], and Nvidia has now formally launched the Tesla K40; if you ignore the 12 gigs of RAM on that monster, many of the other specs look a lot like those of the GTX 780Ti [ http://videocardz.com/48026/nvidia-launches-tesla-k40-packed-12gb-memory ].


This is pushing me more to the 2643 over 2687W because of its higher base clock. Also I'm planning to water cool the system, hopefully this will give me more cores at max. Would I be right in that assumption?

Correct.
 
Nvidia has now formally launched the Tesla K40, where if you ignore the 12 gigs of ram on that monster, many of the other specs look a lot similar to those of the GTX780Ti

New Tesla's a monster with a monster price tag. If you were buying in the near future, Titan or GTX780Ti, or would you wait for the rumoured 780Ti with 6GB (I guess this might be the Titan Ultra)?
 
New Tesla's a monster with a monster price tag. If you were buying in the near future, Titan or GTX780Ti, or would you wait for the rumoured 780Ti with 6GB (I guess this might be the Titan Ultra)?

I think that the EVGA 780Ti with 6GB will not formally be called the "Titan Ultra," because the Titan will probably next be upgraded to have as many CUDA cores activated as the 780Ti and the Tesla K40 Atlas (2,880 cores), and as much RAM as the Tesla K40 (12 gigs), while keeping "Titan" in the name and the Titan's distinctive edge of being the GTX card with the greatest number of FP64 (double precision) cores active. As part of my CUDA renderfarm development plan, I'm getting a few EVGA GTX 780Tis within the next few days. Obviously, PCIe-laden systems are becoming more and more crucial to that plan. Double-GPU cards like the two-in-one GTX 690, and the hoped-for 790, 890 and so on, are key to keeping my total system count down. Moreover, a seat of OctaneRender for my Tyan server (even if loaded with eight GTX 690s, 790s, etc.) costs the same as a seat for a Mac with a single GTX 580 or 680 card.
 
Delivery Completed!

And so far one EVGA 780Ti SC ACX is gifted with full OctaneRender speaking ability. Amazingly, her first words were, "I'm not a Titan, but I'm much cheaper and, at OctaneRender, much faster, even at factory settings. Please give me my Titan Equivalency mark." And giving her her due, she gets a TE of 1.32 (95 sec (for Barefeats' Titan) / 72 sec = 1.3194 ≈ 1.32). But what about her other sister and her two brothers?
 

Attachments

  • 1x780TiSC_ACX_72secORBCapture.PNG (1.4 MB)
EVGA 780Ti SC ACXs play well together and put less of a dent in your wallet.

EVGA 780Ti SC ACXs prove that OctaneRender's claim to linearity is completely true. If one renders the benchmark in 72 secs, then two will render it in 36 secs, three will render it in 24 secs, four will render it in 18 secs and, by mathematical progression, eight will render it in 9 secs. I.e., two will cut 72 secs in half, but it'll take 4 to cut the latter time in half, and it'll take 8 to cut that last time in half. Since one EVGA 780Ti SC ACX, at factory clocks, gets a TE of 1.32, four get a TE of 5.28 (1.32 x 4 = 5.28), because it would take 5.28 regular, factory Titans to achieve the same score (95; 95/2 = 47.5; 47.5/2 = 23.75; 23.75/18 = 1.3194 ≈ 1.32). So having four EVGA 780Ti SC ACXs is equivalent to having 5.28 factory Titans for ($2,920/$4,000 =) 73% of the cost of four Titans. Seems like a good deal to me: spend less and get more. That's what the Cheap Bad Man likes.
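The scaling and TE arithmetic above reduces to a couple of one-liners. A sketch, assuming the perfectly linear scaling the post reports (the names are mine; the 95- and 72-second times are from the benchmarks above):

```python
# Linear multi-GPU scaling: with N identical cards, render time is the
# single-card time divided by N, and the combined Titan Equivalency
# grows in proportion.

TITAN_SECS = 95          # Barefeats reference-design Titan
SINGLE_780TI_SECS = 72   # one EVGA 780Ti SC ACX at factory clocks

def render_time(n_gpus, single_secs=SINGLE_780TI_SECS):
    """Projected benchmark time (secs) for n identical GPUs."""
    return single_secs / n_gpus

def combined_te(n_gpus):
    """Combined TE of n GPUs vs. one reference Titan."""
    return round(TITAN_SECS / render_time(n_gpus), 2)

print(render_time(2))   # 36.0
print(render_time(4))   # 18.0
print(combined_te(4))   # 5.28
```

The same functions reproduce the eight-card projection (9 secs, TE 10.56), which is the "mathematical progression" the post describes.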
 

Attachments

  • 2x780TiSC_ACX_36secORBCapture.PNG (1.4 MB)
  • 3x780TiSC_ACX_24secORBCapture.PNG (1.4 MB)
  • 4x780TiSC_ACX_18secORBCapture.PNG (1.4 MB)
OctaneRender performance of the Galaxy GTX680/4G re-visited.

Here's how my memory-tweaked Galaxy GTX680/4Gs perform on OctaneRender 1.20, earning a TE of .59 per card. So 2 tweaked Galaxy GTX680/4Gs are equivalent in OctaneRender performance to 1.1875 reference-design (RD) Titans, and all four of them are equivalent to 2.375 RD Titans.
 

Attachments

  • OR120tweaked1GTX680_4gCapture.JPG (93.2 KB)
  • OR120tweaked2GTX680_4gCapture.JPG (93.3 KB)
  • OR120tweaked4GTX680_4gCapture.JPG (94.1 KB)