How many cores are too many?

As shown yesterday, the OS will boot with 32 real cores/threads activated and yield a Geekbench 3 score of about 49,000 for 32 Sandy Bridge real cores running at 2.7 GHz. I had been having consistent crashes whenever I activated hyper-threading. At first that made me wonder whether the OS had issues with the way my system implemented hyper-threading, but then I wondered what would happen if I activated hyper-threading while using a bios setting that limited the number of real cores activated from 8 to 4 per CPU. My guess was that 32 was a cap. The next re-boot after making this change confirmed that the OS wasn't crashing because of how hyper-threading is implemented on my system: it booted right up with (4x4=) 16 real cores plus 16 more hyper-threads and achieved a Geekbench 3 multicore score of 32,391 [ http://browser.primatelabs.com/geekbench3/557286 ], currently on page 21 of the top multicore listing. I'm delving further into this matter to see if I can find a better solution to this limitation.
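If anyone wants to double-check what the kernel actually activated after a bios change, here is a minimal sketch of how I'd do it (Python, run on the OS X box itself; it only reads the standard hw.physicalcpu / hw.logicalcpu sysctl keys, so treat it as illustrative rather than definitive):

```python
# Minimal sketch: report how many real cores and how many total threads
# (real cores + hyper-threads) OS X actually brought up after a bios change.
import subprocess

def sysctl_int(key: str) -> int:
    """Return a sysctl value (e.g. hw.physicalcpu) as an integer."""
    return int(subprocess.check_output(["sysctl", "-n", key]).decode().strip())

physical = sysctl_int("hw.physicalcpu")   # real cores the OS activated
logical = sysctl_int("hw.logicalcpu")     # real cores plus hyper-threads
print(f"physical cores: {physical}, logical cores/threads: {logical}")
print("over the apparent 32-thread cap" if logical > 32 else "at or under 32 threads")
```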

BTW - I've got high resolution video working, and I am posting this update, as was the case with the earlier follow-up updates, from 10.9.2 on the SuperMicroMacStrosity, using a really cheap USB-to-Ethernet adapter that I purchased some time ago. That's how I've been able to easily post the Geekbench scores. Also, for purposes of comparison, the current configuration yields a Geekbench 2 overall score of 27,774 [ http://browser.primatelabs.com/geekbench2/2457650 ], a Cinebench 15 xCPU score of 1,720, and a Cinebench 11.5 xCPU score of 19.60. Some of you mathematicians out there might be able to put together formulae for the correlations between scores and predict what my WolfPacks 1 & 2 (built in 2010 and 2011, and routinely achieving Cinebench 11.5 xCPU scores of 24.7+ and Geekbench 2 scores of 40,000+) would score under Cinebench 15 and Geekbench 3, respectively, after I finish cleaning their parts and putting them in new, large Lian Li cases. I'm projecting they will each score over 45k in Geekbench 3.
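Any such formula would be a guess, of course; the sketch below just does naive linear scaling from my own paired scores above (ignoring core counts, clocks, and differences in the benchmark workloads), and it lands in the same ballpark as my 45k+ projection:

```python
# Naive cross-version scaling from the paired scores reported above - a rough
# back-of-the-envelope projection, not a real correlation model.
gb2_now, gb3_now = 27774, 32391        # current config: Geekbench 2 and Geekbench 3
cb115_now, cb15_now = 19.60, 1720      # current config: Cinebench 11.5 and 15 xCPU

gb3_per_gb2 = gb3_now / gb2_now        # ~1.17
cb15_per_cb115 = cb15_now / cb115_now  # ~88

# WolfPacks 1 & 2 routinely hit ~40,000 (Geekbench 2) and ~24.7 (Cinebench 11.5)
print(f"Projected WolfPack Geekbench 3:  ~{40000 * gb3_per_gb2:,.0f}")    # ~46,600
print(f"Projected WolfPack Cinebench 15: ~{24.7 * cb15_per_cb115:,.0f}")  # ~2,170
```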
 
How many cores are too many?

I'm honestly surprised that many cores are recognised at all, considering there's never been (or will be?) an official mac like that.

** of note, apple are probably looking at ways to block your little experiment with their next update haha :p

I hope not though!!
 
Sometimes experiments give glimpses into the future by revealing all of the present.

I'm honestly surprised that many cores are recognised at all, considering there's never been (or will be?) an official mac like that.

I too was surprised, but only for a moment, because of this: the maximum number of supported cores = 32, whether the mix is 32 real cores or 16 real cores and 16 hyper-threads. That limit likely doesn't fall into the category of "never will be." In fact, it may hint at the very next phase - the release of a 16-core high-end Mac Pro option when Intel releases the E5-2697's successor. Additionally, my experiment shows that the OS does now support at least 128 GB of RAM.


** of note, apple are probably looking at ways to block your little experiment ...

The word "experiment" best sums up this exercise because I will not be sacrificing the 1.4x performance gain that Windows and Linux OS provide to run my heavily CPU-focused applications this way on some of my best hardware. I'd rather have those applications run full tilt. For the applications that I now rely on OSX to do, my four Mac Pros completely suffice. Although for Mavericks - Apple deprecated three of my Mac Pros despite my having purchased the top of the line. Apple's eraser was too big. But Tiamo provided a fix.
 
It's time for the Grand Experiment to end.

As for that which I did not accomplish in modifying my WolfPackPrimes into SuperMicroMacstrosities -
1) no working HDMI audio;
2) no working internal Ethernet; however, I did remember a way to get Ethernet service easily and cheaply. But as to internal Ethernet and HDMI audio, I didn't even get to those before fully accepting that it was time for me to give this exercise a death blow because of
3) no performance to match my expectations. I was looking for OSX performance on those systems like the performance that I've come to expect from Windows and Linux, but I found that the Mac OS has a lower limit on the number of cores that it'll readily/easily support than either Windows or Linux has.

But I don't view this exercise in any way as a complete failure for the following reasons -
1) the bios settings that I experimented with to try to make up for the core-count limitations are serving me better under Linux and Windows than the settings that I had made before to emulate moderate underclocking. So this exercise has made my machines perform faster;
2) I learned a lot more about my Mac hardware;
3) while not reaching the performance levels that would have made me stick with it to finish off HDMI audio and internal ethernet, I did learn more about maximizing the performance of my systems' OSes - such as, but not limited to, gaining a better understanding of the operating system pieces and the roles that they play; and how to better diagnose sources and cures for OS related problems;
4) it gave me the opportunity to see what Apple may have in store for us in the next Mac Pro (but obviously 2 CPUs with a total of 32 real cores will crush 16 comparable real cores in a single CPU package, even with the additional 16 hyper-threads) and
5) it enabled me to get a better grasp of what the benchmark metrics are really testing, how the current versions of those metrics relate to earlier versions, and how to better use those metrics to increase the performance of my systems.

BTW - Here is my latest Geekbench 3 score (49,337) as this experiment draws to a close - http://browser.primatelabs.com/geekbench3/558552 .
 
What is the point of flashing the 1,1 to a 2,1?

-> Installed 2x X5355s and got over 47% higher score in GeekBench than with 2x 5160s 3GHz.
-> Everything works just as expected.
-> Edited .strings to change "2x 2.66GHz Unknown" to "2x 2.66GHz Quad Core".
 
What is the point of flashing the 1,1 to a 2,1?

-> Installed 2x X5355s and got over 47% higher score in GeekBench than with 2x 5160s 3GHz.
-> Everything works just as expected.
-> Edited .strings to change "2x 2.66GHz Unknown" to "2x 2.66GHz Quad Core".


Mainly, it offers a change in the model number and some additional processor support. This is the definitive thread on that mod: http://forum.netkas.org/index.php/topic,1094.0.html . Did you edit .strings to ensure that the 2x X5355 worked as you desired? There's an old thread dedicated to that system identity change: https://forums.macrumors.com/threads/269553/ .
 
Mainly, it offers a change in the model number and some additional processor support. This is the definitive thread on that mod: http://forum.netkas.org/index.php/topic,1094.0.html . Did you edit .strings to ensure that the 2x X5355 worked as you desired? There's an old thread dedicated to that system identity change: https://forums.macrumors.com/threads/269553/ .

Thanks. Lots of pages to dig through. Google turns up "it fixes crashes for some", "no perf difference", "reports x5355 correctly (cosmetic)", "microcode has changed", "there is no difference"… and so forth.
-> Can't seem to find a solid reason to change to 2,1.
-> Will stick with it being as original as possible.

That is literally what I did: changed what "About This Mac" says, i.e., touched up cosmetics (Unknown -> Quad Core label). I don't think it has any tangible effect. Applications (such as GeekBench) detect the correct processor anyway.
 
Further Thoughts Concerning SuperMicroMacstrosity - Some Luck Has Its Place;

or Why Might There Currently Be No Self-built Dual E5-2697 System Running OSX To The Hilt1*/;
or Math Too Has Its Place;
or How You've Been Blocked From Doubling In One System What's Been Released As The Best Of The Best, Unless You Figure Out What's Been Done And How To Get Around It;
or They Won't Have To Change CPU Core/Thread Capacity For The Very Next One When the 16 Core Xeons Are Released;
or All Of The Above.

Mathematically, what are snug fitting Sandy or Ivy socks? Luckily, E5-4650 QBEDs fit well, but just not perfectly.

Have you ever worn socks that were too tight? I have, and they made my feet feel uncomfortable. Have you ever worn socks that were over-stretched or otherwise too big? I have, and they kept falling into my shoes. Fit matters for socks, and fit within the max core count matters if you're trying to get the maximum in CPU performance from OSX. If I am correct in my belief, as I indicated above in my last few posts, that OSX currently can handle only 32 threads, whether they be 32 real cores or 16 real cores and 16 hyper-threads, regardless of the number of CPUs present, then that has implications for certain other current self-builds.

To test my belief further I did this search on Geekbench 3: http://browser.primatelabs.com/geek...3&q=E5-2697+v2+MacPro6,1&sort=multicore_score to find out whether I'd see any 2-CPU systems identified as a MacPro6,1 with E5-2697s (12-cores) 2*/, and guess what? There were currently 16 pages of them (382 results), but not one of them listed more than 12 real cores, although obviously some of them were Hackintoshes (given their model names and/or core speeds). While it could be that the high price of two E5-2697s has prevented any systems of that configuration from making their appearance on Geekbench 3, I sort of doubt it, and I now know that someone other and sooner than me has come up against the 32-thread limit. I.e., I suspect that someone else's socks were too small because they were trying to fit 24 real cores and 24 additional threads into a 32 core/thread limit (whether they realized that there is a cap, what the cap is, and that the cap doesn't discriminate between real cores and hyper-threads, or they just gave up trying without ever figuring out the exact dimensions of the problem).

As my luck would currently have it, there are no 32- or 16-core Xeon CPUs for 1/2/4 CPU systems, and the fewest cores for any Ivy Bridge Xeon is the 6 cores in the E5-2643 (3.5 GHz base and 3.8 GHz max turbo), which is for up to 2-CPU systems. The CPUs in my 4-CPU WolfPackPrime systems are all 8-cores. Thirty-two is, of course, evenly divisible by 32, 16, 8, 4, 2, and 1. Hardly anyone would experiment, as I did, with a 32-CPU-slotted system (32x1=32), a 16-CPU-slotted system (16x2=32), or an 8-CPU-slotted system (8x4=32). Moreover, there are no 4, 2, or 1 core Ivy Bridge Xeons. But I do know that there are a few owners of 4-CPU-slotted systems.

So what socks currently fit better than mine for 4-CPU-slotted systems? It's the E5-4627 V2s [ http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon E5-4627 v2.html, but for my four E5-4650s I spent less than a quarter of what four E5-4627 V2s would now cost new and, in fact, I spent less for my entire system, fully configured, than what four E5-4627 V2s would now cost ]. The E5-4627 V2 is an 8-core CPU that has a base frequency of 3.3 GHz (whereas mine have 2.7 GHz) and a turbo speed of 3.6 GHz (whereas mine have 3.5 GHz [solely because they are the QBED C1 stepping - otherwise it'd be 3.3 GHz]). Since my system gets a Geekbench 3 score of 49,337, I would project that a similarly configured system with 4 E5-4627 V2s would achieve a score of about 60,000+, especially, as to older systems like mine, after having applied the Ivy Bridge bios flash and installed faster (1866 MHz) memory: (2.7 GHz x 4 = 10.8 for my systems; 3.3 GHz x 4 = 13.2 for 4x E5-4627 V2s; 13.2 / 10.8 = 1.22; 49,337 x 1.22 ≈ 60,300). Moreover, the E5-4627 V2s have no hyper-threads.
Thus, there would be no hyper-threads wasted because they wouldn’t fit in the 32 thread cap anyway.
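For what it's worth, here is that base-clock arithmetic as a small sketch (frequency-only scaling that ignores memory speed, turbo bins, and IPC, so take the projection loosely):

```python
# Frequency-only projection for a 4x E5-4627 V2 build, scaled from my own
# 4x E5-4650 QBED Geekbench 3 result. Ignores memory speed, turbo bins and IPC.
my_base_ghz = 2.7            # E5-4650 QBED base clock
candidate_base_ghz = 3.3     # E5-4627 V2 base clock
my_gb3_score = 49337         # 32 real cores at 2.7 GHz

scale = candidate_base_ghz / my_base_ghz            # ~1.22
print(f"scale factor: {scale:.2f}")
print(f"projected Geekbench 3: ~{my_gb3_score * scale:,.0f}")   # ~60,300
```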




BTW - Looking at systems with a CPU with greater than 8-cores (and admittedly my Geekbench 3 searches, in and of themselves and as worded, are somewhat under-inclusive [see example of under-inclusiveness below in footnote 1*]) -
1) I found one single CPU E5-2696 v2 (12 cores) MacPro6,1 - http://browser.primatelabs.com/geekbench3/search?utf8=✓&q=E5-2696+v2+MacPro6,1 ;
2) I found not one w/ E5-2695 v2 (12 cores) MacPro6,1 - http://browser.primatelabs.com/geekbench3/search?utf8=✓&q=E5-2695+v2+MacPro6,1 ;
3) I found two pages of single CPU w/ E5-2690 v2 (10-cores) MacPro6,1 listing a couple of anomalous base speeds - http://browser.primatelabs.com/geekbench3/search?utf8=✓&q=E5-2690+v2+MacPro6,1 and
4) I found not one w/E5-2680 v2 (10-cores) MacPro6,1 - http://browser.primatelabs.com/geekbench3/search?utf8=✓&q=E5-2680+v2+MacPro6,1 .


1*/ By using the term "Hilt," I mean that no cores or threads have been turned off, unlike what I, and, e.g., this user, obviously had to do with each of those 10 real cores + 10 hyper-threads on the two E5-2690 V2 CPUs - http://browser.primatelabs.com/geekbench3/365335 . Here is a broader search showing a few of those who have Ivy Bridge Xeons and have confronted the 32-thread cap and how they dealt with it: http://browser.primatelabs.com/geek...page=1&q=mac+os+x+64-bit&sort=multicore_score .


2*/ Parentheticals containing core counts weren’t part of the search terms.


P.S. - No final released Sandy Bridge Xeon had more than 8 real cores or more than one additional hyper-thread per core.
 
Have you tried OS X 10.10 Yosemite? I think there's a new kernel (at least it has a new name); maybe it'll support more than 32 cores now.
 
Cinema 4d (C4d) V. 15 Team Rendering: Bucketing - A big positive for gigantic frames.

C4d Team Rendering makes multiple fast, low Vram GTX cards shine when rendering large frames.

I have five systems, each with four GTX 590s. On the positive side, a fast, low-Vram GTX Fermi card such as the GTX 590 (which is roughly the equivalent of two GTX 570s) can be had for under $400 on Ebay, and in Octane render a GTX 590 renders faster than the oGTX Titan. But on the negative side, the advertised 3GB of Vram for the GTX 590 is actually 1.5GB per GPU (the card has two of them), which seemingly is a real bummer for rendering a large, complex frame, such as a 4K frame. It will not fit into the Vram of a single GTX 590.

C4d Team Rendering and the Octane render C4d plugin have, however, come to the rescue. The Team Render feature in C4d, which has built-in network rendering that supports Octane render, is described, in summary, by Maxon as follows:

"Team Render is an all-new network rendering concept that uses peer-to-peer communication to distribute render tasks. Because there's no bottleneck at a central server, the assets required to render your scene get to each client quicker. This means clients spend time rendering instead of waiting for assets.
Utilize the power of your team to easily render the frames of an animation or to distribute the buckets of a still frame. Machines share the workload to calculate caches for Global Illumination, Ambient Occlusion and Subsurface Scattering so you can utilize advanced rendering features and still meet the deadline."

What this means is that if you set up bucket rendering in C4d while using Octane as the renderer, then the scene gets split up. So the more GTX systems that you have participating in the Octane render, the smaller the scene data sent to your GPUs. Just to be on the safe side, when I recently rendered a 4K 3D demo, I used four systems, each with four GTX 590s, participating in the render. The render job was accomplished with every GPU card having lots of memory to spare throughout the render.
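Purely to illustrate the idea (this is not Maxon's or Otoy's actual scheduling code, and the bucket size and GPU count below are assumptions), here is roughly how carving one still frame into buckets and dealing them out keeps each GPU's share of the work small:

```python
# Toy illustration of distributed bucket rendering: split one 4K frame into
# fixed-size tiles and deal them out round-robin across GPUs. Not Maxon's or
# Otoy's actual algorithm; bucket size and GPU count are assumed for the example.
FRAME_W, FRAME_H = 3840, 2160
BUCKET = 256                  # assumed bucket edge, in pixels
NUM_GPUS = 32                 # four systems x four GTX 590s, two GPUs per card

buckets = [(x, y, min(BUCKET, FRAME_W - x), min(BUCKET, FRAME_H - y))
           for y in range(0, FRAME_H, BUCKET)
           for x in range(0, FRAME_W, BUCKET)]

assignments = {gpu: [] for gpu in range(NUM_GPUS)}
for i, bucket in enumerate(buckets):
    assignments[i % NUM_GPUS].append(bucket)

print(f"{len(buckets)} buckets total, ~{len(buckets) / NUM_GPUS:.1f} buckets per GPU")
```

It's only a toy model, but that division of labor matches the behavior I described - every GPU had memory to spare throughout the render.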

P.S. Serendipity and the Samsung 28-Inch Ultra High Definition LED Monitor are wonderful.
 
What is the content of your 4K demo? How many polygons (real, non-instance) and textures did you use in your scene?
 
The Balancing Act

Tutor, I’ve read your threads pretty thoroughly. I’ve profited from that experience. I’m getting ready to purchase hardware and have one conundrum I thought I’d share.

You provided the following at the beginning of your thread on,
“All We Know About Maximizing CPU Related Performance”:

1) Single CPU system -
(a) Gigabyte motherboards recommended for a good combination of price/performance/flexibility. Consider an X79 Sandy Bridge (socket 2011) motherboard and a Xeon 1650 [3.2 -> 3.8 GHz] - has error correction support - $583 for best price/performance if you can use 6 cores . . .

With that in mind I am considering the relative merits of E5 vs i7 in 6-cores. My thinking is to use the: GIGABYTE GA-X79-UP4 LGA 2011 Intel X79 SATA 6Gb/s USB 3.0 ATX, as foundation with one of the following:

i7-3930K Sandy Bridge-E 6-Core 3.2GHz (3.8GHz Turbo) LGA 2011 130W Desktop Processor BX80619i73930K
E5-1650 v2 (12M Cache, 3.50 GHz) FC-LGA12A

My suspicion is that the time to future-proof the new build is upon me. On the downside, as I understand it I will be losing ECC Memory, while the plus is 4-channel DDR3. Am I right to be suspicious? Or is the exercise premature? In the final analysis should I watch for the right Mobo to use with an 8-core model E5-2690 v3 for the best balance? (Haven’t seen any pricing yet on the ‘low-end’ of this product line and its motherboard requirements.)

All this is to support a 2K video production environment from camera thru upload of mastered video to delivery system, and to publish a web magazine up to its servers; all this in support of the family ‘cottage industry’.

Thx for just being out here. -III
 
Analyzing Your Desired Output(s) To Determine Your Required Input(s) In A New Age

Welcome LBattis.

The title that you've chosen, "The Balancing Act," for your post is most appropriate. In everything that I do, I've found that the perfect balance starts with my imagining my end point; then I work backwards all the way to the present. I always put my desired outcome at the forefront; that is, I first imagine the feeling that I want to have at the conclusion (and record it); then I ask myself what in particular generated that feeling (and record it); then what generated/produced the thing or things that generated that feeling (and record it), and so forth (and record it) - working my way all the way back from that goal - the feeling of success/job well done - to the present. Then I have a map showing exactly how I ought to travel to get to exactly where I want to end up. Some might say that I approach things ass-backward; but I'd respond that although I do approach things backward from my final vision, it depends on what they mean by "backward." In nature, the hind quarters are generally on the back. Moreover, it's easier for me to accomplish my goal if it's been well defined and if I know in the minutest of detail exactly how to get there. So with that in mind, let's start with what you want to accomplish.

All this is to support a 2K video production environment from camera thru upload of mastered video to delivery system, and to publish a web magazine up to its servers; all this in support of the family ‘cottage industry’.

If, e.g., you desired to write code/formulae, etc., where memory errors were critical, then a Xeon processor supporting ECC memory would be very important. However, since it looks like that is not your purpose, an i7 would be completely satisfactory. I have systems with i7 Gigabyte (G)UD3, (G)UD5, and (G)UP4 motherboards (using, e.g., i7 975, i7 980X and i7 3930K CPUs). I use them for graphics and video production. The lack of ECC memory hasn't been an issue at all. I do code. So for that purpose, I use systems with Xeon Nehalems/Westmeres and Sandy/Ivy Bridge CPUs, like, e.g., the X5680 or E5-4650 QBED (which is just the E5-2680 with sufficient QPI pathways for 4-CPU system communication).

Tutor, I’ve read your threads pretty thoroughly. I’ve profited from that experience. I’m getting ready to purchase hardware and have one conundrum I thought I’d share.

You provided the following at the beginning of your thread on,
“All We Know About Maximizing CPU Related Performance”:

1) Single CPU system -
(a) Gigabyte motherboards recommended for a good combination of price/performance/flexibility. Consider an X79 Sandy Bridge (socket 2011) motherboard and a Xeon 1650 [3.2 -> 3.8 GHz] - has error correction support - $583 for best price/performance if you can use 6 cores . . .

That recommendation was for a general purpose build in that it covers more bases, but there's no need for a Xeon if your use pattern is precisely defined and doesn't trigger the need for ECC.

With that in mind I am considering the relative merits of E5 vs i7 in 6-cores. My thinking is to use the: GIGABYTE GA-X79-UP4 LGA 2011 Intel X79 SATA 6Gb/s USB 3.0 ATX, as foundation with one of the following:

i7-3930K Sandy Bridge-E 6-Core 3.2GHz (3.8GHz Turbo) LGA 2011 130W Desktop Processor BX80619i73930K
E5-1650 v2 (12M Cache, 3.50 GHz) FC-LGA12A

Either is a good choice. I've found that the GIGABYTE GA-X79-UP4 LGA 2011 Intel X79 SATA 6Gb/s USB 3.0 ATX is an excellent foundation for a self-build. I now have eight of them. The vast majority of mine have the E5-4650 (QBED) C1 processors [see, e.g., http://www.ebay.com/itm/ES-Intel-Xe...CPU-/111412526373?pt=CPUs&hash=item19f0b43d25 - yes, their selling price has started to creep up] because at the time that I purchased components for those systems, I could purchase those 8-cores for < $500 each, and they have the exact performance characteristics of the E5-2680 V1, including a max turbo of 3.5 GHz (w/o any clock tweaking). On the UP4 motherboard, every one of those CPUs can be overclocked by at least another 5%. The i7 3930K has a lot more overclocking headroom and still holds its own, even against the Ivy Bridge equivalents.

The motherboard that you've chosen is, of course, my favorite. It's the best value for a 4-double-wide-PCIe Sandy/Ivy Bridge single-CPU system. (Gigabyte makes a dual-CPU board - the GA-7PESH3 [ http://b2b.gigabyte.com/products/product-page.aspx?pid=4468#ov ] - if your applications justify use of a dual-CPU system; and, of course, if your applications would benefit more from a 4-CPU system, then consider Supermicro [ http://www.superbiiz.com/detail.php?name=SY-847R7FP ].) JUST MAKE SURE TO STAY AWAY FROM THE VERSION 1.0 GIGABYTE UP4 MOTHERBOARD (GET V1.1 INSTEAD) AND TO FIRST UPGRADE TO THE LATEST BIOS. The initial bios releases (pre-4.0) had many issues. I've found the later bioses to be friendly/stable.

My suspicion is that the time to future-proof the new build is upon me. On the downside, as I understand it I will be losing ECC Memory, while the plus is 4-channel DDR3. Am I right to be suspicious? Or is the exercise premature? In the final analysis should I watch for the right Mobo to use with an 8-core model E5-2690 v3 for the best balance? (Haven’t seen any pricing yet on the ‘low-end’ of this product line and its motherboard requirements.)

Absolute future-proofing is, of course, impossible. But wisely spending your money for what you need when you need it is nothing but applaudable. While the choice of CPU will determine whether you have ECC support (Xeon vs. i7), the UP4 handles them both. 4-channel DDR3 is a very important consideration. Depending on whether and how you choose to overclock a CPU, not getting memory at least one or more steps higher than the system's standard memory speed may become a limitation. As I've pointed out earlier in this thread, a Sandy or Ivy Bridge Xeon can be overclocked by only a little (up to ~7.55% at most), but you'd still need to get memory at least one step higher to do so safely because overclocking the CPU also overclocks the memory by the same percentage. With those Xeons, you cannot change the turbo boost range, only the clock speeds (within low limits) at those ranges. Increasing the turbo boost range/steps (which you can do with an i7 CPU) doesn't necessitate faster memory. But in any event, I'd recommend that you get memory as fast, at a minimum, as the motherboard's max - currently 2133 MHz [ http://www.gigabyte.com/products/product-page.aspx?pid=4766#sp ]. A couple of the benefits of a self-build are that you can overclock the CPU and memory and start with faster memory than that specified by Intel for the CPU (e.g., Intel specs 1600 MHz memory for Sandy Bridge and 1866 MHz memory for Ivy Bridge, and with your GUP4 self-build you can begin with 2133 MHz memory). From what I've read, some of the V3 Xeons may have up to four more cores each and, at the high end (MHz- and dollar-wise), support more overclocking. But if you wait for the V4s, or a while longer for the V5s, then I'm sure you'll get much better performance. My point is that while there is no way to future-proof absolutely, we should make system additions when we need them and do so most economically. When the family 'cottage industry' needs additional resources, don't be afraid to arm yourself with knowledge and make the wisest purchasing decision that you can make under the circumstances. Don't let the unachievable future-proof make you like a deer blinded by a tractor trailer's headlights just before impact. This is particularly so when the acquisition is needed (and will be used to produce additional income) and in what may be becoming the Age Of GPU Computing.
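As a quick sanity check of that memory-headroom point, here is the arithmetic as a small sketch (the ~7.55% ceiling and the DDR3 speed grades are the figures discussed in this thread; real-world safe limits vary by board and DIMMs):

```python
# Sanity check: overclocking a Sandy/Ivy Bridge Xeon by raising the base clock
# raises the memory clock by the same percentage, so the DIMMs need at least one
# speed grade of headroom. Figures are the ones discussed in this thread.
max_xeon_oc = 0.0755                     # ~7.55% practical ceiling
ddr3_steps = [1333, 1600, 1866, 2133]    # common DDR3 speed grades (MHz)

for stock in (1600, 1866):               # Intel's spec for Sandy / Ivy Bridge Xeons
    effective = stock * (1 + max_xeon_oc)
    needed = next(step for step in ddr3_steps if step >= effective)
    print(f"DDR3-{stock} overclocked ~7.55% runs at ~{effective:.0f} MHz "
          f"-> buy at least DDR3-{needed}")
```

Which is another reason to just start at the board's 2133 MHz maximum.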

What particular video applications will you be using, and will you rely on GPUs for added processing power (and will they require OpenCL and/or CUDA)? For GPU processing support, I currently use ATI GPU based systems (e.g., 3x PowerColor 5970s (circa 2009) in GUP3s [and am looking at getting either quad R9 280Xs or quad 7970s for another Gigabyte UP4 system]) for Final Cut Pro and other OpenCL computing software, but since there are currently so few of my applications that use OpenCL, the vast majority of my systems use Nvidia CUDA GPUs. I use GPU computing to keep my 2007 MacPro2,1s still relevant and highly useful, and yet they're seven years old. Aggregation too has its place, particularly when GPUs are relatively so cheap and yet so powerful. For me the key is having ample PCIe slots. I hate saying what I'll never do because things change. So I say, "I don't currently ever see myself getting/building a new system with fewer than 4 PCIe slots." And I know that just by saying that, my negative karma will soon trigger the release of a faster and more powerful interconnect than PCIe, or cause Nvidia to produce and release a 100-GPU $200 Titan CUDA card with an oTitan equivalency of 200 that occupies just 1 PCIe slot. But I can say confidently, "I'll never be a stag with truck tire marks on my ass."

P.S. Or, might I become a similarly situated doe?
 
One more thing

... GIGABYTE GA-X79-UP4 LGA 2011 Intel X79 SATA 6Gb/s USB 3.0 ATX, as foundation with ... [an] i7-3930K Sandy Bridge-E 6-Core 3.2GHz (3.8GHz Turbo) LGA 2011 130W Desktop Processor BX80619i73930K...

If you build this setup, you may want to visit the magnificent and wonderful Rampagedev here - http://rampagedev.wordpress.com/dsdt-downloads/gigabyte-x79/ - and download the applicable files. They're Great!
 
Otoy/Octane GPU Rendering Update - Awesome Extensions Of GPU Assisted Technology

Beam me in, not up, Scotty. Holographic video is just around the corner - from this https://www.youtube.com/watch?v=etoS6daj20c&t=14m31s last year to this http://youtu.be/9H0NMsp1lX0 this year. What's the future Otoy tech for next year? Growing a 2014 small (10") volumetric LF display prototype into a 6' by 4' volumetric LF display. For more info, check out this: http://render.otoy.com/forum/viewtopic.php?f=7&t=41609 .
 
Analysis of the Output Moving Forward

Tutor,

Thank you for your beautifully thought-out and organized answer. It's nice to see the sun. You've confirmed many of my suspicions and covered nicely my needing to address the video side in more depth. I've decided on a few of the foundation components: Intel i7-4930K LGA 2011 64 Technology Extended Memory CPU Processors BX80633I74930K - Gigabyte LGA 2011 DDR3 2133 Intel X79 SATA 6Gb/s USB 3.0 ATX Motherboard GA-X79-UP4.

I was able to find the 4930K, the newer version of the 3930K, on sale for a significant amount less than the 3930K. I'm game . . . In the background I'm keeping my eye on sales of Crucial M550's for boot, and I have a raft of WD 3TB Reds to populate the drive bays of a Xigmatek Assassin CCM-38DBX-U01 Black SECC ATX Case I have on the shelf.

So, the real question is video. Going forward, will Nvidia refocus on the CUDA technology? On the OpenCL side of the street, are the applications coming? Are the developers interested? On the good-news side of the street, I've been looking hard at DaVinci and it does make use of OpenCL. On the bad-news side, many of the smaller and more utilitarian software houses don't seem to be working on any parallel development of their products from CUDA across to OpenCL. These are very cloudy observations, on the order of the proverbial blind man describing the elephant. As for the web publishing, compared to video the load is trivial, yet the i7 is best for this arena.

My guess is that, going forward, the new in-process ECC work-alikes that Intel has baked into the i7 will improve. I think OpenCL will also gain traction going forward and that that will accelerate Nvidia's efforts as well . . .
So my thinking is to stay on the Open side of the street and see what happens. If I need CUDA down the road, I'll make the necessary adjustments.

With that said, what card to select? I'm looking over the R9 290X cards carefully. I have some more studying to do before I even attempt to wax illoquacious on this topic. AND please . . . when they produce and release a 100-GPU $200 Titan CUDA card with an oTitan equivalency of 200 that occupies just 1 PCIe slot, please let me know. I'll probably change my opinion, or perhaps it will be time for a second video card!

Have you noticed how the 21st Century's rather like the 18th in many ways?

-III
 
I've decided on a few of the foundation components: Intel i7-4930K LGA 2011 64 Technology Extended Memory CPU Processors BX80633I74930K - Gigabyte LGA 2011 DDR3 2133 Intel X79 SATA 6Gb/s USB 3.0 ATX Motherboard GA-X79-UP4.

....

That's a sound decision.

So, the real question is video. Going forward, will Nvidia refocus on the CUDA technology? On the OpenCL side of the street, are the applications coming? Are the developers interested? On the good-news side of the street, I've been looking hard at DaVinci and it does make use of OpenCL. On the bad-news side, many of the smaller and more utilitarian software houses don't seem to be working on any parallel development of their products from CUDA across to OpenCL. These are very cloudy observations, on the order of the proverbial blind man describing the elephant. As for the web publishing, compared to video the load is trivial, yet the i7 is best for this arena.

My guess is that, going forward, the new in-process ECC work-alikes that Intel has baked into the i7 will improve. I think OpenCL will also gain traction going forward and that that will accelerate Nvidia's efforts as well . . .
So my thinking is to stay on the Open side of the street and see what happens. If I need CUDA down the road, I'll make the necessary adjustments.

I hope that you're right about OpenCL gaining traction and that Nvidia advances its OpenCL implementation to, at least, V1.2 from V1.1 (ATI moved to V1.2 in its 5000 series GPUs in 2009 {and some ATIs now support V2 - http://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#FirePro_Workstation_Series }).

With that said, what card to select? I'm looking over the R9 290X cards carefully. ... .

The good thing about Resolve is that it supports both CUDA and OpenCL - "We imported a 289 frame 5K RED clip into an HD 1080PsF 25 project with Decode Quality set to ‘Half Res. Good.’ We performed a playback with 16 blur nodes being rendered on-the-fly." [ http://www.barefeats.com/images14/gpuss_dav.png and http://www.barefeats.com/gpuss.html ] [ http://forum.blackmagicdesign.com/viewtopic.php?f=11&t=10199&p=65101&hilit=fastest+gpu#p65101 ].

I looked at the R9 290X's but none of the ones I found have more than 4GB of Vram. That's important because, "We now recommend a minimum of 2GB of GPU RAM for HD image processing and greater than 4GB of GPU RAM for 4K images..." [ http://forum.blackmagicdesign.com/viewtopic.php?f=11&t=19795 ] [Emphasis added.] I've found both a 6GB R9 280X and a 6GB 7970.

Have you noticed how the 21st Century's rather like the 18th in many ways?

I hadn't considered it before now. But giving it only brief consideration now, I'd say the 21st Century is rather revolutionary also, though mainly only technologically so. What are the other similarities that you see?


N.B. "Resolve Lite does allow for GPU acceleration now (something that was only in the full version previously), but you can only use one GPU at a time for acceleration, so if you don’t plan on buying the full version of Resolve, two GPUs aren’t all that useful (it will use the other one for non-processing duties)." [ http://nofilmschool.com/2014/03/bla...-1-3-gpu-support-red-epic-scarlet-dragon-raw/ ] The full version supports up to five GPUs.
 
Vectoring an Approach

Thank you for keeping me inside the lines. Putting the focus on RAM caused me to go back and re-evaluate my 2K-video plan's lifespan and how maintaining headroom is part of 'future-proofing' . . . So here I am back on the frontier.
Taking 6GB to heart, I went looking and found numerous 7970's and 280X's.
I also tripped over this:
SAPPHIRE-Radeon-HD-7990-6GB-384-Bit-x2-GDDR5-PCI-Express-3-0
I have been looking hard since for reference to its OS X viability without success. Do you know of anyone who's successfully run this in OS X? I'm looking at a good deal on that card presently and if it will hitch to the OS X wagon I'm game to add it to this build.
-III
 
Thank you for keeping me inside the lines. Putting the focus on RAM caused me to go back and re-evaluate my 2K-video plan's lifespan and how maintaining headroom is part of 'future-proofing' . . . So here I am back on the frontier.
Taking 6G's to heart I went looking and found numerous 7970's and 280x's.
I also tripped over this:
SAPPHIRE-Radeon-HD-7990-6GB-384-Bit-x2-GDDR5-PCI-Express-3-0
I have been looking hard since for reference to its OS X viability without success. Do you know of anyone who's successfully run this in OS X? I'm looking at a good deal on that card presently and if it will hitch to the OS X wagon I'm game to add it to this build.
-III

Other than the cited reference in my last post to Barefeats' speed comparison of, among others, an R9 280X (see pic), I'm not aware of anyone else running an R9 280X under OSX with Resolve. The few Resolve users that I know personally (Windows, Linux and OSX users) are running with Nvidia GPUs, such as GTX Titans, GTX Titan Blacks or GTX 780 Tis. Even prior to the release of these GPUs, many posters on Blackmagic Design's Resolve forums said that the GTX 580 was the fastest GPU for Resolve. I view the R9 280X not as the fastest GPU for Resolve (see Barefeats' test), but as a better value if you're an occasional/sporadic user of Resolve. A 6GB Titan Black costs $1K, but a 6GB R9 280X costs <$400. If much or all of your work is with Resolve and that is how you earn your living, then the GTX Titan Black is the better value. Thus, the more you plan to use Resolve, the more the speed superiority of the more expensive Titan tends to make it the better value.

P.S. Thanks for making me think more deeply about the wisdom of building another ATI 4x GPU focused system. I'm now considering making my next build an Nvidia GTX Titan Black 4x GPU focused system. But beware that with dual-processor video cards (like the ATI 7990 and GTX 690, which each have two video processors), although they're advertised as having a large amount of Vram, the amount actually allocated to each processor is 1/2 of the total. So in the case of the 6GB 7990 you're getting two 3GB processors, and in the case of the 4GB GTX 690 you're getting two 2GB processors. Those 6GB cards that we've been discussing are true 6GB setups because there's only one video processor, and thus they give you the Vram amount necessary for 4K video in Resolve. Just food for thought.

See additional resource - http://www.barefeats.com/tube10.html .
 
In Ground Effect

4-GPU focused, very nice! Glad if I was of any assistance whatsoever.
I'm thinking I'll go with the MSI AMD Radeon R9 280X, 6GB. It appears to be settling in to the OSX environment nicely and the RAM is there.

For cooling the CPU I'm leaning toward Noctua. The NH-D14 SE2011 will fit my case, and I'm avoiding fluid, which I still see as a win where possible. The downside is that I lose the PCIe x16 slot next to the CPU. As the card will mount in the other one, my thinking is to solve this when/if it ever becomes a problem and accept it as the price of tranquility in the meantime.

RAM has me in a bit of a conundrum. Speed/Timing/Latency . . . I am not fully adept at manipulating these variables. Among others I am looking at the following:

$229.99 CORSAIR Dominator Platinum 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CMD16GX3M2A1600C7
Cas Latency: 7
Voltage: 1.5V
Multi-channel Kit: Dual Channel Kit
Timing: 7-8-8-24
Model #: CMD16GX3M2A1600C7
_________

$182.99 CORSAIR Vengeance Pro 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 2400 (PC3 19200) Desktop Memory Model CMY16GX3M2A2400C11R
Cas Latency: 11
Voltage: 1.65V
Multi-channel Kit: Dual Channel Kit
Timing: 11-13-13-31
Model #: CMY16GX3M2A2400C11R
_____

My sense of it is that the Dominator DDR3 1600 might well outperform the Corsair DDR3 2400 and be worth its price (which I will shop harder). Is there any good rule of thumb or hard equation for solving this?
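The one back-of-the-envelope comparison I've run across is first-word latency - CAS cycles divided by the memory clock (half the DDR3 data rate). Here's a quick sketch with the two kits' listed numbers (it ignores bandwidth, which would favor the 2400 kit), though I'd welcome a better equation:

```python
# Back-of-the-envelope first-word latency for the two kits listed above:
# latency (ns) = CAS cycles / memory clock (MHz) * 1000, where the memory
# clock is half the DDR3 data rate. Ignores bandwidth and timings beyond CAS.
kits = {
    "Dominator Platinum DDR3-1600 CL7": (1600, 7),
    "Vengeance Pro DDR3-2400 CL11":     (2400, 11),
}
for name, (data_rate_mts, cas_cycles) in kits.items():
    memory_clock_mhz = data_rate_mts / 2
    latency_ns = cas_cycles / memory_clock_mhz * 1000
    print(f"{name}: ~{latency_ns:.2f} ns first-word latency")   # 8.75 vs 9.17 ns
```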

. . .

To answer your question about the 21st/18th Century comparison: two quick examples that come to mind are the increasing civil unrest and questions about the viability of the food supply. My mind is prone to wandering off in any variety of directions. This is just something I was musing about the other night.

-III
 