Advice on 4P

I've almost finished building my 4P E5-4650 Supermicro 8047R-7RFT+ system and I'm wondering what other users (especially you Tutor as you have the same setup) have done to get the most out of 4P systems.

To me it seems that OS choice could be a major factor. I'd be keen to see a Cinebench comparison between Win2008R2, Win7, Server 2012 and OS X.

The main software that we use is Maya 2013/4 and Premiere Pro CC.
 
It's two - two - two rendering systems in one to maximize CPU and GPU performance.

Last bit of data in my sig is my Cinebench 11.5 (CB11.5 xCPU) score under Win2008 R2 with two 4P E5-4650 Supermicro 8047R-7RFT+ systems. I get the most out of my two 4P systems by using heat sinks with fans rather than the supplied passive-only ones, setting BIOS options to Performance wherever allowed, and fixing QPI at 8 rather than leaving it on Auto.

For 3D applications and Adobe CC, I have two GTX 580C/3Gs in one system, and a GTX 690 and a GTX 480 in the other, for CUDA. I use two FSP Boosters in each to help power them. Someday I'll push the current GPUs down to other systems (now housing ATI 4890s) and push down four of the Titans from my Tyan Server to these systems (and four to one of my EVGA SR-2 systems) when eight of the Titan's successors are put in the eight dual-wide PCIe slots in the Tyan Server.

I use Octane Render for the 3D applications. Since Octane Render doesn't tax the CPUs at all, I can simultaneously render using the 4 CPUs, yielding a GPGPU workstation and a CPU workstation in the same box. That powerful duality resides in each of my CUDA rigs.
 
If you have firsthand knowledge (based on actual experience) or know (or know of) someone who does know (based on actual experience) that changing the Titan's ID and/or BIOS to that of a Tesla K20 will not, by itself, enable RDMA for GPU Direct, then you may have (or may have access to) information that I need to know. Please let me know in detail how to properly change the ID and/or BIOS of the Titan to that of a Tesla K20. Your assistance will be greatly appreciated. I'll take it from there and take full responsibility for the risks involved in modifying my Titan cards.

I have spent some time with the GTX 5xx series (that's what I had handy) and was able to make it appear as a Quadro or Tesla by changing hard-straps (resistors). The Quadro drivers would install, but the SpecViewPerf scores were not there, which made me suspect there's more to this than just changing PCI IDs.

At the same time I was playing with the BIOS itself, but the problem with all this was that I had a Mac Pro, so it was twice as difficult (BIOS + EFI), and Mac hardware does not play nice, so flashing was a pain in the butt (FreeDOS booting, etc.).

With Titan it does sound like a challenge because things change from generation to generation, though it might be possible to do it since all Titan boards are reference designs and layouts appear to be shared across platforms (Titan, Quadro, Tesla) for the most part.

Of course, the actual GK ASICs might be (are?) different, in which case it would all be futile.

[Image: close-up of the Titan board around its BIOS ROM]


Here's a close-up of the ROM for the Titan. Generally one finds hard-straps around it (or on the flip side) for PCI IDs with most GTX cards. This requires tracing paths on the board while making measurements with a DMM, from specific pins of the ROM to the ground (pull-down) or positive rail (pull-up).

The Titan appears to share its board design with the GTX 780, which is good, because they might leave the hard-straps in place to allow for different board PCI ID configurations.
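Incidentally, once a strap has been changed, it's easy to check from the OS whether the new IDs actually took. On Linux the kernel exposes every card's vendor/device and subsystem IDs through sysfs, so a minimal sketch like the following (standard sysfs paths, nothing card-specific assumed) lists them for each NVIDIA device; `lspci -nn` gives the same answer, the point is just that no DMM is needed for this part:

```cpp
// pci_ids.cpp - print PCI vendor/device and subsystem IDs for NVIDIA cards,
// read straight from sysfs, so a strap or BIOS change can be checked after
// a reboot. Build with: g++ -std=c++17 pci_ids.cpp -o pci_ids
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <string>

static std::string read_id(const std::filesystem::path& p) {
    std::ifstream f(p);
    std::string s;
    std::getline(f, s);                 // sysfs gives e.g. "0x10de"
    return s;
}

int main() {
    namespace fs = std::filesystem;
    for (const auto& dev : fs::directory_iterator("/sys/bus/pci/devices")) {
        if (read_id(dev.path() / "vendor") != "0x10de")   // 0x10de = NVIDIA
            continue;
        std::printf("%s  device=%s  subsystem=%s:%s\n",
                    dev.path().filename().c_str(),
                    read_id(dev.path() / "device").c_str(),
                    read_id(dev.path() / "subsystem_vendor").c_str(),
                    read_id(dev.path() / "subsystem_device").c_str());
    }
    return 0;
}
```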

What you are looking for is even further away than just modifying the PCI IDs. You need two Titans, modded and in the same box, to start with (unless you have some interconnect like InfiniBand, but that makes things even more complicated).

Also, let's not forget the BIOS itself. You might need the Tesla BIOS which you could compare with the Titan one, and see whether things need to be changed and how. It's a whole other area of work...

Next would be to find out where RDMA happens and where it gets blocked (I presume it will be, because this is still not a real Tesla card) and go from there. :)

It might be easier to attempt all this from Linux, if it has support for RDMA, since it is more open to peeking and poking than other OSes.
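Along those lines, one thing the CUDA runtime will tell you directly is whether the driver allows two cards in the same box to address each other's memory (GPUDirect peer-to-peer, the simpler cousin of full RDMA over InfiniBand). A minimal sketch, assuming nothing beyond a stock CUDA toolkit on Linux:

```cpp
// p2p_check.cu - ask the CUDA runtime, for every pair of GPUs in the box,
// whether peer-to-peer access is allowed.
// Build with: nvcc p2p_check.cu -o p2p_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n < 2) {
        std::printf("Need at least two CUDA devices (found %d).\n", n);
        return 1;
    }
    for (int a = 0; a < n; ++a) {
        for (int b = 0; b < n; ++b) {
            if (a == b) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, a, b);   // the driver's verdict on a -> b
            std::printf("GPU %d -> GPU %d : peer access %s\n",
                        a, b, ok ? "possible" : "blocked");
        }
    }
    return 0;
}
```

If the driver already reports "blocked" for an unmodified GeForce pair, that's a first data point on where the gating happens.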

Of course this is all in theory - I don't have a Titan, or two, that I could just experiment with, and it feels like a long shot anyway because it might require lots and lots of time.

Is the effort worth it for this one feature alone, and how many people would really benefit from it?

The time and other resources involved might simply outweigh the purchase of Tesla cards that do this OOB. :)
 
Finding the right resource for scaling Babel and it appears to be omnious

Thanks a million for your insights. My CUDA rigs are in my sig. My goal would be to start with the Titans, since that rig would be, in a sense, my master system, and then do the same with the cards in the other rigs. Ultimately, I'd want all 26 GPUs, or as many as possible, communicating among themselves. For me, it is worth pursuing for that one feature alone. To have that much rendering power in Teslas would cost me a fortune. I would tie them together with IB. What's your take? Thanks for your time and sharing your knowledge.
 
Tutor I think you had experience with the Supermicro Dual 1366 boards right?
Anyone else with this rig please chime in too.

I really think I need to retire my SR-2 and go for a more reliable option.
Had 45 minutes of checksum errors again last night before the board would boot.

Doesn't matter how well the SR-2 overclocks if I can't even get the thing to boot most of the time... any time savings from the overclock are more than wiped away.

THIS is the closest thing I can find from Supermicro for my needs but I don't think it's the same board people were using for hackintoshes before the SR-2 came out. Was it the X8DAI or something?

Any ideas if the X8DTi-F-O would work?
I don't need audio as that's coming from my firewire interface - but I'll need a PCIe slot for that. Onboard Ethernet would be nice though.

Thanks for any help!


Edit: This is just to get me by until we know the E5 Xeons are fully working in OSX after the Mac Pro tube comes along ;)
 
And in other news... THIS is what a pair of E5 2680 v2 CPUs will get you in GB3 in 32 bit mode.

The 2.8GHz speed is showing as 3.6GHz, meaning there have been some BIOS adjustments I think.
 
No, I have not worked with the Supermicro Dual 1366 boards, but from what I've heard and read about them, they are excellent quality. However, for a self-build without much tweaking, this is the alternate 1366 mobo that I'd now recommend you also consider: http://www.superbiiz.com/detail.php...80fb42fff0ed&gclid=CO-ihMeu07kCFcbm7Aod63kArw / http://www.tyan.com/Motherboards_S7025_S7025AGM2NR . It's a little more expensive ($400 vs. $350), but offers more possibilities with 4 double-wide x16 PCIe slots. Just make sure that it already has the BIOS update to run your CPUs.
 
Ok cool, thanks Tutor! But is this Tyan board suitable for running OSX? My aim at the moment is to keep my system as close to its current setup as possible, just with a different motherboard, to keep me going for a while.
 
Looks like it - See
http://www.tonymacx86.com/buying-advice/61348-evga-classified-sr-2-a.html and
http://www.tonymacx86.com/general-hardware-discussion/72282-sr-2-memory-problems.html . BTW - (1) you two seem to be kind of passing in the night: EVGA <-> Tyan; (2) here's the manual for the Tyan mobo that I suggest you consider [ http://www.tyan.com/manuals/S7025_Manual_v1.4.pdf ]; and (3) check out the manual on this other Tyan workhorse [ http://www.tyan.com/manuals/FT68-B7910_UG_v1.0a.pdf ].
 
When 4 double wide PCIe x16 slots aren't enough and you've got money to burn

take a look at the vCORE Extreme: http://www.advancedhpc.com/gpu_computing/vcore_extreme.html or http://www.sabrepc.com/vsearch.aspx?SearchTerm=nextio for $48K+. NextIO's vCORE Extreme supports up to 16 double-wide GPUs and 8 server connections over x16 PCIe links. Your lower-density servers can access up to 16 GPUs, your GPU resources can be quickly reconfigured among those servers, and each job can be optimized for the right ratio of CPUs to GPUs, all powered by four 1400W PSUs. Or you could just settle for 8 double-wide PCIe x16 slots in the dual LGA 1366 version of a Tyan server for $3,617 [ http://www.superbiiz.com/detail.php?name=TS-B715V2R ] or the dual LGA 2011 version for $5,000 [ http://www.sabrepc.com/p-3748-tyan-b7059f77av6r-ft77a-b7059-8-gpu-4u-server-barebone.aspx ].
 
OctaneBench Coming Soon!

OctaneRender is currently being used as a benchmark by major hardware review websites (such as Tom's Hardware and Barefeats) to evaluate the CUDA performance of new GPUs. In the next few weeks a newly developed benchmarking tool called OctaneBench will be launched [ http://render.otoy.com/forum/viewtopic.php?f=7&t=34811 ]. With this tool you'll be able to compare the path-tracing performance of OctaneRender on different CUDA-enabled GPUs in an easier, more standardized way. I'll use the tool to assist me in tweaking my GPUs, just as I use Geekbench and Cinebench not only to measure the performance of my CPUs, but also to tweak them optimally.
 
Thanks for the updates - you really know how to make a guy want to spend money on stuff :D
 
you really know how to make a guy want to spend money on stuff

No kidding. If folks want to experience bankruptcy, just talk to Tutor about building a new system. He'll have you spending yourself silly. :) The Hack I might build at this point is already $6600US, and that doesn't include any RAM.

Mommy, make the bad man stop! ...
 
Tyan on the way

Ok so I've purchased a Tyan S7025AGM2NR.

Will test it alongside the SR-2 and keep whichever board is more reliable.

All the evidence seems to be there to support the board as being OSX compatible as far as I can tell.

• Same 5520 / ICH10R chipset as the SR-2
• Onboard Intel 82574L Ethernet controller has been confirmed working, and sources claim it's the same one used in a real Mac Pro LINK1 LINK2
• Onboard Realtek ALC262 should also work, as found in this list, although I will be using my existing firewire audio interface
• Investigating the manual (thanks for the tip Tutor) shows the following BIOS settings are present as are in the recommended settings for the EVGA SR-2:
  • AHCI SATA
  • ACPI Configuration: Suspend Mode: [S3(STR)]
  • High Precision Event Timer: [Enabled]

So I'm yet to see whether I need a custom DSDT, which is likely. With any luck I will be able to get RampageDev to give me a hand with this ;)

Fingers crossed!

P.S. if anyone can find that thread about covering certain CPU contacts to raise FSB I'd be quite grateful :D

----------

No kidding. If folks want to experience bankruptcy, just talk to Tutor about building a new system. He'll have you spending yourself silly. :) The Hack I might build at this point is already $6600US, and that doesn't include any RAM.

Mommy, make the bad man stop! ...

LOL!

Yeh that's why I tinker and learn at home with my machine while trying to push my boss at work to front the cash for an even bigger monster :D
 
If you feel the need to tweak CPUs and GPUs, try to avoid brute force.

...

P.S. if anyone can find that thread about covering certain CPU contacts to raise FSB I'd be quite grateful :...

The reasons that I recommend this board for you are the ability to swap in parts that you already own (from your EVGA SR-2) to tide you over until "the E5 Xeons are fully working ...," multi-OS compatibility, price, the plethora of PCIe slots, and stability - not CPU tweaking that might lead to instability concerns. Instability appears to me to be your chief point of frustration with the EVGA SR-2 ("... I need ... a more reliable option"). However, if you decide to add GPGPUs (which you can tweak) or coprocessors, such as the Xeon Phi, to that Tyan, you'll be set for future growth in the areas where many multiples of CPU-like performance are achievable, because you'll have at your disposal as many fast double-wide expansion slots as you have with the SR-2.

Thus, I suggest that you fight the urge (as I have done with my Tyan Server) and leave the so-called "FSB" (really the BCLK [ http://howtohackintosh.wordpress.com/overclocking/overclocking-intel-iseries-processors/ ]) as is. If you feel the need to tweak your CPU, tweak it through software mods only - avoid hardware CPU mods. You may have thought that you'd never hear such a suggestion from, of all people, me, given my long history of tweaking via any mod possible, but now you have heard my caution. As the article that I've referenced shows, today's CPUs are very complex, with many linkages [much more complex than those that I applied brute force to in yesteryear].
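For what it's worth, here is a software-only way to at least see the non-turbo ratio your CPUs report, without touching a single strap or BIOS jumper. This is just a sketch for Linux with the msr module loaded (modprobe msr, run as root); MSR 0xCE is MSR_PLATFORM_INFO on these Nehalem/Westmere-class Xeons, and the LGA1366 bus clock is nominally 133 MHz:

```cpp
// read_ratio.cpp - peek at MSR_PLATFORM_INFO (0xCE) to see the maximum
// non-turbo core ratio the CPU reports. Linux only; needs the msr module
// (modprobe msr) and root. Build with: g++ read_ratio.cpp -o read_ratio
#include <cstdio>
#include <cstdint>
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { std::perror("open /dev/cpu/0/msr"); return 1; }
    uint64_t v = 0;
    if (pread(fd, &v, sizeof v, 0xCE) != sizeof v) {    // MSR_PLATFORM_INFO
        std::perror("pread");
        return 1;
    }
    unsigned ratio = (v >> 8) & 0xFF;                   // bits 15:8 = max non-turbo ratio
    std::printf("max non-turbo ratio: %u  (~%.0f MHz at a 133 MHz BCLK)\n",
                ratio, ratio * 133.33);
    close(fd);
    return 0;
}
```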
 
Omnious, thanks for helping to sharpen my plan of attack.

...

What you are looking for is even further away than just modifying the PCI IDs. You need two Titans, modded and in the same box to start with (unless you have some inter-connect like Infiniband, but then that makes things event more complicated).

Also, let's not forget the BIOS itself. You might need the Tesla BIOS which you could compare with the Titan one, and see whether things need to be changed and how. It's a whole other area of work...

Next would be to find out where does RDMA happen and where does it get blocked (I will presume it will, because this is still not a real Tesla card) and go from there. :)
...

This is what further research on the RDMA and GPUDirect features of the Tesla cards reveals:

What enables the Teslas' bi-directional PCIe communication is the fact that the Teslas have two DMA engines. So, if the GTXs don't have two DMA engines, or I cannot activate them if they are present, then, for me, it's not worth the effort to change their IDs. This part of the problem I believe will be the most difficult to solve.

The Teslas' faster communication with InfiniBand (IB) using NVIDIA GPUDirect™ depends on a special Linux patch, the InfiniBand driver, and the CUDA driver. This part of the problem I believe I can solve. (P.S. - Downloaded them [ http://www.nvidia.com/object/software-for-tesla-products.html ]; GPUDirect support for RDMA is available now in the latest CUDA Toolkit [ https://developer.nvidia.com/gpudirect ].)

Thus, my action plan, before changing any of my GTXs' IDs, is to determine whether any of the GTXs have two DMA engines and how to activate both of them. If the two DMA engines are present and I determine how to activate them, then I must figure out how to change the GTXs' IDs and then make any necessary BIOS changes.
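As a first, non-destructive step toward answering the two-DMA-engine question, the CUDA runtime itself reports the number of asynchronous copy (DMA) engines it exposes per card via the asyncEngineCount field of cudaDeviceProp. A minimal sketch (the reported counts will of course vary by card and driver):

```cpp
// dma_engines.cu - report how many asynchronous copy (DMA) engines the CUDA
// driver exposes for each installed card.
// Build with: nvcc dma_engines.cu -o dma_engines
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // asyncEngineCount: number of engines that can run copies concurrently
        // with kernels (2 means copies in both directions at once).
        std::printf("GPU %d: %s  asyncEngineCount=%d\n",
                    i, p.name, p.asyncEngineCount);
    }
    return 0;
}
```

If a GeForce reports 1 there while a Tesla reports 2, that alone answers the first question before any strap or BIOS work begins.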

Is the effort worth it for this one feature alone, and how many people would really benefit from it?

The time and other resources involved might simply outweigh the purchase of Tesla cards that do this OOB.


If it's not worth the effort to do this mod, I'd rather spring for another Tyan server with eight double-wide PCIe x16 slots (~$6K) to house, initially, four GTX 680s and four GTX 580s.

...
P.S. if anyone can find that thread about covering certain CPU contacts to raise FSB I'd be quite grateful

Before you even call me out - I know: no sooner do I counsel against using brute force on CPUs than I describe a plan of action contemplating its use on a GPU card. I'll attribute that apparent inconsistency to my abhorrence of hobgoblins in micro(computer?) craniums [Ralph Waldo Emerson (1803–1882)], for neither am I a little statesman or philosopher or divine.
 
I'm sorry I've been absent this past week - I've been working long hours for a while now and barely have time for anything else.

Also, I'm in the process of building a new system (after giving up the real Mac route) and gathering all the bits and pieces took time, too. I've got almost everything but some small things like cooling fans, a few compression fittings and quick disconnects (I'm doing a water cooling setup - my first :) ).

Once that's done and over with, I'll have a standard platform that allows for more playing and testing than the Mac ever did (or will).

Perhaps I'll be able to squeeze some time in for the GPU Direct R&D as well. ;)
 
Thanks for the update

If you need any assistance with your new build, just ask. Among many others, RampageDev, PunkNugget and/or I will give you any assistance we can.
 
Before you even call me out - I know: no sooner do I counsel against using brute force on CPUs than I describe a plan of action contemplating its use on a GPU card. I'll attribute that apparent inconsistency to my abhorrence of hobgoblins in micro(computer?) craniums [Ralph Waldo Emerson (1803–1882)], for neither am I a little statesman or philosopher or divine.

Fair enough! But hey you know what us reckless kids are like these days... obviously reckless enough to ask for a more reliable system, then find ways to make it unreliable again ;)

I'm looking forward to opening up another PCIe slot if the onboard GbE ports are working :)
 
That penny pinching bad man intends to keep knocking at your door.

No kidding. If folks want to experience bankruptcy, just talk to Tutor about building a new system. He'll have you spending yourself silly. :) The Hack I might build at this point is already $6600US, and that doesn't include any RAM.

Mommy, make the bad man stop! ...

The bad man will very soon be 60, but he has no desire or plan to stop doing what he loves. While he may have you spending yourself silly, what he recommends that you buy should help you get what you tell him you need and provide you with the maximum performance for each Washington that you trade. Also, just remember that if your dictionary is the illustrated version, the bad man's picture should be next to the words "penny pincher," "cheap," etc., and that the bad man always tries to save himself and you money, just not at the expense of quality.
 
And in other news... THIS is what a pair of E5 2680 v2 CPUs will get you in GB3 in 32 bit mode.

The 2.8GHz speed is showing as 3.6GHz, meaning there have been some BIOS adjustments I think.



That mobo isn't very tweakable, nor are those CPUs. Their (Version2) performance is excellent, crushing that of Version1 2687Ws.

Here's what's going on with that 3.6 GHz figure: the Xeon E5-2680 v2 is a 10-core / 20-thread part with a 2.8 GHz base clock, a 3.6 GHz Turbo Boost maximum, 25 MB of cache and a 115W TDP, with HT and TBT [ http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon E5-2680 v2.html ]. That 3.6 GHz is the Turbo Boost maximum that Linux is reading. But even so, that's still an excellent score. Linux is the best OS under Geekbench for many-cored systems - just check the highest scores for Geekbench 2 and 3 and you'll see Linux as the OS. I'm in the minority because I believe that one Xeon E5-2680 v2 will be the top-end CPU offering (because of thermal/TDP concerns) in the 2013 Mac Pro. I believe that Apple will not offer a CPU with a TDP of 130W or greater. I doubt that the single-fan cylinder will meet Apple's expectations regarding noise level, thermals and performance if it houses such a chip.
 
that the bad man always tries to save himself and you money, just not at the expense of quality.

It's forever appreciated; I was, of course, teasing.

For what it's worth, we've bandied about the Gigabyte motherboard that seems to be the NextBigThing(tm), assuming support in OS X Mavericks. For folks that are considering this motherboard, pay close attention to its size. It's an EEB motherboard, meaning your standard ATX cases won't be large enough to fit it.

I'm somewhat partial to Cooler Master's cases, and their (expensive) Cosmos II will fit it. So will their Haf 932. I'm sure other manufacturers have cases that will support it.
 
It's still smaller than the SR-2, which is a good sign. I have the Lian-Li PC Z70 - which is about as unobtrusive as a rhinoceros in the study - but it provides plenty of space inside. When I eventually get the GA-7PESH3 going, I think I'll be looking for a rack system to free up some space on my desk.
 
Here's a great inexpensive case ($160 - SSI EEB, SSI CEB, Extended ATX, ATX, Micro ATX compatible) that I own [ http://www.newegg.com/Product/Product.aspx?Item=N82E16811163185 ]. It would house the $400 Tyan (S7025AGM2NR Dual LGA1366 Xeon/ Intel 5520/ SATA2/ A&V&2GbE/ SSI EEB Server Motherboard) that you bought. To me what makes it a great case is that it has you install the motherboard (and thus the PCIe end connectors) at the top of the case (the board is rotated 90 degrees), so the heat from the cards goes out of the top, making for a cooler system. In other words, your PCIe cards hang down, and there are two large fans at the bottom of the case to push the hot air out of the top and bring in cooler air from underneath the whole system. This will further enhance the Turbo Boost potential of your CPUs and the ability to safely tweak the GPU(s) for utmost performance. You also get ample storage, and the case is as unobtrusive as a gazelle on the far side of a rhino.
 