Here're some pics of the guts of Mr. Freeze(r). There are two uninstalled views (side and front). The top unit is an Alphacool NexXxoS Monstra Dual 120 mm Radiator; it'll be used as the first-stage chiller. The bottom unit consists of four ganged-together Aquacomputer Airplex Modularity System 240 Radiators - the main chillers. In the last pic (the installed view), the Monstra pre-chiller is on the top shelf of Mr. Freeze(r), hiding behind the four System 240 Radiators.

Fascinating looking at that setup and the video too, Tutor. I am a fan of less plumbing and less condensate build-up when it comes to cooling electronics, as you can probably imagine. The industry I am now even more involved in than I ever thought, barely less than a week ago, has MOSFETs to cool which dwarf even a cluster of your ingenious designs, but water cooling is out of the question due to their harsh locations on remote sites - mountaintops are inhospitable places!

But thank you for my regular brain top-up regardless :D
 
Hi Tutor, take a look at this: http://forum.netkas.org/index.php/topic,8565.0.html





Thanks RoastingPig,

With your information, that of h9826790, and mine, it appears safe to assume that a dual-processor 2009-2012 Mac Pro with dual 55xx or 56xx processors can use up to 128 GB of RAM, particularly if the user is running Mavericks or later.

Does anyone have any information regarding the maximum usable memory limits applicable to the 2006-2007 and 2008 Mac Pros? I know that my 2007 Mac Pros use up to 32 GB of RAM (that's the most that I could find at the time, which was a few years ago - even prior to Mavericks), and thus I believe that the same would apply to 2006 and 2008 Mac Pros. But does anyone have, or know of, such a system that has and uses more than 32 GB of RAM? Moreover, does anyone have any examples of 2006-2012 Mac Pros running a pre-Mavericks Mac OS that (1) in the case of the 2006-2008 Mac Pros, have and use more than 32 GB of RAM; (2) in the case of the 2009-2012 Mac Pros, have and use more than 64 GB of RAM for dual-processor systems; or (3) in the case of a 2009-2012 single-processor system, use more than 56 GB of RAM - and, if so, whether the single-processor system uses 55xx or 56xx Xeons rather than the W36xx Xeons?
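For anyone who'd like to report back, one quick way to see how much physical RAM OS X actually recognizes (as opposed to what's physically installed) is to ask sysctl. A minimal Python sketch, assuming a stock OS X install with the standard sysctl tool:

import subprocess

def recognized_ram_gb():
    # hw.memsize reports the physical memory (in bytes) that OS X recognizes
    out = subprocess.check_output(["sysctl", "-n", "hw.memsize"])
    return int(out.strip()) / float(1024 ** 3)

if __name__ == "__main__":
    print("OS X sees %.0f GB of RAM" % recognized_ram_gb())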
 

My goals are now to keep my GPUs and CPUs at about 2 degrees Centigrade when idle and under 10 degrees Centigrade under load, and to avoid condensation.

----------


Thanks for the reference.
 

Another goal would be less plumbing, fewer boxes, and less energy. What are the cost benefits of running at such low temperatures, or are you running high clocks?
 

Here're some of the benefits.* High temps reduce the longevity of Intel CPUs and Nvidia GPUs; conversely, cool-running GPUs and CPUs last longer. CPUs and GPUs run more stably when they're kept cool and thus aren't as prone to commit errors, so your compute results are more reliable. Modern CPUs and GPUs automatically reduce their speed when they reach certain temperatures - throttling (down-clocking) is triggered. CPUs and GPUs also run faster, without any user intervention, when their temperatures are kept low and a heavy load is applied. The turbo boost potential of Intel CPUs (particularly beginning with the Nehalem CPUs) is highly dependent on low CPU/core temperature. When the CPU is running cool, it enters turbo boost (an automatic speed-ramping) mode more frequently, reaches higher turbo boost stages (turbo boost has various stages, and more of them the more recent the CPU), more cores participate in turbo boost (heat prevents the maximum number of cores from participating), and the turbo boost lasts longer. For example, if you go to CPU-World's site and look at the information regarding the X5690 CPU [ http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon X5690 - AT80614005913AB (BX80614X5690).html ], you might notice the following:
"Frequency 3467 MHz
Turbo frequency 3733 MHz (1 or 2 cores)
3600 MHz (3 or more cores)". [Emphasis added.]

If you're curious, you might wonder why your X5690s consistently run below 3467 MHz. It's most likely load related; or, if you're running heavy loads, their environment could be too hot, leading to throttling. Moreover, how can you get more CPU cores to participate at those 3467 MHz or 3733 MHz turbo levels? The answer is very simple: if you want maximum core participation, keep those CPUs as cool as possible when they're fully loaded by a sufficiently threaded application. Why keep your GPUs as cool as possible? The same phenomenon applies to modern Nvidia GPUs. Thus, work gets completed faster.
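To make the turbo-bin point concrete, here's a small Python sketch of the X5690 table quoted above. The thermal cutoff used here is a made-up illustrative number, not an Intel spec - the real gating is done by the CPU's own power/thermal logic:

# Illustrative only: maps the X5690 turbo table quoted above to active-core count.
BASE_MHZ = 3467
TURBO_MHZ = {1: 3733, 2: 3733}   # 1 or 2 active cores
TURBO_MHZ_MANY = 3600            # 3 or more active cores

def best_case_clock(active_cores, package_temp_c, throttle_temp_c=80):
    # throttle_temp_c is an illustrative threshold, not an Intel figure.
    if package_temp_c >= throttle_temp_c:
        return BASE_MHZ           # too hot: assume no turbo headroom (simplified)
    return TURBO_MHZ.get(active_cores, TURBO_MHZ_MANY)

for cores in (1, 2, 4, 6):
    print("%d cores, cool: %d MHz" % (cores, best_case_clock(cores, 30)))
    print("%d cores, hot:  %d MHz" % (cores, best_case_clock(cores, 90)))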

So even putting aside the fact that Nvidia GPUs and Intel Nehalem, Westmere, and certain Haswell CPUs can be overclocked significantly (and Sandy Bridge and Ivy Bridge CPUs less so), if you're running a Mac where you can't overclock at all, you can still get better performance from your CPUs and GPUs by ensuring that their operating environment is kept as cool as possible.

Now for the overclocking icing on the cake. If you are running systems which allow for overclocking (even some Supermicro servers are being sold with this feature), the cooler your system is kept, the more you can overclock the CPUs. The same applies to GPUs. That the Titan Z (a dual-GPU card) is clocked about 200 MHz lower than the single-GPU Titans is due solely to heat and the attempt to keep power requirements at a certain level. Reduce the temperature of a Titan Z sufficiently and you'll get performance closer to that of two (faster-clocked) Titan Blacks, while expending a little less energy than is required to run two separate Titan Black cards.

After my mod to my first WolfpackAlphaCanisLupus (i.e., Tyan server) is completed and tested, I may post a more detailed cost/performance (benefit) analysis. But here's a summary: on a per-GPU-processor basis, I've spent less for my Titan Zs - including water-cooling them and, since I was going hydro anyway, water-cooling the CPUs in that system - than I would have spent for the same number of water-cooled Titan Black GPU processors without the water-cooling system that they need and without the benefit of having water-cooled CPUs. My average cost for an available single water-cooled Titan Z card plus water-cooling enclosure was about $300 more than the then-current price for an available water-cooled Titan Black. (Newegg currently has the Titan Black priced at $1,399.99 or, to be fair, $1,400, and the almost-always-out-of-stock non-water-cooled Titan Z at $1,499.99 or, to be fair, $1,500; you can buy EK hydro conversion kits for under $170 each at FrozenCPU, and that includes a double-slot conversion bracket.) The primary reason for that low price differential ($100) between the water-cooled Titan Black and the non-water-cooled Titan Z is the heat generated by having two GPU processors on the same card. The availability of all versions of the Titan Z has been much lower than that of the Titan Blacks, but both cards are experiencing low availability [for those into CUDA computing, Maxwell wasn't and hasn't been the silver bullet]. EVGA hasn't notified me of the availability of a water-cooled Titan Z for purchase since I began requesting notification of in-stock status in September of this year.

In sum, if I can get performance from one Titan Z equal to or very close to that of two Titan Black GPU processors, the money saved will have been very significant. Also, since 3D rendering software seats are sold per computer system, there's a significant savings there as well. Additionally, WolfpackAlphaCanisLupus holds at most eight double-wide GPU cards. Whereas the non-water-cooled Titan Z is a three-slot card, a water-cooled Titan Z is a two-slot card (the stock air-cooling needed to dissipate all that heat is what makes the non-water-cooled Titan Z so fat). Thus, I can fit up to eight water-cooled Titan Zs or eight water-cooled Titan Blacks in my system, but where each Titan Black card has only one GPU processor, the Titan Z has two. So, using Titan Zs, I hope to double the performance I'd get from an equivalent number of Titan Black cards. Thus, if Mr. Freeze(r) plus traditional water-cooling enables me to achieve the performance that I expect, I will have saved $$$s and will have one of the fastest all-in-one, single-computer rendering systems on planet Earth. To me - an old, cheap man - that's a benefit worth the costs and time involved. Moreover, since the mid-1980s, that's how I have rolled (my systems).

* Server rooms are usually kept cool for some or all of these reasons.
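For anyone weighing the same trade-off, here's the back-of-the-envelope math as a small Python sketch, using the list prices quoted above and the ~$170 EK kit figure (snapshot numbers, not my actual purchase prices, and certainly not current pricing):

# Back-of-the-envelope using the prices quoted above (snapshot figures).
TITAN_BLACK = 1400          # air-cooled Titan Black (Newegg price quoted above)
TITAN_Z     = 1500          # air-cooled Titan Z (when in stock)
EK_KIT      = 170           # approx. EK hydro conversion kit per card

black_wc_per_gpu = (TITAN_BLACK + EK_KIT) / 1.0   # 1 GPU processor per card
z_wc_per_gpu     = (TITAN_Z + EK_KIT) / 2.0       # 2 GPU processors per card

print("water-cooled Titan Black: $%.0f per GPU" % black_wc_per_gpu)
print("water-cooled Titan Z:     $%.0f per GPU" % z_wc_per_gpu)

# Slot budget: the Tyan holds at most eight double-wide cards once water-cooled.
SLOTS = 8
print("max GPUs with Titan Blacks: %d" % (SLOTS * 1))
print("max GPUs with Titan Zs:     %d" % (SLOTS * 2))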
 

So much to absorb!

I will briefly counterpoint, as a thermal paste junkie myself for 20 years: all my systems get the die plates and lids polished with Cape Cod cloths and chrome polish first if necessary, everything washed down with denatured isopropyl and cleaned thoroughly, and I use Gelid GC Extreme paste. I have had good results with cards too.

For my future X5690 dual board, where I will need to get the most out of SolidWorks, I think I will use the more exotic liquid-silver compounds and delid my chips. Until there is a Macintosh EFI utility written for us to adjust CPU and RAM timings, I think air cooling, fully optimised, will be sufficient for mine and for the majority of power users, barring notable exceptions.

Cards for CUDA, on the other hand - I cannot deny your arguments; I simply needed to grasp them. But it would be convenient to tax those clocks that little bit more, as you have more than enough headroom to do that without stressing the system. Surely there is a way to set that clock in the BIOS?

Though currently my single 4,1 heatsink is in need of a serious air-line dust extraction, as the can of compressed air is not moving all the crud and it's running about 8-10°C hotter than it should. But it's a matter of finding time to cart the board down to get that done and back again, and the time for truly taxing it, even with the 6-core, is about a week away.

How much would a 25-35% increase in efficiency save on energy costs for your long jobs?
 

See my last post again - I've thrown in more information about, among other things, costs, to give you a better idea of the cost/benefit. I haven't yet done any energy-efficiency analysis of this mod or of any of my other mods. I suspect that, on the whole, my mods result in more energy being used, but for significantly shorter periods of time, since the render jobs are completed significantly faster. In any event, I haven't seen any higher electrical bills (in fact, they've been a little lower), and that's what matters most to my thinking in terms of electrical efficiency. And if I cut render times by, say, 25-35%, everyone involved is happier, and that benefit is measurable in increased business.
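A quick way to sanity-check that: energy is power times time, so a rig that draws more watts but finishes the render sooner can still use fewer watt-hours. A small Python sketch with placeholder wattages (only the 25-35% time savings come from the discussion above; the watt figures are made up for illustration):

# Energy = power x time. Wattages are placeholders; time savings match the 25-35% above.
def energy_kwh(watts, hours):
    return watts * hours / 1000.0

baseline = energy_kwh(watts=900, hours=10.0)                    # hypothetical stock system
for saving in (0.25, 0.35):
    modded = energy_kwh(watts=1100, hours=10.0 * (1 - saving))  # draws more, finishes sooner
    print("%.0f%% faster: %.1f kWh vs %.1f kWh baseline" % (saving * 100, modded, baseline))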

All of the CPUs in 18 of my non-Mac systems are water-cooled, and I intend to water-cool the CPUs in the rest, including my three MacPro2,1s and my one MacPro4,1->5,1. I still have many GPUs to water-cool, however. That's what 2015 is for.

Certainly, getting all that one can from air-cooling is enough for a much larger segment of computer users. I suspect that the vast majority of users don't even know enough to care about the issue. I just happen to be a member of that small group of fanatics who always strive to maximize CPU/GPU performance and to get the most that I can from the money I spend/have spent.
 

Reading back, I mostly agree, especially with regard to the PC mobos. Though without an EFI tweak utility I am not so sure it is necessary when you can't overclock, so - shock horror - I am tempted to disagree: I think the cheesegrater plumbing can be better utilised in the hacks. I am fairly sure that the CPUs in the towers will not benefit even in single-core unless you can tweak those volts and multipliers; they already run very well with the TLC of careful pasting and die-plate polishing, which I do very successfully.

You've got thoughts of liquid nitrogen running round my head btw :D
 

Since the clock speeds of the CPUs in the Tyan cannot be directly tweaked up or down, I'm relying on the fact that once I water-cooled the CPUs in my self-built Windows systems, they too performed better and faster, even before I resorted to tweaking their clock speeds in their BIOSes. Water-cooling those CPUs also gave me more headroom for tweaking their clock speeds. Moreover, the biggest speed gain from any of my systems has come from underclocking the CPUs in my EVGA SR-2s and water-cooling them, because when they were put under heavy load, more cores were able to turbo boost higher, longer, and more frequently. Thus, I expect a performance increase from water-cooling the CPUs in my Tyans, and I would expect a similar increase from water-cooling the CPUs in my four Mac Pros.
 
Thanks for this... cut it out and it fit... you're the man :)


Have you ever run 3x 690s in them? Can it recognise the 6 GPUs?

I'm confounded. I have only one GTX 690, and when I ran it in my 2007 MP using Mavericks and Nvidia driver 310.40.25f01, only one of its two GPUs was recognized by OS X and OctaneRender (see first pic). When I ran one of my GTX 590s, however, both of its GPUs were recognized by OS X and OctaneRender (see 2nd pic). In fact, when I installed two GTX 590s, four GPUs were recognized (see 3rd pic). I don't have a third GTX 590 available now, so I can't test with a third GTX 590 at this time. Here're pics from my testing. Based on what I could test, I have to say that I don't know for sure whether your system would recognize 6 GPUs in 3 GTX 690s, but I doubt it.
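For anyone trying to pin down how many GPU processors OS X actually exposes, one quick cross-check (besides OctaneRender's own device list) is to count the chipset entries that system_profiler reports. A rough Python sketch, assuming a stock OS X install - whether each half of a dual-GPU card shows up as its own entry is exactly the driver behaviour in question:

import subprocess

# Count the graphics chipsets OS X reports (one entry per GPU it recognizes).
out = subprocess.check_output(["system_profiler", "SPDisplaysDataType"]).decode("utf-8", "replace")
chipsets = [line.strip() for line in out.splitlines() if "Chipset Model:" in line]
print("GPUs reported by OS X: %d" % len(chipsets))
for c in chipsets:
    print("  %s" % c)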
 

Attachments

  • GTX690OctaneRender.png
  • GTX590OctaneRender.png
  • 2xGTX590MP1.png
Tyan/Titan-Z Mod Update 2 - A Titan Z Won't Equal 2 Titan Blacks W/O Extra Cooling

I finally decided to fill this Tyan with GPUs. It'll have six GTX Titan Zs, one GTX Titan Black, and one GTX 780 6G. Thus, this system will contain 14 GPU processors for a total of 39,744 (2880 * 13 + 2304) CUDA cores. They'll all be water-cooled. My change in the configuration of this mod has necessitated ordering more parts; given their expected arrival date, that moves my projected completion date from December 24th to December 31st.

Since this Tyan GPU server currently behaves like a server in every respect - including, upon startup, sounding just like a jet plane because of the Tyan's three thick, large-diameter internal turbo fans - I'm thinking about sticking with that theme [but only as much as necessary to meet my thermal goals] by using my Sunon PMD1212PTB1-A(2).F.GN fans. I am awaiting my fan control boxes to bring them all under my total control.


Here are some of the specs for my Sunons:
...
Voltage - Rated 12VDC
Size / Dimensions: Square - 120mm L x 120mm H x 25mm W [There'll be nine of them on each MoRa rad, two of them inside Mr. Freeze(r) on the Alphacool NexXxoS Monstra Dual 120 mm Radiator and four of them inside Mr. Freeze(r) on the 4 ganged-together Aquacomputer Airplex Modularity System 240 Radiators - see the quick tally below]
Air Flow 150.0 CFM (4.24m³/min)
Static Pressure 0.620 inches [or 15.748 mm] H2O (154.4 Pa)
...
Noise 54 dB(A)
Power (Watts) 12.0W
RPM 4500 RPM
Termination 2 Wire Leads
Ingress Protection -
Weight 0.485 lb (219.99g)
Operating Temperature 14 ~ 158°F (-10 ~ 70°C)
Current Rating 1A
Voltage Range 6 ~ 13.8VDC
...
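Here's a quick tally of what that fan layout adds up to, assuming two Mo-RA3 radiators (mentioned later in this thread) at nine fans each, plus the two on the Monstra and four on the Airplex rads inside Mr. Freeze(r). The per-fan figures are from the spec list above, and the combined-noise line uses the textbook +10*log10(n) rule for identical sources, so treat it as a rough upper bound:

import math

# Per-fan specs from the Sunon sheet above; fan counts assume two Mo-RA3 rads at 9 fans each
# plus 2 on the Monstra and 4 on the Airplex 240s inside Mr. Freeze(r).
FAN_CFM, FAN_WATTS, FAN_DBA = 150.0, 12.0, 54.0
fans = 9 * 2 + 2 + 4

print("fans: %d" % fans)
print("total airflow: %.0f CFM" % (fans * FAN_CFM))
print("total fan power: %.0f W" % (fans * FAN_WATTS))
print("combined noise (idealized): %.1f dB(A)" % (FAN_DBA + 10 * math.log10(fans)))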


I decided to try one of my (as yet un-water-cooled) Titan Zs in the Tyan to see how it runs on air. Running OctaneRender 2.14, I was able to overclock that Titan Z by 250 MHz (705 MHz + 250 MHz = 955 MHz) and its RAM by 100 MHz (these weren't the limits) to achieve 72.94 Ms/sec average in DL and 24.58 average in PT. Both Coherent Ratio and Path Term were set to 1. Each of its GPUs was boosting on air at over 1350 MHz. I believe that the extended overclocking capability was due to the Tyan's three internal wind-tunnel fans at play. To better put this in context, the least expensive Titan Z was rendering the scene faster than either two GTX 780 Tis or two GTX Titan Blacks clocked at stock speeds. Moreover, when water-cooled, the Titan Z will occupy only the space that would be occupied by one 780 Ti or Titan Black, and it can then be overclocked even higher. Throughout both tests the Titan's temps ran between 52 and 54 degrees (C). So, I'd expect much lower temperatures once I water-cool them all, even after I've overclocked them much more.
 
Did anyone notice this dual socket 2011 launch from Asus?

http://www.asus.com/Commercial_Servers_Workstations/Z10PED8_WS/

I know the Z9PE got a bad rep - maybe this could be an improvement...

Now I'm not so up to date with the OS X-compatible chipsets - maybe someone could shed some light on the Intel C612?

It has an M.2 socket as well.

Could this be the return of the dual CPU hackintosh?

Supported? Not yet. Compatible? As of now, it's probably "doable." Haswell-E requires a kernel patch to run OS X right now, and even then there's no power management or FakeSMC support for HWSensors (just something the developer has to add in; it just hasn't been done yet). From what I can tell, I would not be surprised at all if the board were Hackintosh-able. However, Haswell-E users like myself are still waiting on that ever-so-elusive nMP refresh with the Socket 2011-3 Xeons so we can have native power management. Hackintosh hardware isn't nearly as finicky as it used to be. With modern bootloaders like Clover, almost any board will work for basic functionality if the embedded hardware (audio, Ethernet, etc.) has drivers. I, for one, would LOVE to see the return of dual-socket Hackintoshes. Give us that little extra boost over these trashcan, single-socket nMPs.
 
My Z600 as a Hackintosh

So last night I gave in to the nag that I needed to hackintosh my Z600. I can install Gentoo in 30 minutes flat and Arch in less than 20; I now have 9 hours wrapped up in getting that silly computer to boot OS X. On most PCs it's less than an hour's process to get OS X working; on this pre-built it's not. First, Unibeast would not boot because of USB, so I needed to use Niresh's Macpwn. Macpwn did not detect the hard drive after installing, so I had no option to boot from the drive to finish the install. By now it's after 11 and I go to bed while my work Mac makes a Unibeast stick. I get up early this morning and plug in the Unibeast stick, and it can see the HD that OS X is installed on, so I boot from it and, halfway through, black screen. Neither yes nor no with GraphicsEnabler worked, which is strange because the baby Quadro I used (FX1800) has worked OOTB on every hackintosh I've built in the last couple of years. Out of frustration I put in my FirePro V5800, which has never worked before, and it worked - with no desktop acceleration. I got in, got a boot loader and the few kexts I was sure I needed, and rebooted; everything worked with no Unibeast **Tooorrrr**. I downloaded and installed the Nvidia web driver, popped in the 750, rebooted, and everything is dandy, including sleep. When I get home this evening I'll install the sound driver and it should be good to go.
 
A Bitter Taste Can Be Most Memorable


Given my past horrible experiences with three dual-CPU Asus motherboards, I'll avoid their dual-CPU motherboards for the foreseeable future. If I were in the market for a dual-CPU E5-26xx V3 motherboard, I'd buy a Supermicro motherboard. Their DAX board [ http://www.supermicro.com/products/motherboard/Xeon/C600/X10DAX.cfm ] costs about $150 less [compare https://www.superbiiz.com/detail.php?name=MB-10PD8WS with https://www.superbiiz.com/detail.php?name=MB-X10DAX ]; Supermicro's board is a lot more tweakable [ http://www.supermicro.com/manuals/motherboard/C612/MNL-1563.pdf ]; and Supermicro has never disappointed me.
 
How Many GPUs Per System Is Too Many?

Here're the answers from Amfeltec [see my post #1166, above, entitled "GPU-Oriented PCIe Expansion Cluster"]:


1) First, unsolicited Amfeltec caution:

“Please note that the general purpose motherboard can supports maximum 7-8 GPUs.
If you are planning to have more than 7-8 GPUs on your mother board we recommend
to confirm with motherboard vendor regarding support this numbers of GPU.”

2) (A) My first query following caution:

“I forgot to ask you a question that I have regarding your cautionary statement about general purpose motherboards being able to support a maximum of 7-8 GPUs. Are you aware on any particular motherboard(s) capable of supporting up to 16 GPUs and, if so, who is/are the manufacturer(s) and what is/are the motherboard model(s) that you are aware of that can handle that many GPUs? Since I’m a 3d animator who uses GPUs for 3d rendering, that information is very important to my purchasing decisions.”


2) (B) Amfeltec’s response:

"The motherboard limitation is for all general purpose motherboards. Some vendors like ASUS supports maximum 7 GPUs, some can support 8.

All GPUs requesting IO space in the limited low 640K RAM. The motherboard BIOS allocated IO space first for the on motherboard peripheral and then the space that left can be allocated for GPUs.

To be able support 7-8 GPUs on the general purpose motherboard sometimes requested disable extra peripherals to free up more IO space for GPUs.

The server type motherboards like Super Micro (for example X9DRX+-F) can support 12-13 GPUs in dual CPU configuration. It is possible because Super Micro use on motherboard peripheral that doesn’t request IO space."

3) (A) My last relevant query:

“Regarding the motherboard GPU limitation for all general purpose motherboards, does it matter whether the graphics cards are dual GPU cards (like the GTX 590, 690 and Titan Z)? In other words, would five GTX 590s, 690s or Titan Zs (all dual GPU processor cards) be counted and thus be subject to the same limitations posed by ten GTX 570s, ten GTX 670s, or ten GTX Titan Blacks (all single GPU processor cards)?”

3) (B) Last relevant Amfeltec response:

“I think it will be no difference. You can check it by plug in each board (one CPU and dual CPU) to the motherboard and in Device Manager check the size of the IO space resources requested. I think it will be the same. Very important that GPU board requests just one interval or multiple intervals in one 4K page. The IO space is the legacy request for GPU (support VGA mode) and I think this will be the same for 1 or dual [G]PU [c]ar[d]s.”
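Amfeltec's limit is about legacy I/O-port space rather than RAM as such. On a Linux box you can eyeball the same thing they suggest checking in Device Manager by reading /proc/ioports. A rough Python sketch (Linux only, and the per-owner grouping is simplistic, since nested ranges are each counted under their own owner):

# Linux-only sketch: tally legacy I/O-port space per owner from /proc/ioports.
from collections import defaultdict

usage = defaultdict(int)
with open("/proc/ioports") as f:
    for line in f:
        rng, _, owner = line.partition(" : ")
        try:
            lo, hi = (int(x, 16) for x in rng.strip().split("-"))
        except ValueError:
            continue                          # skip anything that doesn't parse
        usage[owner.strip()] += hi - lo + 1   # size of the range, in ports

for owner, size in sorted(usage.items(), key=lambda kv: -kv[1])[:15]:
    print("%6d ports  %s" % (size, owner))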
 
No room for Monstra Radiator here - Tyan/Titan-Z Mod Update 2

Now there's room for soda pops. I'm being facetious, because they'd be hard to reach and I wouldn't want to risk their exploding or otherwise leaking. But there's no room for the Alphacool NexXxoS Monstra Dual 120 mm Radiator that was originally intended to go into the upper portion of the freezer. Now I'll store silica there to control ambient moisture. I underestimated the room that would be required by the piping for the coolers in the freezer. As a result, the Monstra will become a room-air-cooled unit, placed in-line between the two extremely large Mo-RA3 radiators.

Here are the main reasons why I chose to run the piping for the four Aquacomputer Airplex Modularity System 240 Radiators - now the only freezer-based chillers - through the door:

1) the door is easily replaceable;
2) because the door is easily replaceable, it's also one of the least expensive parts of the freezer for me to replace should I decide to use the freezer for its originally intended purpose;
3) that location (the door) is more convenient because of where the chillers sit and the orientation of the connectors on them, and when I open and close the door it's easy for me to control the placement and behavior of the piping and to get to the quick-disconnects to remove the chillers completely from the freezer;
4) getting the piping through the door posed no risk of damaging some important (though hidden) essential part of the freezer, because the door obviously functions only as a door (it's connected by hinges) and I could easily see that no essential electrical/cooling part ran through it; and
5) the portions of the door that I cut into are made of plastic, were easy to cut, didn't take long to modify, and made it easier to work the piping through the door.

Additionally, as I got farther into this project, I found that I had to build a home for the cooling components and that pushed everything else off schedule.

Here're some pics of Mr. Freeze(r) at various stages.

Next comes the wiring; then the final piping; then piping and wiring management/aggregation for more visual appeal; then, finally, the 24-hour no-leak pre-testing.
 

Attachments

  • IMG_0507.jpg
  • IMG_0508.jpg
  • IMG_0040.jpg
  • IMG_0041A.jpg
  • IMG_0038.jpg
Can one utilize 16 GPU cards if he/she has only one free PCIe slot?

Check out http://amfeltec.com/products/gpu-oriented-cluster/ .

Hey Tutor, I just bought one :)

I'm awaiting a new PSU, as my RM1000 won't cut it powering 4 x 580s... hopefully 5 cards will be recognised :)
One in the Mac and 4 on the stand; I'll let you know how I go.

Also, 2 negatives for anyone thinking about it:
1. It's not very interchangeable - the bottom 2 cards are locked in, and you have to disassemble the unit.
2. In the bottom slots you can only fit reference-thickness GPUs, so anything over the standard size won't fit in the bottom.
 

Sure, please do let me know how it goes. I have 5 x GTX 580s, but all 5 of mine are the Classified Ultra 3G models that have 2x8-pin + 1x6-pin power connectors [= 75 watts for the PCIe slot + 2x150 watts for the two 8-pin connectors + 75 watts for the 6-pin connector, for a grand total of 450 watts per card!]. Thus, I'd need 1800 watts of power for just four of them to run at their max TDP, not to mention needing even more power to fully overclock them.
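For anyone sizing a PSU for a similar stack, here's that same arithmetic as a tiny Python sketch; the numbers are the connector power budgets quoted above (worst-case headroom, not measured draw):

# Connector power budgets (worst case), per the figures above.
PCIE_SLOT, EIGHT_PIN, SIX_PIN = 75, 150, 75

def card_budget(n_8pin, n_6pin):
    return PCIE_SLOT + n_8pin * EIGHT_PIN + n_6pin * SIX_PIN

gtx580_classified = card_budget(n_8pin=2, n_6pin=1)        # 2x8-pin + 1x6-pin
print("per card: %d W" % gtx580_classified)                # 450 W
print("four on the splitter: %d W" % (4 * gtx580_classified))  # 1800 W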
 