For those with massive rendering needs, you might want to consider a GPU-accelerated supercomputer.

Ok so how do these nodes 'talk' to each other?? This is about 4 nodes acting together as one machine running under one OS right??? Or are they just 4 essentially separate machines housed in one enclosure??
 
Ok so how do these nodes 'talk' to each other?? This is about 4 nodes acting together as one machine running under one OS right??? Or are they just 4 separate machines housed in one enclosure??

It appears to be 4 dual Sandy/Ivy Bridge machines housed in one enclosure, but how they're connected is unclear. This is what the manual says:

Nodes
Each of the four serverboards acts as a separate node in the system. As independent nodes, each may be powered off and on without affecting the others. In addition, each node is a hot-swappable unit that may be removed from the rear of the chassis. The nodes are connected to the server backplane by means of an adapter card. Note: A guide pin is located between the upper and lower nodes on the inner chassis wall. This guide pin also acts as a “stop” when a node is fully installed. If too much force is used when inserting a node, this pin may break off. Take care to slowly slide a node in until you hear the “click” of the locking tab seating itself.
System Power
Four 1620 Watt power supplies are used to provide the power for all serverboards. Each serverboard, however, can be shut down independently of the others with the power button on its own control panel.
 
Checked out the Supermicro FatTwin you linked to. Whoa, nice gear! One word for that server setup: "PRO"
 
unless you have a FatWallet for externals.

hahaha :p

Ok so you can use, for example, Net Render in Cinema 4D to make use of all the CPUs, but your 12x GPUs won't operate as one unit, I guess.

Sorry still getting my head around it, each node seems like it would need its own OS and everything right?

----------

- - - - -
And to anyone who hadn't seen them drop - the Ivy Bridge Xeons are now available for order:

http://www.newegg.com/Product/Produ... bridge xeon&bop=And&Order=PRICED&PageSize=20

And you can configure your workstation with them over at Boxx:
http://config.boxxtech.com/products/cf_step2_or.asp?ModelInstanceID=1258
 
hahaha :p

Ok so you can use, for example, Net Render in Cinema 4D to make use of all the CPUs, but your 12x GPUs won't operate as one unit, I guess.

In applications like Cinema 4D with Octane Render that support CUDA, all 12 can operate at once. In fact, with my current CUDA rigs, I've had 8 Titans, 4 GTX 680s, 6 GTX 580s, 4 GTX 480s and a GTX 690 in 6 totally separate systems, but they are rendering different parts of the same job.

Sorry still getting my head around it, each node seems like it would need its own OS and everything right?

You have to supply the OS, HDDs, GPUs, CPUs and memory for each node.
 
In applications like Cinema 4D with Octane Render that support CUDA, all 12 would operate as one. In fact, with my current CUDA rigs, I've had 8 Titans, 4 GTX 680s, 6 GTX 580s, 4 GTX 480s and a GTX 690 in 6 totally separate systems operate as one.

Ahhh ok, I thought you couldn't scale GPUs across multiple machines??

Does it work with OS X? I have to try this with our machines here!

Still haven't bought Octane yet, hopefully I can at least do some CPU vs GPU comparisons with the demo.
 
Right - GTX GPUs across multiple machines aren't seen by each other.

Ahhh ok, I thought you couldn't scale GPUs across multiple machines??

Does it work with OS X? I have to try this with our machines here!

Still haven't bought Octane yet, hopefully I can at least do some CPU vs GPU comparisons with the demo.

Note that I edited my post to be more precise - I use all of the GPUs at once by allocating parts of the job to different systems so they are all working at the same time.

With the Octane Render Standalone Edition for Cinema 4D, you can buy licenses singly or in packs of 3 and 5 for Mac and Windows. I use Windows because it's faster with my systems: I can fully tweak the GPUs with EVGA's Precision X and Nvidia's control panel, which, to my knowledge, only work in Windows. The Octane Render plug-in (costs extra) allows Octane to work inside the host application itself. There's an edition for:
1) ArchiCAD,
2) Cinema 4D,
3) Lightwave,
4) Poser,
5) Rhino,
6) 3ds Max,
7) Blender,
8) Daz Studio,
9) Maya,
10) Revit,
11) Softimage,
with more on the way.

There are free, community-developed exporter scripts (.OBJ) for those who don't want to buy the plug-ins.
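For anyone wondering what those exporter scripts actually emit, here's a minimal illustrative sketch (plain Python with made-up vertex data, not any particular community script) that writes a single triangle in the Wavefront .OBJ text format the Octane standalone edition can load:

Code:
# Illustrative sketch only: write one triangle to a Wavefront .OBJ file,
# the kind of plain-text geometry the free exporter scripts hand to the
# Octane standalone edition. The vertex data here is made up.

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]  # .OBJ face indices are 1-based

with open("triangle.obj", "w") as obj:
    obj.write("# exported for Octane standalone\n")
    for x, y, z in vertices:
        obj.write(f"v {x} {y} {z}\n")  # one vertex per line
    for face in faces:
        obj.write("f " + " ".join(str(i) for i in face) + "\n")  # one face per line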
 
Note that I edited my post to be more precise - I use all of the GPUs at once by allocating parts of the job to different systems so they are all working at the same time.

Ok, gotcha, e.g.
Node 1 - frames 1-25/100
Node 2 - frames 26-50/100
Node 3 - frames 51-75/100
Node 4 - frames 76-100/100

I used to do this for After Effects renders when a client was breathing down my neck!!!
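A minimal sketch of that kind of even split, in plain Python (the node count and frame total are just the numbers from the example above; a real render manager such as Cinema 4D's Net Render does this bookkeeping for you):

Code:
# Minimal sketch: split a frame range evenly across render nodes.
def split_frames(total_frames, nodes):
    """Return one (first_frame, last_frame) chunk per node."""
    base, extra = divmod(total_frames, nodes)
    chunks, start = [], 1
    for i in range(nodes):
        count = base + (1 if i < extra else 0)  # spread any remainder
        chunks.append((start, start + count - 1))
        start += count
    return chunks

for node, (first, last) in enumerate(split_frames(100, 4), start=1):
    print(f"Node {node} - frames {first}-{last}/100")
# Node 1 - frames 1-25/100 ... Node 4 - frames 76-100/100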
 
I have those in my 3 Mac Pro 2,1s. You may have to upgrade the EFI first. My browser isn't finding the source now, but I believe that the instructions to do so are at: http://forum.netkas.org/index.php?action=dlattach;topic=1094.0;attach=888 You'll have to sign up to view the materials.



OK, thanks.
I clicked the link but it says the server is down? Any other links?

----------

They look good, go for it! You will need to flash your EFI from 1,1 to 2,1 to get it to accept the processors though.
- - - - -
edit
^^^^ Ah damn Tutor beat me to it haha!!!


No worries, Thanks Guys!

What's your take on the ATI Radeon HD 5770? I have the NVIDIA GeForce 7300 GT now

----------

They look good, go for it! You will need to flash your EFI from 1,1 to 2,1 to get it to accept the processors though.
- - - - -
edit
^^^^ Ah damn Tutor beat me to it haha!!!

Ok, bought 'em!

Will this work to update the firmware?

http://support.apple.com/downloads/Mac_Pro_EFI_Firmware_Update_1_2

Thanks again!
 
But don't forget ...

Ok, gotcha, e.g.
Node 1 - frames 1-25/100
Node 2 - frames 26-50/100
Node 3 - frames 51-75/100
Node 4 - frames 76-100/100

I used to do this for After Effects renders when a client was breathing down my neck!!!

Exactly. But don't forget that if you've been building your setups from GPUs that vary per system (or even within a system) and span many years of age, it may actually be better to split the render on a per-machine basis, and those differences might even counsel splitting on a per-scene basis, because different GPUs may differ in CUDA compute capability [ https://developer.nvidia.com/cuda-gpus ], speed and memory. For example:


GeForce Desktop Products
GPU Compute Capability
GeForce GTX TITAN 3.5
GeForce GTX 780 3.5
GeForce GTX 770 3.0
GeForce GTX 760 3.0
GeForce GTX 690 3.0
GeForce GTX 680 3.0
GeForce GTX 670 3.0
GeForce GTX 660 Ti 3.0
GeForce GTX 660 3.0
GeForce GTX 650 Ti BOOST 3.0
GeForce GTX 650 Ti 3.0
GeForce GTX 650 3.0
GeForce GTX 560 Ti 2.1
GeForce GTX 550 Ti 2.1
GeForce GTX 460 2.1
GeForce GTS 450 2.1
GeForce GTS 450* 2.1
GeForce GTX 590 2.0
GeForce GTX 580 2.0
GeForce GTX 570 2.0
GeForce GTX 480 2.0
GeForce GTX 470 2.0
GeForce GTX 465 2.0
GeForce GTX 295 1.3
GeForce GTX 285 1.3
GeForce GTX 285 for Mac 1.3
...
GeForce GT 640 (GDDR5) 3.5
GeForce GT 640 (GDDR3) 2.1

CUDA compute capability (and differences in GPU memory amounts) can affect what the renderer can actually perform:
Compute capabilities required for Octane features:

Compute Capability   Limitations
1.0                  Octane Render version 1.20+ not supported; with version 1.1 or lower, no PMC kernel or matte material (shadow capture)
1.1 - 1.3            no PMC kernel or matte material (shadow capture)
2.0 and 2.1          no limitations
3.0 and 3.5          no limitations


Texture count limitations:

Compute Capability   Texture Count
lower than 3.0       64 LDR (8-bit) RGBA, 32 LDR (8-bit) greyscale, 4 HDR (16-bit) RGBA and 4 HDR (16-bit) greyscale
3.0 or higher        144 LDR (8-bit) RGBA, 68 LDR (8-bit) greyscale, 10 HDR (16-bit) RGBA and 10 HDR (16-bit) greyscale

[ http://render.otoy.com/faqs.php ]. Moreover, in the case of my WolfPackPrime0 CUDA rig, the GTX 480 (1.5 GB) limits the RAM usable by the GTX 690 (2x2 GB).

Moreover, as I mentioned in an earlier post, your systems can be rendering on the CPUs separately from the rendering taking place on the GPUs, so you should take into account not only differences between and within systems, but also the rendering needs and variances within a project, possibly viewing the allocation in a completely different light based on all relevant factors.
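As a rough illustration of that kind of weighting, here's a sketch that splits a frame range in proportion to each machine's relative speed and skips nodes that fall below a compute-capability or VRAM floor. The node names, speed factors and thresholds are invented for illustration; real values would come from benchmarking your own rigs and from the Otoy tables above:

Code:
# Rough sketch: allocate frames to heterogeneous nodes in proportion to a
# benchmarked relative speed, skipping nodes whose compute capability or
# VRAM can't handle the scene. All node specs below are invented.
nodes = [
    # name,        rel. speed, compute capability, VRAM (GB)
    ("titan-rig",   8.0,        (3, 5),             6.0),
    ("gtx580-rig",  3.0,        (2, 0),             1.5),
    ("gtx480-rig",  2.0,        (2, 0),             1.5),
]

def allocate(total_frames, nodes, min_cc=(2, 0), min_vram_gb=1.0):
    usable = [n for n in nodes if n[2] >= min_cc and n[3] >= min_vram_gb]
    total_speed = sum(speed for _, speed, _, _ in usable)
    plan, start = [], 1
    for i, (name, speed, _cc, _vram) in enumerate(usable):
        if i == len(usable) - 1:
            count = total_frames - start + 1  # last node takes the rest
        else:
            count = round(total_frames * speed / total_speed)
        plan.append((name, start, start + count - 1))
        start += count
    return plan

for name, first, last in allocate(100, nodes):
    print(f"{name}: frames {first}-{last}")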
 
OK, thanks.
I clicked the link but it says the server is down? Any other links?

----------




No worries, Thanks Guys!

What's your take on the ATI Radeon HD 5770? I have the NVIDIA GeForce 7300 GT now

----------



Ok, bought 'em!

Will this work to update the firmware?

http://support.apple.com/downloads/Mac_Pro_EFI_Firmware_Update_1_2

Thanks again!

Follow the instructions here: http://forum.netkas.org/index.php/topic,1094.0.html. The URL that I referenced earlier triggers the download of the Mac Pro 2006-2007 Firmware Tool.

----------

Ok gotchya, e.g. ...
See additional thoughts in post no. 716, above.
 
And to anyone who hadn't seen them drop - the Ivy Bridge Xeons are now available for order

I put them all into a "compare" chart on Intel's site here.

The higher-end 8-core chips (2687, 2667) look like they might be interesting because they're apparently giving each core 3.125 MB of cache vs the usual 2.5 MB. I wonder how much of a difference that'll make.

I'm thinking my Hack rig will include the aforementioned Gigabyte dual-chip motherboard along with 2 of the 10-core 2690s. It won't be the fastest available from a pure MHz perspective, but I'm sure it'll cook right along.

Now to wait for Mavericks to drop...
 
Following tradition, Apple wants the new Mac Pro to be cool and quiet.

I put them all into a "compare" chart on Intel's site here.

The higher-end 8-core chips (2687, 2667) look like they might be interesting because they're apparently giving each core 3.125 MB of cache vs the usual 2.5 MB. I wonder how much of a difference that'll make.

I'm thinking my Hack rig will include the aforementioned Gigabyte dual-chip motherboard along with 2 of the 10-core 2690s. It won't be the fastest available from a pure MHz perspective, but I'm sure it'll cook right along.

Now to wait for Mavericks to drop...

That system may be over twice as fast as the new MP. I'm open to Apple convincing me otherwise, but I believe that their top 12-core CPU will likely be the Xeon E5-2695 v2 (12 cores/24 threads, 2.4 GHz, 30 MB cache) because of its 115 W TDP. Anything higher may ride the throttle.
 
but I believe that their top 12-core CPU will likely be the Xeon E5-2695 v2 (12 cores/24 threads, 2.4 GHz, 30 MB cache) because of its 115 W TDP. Anything higher may ride the throttle.

I guess we'll have to wait and see how efficient a chimney the new Mac Pro chassis can be, and whether the heat sink and fan are able to keep the chip cool enough without sounding like a turbine engine. Is that extra 15W going to make enough of a difference?

At least we know the highest-performance Mac Pro will cost more than $2,300, given the 2695's price. Bearing in mind, of course, I'm not even remotely considering the price of the GPUs and whatnot.
 
Although regular GTX cards won't communicate across systems, Teslas will. GTX cards hacked into Teslas may - see post no. 615, above.

Just modifying the cards would not work, but "massaging" the drivers to comply might... ;)

Unless the hardware bits are fused for these features, in which case it will not matter what is done to the software, as there's no way it could be made to work.
 
I'm thinking my Hack rig will include the aforementioned Gigabyte dual-chip motherboard along with 2 of the 10-core 2690s.

Exactly what I was thinking too... good balance between total cores and clock speed. Especially with two CPUs, there are plenty of cores vs the MP tube!

I hope we get native USB 3 and/or ethernet... keep those PCIe slots free ;)

See additional thoughts in post no. 716, above.

Thanks for your insights... shows there's more to fast rendering than just pressing the 'go' button - proper planning can also save you time :)
 
Just modifying the cards would not work, but "massaging" the drivers to comply might... ;)

Unless the hardware bits are fused for these features, in which case it will not matter what is done to the software, as there's no way it could be made to work.

The Tesla/Quadro drivers rely on the video card's ID straps for activation. Modifying the GTX card's ID resistors correctly changes the ID to that of the equivalent Tesla/Quadro card. That tricks the drivers into waking Tesla/Quadro functionality, if it is present but dormant, on the GTX card. There are examples of this hack working on Gnif's thread [ http://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/ ] for other GTX cards such as the GTX 680.
 
The Tesla/Quadro drivers rely on the video card's ID straps for activation. Modifying the GTX card's ID resistors correctly changes the ID to that of the equivalent Tesla/Quadro card. That tricks the drivers into waking Tesla/Quadro functionality, if it is present but dormant, on the GTX card. There are examples of this hack working on Gnif's thread [ http://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/ ] for other GTX cards such as the GTX 680.

Thanks, I'm fully aware of Gnif's work...

The cards do appear to be their pro counterparts at first glance, but the pro performance is not there.

No amount of swapping resistors (hard-straps) or modifying bits in the BIOS (soft-straps) makes a difference. The drivers would need to be "adjusted" as well.

But, even after all this, the actual pro features could be permanently fused away in the silicon (for GTX 5xx, 6xx, 7xx), so all of the modding would've been futile.
 
In detail, how do I change the hard/soft straps on a Titan to a Tesla K20 ID?

Thanks, I'm fully aware of Gnif's work...

The cards do appear to be their pro counterparts at first glance, but the pro performance is not there.

No amount of swapping resistors (hard-straps) or modifying bits in the BIOS (soft-straps) makes a difference. The drivers would need to be "adjusted" as well.

But, even after all this, the actual pro features could be permanently fused away in the silicon (for GTX 5xx, 6xx, 7xx), so all of the modding would've been futile.

I'm interested in accessing only one Tesla feature, i.e., RDMA for GPUDirect. A Tesla card can communicate directly with another Tesla card in another system via RDMA for GPUDirect. Factory GTX Titan cards do not support RDMA for GPUDirect. I don't have firsthand knowledge that changing the resistors will enable RDMA for GPUDirect because I haven't yet modified the resistors or the BIOS on any of my GTX cards. That's why I said, "Although regular GTX cards won't communicate across systems, Teslas will. GTX cards hacked into Teslas may - see post no. 615, above." If you have firsthand knowledge (based on actual experience), or know of someone who does, that changing the Titan's ID and/or BIOS to that of a Tesla K20 will not, by itself, enable RDMA for GPUDirect, then you may have (or may have access to) information that I need. Please let me know in detail how to properly change the ID and/or BIOS of the Titan to that of a Tesla K20. Your assistance will be greatly appreciated. I'll take it from there and take full responsibility for the risks involved in modifying my Titan cards.
 