My current dual core MP gives me 16 threads of bucket rendering in c4d. The 2010 MP gives my physicist friend 24 c4d threads. So 36 (2cpus x 8core x 2) in the next mac pro is a slam dunk -- or could easily be configured in a DIY or Boxx type purchase.

Budgets:
-The Nvidia Maximus config will run around $3,800
-Dual Xeon $2,700-$5k
-Case, power, cooling, RAM, etc... $2,500-$4,000
-Storage (I already have 3 SSDs and a 12 gig RAID)
Total: $9k-$12k
They make a 9 core ? :confused:
I think you mean 32.

Oh and I think you guys mean TB when you say gig. Nobody has had a 12 gig RAID since the '90s.
 
nVidia and C4D

If I remember correctly from my work with C4D, it doesn't use any nVidia GPU in its calculations, and neither does V-Ray for it. Wouldn't that just be a waste of money?

Huh? $40,000!?!?!?!? You are talking like a crazy man. Such a system wouldn't cost me nearly that much.

My current dual core MP gives me 16 threads of bucket rendering in c4d. The 2010 MP gives my physicist friend 24 c4d threads. So 36 (2cpus x 8core x 2) in the next mac pro is a slam dunk -- or could easily be configured in a DIY or Boxx type purchase.

Budgets:
-The Nvidia Maximus config will run around $3,800
-Dual Xeon $2,700-$5k
-Case, power, cooling, RAM, etc... $2,500-$4,000
-Storage (I already have 3 SSDs and a 12 gig RAID)
Total: $9k-$12k

My plan...if I went w/someone like Boxx, would be to get everything sans the Tesla...and then add that late summer.
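For what it's worth, the thread math the posts above are trading back and forth is easy to sanity-check with a toy sketch (assuming Hyper-Threading gives two hardware threads per core, which is why a dual 8-core box lands at 32 rather than 36):

```python
# Toy sketch of the render-thread arithmetic from the posts above.
# Assumes Hyper-Threading doubles the thread count per physical core.
def render_threads(cpus, cores_per_cpu, threads_per_core=2):
    """Total simultaneous render buckets a box can host."""
    return cpus * cores_per_cpu * threads_per_core

print(render_threads(2, 8))  # dual 8-core Xeon with HT -> 32 threads
print(render_threads(2, 6))  # 2010 MP style dual 6-core -> 24 threads
```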
 
If I remember correctly from my work with C4D, it doesn't use any nVidia GPU in its calculations, and neither does V-Ray for it. Wouldn't that just be a waste of money?
V-RAY RT supports proxies in CUDA and OpenCL both. I saw it working almost two years ago already and here's the roadmap blurb for the next revision:

http://www.vrayforc4d.net/portal/development-roadmap said:
- V-Ray RT GPU rendering: this works inside c4d, and supports CPU and GPU rendering via CUDA and OpenCL. It also additionally supports distributed rendering, so all GPUs on the network (or CPUs) can be used for the RT engine rendering. This needs the vrscene system we work on for 1.5; the V-Ray RT feature itself will be in the update VRAYforC4D 2.0. The RT engine we implement has proxy support, motion blur, displacement, IES lights, dome light, render elements (multipass), and AA filters (same as V-Ray 3.0 will have).
 
V-RAY RT supports proxies in CUDA and OpenCL both. I saw it working almost two years ago already and here's the roadmap blurb for the next revision:

I wasn't under the impression that RT was designed for production renders. NVidia seemed to be testing such a thing with iray over the last couple years, and then of course there are smaller ones like Octane. We're starting to see gpus with significant amounts of memory, so I'm wondering how long before the feature list grows on gpu driven functions. It seems like we're at a point where you could start to derive better performance at least on smaller jobs that don't require 10+ GB of ram just to run.
 
Of course not. "RT" signifies "Real Time" and is used for previews. I think that's about all anything does with GPUs. Some GL renderers will allow you to save frames and clips too tho.
 
They make a 9 core ? :confused:
I think you mean 32. Oh and I think you guys mean TB when you say gig. Nobody has had a 12 gig RAID since the '90s.

Yes...12 TB...I don't know where you read anything about 9 core. Not in my posts.

----------

I wasn't under the impression that RT was designed for production renders. ...

Of course not. "RT" signifies "Real Time" and is used for previews. I think that's about all anything does with GPUs. Some GL renderers will allow you to save frames and clips too tho.

Things have changed a lot since you left the business, Tesselator. calaverasgrande...your impression is incorrect. There are several GPU-only renderers on the market. Octane is one. GPU absolutely can be and is used for final production renders in some VRAY scenarios.

Proof?
http://www.workshop.mintviz.com/features/reviews/vray-rt-2-0-production-renderer/

The CPU is still preferred in some VRAY scenarios such as architectural renderings.
 
If I remember correctly from my work with C4D, it doesn't use any nVidia GPU in its calculations, and neither does V-Ray for it. Wouldn't that just be a waste of money?

The AR and Physical renderers in c4d do *not* use the GPU. VrayC4D 1.2 did not use it either. Things change with 1.5...and the 2.0 roadmap.

1.5 introduces the ability to pool any CPUs on the network to render a single frame. So you could have hundreds of buckets tackling a single preview or final frame. AND if you are willing to bounce out to the VRAY standalone...you can use the GPU in 1.5.

This video demonstrates the use of both developments, which incidentally are already enjoyed by Max and Maya VRAY users.

http://vimeo.com/55445420
Please note...GPU rendering is not discussed until 5:30 into the video.

And don't miss where Stefan mentions...v2 will tap into ALL GPUs on the network!
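The distributed bucket idea described above is easy to picture with a small illustrative sketch. This is purely a toy (the `make_buckets` and `assign` helpers are made up for illustration, not V-Ray's actual API): one frame is cut into fixed-size buckets, and the buckets are dealt round-robin to networked render nodes.

```python
# Illustrative sketch of distributed bucket rendering: split one frame
# into tiles ("buckets") and assign them round-robin to network nodes.
def make_buckets(width, height, size):
    """Return (x, y, w, h) tiles covering a width x height frame."""
    return [(x, y, min(size, width - x), min(size, height - y))
            for y in range(0, height, size)
            for x in range(0, width, size)]

def assign(buckets, nodes):
    """Deal buckets round-robin across the named render nodes."""
    plan = {n: [] for n in nodes}
    for i, b in enumerate(buckets):
        plan[nodes[i % len(nodes)]].append(b)
    return plan

buckets = make_buckets(1920, 1080, 64)   # a 1080p frame, 64px buckets
plan = assign(buckets, ["node-a", "node-b", "node-c"])
print(len(buckets))                      # total buckets to render
```

The real win, as the post notes, is that every CPU (or GPU) on the network can chew on buckets of the *same* frame instead of each node waiting for a whole frame of its own.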
 
Things have changed a lot since you left the business, Tesselator. calaverasgrande...your impression is incorrect. There are several GPU-only renderers on the market. Octane is one. GPU absolutely can be and is used for final production renders in some VRAY scenarios.

Proof?
http://www.workshop.mintviz.com/features/reviews/vray-rt-2-0-production-renderer/

The CPU is still preferred in some VRAY scenarios such as architectural renderings.

What are some of the others? Both V-Ray RT and Octane are extremely incomplete! Using Octane will make your scenes, surfaces, lighting, etc. incompatible with any system not running Octane. You pretty much have to remake all your projects just especially for it. If you create for Octane and you want to use a real CPU renderer then you have to remake everything that way too.

But try not to make me feel too obsolete... :) Octane has only been out for about a month (on my app of choice) and it's still not ready for prime time. Lots and lots of troubles with it. Although I suppose most of them have work-arounds like so much in the CG industry. Juan or Ahmet will tell you the same thing if you ask them. :)

Also Octane currently only runs under Windows right?

Here's something I recently read that seems to relate:
http://www.3dworldmag.com/2011/01/07/pros-and-cons-of-gpu-accelerated-rendering/
 
OK, well, I'm sure you know best. And you're kinda being a little smarty-pants. So I'll just leave you to it.

Enjoy yourself.
 
OK, well, I'm sure you know best. And you're kinda being a little smarty-pants. So I'll just leave you to it.

Enjoy yourself.

Tesselator,

I posted last night after having a few too many drinks. I apologize for my post. But I will say that I found it a little arrogant of you when earlier in the thread you said I "wasn't asking the right questions" and then with utmost certainty dismissed what I was saying in regard to rendering with GPUs.

Nevertheless I had no reason to post what I did. Again...sorry.
 
Tesselator,

I posted last night after having a few too many drinks. I apologize for my post. But I will say that I found it a little arrogant of you when earlier in the thread you said I "wasn't asking the right questions" and then with utmost certainty dismissed what I was saying in regard to rendering with GPUs.

Nevertheless I had no reason to post what I did. Again...sorry.

No problem. I say silly stuff when drinking too - tho usually much worse. :)

I didn't think I was arrogant when I said: "From my experience the conclusion based on your reply to me, is that you're asking all the wrong questions." though. All I meant was that the entire industry has determined and proven that using a rendering farm type of configuration is typically much more effective than trying to stuff everything into one system. And your question(s) were focused on trying to achieve the latter. Thus: "Asking the wrong questions", "Looking in the wrong places", and other such phrases seemed to apply nicely.

And you misunderstood me about the GPU stuff I think. I wasn't dismissing what you said. I was contributing what I know and asking you for more details. What I know is that GPU rendering can look real pretty but in too many cases you lose a lot of features. Sometimes instancing doesn't work right (for example), other times it's other stuff - various shaders, hundreds of plug-ins, and so on. Some apps fare better than others but all have some features that don't work with renderers like Octane. Additionally it's not real great for collaborative projects (as so many of mine have been) because the output is different and doesn't match up with the CPU (final) renders. On top of that Octane specifically, isn't even available for OS X (AFAIK).

My request for more detail was an attempt to pick your brain based on your experiences with both VRRT and Oct and to find out what else currently exists or that you have used.

But typical me - I don't say stuff the right way. I can see how you might have thought I was being dismissive.
 
Of course not. "RT" signifies "Real Time" and is used for previews. I think that's about all anything does with GPUs. Some GL renderers will allow you to save frames and clips too tho.

I knew what it stood for, but I don't keep up with their development. I don't know whether they have any plans to further propagate the use of gpu based rendering. NVidia had a project called iray a while ago that could be run off gpus. As your article states it's often an issue of available memory.




What are some of the others? Both V-Ray RT and Octane are extremely incomplete! Using Octane will make your scenes, surfaces, lighting, etc. incompatible with any system not running Octane. You pretty much have to remake all your projects just especially for it. If you create for Octane and you want to use a real CPU renderer then you have to remake everything that way too.

But try not to make me feel too obsolete... :) Octane has only been out for about a month (on my app of choice) and it's still not ready for prime time. Lots and lots of troubles with it. Although I suppose most of them have work-arounds like so much in the CG industry. Juan or Ahmet will tell you the same thing if you ask them. :)

Also Octane currently only runs under Windows right?

I haven't messed with Octane. What else is it missing?
 
Tesselator, if I understand your viewpoint, your notion is to throw a horde of budget-conscious headless CPUs at the problem. I can see the logic in that approach.

But let's compare other approaches...

For contrast...A poster on one of the c4d sites swears that the *only* professional approach is to use a commercial render farm. He says it's just part of the price in doing business and in his world...the need to wait really ceases to exist.

My approach is dictated by:
-My budget
-A confessed proclivity to want control and local computation
-A desire to steer clear of DIY tinkering with hardware
-Electrical and heat considerations

And more germane to the discussion:
-My belief that the dawn of GPU rendering has come.


I am perfectly in agreement that CPU power is still an unavoidable aspect...and will always be. But after my next purchase I'll have around 75 network CPU threads to throw at my renders.

In VRAY I have a solution that will pool all my network CPUs even on a single frame render. And VRAY CRAVES CUDA, so that's a big part of the equation for me.

Here are some crazy examples of what is now possible with VRAY and the GPU:
http://www.youtube.com/watch?v=85t9C3LVE7w
http://www.youtube.com/watch?v=s7niAKeAVxY
http://www.youtube.com/watch?v=AigQrByqkbA
 
I haven't messed with Octane. What else is it missing?

It depends which application, which application version. Also what plug-ins you've come to depend on too. For the app I most commonly use the short list of "limitations" reads as follows:
  • Octane currently doesn't support complex polygons or polygons with holes. To avoid artifacts the user must work only with 3 or 4 vertex polygons.
  • Octane is a GPU render engine. It can't use any kind of CPU shaders, materials, textures or any other kind of nodes available inside Lightwave. You must always use the Octane nodes to shade the objects. Lightwave material parameters like "T" texture layers or Lightwave nodes are not supported by the plugin. Only a few basic material parameters (color, diffuse, specular or transparency) are supported by Octane.
  • Lightwave native lights don't work with Octane. Users must use only the Octane light to work with Octane.
  • Octane has its own color space functions and tone mapping. To work with Octane the user must always disable the Lightwave color space (set all values to Linear).
  • Lightwave color space functions are not supported by the plugin.
  • Currently Octane doesn't support ray visibility options, so all visibility options of Lightwave don't work. Features like all "unseen by" or cast or receive shadows are not supported.
  • Octane for Lightwave currently can't render particle systems. It also can't work with 1 or 2 vertex polygons.
  • FiberFX is not supported.
  • The current version of Octane doesn't support Motion Blur.
  • Lightwave Image Editor options are not supported by the plugin. Octane reads the maps from the files itself, and can't process the Image Editor options.
  • Limited region rendering is not supported by Octane.
  • This first version of the plugin doesn't support Octane Live materials within Lightwave.
  • Octane doesn't support multiple UV maps. The user must be sure that each vertex doesn't have more than one UV map. To avoid problems it's best to work with only one UV map in each object layer.
  • It is also highly recommended that you have your Windows display adapter set to a non-Octane rendering graphics card (i.e. your on-board graphics card, or a second graphics card).
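Given the 3/4-vertex polygon restriction in the list above, a pre-flight mesh check is trivial to sketch. This is a hypothetical helper (not part of Octane or Lightwave) that flags any polygon the engine would reject:

```python
# Hypothetical pre-flight check for the "3 or 4 vertex polygons only"
# limitation quoted above: flag polys Octane would reject.
def octane_incompatible(polygons):
    """polygons: list of vertex-index tuples; returns offending indices."""
    return [i for i, poly in enumerate(polygons)
            if len(poly) not in (3, 4)]

mesh = [(0, 1, 2),          # triangle - fine
        (2, 3, 4, 5),       # quad - fine
        (0, 5),             # 2-point poly - rejected
        (6, 7, 8, 9, 10)]   # n-gon - rejected
print(octane_incompatible(mesh))  # -> [2, 3]
```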



Tesselator, if I understand your viewpoint, your notion is to throw a horde of budget-conscious headless CPUs at the problem. I can see the logic in that approach.

But let's compare other approaches...

For contrast...A poster on one of the c4d sites swears that the *only* professional approach is to use a commercial render farm. He says it's just part of the price in doing business and in his world...the need to wait really ceases to exist.

That sounds close to accurate. Except there is no such thing as a "commercial render farm". :) It's just whatever you decide to configure. I guess unless we all start calling machine configurations after the tasks they're configured to perform? Like "A commercial PhotoShop platform"? :)

My approach is dictated by:
  • -My budget
  • -A confessed proclivity to want control and local computation
  • -A desire to steer clear of DIY tinkering with hardware
  • -Electrical and heat considerations

  • Budget I covered earlier.
  • Control is all done from the users workstation. Typically a single window does everything. For example here's some screens of SquidNet:

    new-ui.png

    Some render controllers allow both multiple nodes per frame
    and the usual split sequence ranges across the various nodes.
    Several even supply a remote desktop like UI since nodes mostly run headless.

  • There's nothing to tinker with really. Just buy the box or blade, connect the ethernet cables, install the software, and off ya go.
  • That could indeed be an issue actually. But then again, since it's all remote control over ethernet you could just put the stack in your garage or something. They can all be put to sleep or woken up remotely as well so after the installation there's no need to have the machines anywhere near your workstation room.
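The remote sleep/wake point above usually comes down to standard Wake-on-LAN: a "magic packet" of six 0xFF bytes followed by the target's MAC address repeated 16 times, broadcast over UDP. A minimal sketch (the MAC below is a placeholder, and the nodes' NICs need WoL enabled in firmware):

```python
# Minimal Wake-on-LAN sketch for waking headless render nodes.
# Magic packet = 6 bytes of 0xFF + the node's MAC repeated 16 times.
import socket

def magic_packet(mac):
    """Build the 102-byte WoL payload for a MAC like '00:11:22:33:44:55'."""
    return bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)

def wake(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("00:11:22:33:44:55")  # placeholder MAC for a render node
```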

And more germane to the discussion:
-My belief that the dawn of GPU rendering has come.


I am perfectly in agreement that CPU power is still an unavoidable aspect...and will always be. But after my next purchase I'll have around 75 network CPU threads to throw at my renders.

In VRAY I have a solution that will pool all my network CPUs even on a single frame render. And VRAY CRAVES CUDA, so that's a big part of the equation for me.

Here are some crazy examples of what is now possible with VRAY and the GPU:
http://www.youtube.com/watch?v=85t9C3LVE7w
http://www.youtube.com/watch?v=s7niAKeAVxY
http://www.youtube.com/watch?v=AigQrByqkbA

Yeah, I know. We've had a system like that in Lightwave for about 7 or 8 years now. Here's the old one which hasn't been updated in 4 years to get some idea:
http://www.worley.com/E/Products/fprime/videos.html
Real time previewers kick total ass bro! That particular one uses only CPU tho. ;) I think actually Lightwave was the first to do GI and almost all native engine features in a RT system like that - but it didn't take long for it to popularize and now many engines offer this type of component. Five or six weeks ago I posted some usage vids of the VR RT system in a thread somewhere here at MR myself. I like V-Ray a lot! :)
 
It depends which application, which application version. Also what plug-ins you've come to depend on too. For the app I most commonly use the short list of "limitations" reads as follows:
  • Octane currently doesn't support complex polygons or polygons with holes. To avoid artifacts the user must work only with 3 or 4 vertex polygons.
  • Octane is a GPU render engine. It can't use any kind of CPU shaders, materials, textures or any other kind of nodes available inside Lightwave. You must always use the Octane nodes to shade the objects. Lightwave material parameters like "T" texture layers or Lightwave nodes are not supported by the plugin. Only a few basic material parameters (color, diffuse, specular or transparency) are supported by Octane.
  • Lightwave native lights don't work with Octane. Users must use only the Octane light to work with Octane.
  • Octane has its own color space functions and tone mapping. To work with Octane the user must always disable the Lightwave color space (set all values to Linear).
  • Lightwave color space functions are not supported by the plugin.
  • Currently Octane doesn't support ray visibility options, so all visibility options of Lightwave don't work. Features like all "unseen by" or cast or receive shadows are not supported.
  • Octane for Lightwave currently can't render particle systems. It also can't work with 1 or 2 vertex polygons.
  • FiberFX is not supported.
  • The current version of Octane doesn't support Motion Blur.
  • Lightwave Image Editor options are not supported by the plugin. Octane reads the maps from the files itself, and can't process the Image Editor options.
  • Limited region rendering is not supported by Octane.
  • This first version of the plugin doesn't support Octane Live materials within Lightwave.
  • Octane doesn't support multiple UV maps. The user must be sure that each vertex doesn't have more than one UV map. To avoid problems it's best to work with only one UV map in each object layer.
  • It is also highly recommended that you have your Windows display adapter set to a non-Octane rendering graphics card (i.e. your on-board graphics card, or a second graphics card).

So no pelting objects. You'll need a vector pass of some sort to apply 2D motion blur in post instead. When it says holes, I'm not sure what it considers a hole. If you delete an internal face many applications will view those edges as border edges, and booleans aren't exactly reliable unless they're rebuilt. Losing light linking would hurt. It sounds like it would be difficult to use in animation. You could probably get away with stills depending on what is really available in terms of Octane lights.
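To make the vector-pass idea concrete: in post, each output pixel is smeared by averaging samples back along its per-pixel motion vector from the velocity pass. A toy single-channel sketch (a made-up helper for illustration, not any compositor's actual API):

```python
# Toy 2D post motion blur driven by a velocity ("vector") pass:
# average samples stepped back along each pixel's motion vector.
def blur_pixel(image, x, y, vx, vy, samples=4):
    """image: 2D list of grayscale values; (vx, vy): motion in pixels."""
    h, w = len(image), len(image[0])
    total = 0.0
    for i in range(samples):
        t = i / samples
        # step back along the motion vector, clamped to the frame
        sx = min(max(int(x - vx * t), 0), w - 1)
        sy = min(max(int(y - vy * t), 0), h - 1)
        total += image[sy][sx]
    return total / samples

img = [[0, 0, 0, 0],
       [0, 255, 0, 0],
       [0, 0, 0, 0]]
# a bright pixel that moved 2px to the right leaves a trail behind it
print(blur_pixel(img, 2, 1, 2, 0))  # -> 127.5
```

A real compositor does this per channel with bilinear sampling, but the principle is the same: the blur is reconstructed from geometry motion, so no extra render passes over time are needed.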
 
I think you're misunderstanding the PC market. Low-end manufacturers are losing sales, for which they blame Microsoft Windows 8. But high-end workstations are still selling as well as ever.

If you decide to buy PC instead of the Mac Pro, let it be because the Mac is so far out-of-date rather than cost differences.
 
So no pelting objects. You'll need a vector pass of some sort to apply 2D motion blur in post instead. When it says holes, I'm not sure what it considers a hole. If you delete an internal face many applications will view those edges as border edges, and booleans aren't exactly reliable unless they're rebuilt. Losing light linking would hurt. It sounds like it would be difficult to use in animation. You could probably get away with stills depending on what is really available in terms of Octane lights.

It can be useful - it's just that you have to create everything around its limitations. And this also makes sharing and porting scenes and objects pretty uncomfortable. :p

Some apps allow single sided polygons (one single directional normal) to have many vertices. I think the point limit per-poly in LW for example is either 1024 or 2048 (I forget). So it's easy to create a condition where the normal encompasses one or several "holes". Here's an example object with two such normals (top and bottom):

Normal_Holes.png

Although best avoided IMO, geometry like this can be useful for low-poly uses like games, RT or interactive architectural walkthroughs, and so forth. Of course the solution in this particular case is just to convert the faces to tris or quads.

Caveats aside the main point I was trying to make was that GPU rendering (as I see it) is still mostly confined to previews during the set creation (texturing, lighting, fine tuning animation, prop placement, etc.) and still frame rendering. Unless you love compositing (which I do) and then it's also quite useful for rendering out FX layers to be used in something like eyeon's Fusion - certainly my favorite. :)

I've been hoping for GPU based rendering for about 12 years now. I always think we're just on the verge of it going mainstream but it hasn't happened - or it happens with extreme and harshly restrictive caveats. I haven't looked in the past 8 to 12 months to be honest so maybe it's become a viable reality in some apps already? From the little peeks I've gotten while participating in this thread I have to say it's looking better than last I checked.

In either case however it's still a major advantage to have the renderer in a separate system so that you're not sitting on your thumbs half the time waiting for stuff to finish up - and animators can animate, modelers can model, and so on unrestricted by rendering. (division of labor rocks!) :D
 