
vinegarshots

macrumors 6502a
Sep 24, 2018
982
1,349
If you could squeeze in a minute or two to share some sample output, it would really help you determine once and for all whether the Mac is the right tool for you from a 3D rendering perspective, and let us see if we can find you an even better Mac-optimized workflow.


Thanks :)

That's pretty patronizing, TBH. And regarding your other post, if one "works" in Blender, no one should be considering an M1 Ultra. Time is money, and you're burning time if you're GPU rendering on a Mac. A 4090 renders orders of magnitude faster in Blender.

Also, 24GB of memory on a 4090 is plenty for a lot of professional use-cases. I think I only ran out of GPU memory on my 3080 on one project out of dozens last year, and that was only for the Denoiser pass.
 

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
That's pretty patronizing, TBH. And regarding your other post, if one "works" in Blender, no one should be considering an M1 Ultra. Time is money, and you're burning time if you're GPU rendering on a Mac. A 4090 renders orders of magnitude faster in Blender.

Also, 24GB of memory on a 4090 is plenty for a lot of professional use-cases. I think I only ran out of GPU memory on my 3080 on one project out of dozens last year, and that was only for the Denoiser pass.

I'm genuinely surprised that you find that patronizing.

This thread is going around in circles, and everybody is speaking in definitive terms as to what 'professionals use'.

I think you need to re-read my other post. I never said NOT to buy a 4090 for Blender or other paid professional software. I simply highlighted constraints of the 4090 that may mean a slower Quadro A5000 with 48GB of RAM (it's there for a reason) is better, or, failing that, if you need even larger scenes, that an M1 Ultra with 128GB of unified memory may be the better tool for the job.
The TL;DR of the post: different workflows require different tools, and different organizations have purchasing criteria beyond outright speed. As you say, time is money, but that can be measured in different ways. Absolute speed. Reliability: Nvidia sells and validates its Quadro lineup to perform reliably in professional workflows built on Autodesk tools, Maya, etc. for this very reason.

I never said it was the only workflow; I merely highlighted the nuance that individual workflows determine.

In any case, my ask to see real-world samples of folks' 3D models (not open-source pre-canned models) stands. It's to understand whether folks are getting upset about a complex Blender scene that makes heavy use of RTX hardware when their own use cases don't involve anything nearly as complex.

Clearly you know what you are doing. Would you mind sharing a personal sample of the type of models you build in Blender, and giving us some insight into the memory your use case takes, the Blender features you are using, etc.?
 

LymeChips

macrumors newbie
Jan 3, 2020
27
16
Here's my cosmetics reel - my style usually lends itself to relatively small amounts of geo and textures, so I'm usually under 10 GB of VRAM. However, I use Octane and have some hard-to-sample lighting, so my 1,300OB pc is just enough for me, and I could always use more power.

https://vimeo.com/740587239/59d958ecdf

Looking forward to seeing what happens with the Mac Pro!
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
Do you have a link that shows this?
The developer's website is Theory Accelerated, and they used to have a page showing benchmarks. However, they just released version 3 and updated the website completely, and for some reason there is no benchmark page anymore. I'm on their Discord, though, and the developer has confirmed that the Ultra scales very well and is in the same ballpark as a 3090.

I should also mention that I'm primarily a Houdini user, and the preview release that is AS-native is very nice and fast for everything except rendering. This has been a problem for many years and still is, as we can clearly tell from this massive thread. The Mac doesn't have to be class-leading; it just needs to compete. 50% of the performance you get from a similarly priced PC would be OK for most of us, I think, for the privilege of using macOS. To me, this is the only Achilles' heel of the Mac platform.

As you all know, we are still waiting for all the Foundry and Autodesk software to go native, and they are tight-lipped on the subject, except to say that they are working on it. It is not unthinkable that everything will converge in spring and be revealed in one big pro event that will make us all happy. Let's hope so.
 
  • Like
Reactions: Boil

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I'm on their Discord, though, and the developer has confirmed that the Ultra scales very well and is in the same ballpark as a 3090.
It seems that Axiom is not as optimized as it could be for Nvidia GPUs. Instead of using OptiX like OctaneRender and Blender Cycles, the previous version of Axiom used OpenCL and the current version uses CUDA.

Axiom uses industry standard technologies such as OpenVDB, OpenCL, and Metal to maximize compatibility.

With Metal and CUDA, Axiom can use your system memory if you run out of GPU memory.

OTOY® is proud to advance state of the art graphics technologies with [...] RTX raytracing GPU hardware acceleration.
 

jujoje

macrumors regular
May 17, 2009
247
288
The TL;DR of the post: different workflows require different tools, and different organizations have purchasing criteria beyond outright speed.
A lot of the churn on these forums comes from the wildly different performance needs across use cases, and from people being unable to see beyond their own. It also reflects the tricky situation Apple is in with the Mac Pro; there are going to be a lot of grumpy people no matter what they release.

I'm primarily a Houdini user, and the preview release that is AS-native is very nice and fast for everything except rendering.
Well, Karma CPU performs pretty much in line with what you'd expect, no? I think it might be a while before we see Karma XPU for Apple Silicon, though; from SideFX's responses it's a way off, and there's still a lot to add to the Nvidia implementation (dispersion, transmission, volume multi-scattering, and a whole load of MaterialX stuff).

As before, I'm curious whether you've given any of the pyro tools a go, particularly the minimal solver. AS-wise I've only got an MBA, so there's only so many voxels I can throw at it (about one pig head's worth). The one area that seems much slower for me is Python states. macOS hates that overlay...

It seems that Axiom is not as optimized as it could be for Nvidia GPUs. Instead of using OptiX like OctaneRender and Blender Cycles, the previous version of Axiom used OpenCL and the current version uses CUDA.
Would OptiX be that helpful for volume simulation? Rendering, obviously, but I would have thought CUDA makes more sense, being GPGPU- rather than raytracing-focused. I'd assume that's why there don't appear to be any solvers using OptiX (at least from a cursory Google).
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
Would you be comfortable sharing samples of your own custom 3D modelling work (not benchmark sources)? I'm asking because benchmarks can test extreme situations that may not actually reflect how one would use the software.

Given the course of the discussion on the 3090, Blender, etc. versus advocating for Maya or other vendor-backed software, I'm assuming (possibly incorrectly) that you work for a small independent studio or are an individual hobbyist.
Giving us an indication of your as-is workflow, with actual real-world outputs that you personally have created, will possibly allow somebody here to guide you on whether an alternative workflow might fit your needs, given your interest in the M1 Ultra and price/performance for your professional workflow.

Would you be comfortable sharing some of your personally created 3D modelling work? I can appreciate that if you work for a small shop you may not be allowed to share your work externally for copyright/client NDA reasons. However, most folks working for such shops have a personal portfolio - it's quite common in the industry :)

Obviously, if you are a hobbyist, there should be nothing really holding you back from sharing your own creations.

The same ask goes to @mi7chy - I've asked you personally before to share samples of your 3D modelling work on this and other threads, but I guess you are probably very busy with work, so may not have had the chance to do so.
If you could squeeze in a minute or two to share some sample output, it would really help you determine once and for all whether the Mac is the right tool for you from a 3D rendering perspective, and let us see if we can find you an even better Mac-optimized workflow.


Thanks :)
I'm pretty sure many users here post without being 3D artists at all, so it's likely that you will not get a straight answer from some of them.
Some people here just report things (real or fake) they have seen online, regurgitating meaningless benchmarks and forgetting that: 1) real-world experience cannot always be measured by benchmarks, 2) almost any modern hardware is fast enough to produce a 3D masterpiece, and 3) all the hardware power at your disposal means nothing if you are a poor artist.
 

jujoje

macrumors regular
May 17, 2009
247
288
I'm pretty sure many users here post without being 3D artists at all, so it's likely that you will not get a straight answer from some of them.
TBH, people posting without being 3D artists or with vague knowledge is fine; there's often an interesting conversation to be had (some of the more interesting discussions start with someone saying something totally wrong). Besides, I've learned a fair bit about the various areas I only tangentially work in.

I think the main problem is people posting benchmarks or making assertions or comparisons in bad faith and, as you say, repeating as fact claims that they only superficially understand. It just degrades the discussion and is inherently unhelpful.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
I'm genuinely surprised that you find that patronizing.

This thread is going around in circles, and everybody is speaking in definitive terms as to what 'professionals use'.

I think you need to re-read my other post. I never said NOT to buy a 4090 for Blender or other paid professional software. I simply highlighted constraints of the 4090 that may mean a slower Quadro A5000 with 48GB of RAM (it's there for a reason) is better, or, failing that, if you need even larger scenes, that an M1 Ultra with 128GB of unified memory may be the better tool for the job.
The TL;DR of the post: different workflows require different tools, and different organizations have purchasing criteria beyond outright speed. As you say, time is money, but that can be measured in different ways. Absolute speed. Reliability: Nvidia sells and validates its Quadro lineup to perform reliably in professional workflows built on Autodesk tools, Maya, etc. for this very reason.

I never said it was the only workflow; I merely highlighted the nuance that individual workflows determine.

In any case, my ask to see real-world samples of folks' 3D models (not open-source pre-canned models) stands. It's to understand whether folks are getting upset about a complex Blender scene that makes heavy use of RTX hardware when their own use cases don't involve anything nearly as complex.

Clearly you know what you are doing. Would you mind sharing a personal sample of the type of models you build in Blender, and giving us some insight into the memory your use case takes, the Blender features you are using, etc.?
Again, you will not get an answer from some users, because they just don't work in 3D, or they happen to have a gaming rig with an Nvidia GPU and use open-source software as hobbyists.
Before people ask me to show some work, here are a couple of examples of what I do (I'm an archviz artist specialized in luxury interiors/exteriors, but I do modern stuff as well):


A 360° panorama of a modern building: https://roundme.com/tour/789686/view/2509168
And some interior/exterior views:
[Attached: six interior/exterior render images]


Some statistics about those projects.
Exterior views:
-15 million polygons + render/multi instances + V-Ray proxies. 2D displacement for the cobblestone.
-Physical sun for both the day and night skies, plus area and sphere lights for the night views (not uploaded here).
-Everything modeled by me in Cinema 4D except for the palm trees, cars (from the Cosmos library), and a couple of moldings.
-About 10 days of work; it was a tight deadline, so there was no time to play with pool caustics.
-Render time for 4K images between 30 and 60 minutes on a single 8-core system, with small tuning using Cryptomatte and minor editing in PS.

Interior views:
-About 40 million polygons plus render instances; over 250 lights (spherical, IES, and mesh lights, plus a physical sun/sky for the daylight views)
-Large use of V-Ray Fur coupled with the V-Ray Hair material for the carpet. The area was quite large (about 250 square meters of carpet)
-I've used C4D R26, V-Ray 6, and Marvelous Designer; all modeled by me except for some laser scans
-Most textures and shaders were created from scratch for this project; I've also used a few metals from my bundle
-Tone mapping and post-production done 99.9% inside the frame buffer: just some highlight compression, a LUT, some Cryptomatte masks here and there, and some white balance where needed
-10 days of work including render time
-About 4-5h per image at 4K resolution on an old 8-core system

In both projects, render time for tests and final images took no more than 15% of the total time for the job - just to say that even a 64-core machine would not have significantly sped up the overall workflow.
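That last point is essentially Amdahl's law: if rendering is only 15% of the job, even a dramatically faster renderer barely moves the total turnaround. A quick sketch (the 15% figure is from the post above; the speedup numbers are hypothetical):

```python
def overall_speedup(accelerated_fraction: float, speedup: float) -> float:
    """Amdahl's law: overall speedup when only a fraction of the job is accelerated."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / speedup)

# Rendering is at most 15% of total job time; an 8x faster render engine
# (say, a 64-core machine replacing an 8-core one) gives only:
print(round(overall_speedup(0.15, 8.0), 2))   # 1.15x overall
# Even an infinitely fast renderer caps out near:
print(round(overall_speedup(0.15, 1e9), 2))   # 1.18x overall
```

So the other 85% of the job (modeling, texturing, lighting, post) dominates no matter how fast the render hardware gets.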
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
TBH people posting without being 3D artist or with vague knowledge is fine; there's often an interesting conversation to be had (some of the more interesting discussions start with someone saying something totally wrong). Besides I've learned a fair bit about the various areas that I only tangentially work in.

I think the main problem is people posting benchmarks or making assertions or comparisons in bad faith and, as you say, repeating as fact claims that they only superficially understand. It just degrades the discussion and is inherently unhelpful.
I totally agree; I appreciate the users who bring value to the conversation, because even without being 3D artists, at least they have good technical knowledge about how the stuff works.
Regurgitating meaningless benchmarks from biased people every time is not that useful, though. They don't even work in 3D, yet they want to give advice on the best option for 3D artists; the worst thing is that many inexperienced readers tend to believe them.
 
  • Like
Reactions: dmr727 and tomO2013

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Would OptiX be that helpful for volume simulation?
Honestly, I don't know. I have always thought that OptiX is better than CUDA.

NanoVDB is the only reference to OptiX related to volume rendering that I could find. It claims to work with OptiX.
We have tested NanoVDB with CUDA, OpenCL, OpenGL, DirectX 12, OptiX, Vulkan, HLSL, and GLSL. We are also working on adding support for WebGL.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
Ffs, people are anonymous on forums for a reason. No need to get heated and require proof of work. Grateful that @LymeChips shared some, though. It's a perfect example of a use case where you want to use a Mac for design, lookdev, etc., but do final rendering on the cloud or your own micro-farm. (In many cases a single big PC with 3-4 GPUs will do the job.)

Currently Octane X 2022 is in a late beta/RC state and can only be used on the M1/M2 architecture. Apple made so many changes to their architecture that OTOY had to drop AMD support, so you cannot use a 2019 Mac Pro, for example, without resorting to an older version that is not compatible with the PC release. So that leaves us with the M1 Ultra as the best there is, and it performs about the same as a non-XT 6800. I haven't seen big benches or a lot of reliable data, but it doesn't seem to scale that well, even if it's slightly better than RS.

Anyway, the point here is that we all just need something that is reasonable and not stuck at performance similar to 2017 tech. Sure, you can get by using "whatever" if you have a lot of time or only do a few stills from time to time. But why would you not want something that at least gets you 50% of what the competition offers for the same price? I am ready and willing to pay a premium for the privilege of using macOS, but there are limits to how far I can stretch that.

By the way, at my real work we only use PCs with Nvidia cards for our rendering needs, and since our stuff easily fits inside 24GB, we use 3090s packed in servers. We considered A5000 cards (which also have 24GB of RAM), but those provided no benefit, and A6000 cards are of no benefit at all for our use case. It's super important to consider the actual workload.

When I write here, it's more from a hobbyist/freelancer perspective, for my personal gear, and there I dream of having a nice single machine that is "good enough". We'll see if that comes out and if I find it worth investing in.
TBH, I'm almost at the point where a CPU renderer makes more sense, since the M-series CPUs are mighty fine.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
a CPU renderer makes more sense, since the M-series CPUs are mighty fine.
That's what I've done: no driver issues, no special code, no optimization needed (it's been running natively for more than a year), and it's rock solid.
If needed, I can run the GPU engine on the CPU (V-Ray can do that on both Intel and M1 Macs) and use an external PC box loaded with GPUs, but to be honest I prefer the CPU engine for its solidity, and if for whatever reason I need more horsepower, I can use my PC render slaves or a cloud service.
 

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
Ffs, people are anonymous on forums for a reason. No need to get heated and require proof of work. Grateful that @LymeChips shared some, though. It's a perfect example of a use case where you want to use a Mac for design, lookdev, etc., but do final rendering on the cloud or your own micro-farm. (In many cases a single big PC with 3-4 GPUs will do the job.)

Freedom of speech and anonymity should not mean that all opinions and statements of fact are held to the same level of credibility.

Asking for demonstrations or examples of people's work (as mentioned already) serves a number of purposes:

1. It helps provide insight and context into the types of scenes that would warrant purchasing a 4090 (for time savings in rendering) over an M1 Ultra, an A5000, or whatever. Benchmarks are just that: benchmarks. They provide data points, but are only relevant if they reflect your individual use cases.

2. It helps those who may not know much about the 3D modelling world to identify those who do (from a rubber-meets-road perspective), in order to ask questions and further their own knowledge. I'm just trying to identify the folks who CAN speak with some level of authority.

3. It allows us to explore alternative workflows (on an Apple Silicon forum, no less, for an Apple Silicon-oriented workflow). Is a person's experience with Blender reflective of what they might experience in Octane, etc.?

4. Finally, it provides a mechanism to demonstrate credibility and build trust (see the trust equation below), and for the community here to identify those whose input reflects a higher level of real-world experience to learn from. The motive here is to help sift between those with an opinion formed from watching an episode of Snoopy while running a benchmark on their gaming PC, and those like sirio76 who have demonstrable, tangible real-world experience to ask questions of. There are lots of questions I'd like to get back on topic and ask sirio76 about with respect to Apple Silicon as an option.



But why would you not want something that at least gets you 50% of what the competition offers for the same price? I am ready and willing to pay a premium for the privilege of using macOS, but there are limits to how far I can stretch that. By the way, at my real work we only use PCs with Nvidia cards for our rendering needs, and since our stuff easily fits inside 24GB, we use 3090s packed in servers. We considered A5000 cards (which also have 24GB of RAM), but those provided no benefit, and A6000 cards are of no benefit at all for our use case. It's super important to consider the actual workload.

In my humble opinion, that value proposition may very well come down to workflow. For you, quite probably, it may not make sense.
For others who have a different workflow - one that requires them to work with Blender, Photoshop, and Final Cut - Blender may be a single link in a workflow chain whose overall performance is more greatly impacted by another area of the workflow than by Blender alone (as an example). Or, as leman and others mentioned much earlier, judging by the code commits to the Blender project, there have been not-inconsequential performance improvements to Blender in the short time since the first AS-compatible build. That it will ever close the gap without fixed-function ray-tracing hardware is very doubtful, but there could well be further significant performance improvements downstream that narrow the gap and make the macOS benefits tangible again. These, like all things, are just considerations.

Hopefully this clarifies my individual opinion on this. Again, no individual offence is intended, and if I have upset you personally, innerproduct, I publicly apologize and wish you a great day :)
 
  • Like
Reactions: unsui_grep

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
Again, you will not get an answer from some users, because they just don't work in 3D, or they happen to have a gaming rig with an Nvidia GPU and use open-source software as hobbyists.
Before people ask me to show some work, here are a couple of examples of what I do (I'm an archviz artist specialized in luxury interiors/exteriors, but I do modern stuff as well):


A 360° panorama of a modern building: https://roundme.com/tour/789686/view/2509168
And some interior/exterior views:
[Attached: six interior/exterior render images]

Some statistics about those projects.
Exterior views:
-15 million polygons + render/multi instances + V-Ray proxies. 2D displacement for the cobblestone.
-Physical sun for both the day and night skies, plus area and sphere lights for the night views (not uploaded here).
-Everything modeled by me in Cinema 4D except for the palm trees, cars (from the Cosmos library), and a couple of moldings.
-About 10 days of work; it was a tight deadline, so there was no time to play with pool caustics.
-Render time for 4K images between 30 and 60 minutes on a single 8-core system, with small tuning using Cryptomatte and minor editing in PS.

Interior views:
-About 40 million polygons plus render instances; over 250 lights (spherical, IES, and mesh lights, plus a physical sun/sky for the daylight views)
-Large use of V-Ray Fur coupled with the V-Ray Hair material for the carpet. The area was quite large (about 250 square meters of carpet)
-I've used C4D R26, V-Ray 6, and Marvelous Designer; all modeled by me except for some laser scans
-Most textures and shaders were created from scratch for this project; I've also used a few metals from my bundle
-Tone mapping and post-production done 99.9% inside the frame buffer: just some highlight compression, a LUT, some Cryptomatte masks here and there, and some white balance where needed
-10 days of work including render time
-About 4-5h per image at 4K resolution on an old 8-core system

In both projects, render time for tests and final images took no more than 15% of the total time for the job - just to say that even a 64-core machine would not have significantly sped up the overall workflow.


These are fantastic! May I ask: how long does it currently take to render the first image, soup to nuts, and what is currently the slowest part of your workflow?
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
That's what I've done: no driver issues, no special code, no optimization needed (it's been running natively for more than a year), and it's rock solid.
Yes, and an underrated virtue of CPU render engines. Speed is sexier than a lack of driver issues, though.
 
  • Like
Reactions: tomO2013

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Freedom of speech and anonymity should not mean that all opinions and statements of fact are held to the same level of credibility.

Asking for demonstrations or examples of people's work (as mentioned already) serves a number of purposes:

1. It helps provide insight and context into the types of scenes that would warrant purchasing a 4090 (for time savings in rendering) over an M1 Ultra, an A5000, or whatever. Benchmarks are just that: benchmarks. They provide data points, but are only relevant if they reflect your individual use cases.

2. It helps those who may not know much about the 3D modelling world to identify those who do (from a rubber-meets-road perspective), in order to ask questions and further their own knowledge. I'm just trying to identify the folks who CAN speak with some level of authority.

3. It allows us to explore alternative workflows (on an Apple Silicon forum, no less, for an Apple Silicon-oriented workflow). Is a person's experience with Blender reflective of what they might experience in Octane, etc.?

4. Finally, it provides a mechanism to demonstrate credibility and build trust (see the trust equation below), and for the community here to identify those whose input reflects a higher level of real-world experience to learn from. The motive here is to help sift between those with an opinion formed from watching an episode of Snoopy while running a benchmark on their gaming PC, and those like sirio76 who have demonstrable, tangible real-world experience to ask questions of. There are lots of questions I'd like to get back on topic and ask sirio76 about with respect to Apple Silicon as an option.





In my humble opinion, that value proposition may very well come down to workflow. For you, quite probably, it may not make sense.
For others who have a different workflow - one that requires them to work with Blender, Photoshop, and Final Cut - Blender may be a single link in a workflow chain whose overall performance is more greatly impacted by another area of the workflow than by Blender alone (as an example). Or, as leman and others mentioned much earlier, judging by the code commits to the Blender project, there have been not-inconsequential performance improvements to Blender in the short time since the first AS-compatible build. That it will ever close the gap without fixed-function ray-tracing hardware is very doubtful, but there could well be further significant performance improvements downstream that narrow the gap and make the macOS benefits tangible again. These, like all things, are just considerations.

Hopefully this clarifies my individual opinion on this. Again, no individual offence is intended, and if I have upset you personally, innerproduct, I publicly apologize and wish you a great day :)
All this word salad basically ignores the title of the thread. It specifically mentions 3D rendering (which ranges from simple cubes to VFX-heavy shots), which is quite often the end output of 3D work and has traditionally been a power/time hog.
No amount of workflow shenanigans will work around the performance issues expected in current-gen 'pro' hardware.
Forget RTX; even CUDA-based render speeds leave every Mx-series chip in the dust.
If the Ultra had shown even a 90% uptick over the Max variety, I would be anticipating the Mac Pro. As it happens, it hasn't.
So I am sceptical about adding another link in the chain (an extreme Mx series).

I find working in the viewport pretty OK. It's just that, come final output, the render time balloons to 17 hours on my M1 Max GPU (vs 75 minutes on my PC).

Shuffling stuff from the Mac to the PC is a hassle for each render, so I just do most work on the PC.
With the result that I end up doing almost ALL the work on the PC itself.
There goes the workflow.
 

ader42

macrumors 6502
Jun 30, 2012
436
390
Sirio76 demonstrates that actual render times are far less important than the overall experience, as 3D artists don't spend all their time rendering - they spend most of it sculpting, modelling, texturing, scene setup, etc.

Some of us will not compromise on the OS, and money is not an issue, as the software costs far more than the Apple hardware; others have to compromise due to cost and can only use free software.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
May I ask: how long does it currently take to render the first image, soup to nuts, and what is currently the slowest part of your workflow?
The first image, rendered at 4K, took about 4h on an 8-core Mac Pro 7,1; on the new M1 Ultra it should take about 1.5h.
The slowest part of my workflow is (well... was, since I updated to the Ultra a couple of days ago) the PS performance on large multilayer images, the opening time of complex 3D laser scans and scenes, and the loading of assets before the render starts.
Those bottlenecks come mostly from the slow single-threaded performance of the Xeon and from disk speed; the M1 Ultra addresses both issues. The important thing in my work is the continuity of the flow; the worst part is when some task interrupts the flow, even if only for a few seconds. Longer tasks are usually not an issue, since in that case I can always go grab a cup of coffee or spend some time with my family.
I spend most of the time creating the scene, modeling all the assets, and creating the textures, then the shading, lighting, etc. I can get a render preview of my scenes in full HD in a few seconds, and for the most complex scenes (way more complex than the ones posted) it takes about 2-3 minutes, so that does not impact my workflow much.
I do final renders while working on other scenes or in spare time, during the lunch break, or overnight. If I'm in a hurry, I use my Windows slaves or an online render farm for animations, so render time is rarely an issue.
It's important to note that if you plan your workflow carefully you can greatly limit the wasted time; for example, while I'm rendering I can still work on scenes, post-production, etc.
To be honest, today the biggest issue in my workflow is the organization of all my assets (over 20 years of assets...), but that depends mostly on me and not on the hardware/software I use. A better organization of my assets would probably bring a significant benefit, and I should be able to save 20-30% of the overall time; unfortunately, between work and family (I have a small kid), I do not have much time to dedicate to fixing this.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
Sirio76 demonstrates that actual render times are far less important than the overall experience, as 3D artists don’t spend all their time rendering - they spend most of it sculpting, modelling, texturing, scene-setup etc.
Yes, render time is the least of my problems; of course, that depends on the workflow.
For some specific jobs you absolutely need as much power as you can get, but for general use (which covers 90% of the user base) any modern computer is more than fine, no matter whether it's a Mac or a PC.
As said before, workflow organization, software knowledge, and artist talent are far (FAR!!!) more important for working fast, producing great artwork, and increasing your income.
Specific situations aside, anyone who thinks a fast system will transform them into a better artist is living in an illusion; in the end, a poor job will just be a poor job done a bit faster.
 
  • Like
Reactions: tomO2013

vinegarshots

macrumors 6502a
Sep 24, 2018
982
1,349
Yes, render time is the least of my problems; of course that depends on the workflow.
For some specific jobs you absolutely need as much power as you can get, but for general use (which covers 90% of the user base) any modern computer is more than fine, no matter if it's a Mac or a PC.
As said before, workflow organization, software knowledge, and artist talent are far (FAR!!!) more important for working fast, producing great artwork and increasing your income.
Specific situations aside, anyone who thinks a fast system will transform them into a better artist is living in an illusion; in the end, a poor job will just be a poor job done a bit faster.

Not true. If you're rendering a few still frames, then whatever, I guess. You can probably just use a MacBook Pro from five years ago if you don't care about performance. But if you're working on any kind of animation, then you absolutely need as much speed as possible. I work on animations for tech industry stuff (in high volume), and rendering locally saves me ~$2,000 on average per render versus using a render farm.

Plus, the faster those animations get done, the more of them I can bill in a month. When a 4090 can render one of my frames in 5 seconds, and the fastest Mac takes a minute and a half, that makes a HUGE difference. If I'm rendering 1000 frames, that's 1.5 hours on the 4090, and 25 hours on the Mac.

One of the more complicated animations I finished recently was 30 seconds per frame render time. Took about 12 hours locally. Rendering that same animation locally on a Mac would have taken about 10 days. The profit margin rendering that locally versus a render farm was HUGE.
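The arithmetic behind those figures is easy to sanity-check. A quick back-of-envelope sketch, using the per-frame times quoted above (which are the poster's estimates, not benchmarks):

```python
# Back-of-envelope render-time math for the animation figures above.

def render_hours(frames, seconds_per_frame):
    """Total wall-clock render time in hours for an animation."""
    return frames * seconds_per_frame / 3600

# 1000-frame animation: ~5 s/frame on a 4090 vs ~90 s/frame on the Mac.
gpu_hours = render_hours(1000, 5)    # ~1.4 h
mac_hours = render_hours(1000, 90)   # 25 h

print(f"4090: {gpu_hours:.1f} h, Mac: {mac_hours:.1f} h "
      f"({mac_hours / gpu_hours:.0f}x slower)")
```

At 30 s/frame, a 12-hour local render works out to roughly 1,440 frames; at the same ~18x slowdown that stretches to about 9-10 days, which matches the "about 10 days" estimate.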
 
  • Like
Reactions: innerproduct

sirio76

macrumors 6502a
Mar 28, 2013
578
416
I see your point but I don't necessarily agree. When I need to do an animation I just use an online farm, which will render faster than any hardware you can buy as a freelancer/small studio. And that does not cost me a single penny, since I pass the farm cost on to the client. Anyway, I think both scenarios are valid, but you should not pretend that what works for you will work for everyone; for example, many of my scenes won't even fit in GPU memory ;) GPUs are great for product animation, motion graphics etc., but for anything complex, CPU engines are still king due to stability and versatility. As I always say, there is a reason why every Hollywood blockbuster is still rendered on CPUs using RenderMan, V-Ray, Arnold etc.
 
Last edited:
  • Like
Reactions: Lone Deranger

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Not true. If you're rendering a few still frames, then whatever, I guess. You can probably just use a MacBook Pro from five years ago if you don't care about performance. But if you're working on any kind of animation, then you absolutely need as much speed as possible. I work on animations for tech industry stuff (in high volume), and rendering locally saves me ~$2,000 on average per render versus using a render farm.

Plus, the faster those animations get done, the more of them I can bill in a month. When a 4090 can render one of my frames in 5 seconds, and the fastest Mac takes a minute and a half, that makes a HUGE difference. If I'm rendering 1000 frames, that's 1.5 hours on the 4090, and 25 hours on the Mac.

One of the more complicated animations I finished recently was 30 seconds per frame render time. Took about 12 hours locally. Rendering that same animation locally on a Mac would have taken about 10 days. The profit margin rendering that locally versus a render farm was HUGE.
Great that we get a discussion about specific scenarios instead of benchmarks. 1.5 h for a high-res still and 5-30 seconds per frame (×1000) says it all in regard to different demands, and likely the complexity of the scene.

How much time does it take to model and animate (not the rendering) the 1000 frames?
 