
sirio76

macrumors 6502a
Mar 28, 2013
578
416
Apple Silicon is not good for rendering 3D. All 3D rendering benchmarks indicate a big advantage for Intel CPUs.
The fact that AS doesn’t match the fastest PC offerings doesn’t mean it is not good for rendering. I have Threadripper machines in my studio, and I happily choose to work on my Ultra because it is more than fast enough, silent, and reliable.

Benchmarks tell you nothing about the user experience.
Just think about noise, for example: you can be sure that a Wintel machine will be a lot louder, and I mean uncomfortably louder. You don’t see that in benchmarks.
Some render software has trouble using Intel’s new performance/efficiency cores and underperforms in some tasks. You don’t see that in benchmarks.
People don’t spend all their time rendering; quite the opposite. An average 3D workflow is composed of many tasks, and most of them won’t use all the cores, so having a 64-core Threadripper instead of an AS machine won’t change the speed of your work that much. You don’t see that in benchmarks.
Etc.

I’ve done high-end 3D visualizations for more than 20 years (using both Mac and PC), and the Mac Studio Ultra is probably the best 3D system I’ve worked with so far. The fastest system? No. The most pleasant to work with? Yes.

As always, my feeling is that most people who write here don’t work in 3D at all, or have only limited experience (not talking about you in particular), and support their claims only with benchmarks that may not be representative of what it means to actually work on a given system.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I’m talking about gaming performance because that’s what almost everyone uses graphics for. TFLOPS predicts compute, yes, but it also depends on the GPU actually being kept busy at all times, not waiting for the CPU to encode instructions or transfer data between RAM and VRAM, which is a much bigger problem for Nvidia than for Apple Silicon.

If you are talking about gaming performance, Apple’s rendering is on average more compute- and bandwidth-efficient than Nvidia’s (with caveats, of course). UMA is not much of a benefit for games, by the way; PCIe is more than capable of transferring the relatively small command and data packages required per frame.
 

olimerkido2

macrumors newbie
Feb 23, 2023
19
27
Yeah, you’re right. The only thing tech people post about is performance when you’re running at full speed, regardless of the problems that come with achieving that speed. If you got these kinds of tech people to make the best computer, it would be as big as a house and as noisy as an airplane, because it would win in the benchmarks.

I also have a Mac Studio, and it’s my favourite desktop because I can do 3D graphics on it, but then I can unplug it and put it in my backpack with a mouse and keyboard (and my backpack isn’t even that big), unpack at another location, connect it to a TV or something, and pair the Bluetooth mouse and keyboard. I’ve never been able to do that with a desktop before. I can even take it on a bus, or fly to another country with it in my backpack.
 

kasakka

macrumors 68020
Oct 25, 2008
2,389
1,078
Yeah, you’re right. The only thing tech people post about is performance when you’re running at full speed, regardless of the problems that come with achieving that speed. If you got these kinds of tech people to make the best computer, it would be as big as a house and as noisy as an airplane, because it would win in the benchmarks.

I also have a Mac Studio, and it’s my favourite desktop because I can do 3D graphics on it, but then I can unplug it and put it in my backpack with a mouse and keyboard (and my backpack isn’t even that big), unpack at another location, connect it to a TV or something, and pair the Bluetooth mouse and keyboard. I’ve never been able to do that with a desktop before. I can even take it on a bus, or fly to another country with it in my backpack.
Honestly, if I wanted to, I could do that with my desktop PC in an NR200P case; I’d just need a bigger bag, and mine isn’t even anywhere near the smallest ITX system out there. It’s not something unique to Apple. Granted, there aren’t many actually good, small-form-factor, pre-built PCs, so on the PC side it’s more of an SFF enthusiast thing.

Benchmarks are just a way to compare different systems. Yes, things like noise can often be a factor in what makes a good experience. My 13600K + 4090 ITX system is very quiet in general and vastly outperforms the Mac Studio (especially in GPU performance), yet it actually costs less.

Apple's current desktop systems sit in a weird place: benchmarks show the new M2 Max MacBook Pros equaling or even outperforming the M1 Studio Ultra in many cases, despite the Studio having massively bigger heatsinks that should allow for better performance, since it wouldn't need to be as thermally and power limited. Either Apple limits them, or they just don't scale that well. I wonder if Apple will eventually design separate SoCs for the Mac Studio/Pro that are less about low power draw and more about raw performance.

Despite energy prices being at an all-time high, for me at least, the cost of running a PC with a high-power-draw GPU is not really a factor for a desktop system. Apple is great for laptops, where quietness and excellent battery life matter much more.
 

dmr727

macrumors G4
Dec 29, 2007
10,668
5,767
NYC
Honestly, if I wanted to, I could do that with my desktop PC in an NR200P case; I’d just need a bigger bag, and mine isn’t even anywhere near the smallest ITX system out there.

To be fair that case is much, much larger than a Studio - to the point where I don't consider it to be in the same class. My MBA is a really nice machine, but I don't benchmark it against an MSI Titan either. I think the Studio compares very favorably to the various UCFF machines out there on the PC side.
 
Last edited:

kasakka

macrumors 68020
Oct 25, 2008
2,389
1,078
To be fair that case is much, much larger than a Studio - to the point where I don't consider it to be in the same class. My MBA is a really nice machine, but I don't benchmark it against an MSI Titan either. I think the Studio compares very favorably to the various UCFF machines out there on the PC side.
Obviously the NR200P is a good chunk larger, which is why I mentioned it's nowhere near as small as a similar PC could be made. I just don't have an interest in doing that, as I'd rather have something more compatible and less cramped. Even with this larger SFF case, the desk footprint is actually just 16.3 cm wider, with slightly less depth, compared to the Studio.

I think the comparison is apt if you want to compare what you're getting for similar money on the PC side. PCs don't need to be big boxes under your desk anymore, and while an integrated system like Apple's can be smaller, I don't think that's enough to make up for the performance disparity, lack of upgradeability, etc.

I'm happy to give some of that stuff up on the laptop side, but to me it's just a bad deal on desktop computers.
 
  • Like
Reactions: dmr727

swartzfeger

macrumors member
Feb 9, 2012
43
6
Are there any forums where an enthusiast/hobbyist like me can learn and lurk? I find this topic fascinating, but I'd also like to expand it beyond ASi.

Reddit seems kinda... well, Reddit-y.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
Are there any forums where an enthusiast/hobbyist like me can learn and lurk? I find this topic fascinating, but I'd also like to expand it beyond ASi.

Reddit seems kinda... well, Reddit-y.
Unfortunately, Reddit is the best I’ve found for general discussion. The hardware sub is the best I’ve seen, although there was an influx of PCMR types around 2020 that has somewhat reduced the quality of conversation, but there is meaningful pushback against that (aside from announcement periods for the typical punching bags).
 
  • Like
Reactions: swartzfeger

aytan

macrumors regular
Dec 20, 2022
161
110
The fact that AS doesn’t match the fastest PC offerings doesn’t mean it is not good for rendering. I have Threadripper machines in my studio, and I happily choose to work on my Ultra because it is more than fast enough, silent, and reliable.

Benchmarks tell you nothing about the user experience.
Just think about noise, for example: you can be sure that a Wintel machine will be a lot louder, and I mean uncomfortably louder. You don’t see that in benchmarks.
Some render software has trouble using Intel’s new performance/efficiency cores and underperforms in some tasks. You don’t see that in benchmarks.
People don’t spend all their time rendering; quite the opposite. An average 3D workflow is composed of many tasks, and most of them won’t use all the cores, so having a 64-core Threadripper instead of an AS machine won’t change the speed of your work that much. You don’t see that in benchmarks.
Etc.

I’ve done high-end 3D visualizations for more than 20 years (using both Mac and PC), and the Mac Studio Ultra is probably the best 3D system I’ve worked with so far. The fastest system? No. The most pleasant to work with? Yes.

As always, my feeling is that most people who write here don’t work in 3D at all, or have only limited experience (not talking about you in particular), and support their claims only with benchmarks that may not be representative of what it means to actually work on a given system.
I agree with you. I am quite happy with the M1 Ultra, and I agree that ''it is all about the user experience''.
In the early 2000s (15 years ago or more, I really can't remember right now) a friend gave me a MBP for a project we were working on together. Before that I had used just about everything: my first 3D graphics software ran on an Amiga at the end of the '80s, then 286/386DX/486/Pentiums and even Xeons, and the legendary Toshiba Satellites, all to earn money (I am an old man and cannot remember all of them; sorry if I am not accurate about names or years, it is a little bit confusing right now). After that experience with a MBP, I never needed a PC again.
I have been working for nearly 30 years in the motion picture/advertising/broadcasting sector as a director/supervisor/creative, and over the last 20 years I have added 3D/2D animation workflows and illustration/digital painting to my fields of work more and more. I have learned and used Solidworks/Maya/MentalRay/Arnold/VRay/C4D/Redshift/Octane/Blender/Zbrush and all kinds of related hardware (and lots of unique systems and software, many of which no longer exist today). Currently I own a couple of Mx Macs and a PC at home. Honestly, it is rare that I fire up my PC on a regular day. For example, today at the studio I worked on a 10-year-old trashcan MacPro for basic video editing and some Photoshop/Illustrator work, then switched to an M1 Studio Max for motion graphics, then used an iPad Pro for a while to plan an upcoming shoot and to sketch, and now I am writing this on an M1 Ultra. All of that was random use; any of those machines could have been used for any of those jobs. They are all just tools, and that is all. The current M1 Ultra/Max and M1 MBPs are just enough for my needs and my friends' needs; they are power-efficient and silent, and you can carry them to any place or desk easily. Yes, it would be great if these computers had more GPU power while keeping their current power usage :)
I really wonder why people care so much about Macs' poor GPU performance even when they do not own them or do not want to use them. It is a matter of choice, of course. I respect that.
I have had a lot of computers, and I still own or use many of them for work, as do many of my friends: from Samsung pads/iPads to iPad Pros, a wide range of Cintiqs, from the 5,1 MacPro to the trashcan MacPro to the Mac Studio Ultra/Max, from server-grade multi-CPU PCs to mid-range 2080 Ti/3070/3080/4090 PCs, and a couple of systems with RX 6000-series GPUs as Hackintoshes. They all have their own way of working. That is all. It should not be that complicated. All of them are just tools, like a wrench or a screwdriver.
Just imagine you have a studio and have to run 15 or more computers every day: 15x 4090s with 15x Threadrippers/Xeons, or 15x Studio Ultras/Maxes/Minis. I think that provides a different perspective, at least about power usage.
As an enthusiast for any 3D hardware/software (even today) I can use any Mac or PC; it does not matter at all. As a freelancer/owner there are many more things I have to consider when choosing. At the end of the day, the job has to get done somehow. No one really cares how you did it, as long as it is correct, as desired, and finished before the deadline :)
 
  • Like
Reactions: sirio76

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
What would require less effort for Apple: adapting MoonRay to Apple hardware or creating from scratch a rendering engine with the same quality as MoonRay?
 

jujoje

macrumors regular
May 17, 2009
247
288
I think MoonRay would be a bit of a faff to fully optimise - it looks like it's heavily optimised for CUDA and x86. It sounds like it should be easy to build for macOS (more so than for Windows - will Windows ever be good for 3D :p), although going by their support forums no one's been brave enough to give it a try (I'm curious, but not that curious).

I'd be really curious to see Apple build a renderer for macOS (they should build a Hydra delegate and integrate it into Finder/Preview - previewing AR assets in Quick Look with full path-tracing support would be neat). I kind of feel it would make more sense to build a renderer for their hardware than to optimise an existing renderer, be it RenderMan or MoonRay, both of which carry years of low-level optimisations for Intel/x86/Nvidia hardware. Just getting something optimised for the GPU and memory architecture on AS would probably pay off pretty nicely.
 

vel0city

macrumors 6502
Original poster
Dec 23, 2017
347
510
As always, my feeling is that most people who write here don’t work in 3D at all, or have only limited experience (not talking about you in particular), and support their claims only with benchmarks that may not be representative of what it means to actually work on a given system.

Very much this. I work in 3D day in day out on an AS Mac and love the performance, workflow, stability and how silent and easy to maintain the system is. Cinema 4D/Redshift/ZBrush/Photoshop are all an absolute dream on Apple Silicon. Final rendering speed is the least important aspect to me. I want fast and responsive particle simulations, fast interactive sculpting, snappy timelines and no lag when editing 4k images. Apple Silicon delivers all of that effortlessly and silently.

If a final render is going to take too long I simply add it to the budget and render in the cloud.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I'd be really curious to see Apple build a renderer for macOS (they should build a Hydra delegate and integrate it into Finder/Preview - previewing AR assets in Quick Look with full path-tracing support would be neat). I kind of feel it would make more sense to build a renderer for their hardware than to optimise an existing renderer, be it RenderMan or MoonRay, both of which carry years of low-level optimisations for Intel/x86/Nvidia hardware. Just getting something optimised for the GPU and memory architecture on AS would probably pay off pretty nicely.
If large studios render on x86- and Nvidia-based farms, wouldn't an Apple-only rendering engine be useful only to freelancers and small studios? Would studios hesitate to adopt Apple's rendering engine if it produced different results than the farm's rendering engines?
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
If large studios render on x86- and Nvidia-based farms, wouldn't an Apple-only rendering engine be useful only to freelancers and small studios? Would studios hesitate to adopt Apple's rendering engine if it produced different results than the farm's rendering engines?
Why would it produce a different result?
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Why would it produce a different result?
I have always thought that two different rendering engines produce slightly different results. Don't GPU-based rendering engines produce slightly different results than CPU-based ones? Do all popular engines use the same rendering algorithms?
 
  • Like
Reactions: aytan

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Every rendering engine produces slightly different results. Even different CPUs can produce different results with CPU renderers. That is why render farms typically need to be built from identical CPUs.
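One low-level mechanism behind this is worth a quick sketch: floating-point addition is not associative, so the "same" math compiled with a different evaluation order (different compiler, SIMD width, or CPU) can give results that differ in the last bits. A minimal, self-contained Python illustration (not code from any renderer):

```python
# Floating-point addition is not associative: the same three numbers
# summed in a different order give bit-different results. A renderer
# compiled differently for two CPUs can hit exactly this, which is why
# farms wanting bit-identical frames stick to identical hardware.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one evaluation order
right = a + (b + c)   # the mathematically equal sum, other order

print(left == right)          # False: the two doubles differ
print(abs(left - right))      # a difference of about one ulp
```

Per pixel the discrepancy is invisible, but accumulated through thousands of shading operations it is enough to make two machines disagree on the final image.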
 
  • Like
Reactions: Xiao_Xi

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
I see no reason why Apple would want to make its own render engine, unless it ties in with their AR/VR ambitions.

For standard DCC applications, most come with their own render engines, or there are several third-party ones. It might be easier to help RenderMan or V-Ray etc. take advantage of whatever hardware Apple’s providing (as with Blender’s Cycles, Eevee, etc.).
 

vinegarshots

macrumors 6502a
Sep 24, 2018
982
1,349
Very much this. I work in 3D day in day out on an AS Mac and love the performance, workflow, stability and how silent and easy to maintain the system is. Cinema 4D/Redshift/ZBrush/Photoshop are all an absolute dream on Apple Silicon. Final rendering speed is the least important aspect to me. I want fast and responsive particle simulations, fast interactive sculpting, snappy timelines and no lag when editing 4k images. Apple Silicon delivers all of that effortlessly and silently.

If a final render is going to take too long I simply add it to the budget and render in the cloud.

It's not just about the final render, though. I want a fast, interactive, ray-traced viewport that I can use when setting up scenes, creating materials, and animating. Apple Silicon is not able to give me that.

It's always easy to think things are working fine when you haven't really dug into the alternatives; you can't miss what you never had. For myself, having used both Mac and PC, I really couldn't be satisfied with what I'd lose by going back to the Mac at this point.
 

aytan

macrumors regular
Dec 20, 2022
161
110
I have always thought that two different rendering engines produce slightly different results. Don't GPU-based rendering engines produce slightly different results than CPU-based ones? Do all popular engines use the same rendering algorithms?
''I have always thought that two different rendering engines produce slightly different results'': in fact they reproduce images very, very differently. Most of the time it is impossible to use the same scene structure or the same elements with different render engines.
Also, CPU-only and GPU-only render engines (biased vs. unbiased) can produce quite different results. Arnold has some of the closest CPU vs. GPU results, yet Arnold recommends using the CPU for final renders; at least it was that way the last time I used it. Redshift can also use both, and the results are close, but the render times are quite different. V-Ray is quite impressive with its unique features and structure, and I cannot figure out how it manages to work that way.
There are hybrid solutions, but these are limited by your system and/or rare. V-Ray is probably the closest engine to true hybrid rendering. You can use the CPU and GPU at the same time for rendering (Redshift, V-Ray, Cycles), but I have no evidence that this is faster or better than GPU-only rendering; in practice this method slows down the whole system and the render engine.
A USD scene structure and universal materials could be a solution across 3D software, but I have not tried this type of workflow yet.
However, these days CPU and GPU rendering are close to each other in terms of the final image when you use identical computers; the differences are smaller than ever.
But even with the same software and the same GPU render engine, you can get different images with different GPU generations, like 1080/2080/3080. At least I have experienced this issue a few times; it is not reliable across different GPU series running on different systems.
On the other hand, CPU renderers 'should' be more accurate: the same software and render engine on different computer systems or different CPU brands 'should' reproduce the same results, at least on paper.
I did not check CPU-rendered images on AS vs. Intel vs. AMD, but I guess the final results 'should' be identical; only the render time per frame would vary with each CPU's specs.
I have tried this workflow with Arnold as a CPU renderer across different OSes/systems in the past, and it worked for me with three different computers.
I don't think all popular render engines use the same algorithm (maybe I am completely wrong about this), and I think that is not possible for many reasons; there must be legal restrictions, I suppose. The fundamentals are the same, but the implementations have to vary.
Some of them have unique ways of producing an image; one example is Octane vs. Redshift.
Apple has its own way, and its own structure, with the Mx chips, and as long as it does not implement hardware ray tracing it will stay in this unique position. Redshift/V-Ray/Cycles are OK on AS. I don't know whether that is good or bad for anyone. I use an M1 for basic 3D with Redshift/Cycles/Eevee, and it is just fine for me. Yes, it is slower than any hardware ray-tracing solution. Maybe Apple will offer a better solution in the near future; at least I hope so.
 
  • Like
Reactions: jujoje and Xiao_Xi

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Usually, wonky results from different CPUs occur when doing a lot of procedurals for materials. Different CPUs can return different values for the same seeded random numbers.
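The "same seed, visibly different pattern" effect can be sketched with a toy chaotic iteration; this is a hypothetical stand-in for the seeded iterations behind procedural fractal textures, not code from any actual renderer. A discrepancy of about one floating-point ulp in the seed is amplified until the two results have nothing in common:

```python
# Toy model of why a tiny FP difference between CPUs changes a fractal
# pattern: chaotic iterations (here the logistic map in its chaotic
# regime) amplify even a ~1-ulp discrepancy in the seed value.

def logistic_orbit(x, steps, r=3.9):
    """Iterate the logistic map, collecting each value; a stand-in for
    the seeded procedural iterations used by fractal textures."""
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = logistic_orbit(0.4, 100)           # seed as one CPU computes it
b = logistic_orbit(0.4 + 1e-15, 100)   # same seed, off by ~one ulp

drift = max(abs(p - q) for p, q in zip(a, b))
print(drift)   # no longer tiny: the two orbits have fully decorrelated
```

An error of 1e-15 in one noise lookup is harmless; the problem is that each step feeds the next, so by the time a fractal pattern is built the two machines are drawing different pictures.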

As for interactive viewports: VPR on my Mac Pro is pretty damn good in LightWave, but nowhere close to what can be had on Windows. After Effects is decent. I have not messed around a whole lot with Unreal Engine 5 on my Mac Pro, but I imagine it works well, except for all the neat new things that require CUDA.
 
  • Like
Reactions: jujoje and aytan

jujoje

macrumors regular
May 17, 2009
247
288
If large studios render on x86- and Nvidia-based farms, wouldn't an Apple-only rendering engine be useful only to freelancers and small studios? Would studios hesitate to adopt Apple's rendering engine if it produced different results than the farm's rendering engines?

Usually, wonky results from different CPUs occur when doing a lot of procedurals for materials. Different CPUs can return different values for the same seeded random numbers.

Had exactly that problem: back in the day we had a farm that was a mix of Intel and AMD machines, and we got back different results. It turned out there was a floating-point difference in the seed calculation, meaning the fractal patterns differed between the two processor types. Which was fun, just before a deadline :) Oddly enough, this was in LightWave.

It's always been one of the arguments against GPUs: driver updates can change the look of the renderer and break features. Back in the day AMD was unable to display geometry correctly in the viewport, let alone deliver consistent results for GPU renders, and Nvidia breaks their drivers now and then too.

Also, CPU-only and GPU-only render engines (biased vs. unbiased) can produce quite different results. Arnold has some of the closest CPU vs. GPU results, yet Arnold recommends using the CPU for final renders; at least it was that way the last time I used it. Redshift can also use both, and the results are close, but the render times are quite different. V-Ray is quite impressive with its unique features and structure, and I cannot figure out how it manages to work that way.

I think RenderMan was going for 1:1 across the whole XPU architecture, and it looked like they were close. Arnold, Karma, and RenderMan all still seem to recommend the XPU versions for lookdev but the CPU for final renders, due to lingering differences in feature sets and sampling. I think the next version of RenderMan should bring production-ready XPU, though (I don't follow RenderMan development much, so I could be wrong). Impressed that V-Ray seems to have got there.

A USD scene structure and universal materials could be a solution across 3D software, but I have not tried this type of workflow yet.

USD + MaterialX will be awesome once it all works. It's getting there, but there's still a fair way to go in terms of support across the various DCC apps. Probably still at least a year out, but the idea is picking up steam.
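For anyone wondering what that combination looks like on disk, here is a minimal hand-written sketch of a `.usda` layer binding a MaterialX standard_surface to a sphere. The prim names are made up, and the exact schema/output naming can vary between USD and MaterialX versions, so treat it as illustrative only:

```usda
#usda 1.0
(
    defaultPrim = "Asset"
)

def Xform "Asset"
{
    def Sphere "Geo" (
        prepend apiSchemas = ["MaterialBindingAPI"]
    )
    {
        rel material:binding = </Asset/Mat>
    }

    def Material "Mat"
    {
        # Renderers that resolve the MaterialX render context pick this up
        token outputs:mtlx:surface.connect = </Asset/Mat/Surface.outputs:out>

        def Shader "Surface"
        {
            # Standard MaterialX surface-shader node definition
            uniform token info:id = "ND_standard_surface_surfaceshader"
            color3f inputs:base_color = (0.8, 0.1, 0.1)
            float inputs:specular_roughness = 0.35
            token outputs:out
        }
    }
}
```

The appeal is that any DCC or renderer honouring the MaterialX context should shade this layer the same way, which is exactly the cross-application consistency being discussed.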
 
  • Like
Reactions: aytan and Xiao_Xi

aytan

macrumors regular
Dec 20, 2022
161
110
Had exactly that problem: back in the day we had a farm that was a mix of Intel and AMD machines, and we got back different results. It turned out there was a floating-point difference in the seed calculation, meaning the fractal patterns differed between the two processor types. Which was fun, just before a deadline :) Oddly enough, this was in LightWave.

It's always been one of the arguments against GPUs: driver updates can change the look of the renderer and break features. Back in the day AMD was unable to display geometry correctly in the viewport, let alone deliver consistent results for GPU renders, and Nvidia breaks their drivers now and then too.



I think RenderMan was going for 1:1 across the whole XPU architecture, and it looked like they were close. Arnold, Karma, and RenderMan all still seem to recommend the XPU versions for lookdev but the CPU for final renders, due to lingering differences in feature sets and sampling. I think the next version of RenderMan should bring production-ready XPU, though (I don't follow RenderMan development much, so I could be wrong). Impressed that V-Ray seems to have got there.



USD + MaterialX will be awesome once it all works. It's getting there, but there's still a fair way to go in terms of support across the various DCC apps. Probably still at least a year out, but the idea is picking up steam.
I tried RenderMan on an Intel-based Mac via Blender. It produced great results and has a wide range of tools that the user can customize, and the material system was really good. On the Intel Mac, of course, I did not use a few of the advanced options like GPU rendering; everything I experienced was CPU-based. As I said, the whole setup, the lighting build-up, the materials, and the results were great; the only downside was the quite long render times. I am still watching RenderMan and crossing my fingers for AS, but I don't think it will happen any time soon :(. They released an update last month, and as far as I can tell there was still no option for any kind of ARM CPU. I just checked, and it still supports only Windows 10 64-bit and, on the Mac, macOS 10.13, 10.14, and 10.15.
I keep looking for a project where I can use and explore a USD + MaterialX workflow, but it has not happened yet, with tight deadlines of course.
I am following MoonRay too, and have been waiting for a demo version since day 1 :)
 
  • Like
Reactions: jujoje

jujoje

macrumors regular
May 17, 2009
247
288
I see no reason why Apple would want to make its own render engine, unless it ties in with their AR/VR ambitions.

For standard DCC applications , most come with their own render engines. Or there are several 3rd party ones. It might be easier to help Renderman or V-ray etc to take advantage of whatever HW Apple’s providing (like Blender’s Cycles, E-Vee etc)

That was why I went with the Quick Look/Preview angle - it feels like an area where Apple could take advantage of its own render engine (plus they presumably already have the hooks, as Preview uses Pixar's Storm GL renderer). It would be awesome if you could send clients turntables or shots and they could view them on their phone/iPad/Mac exactly as you see them, with no additional software.

I don't see them doing it, but perhaps someone at Apple has an itch to build their own renderer...
 
  • Haha
Reactions: aytan

sirio76

macrumors 6502a
Mar 28, 2013
578
416
Impressed that VRay seems to have got there
Absolutely not ;) V-Ray CPU and V-Ray GPU are supported as two separate engines, and the developers recommend avoiding switching between the two in production, since the results may differ significantly and the feature set is more limited on the GPU.
One thing you can do is let the CPU run the GPU engine as a CUDA device; this way you get results 100% identical to those obtained with a GPU, but of course this is much slower than running the CPU with the native CPU engine.
This possibility exists for three main reasons:
- if your graphics card runs out of VRAM, you can still use your CPU to run the GPU engine and complete the job
- according to the developers, it is easier to debug GPU-engine issues by running the code on the CPU
- if you have a very fast CPU (like a 64-core Threadripper), it can help complete the job faster by using both CPU and GPU

As far as I know, there is still no engine that can efficiently provide pixel-perfect results with an identical feature set. The "hybrid" mode in V-Ray can do it, but you are essentially giving up quite a bit of the CPU's performance, so it is not optimal, and if you have an average CPU it may even slow down the render when combined with a high-end GPU.
 
  • Like
Reactions: iPadified and aytan