
Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Great :) How could I search for the post? Sorry, I can't find it.

Literally (as of my posting this) right below this thread (3D Rendering on Apple Silicon, CPU&GPU)...

Tracking 3D software for Apple Silicon

The listed post creator (Xiao_Xi) and the "WikiPost" tag are also heavy indicators...

To be fair, the title could be misleading; one might think it is a thread about 3D tracking software for ASi, like for motion tracking or something...?
 

aytan

macrumors regular
Dec 20, 2022
161
110
Literally (as of my posting this) right below this thread (3D Rendering on Apple Silicon, CPU&GPU)... Tracking 3D software for Apple Silicon (…)
To be fair, the title could be misleading; one might think it is a thread about 3D tracking software for ASi, like for motion tracking or something...?
Thanks :) No, I don't think so, but your point is fair. It could be misleading.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
To be fair, the title could be misleading; one might think it is a thread about 3D tracking software for ASi, like for motion tracking or something...?
What would be a better title? If you can't change it, I can.
 

Herbert123

macrumors regular
Mar 19, 2009
241
253
Let's start small.

3D modeling software:
- Blender
- Houdini
- Maya
- Zbrush

3D rendering software:
- Cinema 4D
- Corona
- Cycles
- Eevee
- Octane
- Redshift
- V-Ray

What information should we include?
Cinema 4D's legacy render engine is deprecated, CPU-only, and outdated. It has been replaced by Redshift.

As pointed out earlier, Cinema 4D provides good modeling features, just like any of the other options in that top list.

I have done this type of testing before, comparing 3D DCCs, and the two core benchmarks would be:

[1] viewport performance: the speed (frames per second) at which a scene or object can be displayed and worked with. This type of test is easy to prepare for direct comparison between DCCs.

Typical scenarios to test for are:
  • raw single-object display speed (no editing): FPS while orbiting, transforming, and panning. The object to test with would be a high-density poly mesh without any texturing; a 3D-scanned object is a good candidate.
  • a scene comprised of thousands of individual mesh objects without textures, to test how well a DCC handles a scenario with many separate objects. One scene with low-poly objects, one with medium-complexity poly objects.
  • a medium-size production scene, fully textured with a heavy load of PBR textures. Test display quality modes between different DCCs.
  • a heavy landscape scene with millions of instances (trees, for example).
  • raw single-object edit speed: editing a high-poly mesh is very different from merely panning and orbiting around it. Measure typical transformation edits: grabbing a single polygon and moving it, then selecting a larger group of polygons and moving them.
These tests tell you a lot about how a DCC handles real-time display and memory management during daily work.
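As a rough illustration of how the display-speed scenarios could be timed, here is a minimal sketch assuming Blender's bundled Python API (bpy); the redraw_timer operator is Blender's built-in way to force viewport redraws, and the iteration count is an arbitrary placeholder. Run it from Blender's interactive Python console with a 3D Viewport open:

```python
# Minimal viewport-FPS probe using Blender's bundled Python API (bpy).
# Run from Blender's interactive Python console so a window context exists.
import time
import bpy

def viewport_fps(iterations=200):
    """Force full viewport redraws and report average frames per second."""
    start = time.perf_counter()
    # 'DRAW_WIN_SWAP' redraws the window and swaps buffers each iteration,
    # approximating sustained viewport drawing load for the open scene.
    bpy.ops.wm.redraw_timer(type='DRAW_WIN_SWAP', iterations=iterations)
    elapsed = time.perf_counter() - start
    return iterations / elapsed

print(f"~{viewport_fps():.1f} FPS for the current scene and shading mode")
```

Run it once per scenario (raw scan mesh, many-objects scene, textured production scene) and per shading mode, on each machine being compared.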

[2] render speed: the time it takes to render a scene. This is much MUCH harder to compare between DCCs and different render engines, if it can be compared at all. There are too many variables and differences between render technologies.

The only thing that can be compared in absolute terms is the rendering speed of the same render engine using the same scene on different hardware: CPU, GPU, or a combination of both.

It is possible to create identical scenes between render engines though, for a subjective measure of how well a render engine copes with different types of scenes.

Typical scenes to test for are:
  • a typical architectural indoor scene
  • a typical architectural outdoor scene with trees, etc.
  • a typical character animation production scene, rendering a full animation sequence
  • a typical production quality heavy environment scene with loads of instances that would tax very high end machines and test if a render engine/DCC can handle the rendering. For example: https://www.disneyanimation.com/resources/moana-island-scene/
Aside from these, specific things like caustics could be tested as well. Preparing these scenes takes a lot of effort and knowledge of each DCC and render engine. Each render engine and scene can be optimized in various ways, and that requires someone who is very familiar and experienced with each render engine.
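For the same-engine, same-scene comparison across machines, a rough harness can live outside any DCC. A sketch using plain Python and Blender's documented headless CLI flags (-b background, -o output path, -f frame); the scene filenames are placeholders:

```python
# Times headless renders of the same .blend files on whichever machine this
# script runs on; compare wall-clock times across hardware for the same
# engine and scene. Scene filenames below are placeholders.
import subprocess
import time

SCENES = ["indoor.blend", "outdoor.blend", "character_anim.blend"]

for scene in SCENES:
    start = time.perf_counter()
    subprocess.run(
        ["blender", "-b", scene, "-o", "//render_", "-f", "1"],
        check=True,  # fail loudly if a render errors out
    )
    print(f"{scene}: {time.perf_counter() - start:.1f} s wall clock")
```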

Your list amended:

3D modeling software:
  • Blender
  • Cinema 4D
  • Maya
  • Houdini
  • Zbrush
  • 3dCoat
  • Modo
  • LightWave*

3D rendering software:
  • Cinebench (C4D benchmarking)
  • V-Ray
  • Octane
  • Redshift
  • Cycles
  • Corona
  • Karma/Mantra
  • Arnold
  • AMD Pro
  • Pixar RenderMan
  • Eevee (realtime)
  • Unreal (realtime)
* LightWave was resurrected from development hiatus the other day, so I felt inclined to include it.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Your list amended: (…)
I would amend the list further:
Remove Zbrush and 3DCoat from the list of 3D ‘modeling’ software.
Replace 3D ‘modeling’ software with 3D DCC (full-blown functionality, even if some are weak in certain areas - modeling, rigging, animation, simulation, rendering/shading, etc.)
Add Unreal to that list (yes, it's weak in modeling, even rigging, but it's there)

Zbrush and 3DCoat do not have full DCC functionality.
Add Zbrush, 3DCoat, and Mudbox to a 3D apps list (apps that do one or a few things well), and add some new names to that list, even if the Mac port is only planned: Gaea, Embergen, World Creator.

Finally, add Keyshot and Clarisse to the rendering section.

Those into CAD/BIM may want to chip in with a list.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Should we split the renderers that run on both CPU and GPU into two entries, like Blender (CPU) and Blender (GPU)?
 

aytan

macrumors regular
Dec 20, 2022
161
110
Should we split the renderers that run on both CPU and GPU into two entries, like Blender (CPU) and Blender (GPU)?
I don't think so; rendering on the CPU in Blender is not common. GPU rendering is where Blender shines, with CUDA or OptiX. Eevee on Metal/OpenGL is more common.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
Should we split the renderers that run on both CPU and GPU into two entries, like Blender (CPU) and Blender (GPU)?
Yeah, I think so, since in the case of Blender they have (1) rewritten the basic viewport in Metal, (2) ported Eevee to Metal, (3) ported Cycles to support ARM-native CPU processing, and finally (4) ported Cycles to GPU processing via Metal.
Other renderers like Arnold and V-Ray just recently released ARM CPU rewrites/native binaries but don't seem to have any plans for GPU support on Mac.
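If the wiki does split the entries, the same scene can produce both numbers from one machine. A sketch assuming Blender's CLI (the -E engine flag and the --cycles-device option passed after "--" are documented Blender/Cycles switches; the scene path is a placeholder):

```python
# Render one frame of the same scene with Cycles on CPU and on GPU (Metal),
# so CPU and GPU wiki entries can be measured under identical conditions.
import subprocess
import time

def time_render(device: str, scene: str = "benchmark.blend") -> float:
    start = time.perf_counter()
    subprocess.run(
        ["blender", "-b", scene, "-E", "CYCLES", "-f", "1",
         "--", "--cycles-device", device],  # args after "--" go to Cycles
        check=True,
    )
    return time.perf_counter() - start

for device in ("CPU", "METAL"):  # use "CUDA" or "OPTIX" on Nvidia machines
    print(f"{device}: {time_render(device):.1f} s")
```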
 

aytan

macrumors regular
Dec 20, 2022
161
110
Yeah, I think so, since in the case of Blender they have (1) rewritten the basic viewport in Metal, (2) ported Eevee to Metal, (3) ported Cycles to support ARM-native CPU processing, and finally (4) ported Cycles to GPU processing via Metal. (…)
OK, makes sense; it could be useful for comparing M1/M2/Mx CPUs.
Is it possible to compare AS CPUs with other CPUs based on the same scene file and software, even if they run a different OS?
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
FYI: the latest version of Octane X (updated 28 March) is about 30% faster on ASi, sometimes even more, it seems. One user (yes, anecdotal) reported that the M2 Max finishes a specific test frame in about 2.1x the time of a desktop 3080 Ti. This is actually quite an impressive development.
 

aytan

macrumors regular
Dec 20, 2022
161
110
FYI: the latest version of Octane X (updated 28 March) is about 30% faster on ASi, sometimes even more, it seems. One user (yes, anecdotal) reported that the M2 Max finishes a specific test frame in about 2.1x the time of a desktop 3080 Ti. This is actually quite an impressive development.
Yes :) I am using it right now. Somehow Octane has reached that level. Before that update it barely worked in C4D and I gave up. Suddenly Octane started to work very stably and fast; it is nearly 40-50% faster than Redshift right now. I am researching the differences between Redshift and Octane. Octane might have a few downsides at first look; I should look into it more deeply.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
An OTOY forum user has shared this graph with benchmarks. Can anyone confirm whether this graph shows the improvement correctly?

[attached: Octane X benchmark graph]

By the way, no one has filled in the Octane information, but I assume it is fully optimized.
 

aytan

macrumors regular
Dec 20, 2022
161
110
An OTOY forum user has shared this graph with benchmarks. Can anyone confirm whether this graph shows the improvement correctly? (…)
I can say yes, there is a big improvement. Regardless of render speed, it is working correctly now. In a few hours I will post a comparison with Redshift.
 

vel0city

macrumors 6502
Original poster
Dec 23, 2017
347
510
Wow, I have an Octane sub but don't even have it installed. That's something to do over Easter, then.
 

teak421

macrumors member
Sep 17, 2020
78
143
Looks like Unreal Engine 5.2 will have full Apple Silicon support for its editor.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Looks like Unreal Engine 5.2 will have full Apple Silicon support for its editor.
Sadly still no Nanite support.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Sadly still no Nanite support.

Apparently they did add experimental support for M2 Macs

 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
It's just Lumen and Nanite right now, right?
Just Lumen, at least officially (and only software RT for Lumen, if you are using RT).
Apparently they did add experimental support for M2 Macs

Oh wow, so does this mean there really isn't hardware support for it on the M1? Is it safe to assume they are talking about the 64-bit atomics? So weird that it would be a hardware limitation and not just an API one.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Oh wow, so does this mean there really isn't hardware support for it on the M1? Is it safe to assume they are talking about the 64-bit atomics?


M1 only has hardware support for 32-bit integer atomics, from what we know. Floating-point atomics (added in Metal 2.1 or 2.2, if I remember correctly) are emulated on M1 hardware using integer atomic compare-and-exchange retry loops. M2 appears to add limited support for 64-bit atomics (only min/max operations, though) as well as native floating-point atomics. It is possible that M2 supports more 64-bit atomics but they are buggy, or maybe they just did a quick-and-dirty hardware patch specifically with Nanite in mind.

So weird that it would be a hardware limitation and not just an API one.

Why do you think it is weird? Wouldn't it be weirder if they had hardware support but decided not to expose it in the API?

P.S. Apple patents describe a much more complete atomics and memory ordering support on the GPU, but I suppose it's either not implemented or maybe is buggy.
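For readers unfamiliar with the pattern, the emulation described above looks roughly like this. A pure-Python illustration only (real code would be emitted by the Metal shader compiler for the GPU; the Atomic32 class here merely stands in for a hardware 32-bit integer compare-and-swap):

```python
# Emulating a float atomic add on top of a 32-bit integer compare-and-swap
# (CAS) retry loop, the approach described above for M1-class hardware.
import struct
import threading

def f2i(f: float) -> int:
    """Reinterpret a float's bits as a 32-bit unsigned integer."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

def i2f(i: int) -> float:
    """Reinterpret a 32-bit unsigned integer's bits as a float."""
    return struct.unpack("<f", struct.pack("<I", i))[0]

class Atomic32:
    """A 32-bit cell with CAS; the lock models the hardware primitive."""
    def __init__(self, bits: int):
        self.bits = bits
        self._lock = threading.Lock()

    def compare_exchange(self, expected: int, new: int) -> int:
        with self._lock:  # hardware does this as one indivisible step
            old = self.bits
            if old == expected:
                self.bits = new
            return old

def atomic_add_float(cell: Atomic32, value: float) -> None:
    old = cell.bits
    while True:
        new = f2i(i2f(old) + value)            # add on a snapshot
        prev = cell.compare_exchange(old, new)
        if prev == old:                        # no other thread raced us
            return
        old = prev                             # lost the race; retry

cell = Atomic32(f2i(0.0))
atomic_add_float(cell, 1.5)
atomic_add_float(cell, 2.25)
print(i2f(cell.bits))  # 3.75
```

Under contention, every loser of the race loops again, which is why emulated float atomics are noticeably slower than native ones.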
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
M1 only has hardware support for 32-bit integer atomics, from what we know. (…)
Why do you think it is weird? Wouldn't it be weirder if they had hardware support but decided not to expose it in the API?
I guess it is my naive thinking that Apple would have "basic" feature parity with "regular" GPUs.

EDIT: Is this because Apple is essentially scaling up a mobile (read: phone) solution for "desktop/laptop" use?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I guess it is my naive thinking that Apple would have "basic" feature parity with "regular" GPUs.

EDIT: Is this because Apple is essentially scaling up a mobile (read: phone) solution for "desktop/laptop" use?

Yeah, probably. It's unfortunate that they didn't manage to gain desktop feature parity when they released the A14. They might have thought that 64-bit atomics were not important enough to invest resources into...
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
How is it possible that Apple needs more transistors than Nvidia and still lacks some functionality?
 