With the good news that Apple is getting on board with implementing Metal for GPU rendering in Blender (Cycles), I was wondering what everyone else is thinking with regard to 3D on the shiny new Apple Silicon goodness.
One thing I had been wondering about is whether the unified memory would be a significant benefit here. For heavy GPU renders one of the main hits comes from having to move data between system memory and GPU memory, which is particularly the case when rendering volumes. If the GPU has a lot of memory available (32 GB+), swapping is reduced and everything is pretty fast. Would I be correct in thinking that unified memory would avoid that swapping here?
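For what it's worth, here's a rough Swift/Metal sketch of how I understand the difference (not how Cycles or Karma actually manage memory, and the buffer size is just a made-up number for illustration). On Apple Silicon a shared buffer is directly visible to both CPU and GPU with no copy, whereas on a discrete GPU you'd typically stage the data and blit it into VRAM, which is where the swapping pain comes from once the data outgrows the card:

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Hypothetical voxel grid size, just for illustration.
let volumeBytes = 8 * 1024 * 1024 * 1024

if device.hasUnifiedMemory {
    // Apple Silicon: one allocation in the shared pool, no copy to VRAM,
    // so the GPU can see as much of it as system memory allows.
    let volume = device.makeBuffer(length: volumeBytes, options: .storageModeShared)
    // ... bind `volume` to the compute/render pipeline directly
} else {
    // Discrete GPU: the data has to fit in (or be paged through) dedicated VRAM.
    let staging = device.makeBuffer(length: volumeBytes, options: .storageModeShared)
    let vram = device.makeBuffer(length: volumeBytes, options: .storageModePrivate)
    // ... encode a blit from `staging` into `vram`, then use `vram` for rendering
}
```

At least that's my reading of it; whether the renderers actually take advantage of shared storage on Apple Silicon is a separate question.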
For an example of GPU rendering of volumes, check out the example at the end of the Houdini 19 sneak peek at the 9:40 mark: 5 min a frame at 4K on an A6000 (I think that's 48 GB, $5,000 - $7,000 US), which is pretty nuts speed-wise. To get fully resolved renders on CPU I'd guess you'd be looking at 20 min+, less with denoising.
For me, as someone who primarily uses Houdini, the main problem at the moment is the AMD graphics drivers and being stuck on OpenCL 1.2; there is an increasing number of features that aren't supported on the Mac because they're CUDA-only (for example the Vellum pressure solver in H18).
Really hoping that Apple steps up in getting 3D DCCs up to snuff on the platform (I really hope they will, given that they need them for their AR development). Their emphasis on this and on photogrammetry has been interesting to see (check out this year's presentations at WWDC), so I'm cautiously optimistic.
Anyway, that's my 2c. Wondering what everyone else's take on this is? Excited for Apple Silicon, or thinking that Nvidia and Windows/Linux is going to be the way to go? And will any of the custom modules on the AS chips bring anything unique to the table?