If Apple can't improve this, then their high-end market will be limited only to video and photo work. An Apple engineer has said on the Blender forum:
- The structure of the renderer is much aligned with the existing path taken for CPU and CUDA, with little leverage of Apple Silicon’s more unique architecture as of yet. We do leverage the unified memory architecture to avoid duplication of resources, but there’s much more to do on this front and we’re keen to see that leaned on in CPU+GPU rendering modes. There is certainly scope to use the Apple Neural Engine for denoise in the viewport too.
- Correctness has definitely been a focus for us, with ensuring we get solid results and a renderer users can use and rely on. This is not intended to be a tech demo - it is aimed to be a tool that users can use all day every day. Some of the early R&D we’ve done has resulted in render performance being more than doubled over where it is now, but taking these prototypes and productising them is another matter, and takes significant time. The avenue to performance on Apple Silicon means driving the GPU in the way that is most efficient for its architecture. Each GPU architecture is different though, and we need to be able to cleanly drive our GPUs more efficiently but without compromising the existing performance on other GPUs.
- Optimisation is going to be an ongoing effort, rather than a task we tackle just the once, and I’m hoping the team can see some improvements land in every release. We have big ambitions.
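To make the unified-memory point in the first bullet concrete, here is a minimal Swift/Metal sketch. This is not Cycles source code, and the scene-data array is a hypothetical stand-in; it only illustrates how a `.storageModeShared` buffer on Apple Silicon lets the CPU and GPU work on a single allocation instead of keeping duplicate copies:

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Hypothetical scene data; in a renderer this might be geometry or BVH data.
var sceneData: [Float] = (0..<1024).map { Float($0) }
let byteCount = sceneData.count * MemoryLayout<Float>.stride

// On Apple Silicon, .storageModeShared maps one allocation into both the
// CPU and GPU address spaces, avoiding the separate "upload" copy that a
// discrete GPU with its own VRAM would require.
guard let buffer = device.makeBuffer(bytes: &sceneData,
                                     length: byteCount,
                                     options: .storageModeShared) else {
    fatalError("Buffer allocation failed")
}

// The CPU can read or update the contents in place; the GPU sees the same
// bytes once the next command buffer that uses this buffer is scheduled.
let contents = buffer.contents().bindMemory(to: Float.self,
                                            capacity: sceneData.count)
contents[0] = 42.0  // updated in place, no re-upload needed
```

On a discrete GPU the same data would typically live in a second, private VRAM copy; avoiding that duplication is the benefit of unified memory the engineer describes the Metal backend as already leveraging.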
Source: Cycles Apple Metal device feedback, devtalk.blender.org