
jujoje

macrumors regular
Original poster
May 17, 2009
247
288
With the good news that Apple's getting on board with implementing Metal for GPU rendering in Blender (Cycles), I was wondering what everyone else is thinking with regard to 3D on the shiny new Apple Silicon goodness.

One thing I had been wondering about was whether the unified memory would have a significant benefit here. For heavy GPU renders, one of the main performance hits comes from having to move data between system memory and GPU memory, which is particularly the case when rendering volumes. If you have a lot of memory available to the GPU (32GB+), swapping is reduced and everything is pretty fast. Would I be correct in thinking that unified memory would prevent swapping here?
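To make the swapping concern concrete, here's a back-of-envelope sketch (my own illustrative numbers, nothing published by Apple or SideFX) of how fast a dense volume grid outgrows typical discrete-GPU VRAM:

```python
# Rough VRAM footprint of a dense voxel grid: res^3 voxels x channels x
# bytes per channel (float32 assumed). Purely illustrative numbers.

def volume_bytes(resolution, channels=4, bytes_per_channel=4):
    """Dense grid footprint in bytes (e.g. density + RGB emission)."""
    return resolution ** 3 * channels * bytes_per_channel

for res in (256, 512, 1024):
    print(f"{res}^3 grid: {volume_bytes(res) / 2**30:.2f} GiB")
# A 1024^3 grid alone is 16 GiB, before geometry, textures or BVHs -
# exactly the point where a 12-16GB card starts paging and a unified
# 32-64GB pool doesn't.
```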

For an example of GPU rendering volumes, check out the example at the end of the Houdini 19 sneak peek at the 9:40 mark; 5 min a frame at 4K on an A6000 (I think that's 48GB, $5,000 - $7,000 US). Which is pretty nuts speed-wise. To get fully resolved renders on CPU I'd guess you'd be looking at 20 min+, less with denoising.

For me, as someone who primarily uses Houdini, the main problem atm is the AMD graphics drivers, and being stuck on OpenCL 1.2; there is an increasing number of features that aren't supported on the Mac, being CUDA-only (for example the Vellum pressure solver in H18).

Really hoping that Apple steps up in getting 3D DCCs up to snuff on the platform (I really hope they will, given that they need them for their AR development). Their emphasis on this and photogrammetry has been interesting to see (check out this year's presentations at WWDC), so cautiously optimistic.

Anyway, that's my 2c :) Wondering what everyone else's take on this is. Excited for Apple Silicon, or thinking that Nvidia and Win/Linux is going to be the way to go? And will any of the custom modules on the AS chips bring anything unique to the table?
 

altaic

Suspended
Jan 26, 2004
712
484
The short answer is that your 2nd paragraph is exactly what's expected from the coming ASi. Your specific questions place me out of my depth, but I'm very excited for GPU-related tasks on ASi. However, there is a long answer: @leman has discussed this in great detail (a search may turn up some gold), and will probably entertain you shortly
 

Romanesco

macrumors regular
Jul 8, 2015
126
65
New York City
My guess is Apple is laying the groundwork for the upcoming Mac Pro replacement. They severely lack in the 3D rendering department and need to step it up if more (3D) professionals are to consider them viable again.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
One thing I had been wondering about was whether the unified memory would have a significant benefit here. For heavy GPU renders, one of the main performance hits comes from having to move data between system memory and GPU memory, which is particularly the case when rendering volumes. If you have a lot of memory available to the GPU (32GB+), swapping is reduced and everything is pretty fast. Would I be correct in thinking that unified memory would prevent swapping here?

You are spot on. With the UMA model, the GPU has fast access to all the available system RAM, which obviously makes this model very attractive for production renderers.


For an example of GPU rendering volumes, check out the example at the end of the Houdini 19 sneak peek at the 9:40 mark; 5 min a frame at 4K on an A6000 (I think that's 48GB, $5,000 - $7,000 US). Which is pretty nuts speed-wise. To get fully resolved renders on CPU I'd guess you'd be looking at 20 min+, less with denoising.

Apple's strategy for production renderers appears to involve their Metal raytracing API, which includes several features targeted at pro software (larger volume limits, animated objects, etc.). It has been pointed out that production renderers usually aim to produce consistent results on all platforms, so it's not clear whether they will utilize this API, but if one is writing a new renderer, it's certainly worth a look as it will definitely save you a lot of work…

Regarding everything else, well, any current GPU renderer code can be ported to Metal. It simply requires effort. If these new Macs can establish themselves as solid machines for this work, I am sure that the software will follow.

@leman has discussed this in great detail (a search may turn up some gold), and will probably entertain you shortly
Is that what I do now?
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
I am really excited about the Blender announcement; it will be good to see 3D on the Mac utilizing the Metal API and getting to properly use all the horsepower within the ASi SoC(s)...!

I say SoC(s) in anticipation of the (rumored) Jade 2C / Jade 4C chips...

I plan to dump my PC (Asus Crosshair VIII Impact mDTX mobo / AMD Ryzen 3900X 12-core CPU with 240 AIO cooler / 64GB RAM / Two PCIe 4.0 1TB M.2 NVMe SSDs / nVidia GTX 650Ti with 2GB VRAM / 750W Platinum-rated SFX PSU with custom cables / Fractal ERA ITX chassis) and switch over to a top-end M1X Mac mini (10-core CPU / 64GB RAM / 32-core GPU / 1TB SSD / 10Gb Ethernet)...!
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
You are spot on. With the UMA model, the GPU has fast access to all the available system RAM, which obviously makes this model very attractive for production renderers.

Good to know I wasn't too far off base :)

That could be a strong selling point, particularly given the prices of GPUs with decent memory at the moment (12GB is probably the minimum for anything remotely complex). While the GPUs might not be as fast as the latest Nvidia cards, not having to go out-of-core for textures and volumes might beat raw speed.
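A toy model of that trade-off (every number here is a made-up assumption for illustration; `passes` stands in for how often spilled data gets re-streamed per frame):

```python
# Toy out-of-core model: per-frame time = compute time + time spent
# re-streaming whatever doesn't fit in local memory over PCIe.
# All figures are invented assumptions, not benchmarks.

def frame_time(compute_s, working_set_gb, vram_gb, passes=20, pcie_gb_s=12.0):
    """Spilled data is assumed to be re-streamed once per render pass."""
    spill_gb = max(0.0, working_set_gb - vram_gb)
    return compute_s + passes * spill_gb / pcie_gb_s

# A faster GPU that has to page a 48GB scene vs a slower one that holds it:
fast_but_paging = frame_time(compute_s=60, working_set_gb=48, vram_gb=12)
slow_but_in_core = frame_time(compute_s=90, working_set_gb=48, vram_gb=64)
print(fast_but_paging, slow_but_in_core)  # 120.0 90.0 - raw speed loses
```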

Apples strategy for production renderers appears to involve their Metal raytracing API which includes several features targeted at pro software (larger volume limits, animated objects etc.).

Do you have any sources for this, or what indicates them heading in that direction? Would be curious to know more. Had a look through the WWDC talks and didn't recall much on this; some stuff on real-time raytracing, shadows and motion blur, which was pretty cool. I think Metal still has a way to go to get up to speed on the requirements for 3D apps (iirc it still lacks geometry tessellation shader).

It has been pointed out that production renderers usually aim to produce consistent results on all platforms, so it’s not clear whether they will utilize this API, but if one is writing a new renderer, it’s certainly worth a look as it will definitely save you a lot of work…

I remember back in the AMD-64 days, getting different fractal patterns on AMD and Intel hardware (something to do with floating-point differences when generating fractal seeds, iirc). That was a fun thing to discover just before delivery :D Having a heterogeneous render farm was not a good idea.
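That amplification is easy to reproduce: chaotic iterations (fractals included) blow a last-bit floating-point difference up to macroscopic scale within a few dozen steps. A quick sketch, using the logistic map as a stand-in for a fractal seed calculation:

```python
# Chaotic iteration amplifies a ~1-ulp input difference exponentially,
# which is why two FP implementations differing only in the last bit can
# produce visibly different fractals. Logistic map used as a stand-in.

def iterate(seed, n=100, r=3.9):
    x = seed
    for _ in range(n):
        x = r * x * (1.0 - x)  # chaotic regime at r = 3.9
    return x

a = iterate(0.4)
b = iterate(0.4 + 1e-15)  # simulate a last-bit difference between CPUs
print(abs(a - b))  # the two trajectories have completely decorrelated
```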

Really curious how heterogeneous renderers like RenderMan XPU and Houdini's Karma work given different processors and GPUs (although I think both of them are Nvidia-only at the moment, and the former optimised towards Intel).

I'm hoping that someone will benchmark Octane and Redshift (both of which have been ported to Metal) when the new MacBooks get released; will be interesting to see how it compares to both CUDA and the old AMD GPUs on Metal.

Hopefully we get some 3D software demoed on Monday, not just the Final Cut / AE benchmarks.
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
My guess is Apple is laying the groundwork for the upcoming Mac Pro replacement. They severely lack in the 3D rendering department and need to step it up if more (3D) professionals are to consider them viable again.

I hope so! Apple’s 3D support has always been really all over the place.

On one hand the integration of USD files and the Hydra Storm delegate in Monterey is really powerful (although why they crammed it into the Preview app I don’t know - it makes no sense). On the other the graphic drivers on my iMac are error prone, and I'm stuck with OpenGL and OpenCL from the 2000s...

Funnily enough, Houdini (through Rosetta) runs better on my M1 MBA in terms of responsiveness and viewport interaction than on my iMac Pro. It appears that Apple's OpenGL/CL-to-Metal translation is more robust than running it natively. *

* Excludes height fields which cause the dreaded beachball of doom .
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Hopefully we get some 3D software demoed on Monday, not just the Final Cut / AE benchmarks.
I would think a Blender demo would be a given, even if it is a super alpha variant of 3.0 being shown, just something to show the M1X SoC flexing its new 3D Metal muscles...?!?
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
I would think a Blender demo would be a given, even if it is a super alpha variant of 3.0 being shown, just something to show the M1X SoC flexing its new 3D Metal muscles...?!?

That’d be pretty sweet! It did seem that they were still on pretty early stages targeting a post 3.0 release for the metal stuff. Would be great to get a sneak peak and the announcement of the pro MacBooks would be the time to start making the case that they’re actively pursuing 3D on the Mac.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Do you have any sources for this, or what indicates them heading in that direction? Would be curious to know more. Had a look through the WWDC talks and didn't recall much on this; some stuff on real-time raytracing, shadows and motion blur, which was pretty cool.

It’s in the WWDC sessions on ray tracing. Motion blur is one of the features aimed at production renderers that AFAIK has no alternatives in other APIs. Metal also supports much larger volumes than other APIs with the extended RT profile.

I think Metal still has a way to go to get up to speed on the requirements for 3D apps (iirc it still lacks geometry tessellation shader).

Metal's lack of speed here is simply due to GPU performance and the lack of hardware-level RT acceleration. Metal fully supports tessellation. Metal lacks geometry shaders because geometry shaders have always been a badly designed feature that does not fit well with how GPUs work. Since Metal removes most of the legacy API cruft and modernizes the approach to programming GPUs, geometry shaders were one of the first to go. In Metal, you use compute shaders with GPU-driven pipelines to implement the things geometry shaders are supposed to do; this allows you to optimize the algorithms and utilize the GPU more efficiently.
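To sketch that compute-shader pattern, here's the data flow in plain Python standing in for a Metal kernel (all names are illustrative): a compute pass amplifies each input point into four billboard-quad vertices and writes them to a buffer that a later draw would consume, which is the classic geometry-shader use case.

```python
# Compute-pass replacement for a geometry shader: one "thread" per input
# point expands it into 4 quad-corner vertices written to an output
# buffer, instead of a GS amplifying primitives mid-pipeline.
# Python model only; a real version would be a Metal compute kernel.

def expand_points_to_quads(points, half=0.5):
    """Each (x, y) point becomes 4 vertices of an axis-aligned billboard quad."""
    out = []
    for (x, y) in points:
        out += [(x - half, y - half), (x + half, y - half),
                (x + half, y + half), (x - half, y + half)]
    return out

verts = expand_points_to_quads([(0.0, 0.0), (2.0, 1.0)])
print(len(verts))  # 8 vertices: 4 per input point
```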
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
It’s in the WWDC sessions on ray tracing. Motion blur is one of the features aimed at production renderers that AFAIK has no alternatives in other APIs. Metal also supports much larger volumes than other APIs with the extended RT profile.

Cheers for that; will have to give it another watch (just skimmed through it at the time looking for pretty pictures). That's interesting about volumes, which are one of those things that all GPU renderers have a hard time with, being difficult to fit into memory and rather intensive to path trace. Iirc RenderMan XPU can't do volumes at all yet...

Metal's lack of speed here is simply due to GPU performance and the lack of hardware-level RT acceleration. Metal fully supports tessellation. Metal lacks geometry shaders because geometry shaders have always been a badly designed feature that does not fit well with how GPUs work. Since Metal removes most of the legacy API cruft and modernizes the approach to programming GPUs, geometry shaders were one of the first to go.

I think I was conflating geometry shaders and tessellation. Had a bit of a look into it and came across this, which amused me :)

Sounds like geometry shaders are going the way of all flesh with Metal and Vulkan. Hopefully this means that as apps move towards modernising their viewports, things will be a bit more straightforward cross-platform-wise; it certainly feels like the Houdini viewport on Mac is a house of cards working around OpenGL 4.1 limitations.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I think I was conflating geometry shaders and tessellation. Had a bit of a look into it and came across this, which amused me :)

Sounds like geometry shaders are going the way of all flesh with Metal and Vulkan. Hopefully this means that as apps move towards modernising their viewports, things will be a bit more straightforward cross-platform-wise; it certainly feels like the Houdini viewport on Mac is a house of cards working around OpenGL 4.1 limitations.

Vulkan still supports geometry shaders and apparently some people even use them. You see, this is part of the problem: even when these committees do something new, they are still overly focused on backwards compatibility, even if the feature is known to be a mistake. The result is that the bad design is kept to appease some legacy users, which means that some new users will invariably use it, which means that it will probably never be removed, and in the end we are stuck with a crappy system. The only way to ensure high quality moving forward is to mercilessly cut bad legacy features. Which is one of the reasons I am a fan of how Apple does things. They are not afraid to make unpopular changes, and despite the usual outcry, these changes tend to become the new standard a few years down the road.
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
Vulkan still supports geometry shaders and apparently some people even use them.

On my brief google for stuff on geometry shaders, the general consensus on Vulkan and geometry shaders was that you shouldn't use them: they're slow and buggy, so there's no advantage to them. Hopefully this means that usage will slowly die off. Although, as you say, pretty sure some people will continue to stubbornly use them.

The only way to ensure high quality moving forward is to mercilessly cut bad legacy features. Which is one of the reasons I am a fan of how Apple does things.

Otherwise you end up with Windows. Definitely agree; Apple's willingness to abandon legacy is one of their strengths.
 

jjcs

Cancelled
Oct 18, 2021
317
153
Otherwise you end up with Windows. Definitely agree; Apple's willingness to abandon legacy is one of their strengths.
Good way to rid yourself of the customers who in fact NEED those features and APIs. I remember when OS X was pitched as a Unix workstation to attract the scientific computing segment. That requires legacy API support - like OpenGL.
 

altaic

Suspended
Jan 26, 2004
712
484
Good way to rid yourself of the customers who in fact NEED those features and APIs. I remember when OS X was pitched as a Unix workstation to attract the scientific computing segment. That requires legacy API support - like OpenGL.
No, that requires CUDA, which is a nonstarter for Apple/Nvidia. Even Vulkan hasn’t caught on well, which is depressing since just about every piece of hardware in existence supports it. Not sure if you meant OpenCL, but either way they’ve been replaced with more performant and efficient APIs.

A lot of scientific software is mired in large, stanky code bases that only the academics who have ever worked on it could understand, and only if they collaborated. Unfortunately, that means that a lot of scientific software will bitrot until some enterprising student decides to rewrite the whole damn thing from the white papers. Or, more likely, from new white papers and with modern APIs/libraries. In other words, a whole new piece of software.
 

jjcs

Cancelled
Oct 18, 2021
317
153
No, that requires CUDA, which is a nonstarter for Apple/Nvidia. Even Vulkan hasn’t caught on well, which is depressing since just about every piece of hardware in existence supports it. Not sure if you meant OpenCL, but either way they’ve been replaced with more performant and efficient APIs.

A lot of scientific software is mired in large, stanky code bases that only the academics who have ever worked on it could understand, and only if they collaborated. Unfortunately, that means that a lot of scientific software will bitrot until some enterprising student decides to rewrite the whole damn thing from the white papers. Or, more likely, from new white papers and with modern APIs/libraries. In other words, a whole new piece of software.
No, that doesn't necessarily require CUDA, although tool development for a GPU cluster is not going to happen on macOS at all without it. Homebrew makes up for a lot of the limitations of the OS X development toolchain, but there are a lot of legacy OpenGL pre- and post-processing tools that aren't going to be rewritten to target Metal, as we have better things to do with our time - using the tools in question. If support were dropped on all platforms, it would be a different story, but those of us who "switched" from commercial UNIX workstations have another alternative on the desktop. We just lose some consumer commercial software (Office, mostly). Sad, as that was the push to get us to adopt OS X in the first place.

Frankly, Mac OS X machines made fine pre- and post-processing workstations for a lot of workflows. Now, despite the attractiveness of the new architecture, it's clearly a prosumer- and consumer-only platform.

As to your viewpoints on scientific software, that's just your opinion and not grounded in reality.
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
Homebrew makes up for a lot of the limitations of the OS X development toolchain, but there are a lot of legacy OpenGL pre- and post-processing tools that aren't going to be rewritten to target Metal, as we have better things to do with our time - using the tools in question.

OpenGL isn't dead yet, and Rosetta does a surprisingly good job of translating it. Ymmv depending on what features the application requires. Apple's OpenGL deprecation was not their best moment; they didn't update it for a decade, and then deprecated it before Metal was ready.

Frankly, Mac OS X machines made fine pre- and post-processing workstations for a lot of workflows. Now, despite the attractiveness of the new architecture, it's clearly a prosumer- and consumer-only platform.
This whole 'prosumer because it doesn't support my narrow use case' thing needs to die, tbh. For film, 3D, editing and music creation, the new MBPs are very much 'pro' laptops. Just because a machine doesn't support a specific niche doesn't mean it is not 'professional'. Sure, it sucks that it doesn't support your specific workflow, but that doesn't make it 'prosumer'.

There are definitely issues with how Apple has gone about supporting various segments of the professional market; 3D and, I assume, scientific software are two areas where they've been pretty inconsistent over the years.

I would think a Blender demo would be a given, even if it is a super alpha variant of 3.0 being shown, just something to show the M1X SoC flexing its new 3D Metal muscles...?!?

Think there were a few shots of Blender in the MacBook advert, and also the Octane dev waxing lyrical about all the memory, so at least we got some hints of 3D support. Hadn't realised that C4D had gone all-in on Metal for the viewport. Also the benchmarks for Redshift on the product page look pretty decent :)
 

jjcs

Cancelled
Oct 18, 2021
317
153
This whole 'prosumer because it doesn't support my narrow use case' thing needs to die, tbh. For film, 3D, editing and music creation, the new MBPs are very much 'pro' laptops. Just because a machine doesn't support a specific niche doesn't mean it is not 'professional'. Sure, it sucks that it doesn't support your specific workflow, but that doesn't make it 'prosumer'.

"Film, 3D, editing and music creation" aren't exactly genome sequencing or any number of other fields in the harder sciences and engineering. That's the market Apple dropped. The entertainment biz at the laptop level is nice and all. I guess. Honestly, they dropped out of the real "professional" market when they dropped XServe.

The "pro" video and audio people are a very narrow use case, by the way.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
There are definitely issues with how Apple has gone about supporting various segments of the professional market; 3D and, I assume, scientific software are two areas where they've been pretty inconsistent over the years.

Think there were a few shots of Blender in the MacBook advert, and also the Octane dev waxing lyrical about all the memory, so at least we got some hints of 3D support. Hadn't realised that C4D had gone all-in on Metal for the viewport. Also the benchmarks for Redshift on the product page look pretty decent :)

Well, with the Apple/Blender "partnership" & Cinema4D going "all in on Metal" for the viewport, maybe more 3D software suites will take a long hard look at their macOS code and start rewriting for Apple silicon...

Jules (the Octane guy) has been on the hype train (in a good way) for Apple silicon since he had Octane running on iPhones...!

I am going to have to go back & look over the Event video again...!
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
I would wait for the tower with proper air circulation before doing any serious rendering. Since the OP's concern is out-of-core, I guess he is into some wild sequences or simulations.
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
Came across this article on rendering on the 14" Max (CPU based rather than GPU).

Thought it might be of interest, particularly the speed-per-watt comparisons with various Intel and AMD CPUs. The energy difference is crazy, and performance is pretty impressive; it only loses by a bit to a 16-core Mac Pro while using about a third of the energy.

 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
As usual, the initial batch of reviews has been all about Final Cut and DaVinci. Hoping someone gives Redshift or Octane a go soon :)

I would wait for the tower with proper air circulation before doing any serious rendering. Since the OP's concern is out-of-core, I guess he is into some wild sequences or simulations.

I know I should wait for the Mac Pros too, but it's so tempting :D

It's pretty easy to run out of memory on my 16GB Vega 64, particularly with simulations :) The other real advantage of such a large amount of memory is being able to simulate multiple versions in the background.

For example, cloth sims (Vellum) are 2-4 times faster on the GPU compared to the CPU; with 64GB of video RAM you could do three or four variations in the background while continuing to work and get the results back in 10 min, rather than sending them to a CPU-based render farm and waiting 40+ min for the result.
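The memory side of that workflow is easy to budget. A quick sketch, where the per-variation working set and OS/app headroom are my own guesses, purely illustrative:

```python
# How many sim variations can stay resident at once, given total memory,
# a hypothetical per-variation working set, and some OS/app headroom.
# All figures are illustrative assumptions, not measurements.

def concurrent_variations(total_gb, per_sim_gb, reserve_gb=8):
    """Leave headroom for the OS/app, pack the rest with resident sims."""
    usable = total_gb - reserve_gb
    return max(0, int(usable // per_sim_gb))

print(concurrent_variations(64, 12))  # 4 variations fit on a 64GB machine
print(concurrent_variations(16, 12))  # 0 - a 16GB card can't hold even one
```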
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
I should be getting my mbp around 1st week of Nov. Will run Redshift benchmark on it to gauge performance.

That said I am not expecting it to beat my 3x Titan RTXs. But so long as it does well, I will be happy.

And start saving for the Mac Pro M system.

I am hoping that Karma and RenderMan XPU show up around launch time of the Mac Pros and run natively on them - RenderMan at the very least. XPU-type workflows are why I will be interested in the M Mac Pros… access to huge RAM for the GPU.
 

jujoje

macrumors regular
Original poster
May 17, 2009
247
288
I should be getting my mbp around 1st week of Nov. Will run Redshift benchmark on it to gauge performance.

That said I am not expecting it to beat my 3x Titan RTXs. But so long as it does well, I will be happy.

Damn, 3 Titan RTXs! Yeah, pretty sure it's not going to beat that :p Be curious to hear how it fares though - without having delved into it too much, I think the Metal version of Redshift is pretty decent, roughly comparable to CUDA, so interested as to how it stacks up.

Also, if you happen to have Houdini kicking about, would be curious as to how high res a Pyro OpenCL / Minimal Solve you can fit in that amount of memory.

I am hoping that Karma and RenderMan XPU show up around launch time of the Mac Pros and run natively on them - RenderMan at the very least. XPU-type workflows are why I will be interested in the M Mac Pros… access to huge RAM for the GPU.

That would be awesome; Karma's been coming along really nicely and seems to be a pretty solid production renderer (after a bit of a rocky start). I think they're both pretty much OptiX-based on the GPU front though. Speaking of which, really hoping the new Mac Pro gets hardware-based ray-trace acceleration...

I remember when they announced the trash can Mac Pro, demoing Mari painting Pixar assets from Monsters University. That didn't age well :D
 