
chouki

macrumors member
Oct 24, 2015
35
5
Gobbledigook said:
I have been using a 7,1 since March: 16-core, 32GB RAM at purchase plus 128GB aftermarket, and a Radeon Pro 580X 8GB.
I mainly do 3D animation and rendering on it; we produce about 6-8 minutes of 3D animation every month.

As much as I'd like to use the GPU for rendering, I see no point in upgrading the video card to something more powerful, as our pipeline is Maya + Arnold (CPU only on Mac). I've been thinking of getting an Nvidia card and booting into Windows for GPU rendering, or learning Redshift and moving post-production to use the GPU in macOS. Also, at least here, AE doesn't use the GPU for processing; it just eats the VRAM, and only some third-party plugins use GPU processing (I always keep an eye on what's going on with iStat Menus).

Anyway, I do all the post-processing in AE and I understand your frustration very well, as I encounter these problems almost every day. AE has been very sluggish, no different from using a 15" 2019 MacBook Pro.

At least for AE 2022, keep in mind that many plugins are still single-core and will bottleneck all the multithreading.

I've found that working with .exr sequences is super slow, as AE has to re-render everything every time. I always try to render all the sequences of the timeline (beauty, albedo, indirect lighting, color ID, SSS, Z-depth, etc.) as .movs without any compression and work with them as movie files to create a more manageable situation. It gets super slow when I start to add depth-of-field effects, glows, or anything that taxes the CPU. I also connect a 500GB Samsung T5 SSD for the cache files; I have to purge it and the RAM every 30-40 minutes, otherwise AE gets even slower.

Today I had to export a 4-minute video and it took more than 2 hours with Media Encoder, even with everything optimized as much as possible.

So yeah, that's my experience, and I keep wondering why Adobe didn't create something better... Even in AE 2022 I haven't been able to use the full CPU, just a jump from 10-15% to 30-35% CPU usage tops, not 95-100%. I wonder if it would be better to move everything to Nuke or DaVinci.
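
Side note on the quoted workflow: that EXR-passes-to-.mov conversion step can be scripted. Below is a minimal sketch using ffmpeg driven from Python; the folder layout, pass names, frame rate, and the ProRes 4444 profile are all assumptions for illustration, and since EXRs are linear you may still need a colour transform depending on your pipeline:

Code:
# Batch-convert rendered EXR passes to .mov files so AE reads one
# movie per pass instead of re-decoding an image sequence each frame.
# Paths, pass names, and frame rate below are examples only.
import subprocess

passes = ["beauty", "albedo", "indirect", "color_id", "sss", "z_depth"]

for p in passes:
    subprocess.run([
        "ffmpeg",
        "-framerate", "25",                  # match the comp's frame rate
        "-start_number", "1",                # adjust if the sequence starts elsewhere
        "-i", f"renders/{p}/{p}.%04d.exr",   # e.g. renders/beauty/beauty.0001.exr
        "-c:v", "prores_ks",                 # ffmpeg's ProRes encoder
        "-profile:v", "4444",                # near-lossless, preserves alpha
        f"movs/{p}.mov",
    ], check=True)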
Thank you, it's kinda crazy to see this solid machine struggle like that with a normal task.
I don't know where the bottleneck is between AE and the Mac Pro, but something is definitely wrong.

I don't even use .exr sequences; the last project was just .png and .jpeg files and Photoshop layers, with very few effects here and there, a comp of 100 layers or so... Impossible to work with.
 

shuto

macrumors regular
Oct 5, 2016
195
110
Sounds very weird. That can't be normal behaviour; it must be some bug or glitch somehow. If it were me, I'd erase the system and do a clean install, then try AE 2022 and AE 2021 and see if there are any differences.
 

chouki

macrumors member
Oct 24, 2015
35
5
Weird indeed
It was a refurbished 16-core 7,1 that came with Big Sur, so the system was new and clean. I only installed C4D / Octane / AE with plugins.
I tried all versions of AE from 17.5 to 2022. All the same. Updating to Monterey didn't change anything.

I was working all the time (on my 5,1), so I didn't have much time to try different configurations on the 7,1.
As I had reached the deadline for sending the machine back to Apple (they even gave me 15 more days to decide), I sent it back, frustrated by not having more time to experiment with this problem, and mostly by the fact that I would no longer be able to render 3D on my 2x RX 6900 XT. But I use After Effects every day, so it has to work.

It's not clear to me whether some people can work flawlessly in AE on this machine; I read several reports describing the same s**t I had (see Gobbledigook, to whom I was responding above), or this thread:

Most people talk about long render times, but not much about interface response, clicks, etc., which was horrible in my case. So maybe something was wrong somewhere, yep.

Today I ordered a maxed-out M1 Max MBP. I guess I'll farm out my 3D renders until an AS Mac Pro comes out.
Anyway, workflow and responsiveness on this MBP seem very encouraging, so in a way I think it's not a bad idea that I didn't keep an Intel machine. The only thing everyone is wondering is how expandable the next Mac Pro will be. But that's another story.
 

shuto

macrumors regular
Oct 5, 2016
195
110
I think you made the right decision to return the Mac Pro and get an Apple Silicon MacBook Pro. Fair enough for the people who bought a Mac Pro when they came out, but it doesn't seem to make sense to get one anymore, unless you need mega GPUs or RAM.

Still blows my mind that these aren't good for After Effects. I mean, After Effects is pretty inefficient software, but beachballing all over the place sounds very wrong. In that linked thread they are just talking about After Effects not using all the CPU, not about the user interface being impossible to work with as you described. Personally, I think there was something wrong with your system causing that. But better to get an amazing laptop instead anyway.

Start saving for that Apple Silicon Mac Pro :) it sure isn't going to be cheap. I don't think it will be expandable either: RAM and GPU all on the same chip. If it can beat a PC, that's a happy price to pay.
 

shuto

macrumors regular
Oct 5, 2016
195
110
Yeah, that video is good, isn't it.

I like the test he does against the 3080 Ti / 2080 Ti / 1080 Ti.

So a hypothetical M1 Max quad could possibly be at 3080 Ti speed?

It seems Apple is doing a good job of getting their chips to Intel/AMD speed. Hope they can keep pushing GPU improvements, though. Hard to know how much of this is the software being better optimised for PC as well. Redshift on Metal is still classed as beta, I think.


[Screenshot from the Octane render test YouTube video above; lower time is better]
 

chouki

macrumors member
Oct 24, 2015
35
5
Thanks
Well, the thing is that I do need mega GPUs for 3D rendering, so that's the main thing that concerns me about returning the Mac Pro. I'd prefer to render myself, but I'll farm out the renders and I'll be OK.

And well, the MBP is half the price of the MP, so that's already a good saving, if I may say, toward the AS Mac Pro.
 

shuto

macrumors regular
Oct 5, 2016
195
110
More Apple Silicon vs Nvidia Redshift tests here...


[Screenshot: Apple Silicon vs Nvidia Redshift benchmark results]

I mean, that is about the quickest PC you can get for Redshift currently, I think.

Interesting to think about a 4x M1 Max chip, which would hypothetically give a render time of 2:05, so close enough.
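
For what it's worth, that quad figure is just ideal linear scaling from the single-chip result; a quick sanity check (the 8:20 single-chip time here is inferred from the 2:05 figure, not a measurement):

Code:
# Hypothetical scaling estimate, not a benchmark: assumes four M1 Max
# dies scale perfectly linearly, which real interconnects won't fully reach.
single_chip = 8 * 60 + 20        # inferred single M1 Max time, 8:20
quad = single_chip / 4           # ideal 4-way scaling
print(f"{int(quad // 60)}:{int(quad % 60):02d}")   # -> 2:05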

excite!
 

vel0city

macrumors 6502
Original poster
Dec 23, 2017
347
510

This is yet another one of those YouTubers running benchmarks with zero clue about what they are doing. I'm getting really frustrated with this.

I can tell from watching a couple of seconds of the rendering section that he hasn't set Redshift up properly, in that the bucket size is far too small. You need to go into the RS system settings and increase the bucket size to 512 for best results on the M1 Max; RS itself gives you a warning about this. Doing so has halved some of my render times on the Max. I noticed massive savings when rendering scenes with Quixel Bridge assets, which leads me to think that displacement/tessellation are affected by bucket size at render time as well as textures. I'll do more tests on this. The RS devs have said that this might also see improvements over time.
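
To put rough numbers on why that setting matters, here's a quick illustration of how bucket size changes the number of buckets dispatched per frame (1920x1080 is an assumed frame size; the actual reason 512 helps on the M1 Max is presumably reduced per-bucket overhead, which the RS devs would have to confirm):

Code:
# Rough illustration only: bucket count per frame at different bucket sizes.
# Fewer, larger buckets mean fewer per-bucket dispatch/sync round-trips.
import math

width, height = 1920, 1080       # assumed frame size for illustration

for bucket in (128, 256, 512):
    n = math.ceil(width / bucket) * math.ceil(height / bucket)
    print(f"bucket {bucket}px -> {n} buckets per frame")
# bucket 128px -> 135, 256px -> 40, 512px -> 12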

Back to this YouTuber: without disclosing his render settings, these tests are completely useless. How do we know how optimised/unoptimised the scene is? Has he even set up the RS camera correctly and compensated for exposure? Is he working in ACES or linear? Are the renders being saved to disk or just rendered to the Picture Viewer? 8/16/32-bit? Any active AOVs? What are his GI settings? All of this makes a difference and is important.

Not a dig at you at all @shuto - I just wish there were more reliable and knowledgeable YouTubers out there, as these benchmarks are misleading.
 

shuto

macrumors regular
Oct 5, 2016
195
110
Thanks for your thoughts @vel0city, sounds like you need your own YouTube channel!

Hmm well, I’d love to see a Mac v PC comparison by someone who knows what they are doing!
 

richinaus

macrumors 68020
Oct 26, 2014
2,432
2,186
I have thought of doing a channel, as there are never any reviews for what I use or do! I just take all of them as a guide, order what I think I will need, test it, and if it's not good enough it goes back. My M1 Max MBP is a keeper; it's by far the best laptop I have used.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
vel0city said:
I can tell from watching a couple of seconds of the rendering section that he hasn't set Redshift up properly, in that the bucket size is far too small. You need to go into the RS system settings and increase the bucket size to 512 for best results on the M1 Max; RS itself gives you a warning about this. Doing so has halved some of my render times on the Max. I noticed massive savings when rendering scenes with Quixel Bridge assets, which leads me to think that displacement/tessellation are affected by bucket size at render time as well as textures. I'll do more tests on this. The RS devs have said that this might also see improvements over time.
He does address the bucket size: he did the M1 Max render again with a larger bucket, improving render times by 25% over the initial test.

I am unclear about increasing the bucket size, though. Is it because it allows Redshift to take advantage of the larger memory of the M1 Max? If so, he should have tried the same on the dual 3090s (and possibly invested in an NVLink), because the render test doesn't look like it would max out the 24GB of VRAM on the 3090s.

That said, the M1 Max hitting 33% of dual-3090 performance is impressive, and it seems in line with my calculations that the 4x-SoC AS GPU will be playing in the 4090 league (with massive memory capacity to boot).
 

shuto

macrumors regular
Oct 5, 2016
195
110
@vel0city, how are you finding the M1 Max MacBook Pro for GPU rendering? How do you think it compares to the Mac Pro? I think maybe you have one of those? Thanks :)
 

vel0city

macrumors 6502
Original poster
Dec 23, 2017
347
510

For lookdev and rendering stills (C4D/X-Particles/Redshift) it's the best Mac I've ever used. Viewport performance is the most responsive I've ever seen; simulations/mograph/particles are much more interactive and responsive, as is working with thousands of clones and instances. You can really crank the particles into the millions in X-P and the viewport won't choke and beachball into a freeze. Redshift IPR is very fast, with time to first pixel quicker and more responsive than with any card I've used in the Mac Pro.

I work primarily on stills so rendering final artwork is fast for me. Redshift renders in the background with zero fan noise so I can continue working in Photoshop or whatever with no impact on the rest of the system. Photoshop also absolutely flies on this machine (64GB M1 Max) with large format PSB (2GB+) files opening in a couple of seconds. Brush tools and selection tools that can be laggy in large files are perfectly responsive and interactive.

However, if I were working mostly on animated renders and needed to output 4K movies from C4D, I would be looking at farming those out or using a PC with a 3090. In an ideal world, you would have a 64GB M1 Max complemented by a Threadripper with a 3090. That would give you the horsepower and flexibility you need for serious heavy lifting in 3D: lookdev and design on the M1 Max, rendering on the PC. If Apple releases a serious alternative workstation next year, then great, but until then you cannot beat a 3090 for rendering.

Overall the M1 Max is an incredible machine with a beautiful screen and feels like the pinnacle of Apple's vision. An amazing machine for the applications I mentioned. But if you need serious grunt for final renders, you'll still need a desktop and ideally a 3090.
 

vel0city

macrumors 6502
Original poster
Dec 23, 2017
347
510

I think you're right that a larger bucket size increases the amount of memory that Redshift is able to access for rendering. In some of my scenes with the latest Redshift 3.0.60 (which improves rendering speed in scenes with many lights), I'm seeing a 40% improvement in render times with larger bucket sizes. The overall responsiveness and interactivity of Redshift make it an absolute pleasure to use. C4D R25 on Monterey is stable and fast.
 

Romanesco

macrumors regular
Jul 8, 2015
126
65
New York City
I just installed an RTX A6000 alongside my older Vega II Duo. It runs as expected under Windows, even with the AMD GPU as primary. Geekbench scores are in line with the average reports. I didn't even have to disconnect the XDR from the MPX module, but it makes me wonder if I'm getting the full performance.

Any way to test it in comparison with PC tower alternatives or find bottlenecks? How does one go about it?

For reference, I’m on a Mac Pro 2019 with 16c, 384GB RAM, Vega II Duo configuration.
 

LymeChips

macrumors newbie
Jan 3, 2020
27
16
You could run an Octane or RS benchmark. It should fall just below a 3090 in performance.

Also, anyone looking to farm out renders should look into the RNDR network; it's insanely fast and reasonably priced.

Right now they just support Octane, but they have announced that they are working with Maxon and Autodesk on Redshift and Arnold support, and they are negotiating with Insydium for XP support/licensing. Eventually you will be able to upload straight .c4d files, but in the meantime you export ORBX files, which are basically wrapped Alembics.
 

AndreeOnline

macrumors 6502a
Aug 15, 2014
704
495
Zürich
This from the Blender camp:

The latest 3.1 beta and 3.2 alpha builds, TOGETHER with macOS 12.3+, give Metal support for AMD cards in Blender. We recently saw Metal enabled for Apple Silicon, but now AMD cards work as well.

I rendered Classroom on my 12-core with a single Vega Pro II in 1 min 28 seconds. Would be cool to see what some of you with Duo setups get.

It's still early days and they are tuning workloads for CPU+GPU and things like that, but it's nice to see things happening.


--------------------------
A few pointers for those who don't know:

There are a few popular scenes that can be downloaded from Blender's demo scenes page, such as Classroom (a bit down the page, under Cycles). You can run anything you like, obviously, but the more popular scenes have more results to compare against. Opening the scene and hitting F12 will start a render. Make sure to select GPU in the properties panel on the right.

The first time you render with Metal on the GPU (go into the settings and check MetalRT experimental too, apart from selecting GPU in your properties panel), it will take a few minutes to compile everything and get ready. After that, you're good to go. So the first render after installing/enabling won't count.
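
If you'd rather run the benchmark headless and repeatably, a small script along these lines should work (a sketch assuming Blender 3.1+ with the Metal backend; on Nvidia cards you'd set "OPTIX" or "CUDA" instead of "METAL"):

Code:
# setup_gpu.py -- sketch for benchmarking Cycles on the GPU, e.g.:
#   blender -b classroom.blend -P setup_gpu.py -f 1
# Assumes Blender 3.1+; use "OPTIX" or "CUDA" here on Nvidia cards.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "METAL"
prefs.get_devices()              # refresh the detected device list
for device in prefs.devices:
    device.use = True            # enable every detected device

bpy.context.scene.cycles.device = "GPU"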
 

vinegarshots

macrumors 6502a
Sep 24, 2018
983
1,349

Man, I'm really hopeful that someday Blender will work well enough on a Mac that I can switch back, but my 3080 renders Classroom in 16 seconds. I would assume the Vega Pro II should be at least roughly in the same performance category as my GeForce card, so seeing that big a performance gap is disturbing. I know it's still early, but... geesh.
 

AndreeOnline

macrumors 6502a
Aug 15, 2014
704
495
Zürich
vinegarshots said:
Man, I'm really hopeful that someday Blender will work well enough on a Mac that I can switch back, but my 3080 renders Classroom in 16 seconds.
16 seconds sounds... fast. I'd say too fast?

On Open Data (yes, I know it's a bit out of date), the fastest time with an RTX 3080 Ti rendering using CUDA is just over 55 seconds. Most results have times over a minute.

Was there a change in CUDA with Blender 3 that cut the render times so much? It seems incredible to go from a minute to 16 seconds. Kind of cool if it's all correct, though.
 

vinegarshots

macrumors 6502a
Sep 24, 2018
983
1,349
Video attached to prove I'm not lying lol.

That's rendering with Nvidia OptiX instead of CUDA. It's quite a bit faster (my CUDA render time in Classroom is 26 seconds vs 16.5 seconds using OptiX).

Also, Blender 3.0 uses Cycles X, which is the new and improved Cycles with added performance enhancements, so that might have something to do with it. The viewport performance with OptiX Cycles is also kind of crazy (second attached video). If you turn on the OptiX denoiser, you can get a very workable, clean raytraced viewport.



 

AndreeOnline

macrumors 6502a
Aug 15, 2014
704
495
Zürich
vinegarshots said:
That's rendering with Nvidia OptiX instead of CUDA.

Yeah, that's the problem with Open Data: it's old and incompatible with Blender 3 (but it's being worked on and will be updated).

Well, seeing is believing! There is generally no 'magic' when it comes to computers, which made the jump from 55-60s to 16s seem a bit too big. But even the jump to your 26s (in Blender 3, I guess) is awesome.

Nonetheless, once the dust settles, I'm hoping that graphics cards with similar compute power will perform in the same ballpark on macOS, Windows, and Linux alike; with your information, it's obvious things aren't working the same yet.

On some level we will always be comparing apples to oranges due to OptiX vs Metal. I just want to see generally equal compute efficiency across platforms. Let's see what happens.
 