I have been using a 7,1 since March: 16-core, 32 GB RAM at purchase plus 128 GB aftermarket, Radeon Pro 580X 8 GB.
I mainly do 3D animation and rendering on it; we produce about 6-8 minutes of 3D animation every month.
As much as I'd like to use the GPU for rendering, I see no point in upgrading the video card to something more powerful, as our pipeline is Maya + Arnold (CPU only on Mac). I've been thinking of getting an Nvidia card and booting into Windows for GPU rendering, or learning Redshift and moving the post-production to use the GPU in macOS. Also, at least here, AE doesn't use the GPU for processing; it just eats the VRAM, and only some third-party plugins use GPU processing (I always keep an eye on what's going on with iStat Menus).
Anyway, I do all the post-processing in AE, and I understand your frustration very well as I encounter these problems almost every day. AE has been very sluggish, no different from using a 2019 15" MacBook Pro.
At least for AE 2022, keep in mind that many plugins are still single-core and will bottleneck all the multithreading.
I've found that working with .exr sequences is super slow, as AE has to re-render everything every time, so I always try to render all the sequences of the timeline (beauty, albedo, indirect lighting, color ID, SSS, Z-depth, etc.) as .movs without any compression and work with them as movie files to keep things manageable. It gets super slow when I start adding depth-of-field effects, glows or anything that taxes the CPU. I also connect a 500 GB Samsung T5 SSD for the cache files, and I have to purge it and the RAM every 30-40 minutes, otherwise AE gets even slower.
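For what it's worth, here is a minimal sketch of that pre-bake step, driving ffmpeg from Python. The pass names, the %04d numbering and the 25 fps frame rate are my assumptions, and ffmpeg only decodes certain EXR compression types, so treat it as a starting point rather than a drop-in tool:

import subprocess

# Convert each render pass (.exr sequence) to a ProRes 4444 .mov so AE
# reads one movie per pass instead of decoding thousands of EXR frames.
PASSES = ["beauty", "albedo", "indirect", "color_id", "sss", "z_depth"]

for name in PASSES:
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", "25",
        "-i", f"{name}.%04d.exr",   # input EXR sequence
        "-c:v", "prores_ks",        # ProRes encoder
        "-profile:v", "4444",       # near-lossless, keeps alpha
        f"{name}.mov",
    ], check=True)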
Today I had to export a 4-minute video and it took more than 2 hours with Media Encoder, even with everything optimized as much as possible.
So yeah, that's my experience, and I keep wondering why Adobe hasn't created something better... even in AE 2022 I haven't been able to use the full CPU, just a jump from 10-15% to 30-35% CPU usage tops, not 95-100%. I wonder if it would be better to move everything to Nuke or DaVinci.
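If anyone wants to sanity-check those CPU numbers outside iStat Menus, a tiny Python sketch like this one will log whole-machine utilisation while AE renders (psutil assumed installed; the interval and duration are arbitrary):

import time
import psutil  # pip install psutil

# Sample overall CPU usage every 5 seconds for ~5 minutes and print a
# timestamped line, roughly what iStat Menus shows in its CPU graph.
for _ in range(60):
    usage = psutil.cpu_percent(interval=5)  # averaged across all cores
    print(f"{time.strftime('%H:%M:%S')}  CPU {usage:5.1f}%")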
Thank you, it's kinda crazy to see this solid machine struggle like that for a normal task.
I don't know where the bottleneck is between AE and the Mac Pro, but something is definitely wrong.
I don't even use .exr sequences; the last project was just .png and .jpeg files and Photoshop layers, with very few effects here and there, a comp of 100 layers or so... Impossible to work with.
Sounds very weird. That can't be normal behaviour; it must be some bug or glitch somehow. If it was me I'd erase the system and do a clean install, and then I'd try AE 2022 and AE 2021 and see if there are any differences.
Weird indeed. I think you made the right decision to return the Mac Pro and get an Apple Silicon MacBook Pro. Fair enough for the people that bought a Mac Pro when they came out, but it doesn't seem to make sense to get one anymore, unless you need mega GPUs or RAM.
The system was new and clean; it was a refurbished 16-core 7,1 that came with Big Sur. I only installed C4D / Octane / AE with plugins.
I tried all versions of AE from 17.5 to 2022. All the same. Updating to Monterey didn't change anything.
I was working all the time (on my 5,1), so I didn't have much time to try different configurations on the 7,1.
As I had reached the deadline for sending the machine back to Apple (they even gave me 15 more days to decide), I sent it back, frustrated by not having more time to experiment on this problem, and mostly by the fact that I would no longer be able to render 3D on my 2x RX 6900 XT. But I use After Effects every day, so it has to work.
It's not clear to me whether some people can work flawlessly in AE with this machine; I read several reports describing the same s**t I had (see Gobbledigook, to whom I was responding above), or this thread:
The frustration of using After Effects on the new Mac Pro.
Most people talk about long render times, but not much about interface response, clicks, etc., which was horrible in my case. So maybe something was wrong somewhere, yep.
Today I ordered a maxed-out M1 Max MBP. I guess I'll farm out my 3D renders until an AS Mac Pro comes out.
Anyway, workflow / response on this MBP seems very encouraging, so in a way I think it's not a bad idea that I didn't keep an Intel machine. The only thing everyone is wondering is how expandable the next Mac Pro will be. But that's another story.
Yeah, that video is good, isn't it?
Thanks!
Still blows my mind that these aren't good for After Effects. I mean, After Effects is pretty inefficient software, but beach-balling all over the place sounds very wrong. In that other linked thread they are just talking about After Effects not using all the CPU, not about the user interface being impossible to work with as you described. Personally I think there was something wrong with your system causing that. But better to get an amazing laptop instead anyway.
Start saving for that Apple Silicon Mac Pro; it sure isn't going to be cheap. I don't think it will be expandable either, with RAM and GPU all on the same chip. If it can beat a PC, that's a happy price to pay.
More Apple Silicon vs Nvidia Redshift tests here...
[attached Redshift benchmark chart]
I mean, that is about the quickest PC you can get for Redshift currently, I think.
Interesting to think about a 4x M1 Max chip: with ideal scaling it would hypothetically quarter the render time to about 2:05, so close enough to that PC.
excite!
Thanks for your thoughts @vel0city, sounds like you need your own YouTube channel?
I have thought of doing a channel, as there are never any reviews for what I use or do! I just take all of them as a guide and order what I think I will need, test it, and if it's not good enough it goes back. My M1 Max MBP is a keeper; it's by far the best laptop I have used.
Hmm well, I’d love to see a Mac v PC comparison by someone who knows what they are doing!
I can tell from watching a couple of seconds of the rendering section that he hasn't set Redshift up properly, in that the bucket size is far too small. You need to go into the RS system settings and increase the bucket size to 512 for best results with the M1 Max; RS itself gives you a warning about this. Doing so has halved some of my render times on the Max. I noticed massive savings when rendering scenes with Quixel Bridge assets, which leads me to think that displacement/tessellation are affected by bucket size at render time as well as textures. I'll do more tests on this. The RS devs have said that this might also see improvements over time.
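To put rough numbers on the bucket-size point, here is a back-of-envelope tile count in Python. It's plain arithmetic, not Redshift's actual scheduler, and the 1920x1080 frame is an assumption; fewer, larger buckets mean fewer launches to feed a very wide GPU:

import math

# Tiles per frame for a few bucket sizes on an assumed 1920x1080 frame.
W, H = 1920, 1080
for bucket in (128, 256, 512):
    tiles = math.ceil(W / bucket) * math.ceil(H / bucket)
    print(f"bucket {bucket:3d} px -> {tiles:3d} tiles per frame")

At 128 px that is 135 tiles per frame versus 12 at 512 px, which fits the idea that per-bucket overhead (launch, texture and displacement setup) is what was eating the M1 Max's time.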
@vel0city how are you finding the M1 MacBook Pro for GPU rendering? How do you think it compares to the Mac Pro? I think maybe you have one of those? Thanks
He does address the bucket and did the M1 Max render again with a larger bucket, managing to improve render times by 25% over the initial test.
I am unclear about increasing bucket size, though. Is it because it allows Redshift to take advantage of the larger memory of the M1 Max? If so, he should have tried the same on the dual 3090s (and possibly invested in an NVLink), because the render test doesn't look like it maxes out the 24 GB of VRAM on the 3090s.
That said, the M1 Max at 33% of the performance of dual 3090s is impressive, and seems in line with my calculations that a 4x SoC Apple Silicon GPU will be playing in the 4090 league (with massive memory capacity to boot).
I just installed an RTX A6000 alongside my older Vega II Duo. It runs as expected under Windows, even with the AMD GPU as primary. Geekbench scores are in line with the average reports. I didn't even have to disconnect the XDR from the MPX module, but it makes me wonder if I'm getting the full performance.
Is there any way to test it against PC tower alternatives or find bottlenecks? How does one go about it?
For reference, I'm on a 2019 Mac Pro with the 16-core, 384 GB RAM, Vega II Duo configuration.
You could run an Octane or RS benchmark. It should fall just below a 3090 in performance.
This from the Blender camp:
The latest 3.1 beta and 3.2 alpha builds, together with macOS 12.3+, give Metal support for AMD cards in Blender. We recently saw Metal enabled for Apple Silicon, and now AMD cards work as well.
I rendered Classroom on my 12-core with a single Radeon Pro Vega II in 1 min 28 seconds. Would be cool to see what some of you with Duo setups get.
It's early days and they are still tuning workloads for CPU+GPU and things like that, but it's nice to see things happening.
--------------------------
A few pointers for those who don't know:
There are a few popular scenes that can be downloaded from Blender's demo scenes page, such as Classroom (a bit down the page, under Cycles). You can run anything you like, obviously, but the more popular scenes have more results to compare against. Opening the scene and hitting F12 will start a render. Make sure to select GPU in the render properties panel on the right.
The first time you render with Metal on the GPU (go into settings and check MetalRT experimental too, apart from selecting GPU in your properties panel), it will take a few minutes to compile everything and get ready. After that, you're good to go, so the first render after installing/enabling won't count.
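If you'd rather benchmark headlessly than through the UI, a minimal sketch along these lines should work, assuming Blender 3.1+ on macOS 12.3+ and the Classroom .blend saved locally. Run it as: blender -b classroom.blend --python render_metal.py (the script name is hypothetical):

import bpy

# Point Cycles at the Metal backend and enable every device it finds.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "METAL"
prefs.get_devices()                 # refresh the device list
for dev in prefs.devices:
    dev.use = True                  # enable GPU (and CPU) devices

# Render the current frame on the GPU, same as hitting F12 in the UI.
bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)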
Man, I'm really hopeful that someday Blender will work well enough on a Mac that I can switch back, but my 3080 renders Classroom in 16 seconds. Video attached to prove I'm not lying lol.
16 seconds sounds... fast. I'd say too fast?
On Open Data (yes, I know it's a bit out of date), the fastest RTX 3080 Ti times rendering with CUDA are just over 55 seconds. Most results are over a minute.
Was there a change in CUDA with Blender 3 that cut the render times so much? It seems incredible to go from a minute to 16 seconds. Kind of cool if it's all correct, though.
That's rendering with Nvidia OptiX instead of CUDA.