
aytan

macrumors regular
Dec 20, 2022
161
110
No. Hardware RT will be huge.

AI de-noisers are mostly incredible for viewport stuff because you can generate a very good estimate of your final frame using just a tiny handful of samples that would otherwise look incredibly noisy. It's less useful for final renders as aytan mentions.
You are exactly right. I believe it's a myth that you can use a denoiser with lower samples for final renders, still frames or animations. Sometimes it works, but not every single time.
What I experienced with the M1 is that when Global Illumination is enabled by default in your scenes, the M1 renders slower than expected. Without GI the M1 works better/faster for rasterisation, similar to AMD GPUs. I don't know the exact technical details, I'm not a tech engineer, but I guess the main reason is the absence of any kind of RT cores on the SoC.
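To make that viewport-versus-final split concrete, here is a minimal sketch against Blender 3.x's Cycles Python settings (exact property names can shift between versions, so treat it as illustrative): the preview leans on a denoiser over a handful of samples, while the final frame keeps a proper sample count and uses the denoiser only as a finishing pass.

```python
import bpy

cycles = bpy.context.scene.cycles  # Cycles render settings for the active scene

# Viewport preview: few samples plus a denoiser gives a fast, clean-looking estimate
cycles.preview_samples = 32
cycles.use_preview_denoising = True
cycles.preview_denoiser = 'OPENIMAGEDENOISE'   # or 'OPTIX' on Nvidia hardware

# Final render: don't rely on the denoiser to rescue a starved sample count
cycles.samples = 1024                          # scene-dependent; high enough to converge
cycles.use_denoising = True                    # still useful, but as a finishing pass only
cycles.denoiser = 'OPENIMAGEDENOISE'
```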
 

aytan

macrumors regular
Dec 20, 2022
161
110
Yeah Optix denoiser is mostly a separate piece of technology from the main Optix raytracing engine. The denoiser doesn't use the RT cores at all and just runs on the tensor cores (which are nVidia's version of the neural engine)
Thanks for the info, I always wondered why there are different types of cores on a GPU. Which means that if you have more/better/faster Tensor cores, the OptiX denoiser could be more efficient or useful.
 

aeronatis

macrumors regular
Sep 9, 2015
198
152
M2 Max is more or less the same as a 3070 RTX mobile in Blender using compute only (CUDA, Metal), and obviously much slower when Nvidia's hardware RT is used (OptiX). The Neural Engine has nothing to do with any of it. The M2 family performs as well as Nvidia relative to the raw compute capability of the GPU, so I'd think the software optimizations are already reasonably mature. Maybe Apple can get another 10% out of it, who knows.

But the next step for them has to be hardware RT; that's non-negotiable.

Isn't denoising handled by the tensor cores on Nvidia GPUs, which, I thought, work similarly to the Apple Neural Engine? I will try to find it, but I remember Apple and Blender already working on utilizing the Neural Engine for the denoiser.
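For anyone comparing those backends, this is roughly how the Cycles compute device gets picked from Blender's Python API; whether it reports METAL, CUDA or OPTIX here is exactly the compute-only versus hardware-RT distinction above (a sketch against the Blender 3.x API, not an official recipe):

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences

# 'METAL' on Apple Silicon; 'CUDA' is compute-only and 'OPTIX' uses the RT cores on Nvidia
prefs.compute_device_type = 'METAL'
prefs.get_devices()                  # refresh the detected device list
for device in prefs.devices:
    device.use = True                # enable each detected GPU (and CPU, if listed)

bpy.context.scene.cycles.device = 'GPU'
```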
 
  • Like
Reactions: aytan

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I will try to find it, but I remember Apple and Blender already working on utilizing the Neural Engine for the denoiser.
Blender denoises via Intel's Open Image Denoise, which uses Apple's BNNS library on Apple Silicon.

 

otaku_nige

macrumors newbie
Feb 9, 2023
2
0
Hiya all,

I'm currently thinking of upgrading my 2019 i5 iMac to an M2 Pro Mac mini. I have Maxwell Studio and I'm also interested in Indigo Renderer (Studio version). I've emailed both about their plans to support Apple Silicon, but they haven't replied and I can't see any info on their respective forums. Does anyone have any knowledge about M2 support for these platforms?

Thanks,

Nige.
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
Any particular reason why you chose those two engines?

Don't take my word for it, but Maxwell materials were the most physically correct materials in any commercial renderer back in the day. It all depends on what you want to achieve and what you do. I could totally see people sticking with Maxwell over V-Ray, Arnold and others. The problem with Maxwell is the GPU transition; it doesn't scale right and not all features are available on the GPU just yet. Also, Maxwell had a steep learning curve, so once you had invested tons of time in it to get kick-ass images or frames, you don't just go somewhere else.
 
  • Like
Reactions: aytan

TechnoMonk

macrumors 68030
Oct 15, 2022
2,606
4,117
Honestly, I am not curious about PC GPUs. I have a PC with an RTX 3070 side by side with the M1 Studio Ultra. In real-world workloads these benchmarks do not mean anything. In some scenes the 3070 delivers 3 times faster, but sometimes it does not (there is also the VRAM issue; even if you have 24 GB of GPU memory with 64 GB of RAM, for complex simulations even 24 GB of GPU memory does not work well). Here is an example from yesterday: on my PC, C4D stopped working. There is no particular reason; it was fine the last time I rendered out a scene last week. But now it won't open, the start-up screen loads and nothing happens further. I wrote to Maxon and sent them crash reports yesterday; uninstalling and reinstalling did not work either. There has been no response from Maxon until now and I can not use the PC anymore. I lost a day, which is critical when there is a deadline. I will try to fix the C4D issue on my PC later today, but I am not optimistic at all. Long story short, I am not sure a faster PC is better overall. A 3070 PC is sometimes 3x faster on Blender final render times, for some complex scenes not that much shorter, and it is also not reliable; it varies even frame by frame and crashes more than usual.
I have had the same issues on the 4090 and 3090. Are you limiting the power to 70-80%? I basically underclock and limit the power to 80%. I also upgraded my PSU to 1600 W. My M1 Max doesn't run out of memory and is stable, though not as fast as the 3090 or 4090. Nvidia needs to bump up the RAM to more than 24 GB; I run into those issues more often than I'd like.
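For reference, that power cap can be scripted as well as set from a tool like MSI Afterburner or `nvidia-smi -pl`. The sketch below uses the NVML Python bindings (pynvml) to apply the roughly-80% cap described above; it needs admin rights, it assumes the card of interest is GPU 0, and it is only an illustration of the idea, not anyone's exact setup.

```python
import pynvml  # NVML Python bindings; setting limits requires admin/root privileges

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the card of interest is GPU 0

default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)         # milliwatts
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

target_mw = int(default_mw * 0.8)                   # the ~80% cap mentioned above
target_mw = max(min_mw, min(target_mw, max_mw))     # clamp to what the board allows

pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
print(f"Power limit set to {target_mw / 1000:.0f} W (default {default_mw / 1000:.0f} W)")

pynvml.nvmlShutdown()
```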
 
  • Like
Reactions: aytan

aytan

macrumors regular
Dec 20, 2022
161
110
Hiya all,

I'm currently thinking of upgrading my 2019 i5 iMac to an M2 Pro Mac mini. I have Maxwell Studio and I'm also interested in Indigo Renderer (Studio version). I've emailed both about their plans to support Apple Silicon, but they haven't replied and I can't see any info on their respective forums. Does anyone have any knowledge about M2 support for these platforms?

Thanks,

Nige.

I have had the same issues on the 4090 and 3090. Are you limiting the power to 70-80%? I basically underclock and limit the power to 80%. I also upgraded my PSU to 1600 W. My M1 Max doesn't run out of memory and is stable, though not as fast as the 3090 or 4090. Nvidia needs to bump up the RAM to more than 24 GB; I run into those issues more often than I'd like.
I am using an M1 Studio Max, an M1 Studio Ultra and a few mid-grade PCs. The memory pool of the M-series chips is a huge plus for simulations, high polycounts or subdivided geometry. The M1 Max is not OK for 3D work as a freelancer; it has its own issues and I can not recommend it to anyone, as it could not lift heavy scenes as well as the Ultra. It works well with DaVinci/Premiere and AE.
The M1 Ultra is a completely different animal compared to the M1 Max. It is stable under 3D workloads; it sometimes freezes in C4D/Redshift, maybe once or twice a week, and I figured out that most of the time it is my fault or me being in a hurry.
But overall the Ultra is the right choice over the Max; I have bought and use both.
The best-working native workflow I have tried is ZBrush/C4D/Redshift. Blender has improved hugely over the last couple of weeks; it looks like Apple is working a lot on the Blender Metal backend.
Right now there are a few native 3rd party renderers. The best of them is Redshift, I guess; Octane released a new version last week which I have not tried yet. V-Ray works well in C4D, in fact slightly better on the Ultra than on my PC.
What I experienced was that if a DCC or 3rd party renderer is not optimized for the AS SoC, it works weirdly and uses much, much more CPU or memory than you need. If it is well optimized and native for M-series AS, it works much better and more efficiently. At least the Blender UI is completely usable with the Metal backend, and the viewport works way better.
This is what I have observed for a while.
 

aytan

macrumors regular
Dec 20, 2022
161
110
I have had the same issues on the 4090 and 3090. Are you limiting the power to 70-80%? I basically underclock and limit the power to 80%. I also upgraded my PSU to 1600 W. My M1 Max doesn't run out of memory and is stable, though not as fast as the 3090 or 4090. Nvidia needs to bump up the RAM to more than 24 GB; I run into those issues more often than I'd like.
I did not do anything on my PC; everything is default and works as it should. I use Windows 10 Pro and will not go to Windows 11 until they force me to. My 3070 is an EVGA FTW3 Ultra and it stays really cool and silent with 3-4 minute single-frame render times; it was not utilized more than 10% while rendering. In long render sessions, around 2 hours, temperatures did not go over 55-56°C; it works fine for me.
As you said, the big issue is the VRAM amount on the 3070 or 3080. They are capable, but they are not server grade, simply optimized for gaming. I would prefer A5000/A6000 GPUs, but sound and heat with this class of GPU is a big problem; you should not use them in a standard ATX case near your working area.
The M-series is way slower than any Nvidia or AMD GPU for final renders, but it works efficiently, silently and cool; I guess for basic 3D workflows it is more than enough for lots of users. Advanced 3D or heavy rendering is a completely different thing, and a standard PC with a single GPU does not handle that well either.
 

ader42

macrumors 6502
Jun 30, 2012
436
390
I use my 64GB M1 Max fine for very complex models/sculpts in ZBrush, for example. It was great with ZBrush 2022 with 50 million polygons. Very responsive to work with, and nice and quick to render as far as I'm concerned, for my 8K/A2-size renders for art prints.

I've also been using ZBrush 2023 now, and I see about a 30% render speed improvement over ZBrush 2022, as ZBrush 2023 is now Apple Silicon native.
 

galad

macrumors 6502a
Apr 22, 2022
611
492
It enables the MetalRT option on AMD Navi 2 GPUs; the question is, does Metal ray tracing use the dedicated ray-tracing hardware on those GPUs?
 
  • Like
Reactions: Xiao_Xi

hovscorpion12

macrumors 68040
Sep 12, 2011
3,044
3,123
USA
I have had the same issues on the 4090 and 3090. Are you limiting the power to 70-80%? I basically underclock and limit the power to 80%. I also upgraded my PSU to 1600 W. My M1 Max doesn't run out of memory and is stable, though not as fast as the 3090 or 4090. Nvidia needs to bump up the RAM to more than 24 GB; I run into those issues more often than I'd like.

Curious. Would you be better off with the RTX 6000 workstation card with 48GB of VRAM?

Granted, the vast majority of 3090, 3090 Ti and 4090 buyers are gamers, crypto miners, graphic artists and video editors who probably don't even touch 10% of the total 24GB of VRAM, which is why Nvidia doesn't want to offer higher VRAM on consumer cards.


 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Curious. Would you be better off with the RTX 6000 workstation card with 48GB of VRAM?

Granted, the vast majority of 3090, 3090 Ti and 4090 buyers are gamers, crypto miners, graphic artists and video editors who probably don't even touch 10% of the total 24GB of VRAM, which is why Nvidia doesn't want to offer higher VRAM on consumer cards.


They probably do not want to pay the $5,000 to get the A6000, even though that is the one that should be bought for 3D work, virtual production, and any other real-time 3D system being used for broadcast.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Curious. Would you be better off with the RTX 6000 workstation card with 48GB of VRAM?

Granted, the vast majority of 3090, 3090 Ti and 4090 buyers are gamers, crypto miners, graphic artists and video editors who probably don't even touch 10% of the total 24GB of VRAM, which is why Nvidia doesn't want to offer higher VRAM on consumer cards.


The consumer cards using GDDR6X aren't helping the situation either. I wonder if you guys would even notice the bandwidth loss going to the non-X variant.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
For large, hyper-complex scenes with lots of volume objects and non-instanced geometry, I do not know of a GPU renderer that you can trust. The flexibility and robustness of CPU renderers is a major benefit. To me it seems you either do massive stuff that needs reliability and a lot of RAM, or you do quicker-turnaround work, maybe in motion graphics, and then you don't need much VRAM at all.
So as always, you really gotta know your workload.
With that said, the last few years, since the intro of Threadrippers for CPU and the Nvidia 1080 Ti for GPU, have really made it possible to have enormous power available at the hands of the artist and not just at the farm. With the 2019 Mac Pro, Apple also tapped into this, albeit with an overpriced Xeon instead of a Threadripper and AMD Vega II instead of Nvidia's GPUs.
To me it seems we are in a transition period where it is not absolutely clear what the "best solution" is. Maybe Apple's UMA and XPU rendering, with great quality control over drivers, is the best that could happen. After all, stability and a workstation that actually works are more important than the last few % of performance.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
For large, hyper-complex scenes with lots of volume objects and non-instanced geometry, I do not know of a GPU renderer that you can trust. The flexibility and robustness of CPU renderers is a major benefit. To me it seems you either do massive stuff that needs reliability and a lot of RAM, or you do quicker-turnaround work, maybe in motion graphics, and then you don't need much VRAM at all.
So as always, you really gotta know your workload.
With that said, the last few years, since the intro of Threadrippers for CPU and the Nvidia 1080 Ti for GPU, have really made it possible to have enormous power available at the hands of the artist and not just at the farm. With the 2019 Mac Pro, Apple also tapped into this, albeit with an overpriced Xeon instead of a Threadripper and AMD Vega II instead of Nvidia's GPUs.
To me it seems we are in a transition period where it is not absolutely clear what the "best solution" is. Maybe Apple's UMA and XPU rendering, with great quality control over drivers, is the best that could happen. After all, stability and a workstation that actually works are more important than the last few % of performance.
Pound for pound, GPUs are way faster (and more flexible) than CPUs, never mind Threadrippers - I have one and still try to render on the GPU via NVLink.
The only issue is RAM. If Apple can solve that, comparing its render speeds to Threadrippers would be very interesting.

It may not take the performance crown from Nvidia in the near future, but it just might make CPUs obsolete as rendering solutions.

I think that's where Apple is going: dedicated accelerators.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
They probably do not want to pay the $5000 to get the A6000; even though that is the one that should be bought for 3D work, Virtual Production, and any other real time 3D system being used for broadcast.
Ask CG software developers and they'll say consumer cards are fine. The Quadro/A series is a rip-off, price-wise.
 

TechnoMonk

macrumors 68030
Oct 15, 2022
2,606
4,117
Curious. Would you be better off with the RTX 6000 workstation card with 48GB of VRAM?

Granted, the vast majority of 3090, 3090 Ti and 4090 buyers are gamers, crypto miners, graphic artists and video editors who probably don't even touch 10% of the total 24GB of VRAM, which is why Nvidia doesn't want to offer higher VRAM on consumer cards.


I do use an A6000 in the cloud, but spending $4-5K on a 3-year-old GPU isn't in my plans. A6000 performance is half that of the 4090 when running most of my inferences. The M1 Max is adequate, though I could use some RT cores in a future Apple GPU. I debated a lot before I chose a Threadripper over EPYC and a 3090 over an A4000/5000. One of the factors was heat and noise. I use an A100 for most of my training and would love to save cloud costs by running/testing inferences locally.

Apple, with unified memory, has hit a sweet spot but needs to add more types of cores and more library support.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
more flexible than CPUs
It's quite the opposite: by most metrics CPUs are far more flexible than GPUs. That's why you can code for any CPU in no time and struggle for years to get your code running efficiently or stably on a GPU. RAM amount is not the only issue affecting GPU computing; as a matter of fact, even a simple GPU driver update can make your render engine unusable.
Not saying that there are no advantages to GPU computing, but flexibility is not among them ;)
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
It's quite the opposite: by most metrics CPUs are far more flexible than GPUs. That's why you can code for any CPU in no time and struggle for years to get your code running efficiently or stably on a GPU. RAM amount is not the only issue affecting GPU computing; as a matter of fact, even a simple GPU driver update can make your render engine unusable.
Not saying that there are no advantages to GPU computing, but flexibility is not among them ;)
I meant swapping out to a more powerful one. With a CPU, an upgrade upends the system as a whole.
Granted, programming is easier on the CPU (more flexible, as you say), but where GPUs shine, they leave CPUs in the dust.

VRAM is the limiting factor for GPUs. If Apple can crack it with their SoC approach, expect more support for GPUs, playing to their strength.

Already we can see some apps built with GPUs in mind, or CPU ones harnessing the power of GPUs.

Example: one would imagine SideFX knows a thing or two about such things.
 
Last edited:

TechnoMonk

macrumors 68030
Oct 15, 2022
2,606
4,117
It's quite the opposite: by most metrics CPUs are far more flexible than GPUs. That's why you can code for any CPU in no time and struggle for years to get your code running efficiently or stably on a GPU. RAM amount is not the only issue affecting GPU computing; as a matter of fact, even a simple GPU driver update can make your render engine unusable.
Not saying that there are no advantages to GPU computing, but flexibility is not among them ;)
Yes and no. A lot of those issues are being mitigated. I can train a model using CUDA on an A100, convert it to Core ML and use it on a Mac, or even an iPad Pro for lighter models. Portability through conversion is limited, but it is much better than 2-3 years back. There are certain tasks which take an hour on a CPU but a couple of minutes on a GPU. And a lot of the new tools check the type of GPU and use the appropriate drivers/code.
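As a rough illustration of that train-on-CUDA, deploy-via-Core-ML hop, here is a minimal coremltools sketch. The ResNet-50 model, input shape and file name are placeholders rather than anything from the post, and the exact arguments vary a bit between coremltools releases:

```python
import torch
import torchvision
import coremltools as ct

# Stand-in for a model trained on the CUDA box (A100, 4090, ...); weights come back to the CPU for export
model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)   # TorchScript graph that coremltools can read

# Convert to a Core ML program; Core ML decides at runtime whether the ANE, GPU or CPU runs it
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,      # allow the Neural Engine / GPU where possible
)
mlmodel.save("resnet50.mlpackage")
```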
 
  • Like
Reactions: singhs.apps

aytan

macrumors regular
Dec 20, 2022
161
110
It's quite the opposite: by most metrics CPUs are far more flexible than GPUs. That's why you can code for any CPU in no time and struggle for years to get your code running efficiently or stably on a GPU. RAM amount is not the only issue affecting GPU computing; as a matter of fact, even a simple GPU driver update can make your render engine unusable.
Not saying that there are no advantages to GPU computing, but flexibility is not among them ;)
Agreed with you, I believe CPU rendering results are better/safer but way slower in any case. Because of that I had to leave the Arnold renderer behind, which has almost the best results from my point of view. Autodesk released a native AS version, but it is quite weird, does not feel like 'Arnold' anymore and unfortunately is somehow slower than the older Rosetta version.
I have read a lot about driver issues for AMD/Nvidia GPUs in the Redshift forums. Lots of users have to use old drivers, can not get enough out of their newly released GPUs until the drivers stabilize for the render engine/system, or sometimes can not fire up their GPUs after a software update. On the other hand, I have not had any update, driver or other kind of issue with the M1 from day one until now, which feels good. Also, there are material/material-language usage restrictions for each 3rd party renderer or DCC when it comes to GPU rendering.
Agreed that GPU rendering is very fast, but it's fragile in some conditions.
I wish Apple would provide more CPU cores (at least 42 or more), which could make CPU rendering an option again for me.
 