
exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
Apple has no chance to compete with them. Much 3D software requires CUDA to work, and Nvidia is simply superior in the external GPU market, with 80~90% market share. AMD still sucks in many ways. The Apple Silicon GPU itself is not better than Nvidia's, so it's both hardware and software.

I don't really understand some people. Why compare a 3090 or 4090 to AS? Before AS, did you have the option to purchase an iMac with a 3090? No. Did you have the option to purchase a MacBook Pro with a 3070M? No.

Why not compare the current AS offerings to the previous Intel offerings from Apple? If you compare Apples to Apples, any AS Mac kicks the ass of any non-AS Mac in the same category.

And you can be sure that when Apple releases an AS Mac Pro, it'll be faster than the current Intel one, in both CPU and GPU. Otherwise Apple won't release it.
Why do some of us compare with GeForce RTX? Because Apple themselves do in their keynotes.
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
I don't really understand some people. Why compare a 3090 or 4090 to AS? Before AS, did you have the option to purchase an iMac with a 3090? No. Did you have the option to purchase a MacBook Pro with a 3070M? No.

Why not compare the current AS offerings to the previous Intel offerings from Apple? If you compare Apples to Apples, any AS Mac kicks the ass of any non-AS Mac in the same category.

And you can be sure that when Apple releases an AS Mac Pro, it'll be faster than the current Intel one, in both CPU and GPU. Otherwise Apple won't release it.
Who really compares their computers to the previous gen? We compare computers within a similar generation of CPUs and GPUs, and at that time the RTX 30 series was the latest.

Besides, it's Apple who compared their computers to the RTX series, AND Nvidia also compared their GPUs to Apple Silicon Macs. So what's the point of NOT comparing them?
 

aytan

macrumors regular
Dec 20, 2022
161
110
I've read the conversation, and this is my experience with 3D rendering on Apple Silicon (CPU & GPU) and 3D workflows on M1 Macs.

I have been working with an M1 Ultra for a while; before that I had a Studio Max, but I sold it and switched to the Ultra.
Native C4D is very solid on M1. Redshift is at least 50% slower on the M1 Ultra compared with a 3070 PC in render times (I have used both of them side by side every day at home).

The M1 Ultra is on par with, or a little behind, the 3070 in the RS render view over long working sessions: more than enough for lookdev and node-based shader dev.

For small scenes with fewer textures, the 3070 is 3 times faster than the Ultra when it comes to rendering out frames.
But on the other hand, with many large texture files and higher VRAM demands, the 3070 starts slowing down or fails. Large scenes that demand lots of VRAM are a problem for GPU rendering on any PC (I have not used a 4080 or 4090 yet, so I have no idea about their performance). Octane is good for still frames or short animations, but there are some issues.

For CPU-based rendering the Ultra is very solid, reliable, and silent (there is no heat at all), unless you compare it with a dual-CPU workstation or a 32/64-core Threadripper PC (which will bring heat, and noise at times).

The C4D Standard/Physical renderer is fast and Arnold is OK. V-Ray is surprisingly fast, reliable, and painless on the M1 Ultra; somehow Corona has some issues and triggers higher CPU temperatures while working in the viewport or rendering on the M1 Ultra.

Comparing render times for GPU rendering, Windows is not as solid as the Mac: it can deliver different render times frame by frame, fail, or produce artifacts from time to time, so you should check every frame after a rendering session is over (this issue is probably related to the 3070 itself; in fact, for 3D I should have an RTX A5000 or RTX A6000 workstation GPU). The Ultra, on the other hand, is very impressive and precise for each frame. I have never seen any artifacts in any render with C4D, and only a few times with Blender. But overall the Ultra is slower for GPU rendering.

My experience with the two systems differs, and I prefer the Ultra for the 3D workflow; the PC is only for rendering out scenes when I need them as soon as possible.

Modeling-wise, especially in ZBrush, the Ultra is a beast: a very fast texturing and painting workflow, way faster at UV unwrapping than anything I have used before. Remeshing is solid and impressive in some cases, sculpting is very smooth up to 35,000,000 polys on a single object (I did not count polys across multiple subtools, so I have no idea there), displacement map calculations are very, very fast, and so on in every respect.

Blender is a completely different thing, I guess. The Ultra is fast enough for working, but somehow viewport animations are way slower than in C4D. Eevee is quite fast in the viewport, with very fast render times even on large scenes. Cycles is OK, but not fast enough on the M1 Ultra compared to any midrange Nvidia or AMD GPU. More's the pity that ProRender does not work on M1 chips in Blender; I tried it across a couple of releases and it is still not working properly.

From time to time, for different cases, I still use a 2019 i9 iMac, a few 2013 Mac Pros, a couple of different PCs with a 2080 Ti / 3070 / 1080 Ti, an M1 Mac mini, an M1 Max 16-inch MBP, a Studio Max, and a Studio Ultra.

For a workflow starting with ZBrush/Photoshop/Substance, through C4D/Maya/Blender, into Premiere and/or DaVinci, and ending up in Premiere/After Effects/DaVinci, my ranking from high to low is: Studio Ultra, Studio Max (not the base model; it needs 64 GB RAM), a recent midrange PC with a midrange GPU, the 2013 Mac Pro on par with the 2019 i9 iMac, then the M1 Max 16-inch MBP, with the Mac mini last.

For many reasons the M1 Ultra is enough for any 3D workflow, except GPU-based 3D rendering. There are issues with native M1 Maya, and it is clearly not ready yet for a solid workflow. Arnold was always a CPU-based renderer, and somehow the native M1 version is not good enough. Maybe Autodesk will deliver improvements for Apple silicon in time.

Render times are only a small part of the equation; before that there is a massive workload for every 2D/3D animation or scene. I need a solid workflow before rendering, for research, modeling, design, texturing, painting, previz, converting scenes to different file types, video workloads, and compositing, and I have to run a bunch of programs at the same time and switch from one to another continuously for long hours. The M1 Ultra is an amazing overall tool for this kind of workload. It is small, very silent, and has a bunch of ports.

I think what we have right now in Apple silicon (M1 Ultra/Max) is quite enough: painless, and cheaper in long-term use for freelancers and small or midrange studios.

I don't think Apple silicon needs an eGPU solution, because there is no problem with M1 Macs. They are what they are, and that is absolutely fine, even if they stay at this level of performance.
By the way, I used two eGPUs running AMD cards for two years with an iMac, and that was also fine for me at the time. However, energy consumption is a really big issue for 2x or 3x GPU systems: horrible...
Thanks to Apple silicon, my energy bills are down by more than 50%.
I don't play games on the Mac; if I want, I can use my PC for games, and it's not a big deal for me.
 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
I've read the conversation, and this is my experience with 3D rendering on Apple Silicon (CPU & GPU) and 3D workflows on M1 Macs. …
Very in-depth; too bad you do not have a 2019 Mac Pro with a W6800 Duo or W6900. I think those would be very comparable to the M1 when it comes to viewport rendering, and would flat out beat it at GPU rendering. Several of us on here have done benchmarks with a W6800, a W6800 Duo, and 2x W6800 Duos, and Octane is a beast with these setups for GPU rendering. Of course we are still stuck with 32GB on AMD, so, as you said, with high poly counts and massive texture maps Apple silicon is going to have the advantage.
 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
I have been backtracking through a few pages, and lots of people are throwing CUDA around. 3D applications do not *require* CUDA to work. CUDA support may make things faster, but it isn't a requirement for the software to actually work.

I also think people are confusing real-time 3D rendering (Unreal, etc.) with post-style 3D such as Blender, Maya, Lightwave, Houdini, C4D, etc. These are two completely different animals.

There are a lot of post-style 3D rendering packages that render with either the CPU or the GPU. GPU renderers such as Redshift or Octane are going to be bat **** crazy on Apple silicon once memory pools get above 48GB (Nvidia's biggest dGPU memory pool). Even with multiple cards, GPU rendering is still limited to the memory of one card.
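To make that single-card limit concrete, here is a minimal CUDA sketch (the 30 GB scene size and the program itself are hypothetical, purely for illustration): a renderer can only query and allocate the memory of the one device it renders on, so a scene bigger than that card's VRAM means out-of-core paging or a failed render, no matter how many GPUs are installed.

```cpp
// vram_check.cu -- illustrative sketch, not from any real renderer.
// Shows why GPU rendering is bound by a single card's memory: the scene
// has to fit on the one device that renders it.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Real CUDA runtime call: reports memory of the *current* device only.
    cudaMemGetInfo(&freeBytes, &totalBytes);

    // Hypothetical production scene: 30 GB of geometry and textures.
    const size_t sceneBytes = 30ull << 30;

    std::printf("This GPU: %.1f GiB free of %.1f GiB\n",
                freeBytes / double(1ull << 30),
                totalBytes / double(1ull << 30));
    if (sceneBytes > freeBytes) {
        // On an 8 to 24 GB card this branch is where slowdowns or failed
        // renders begin; a 64 or 128 GB unified-memory machine never hits it.
        std::printf("Scene needs %.1f GiB and does not fit on one card.\n",
                    sceneBytes / double(1ull << 30));
    }
    return 0;
}
```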

I am glad this thread exists. However, I generally use Lightwave, and, well, Lightwave is more than likely dead after the NewTek acquisition by Vizrt. For my needs Lightwave runs decently on the Mac Pro; I would love it if someone could spill the beans on how it runs on Apple Silicon (probably rather poorly, due to Rosetta).
 

jujoje

macrumors regular
May 17, 2009
247
288
I am glad this thread exists. However, I generally use Lightwave, and, well, Lightwave is more than likely dead after the NewTek acquisition by Vizrt. For my needs Lightwave runs decently on the Mac Pro; I would love it if someone could spill the beans on how it runs on Apple Silicon (probably rather poorly, due to Rosetta).

If there's a benchmark scene kicking around, I'm willing to grab the trial and give it a go (not on an Ultra, but an M1 Max, so it should give you some idea of how well it runs). I haven't used Lightwave since v7, though, so I'm not sure how far I'll get. IIRC the Mac version of Lightwave used Wine to run, so it wasn't a native port to start with (although that was back in the day, so it might be different now).

There are a lot of post-style 3D rendering packages that render with either the CPU or the GPU. GPU renderers such as Redshift or Octane are going to be bat **** crazy on Apple silicon once memory pools get above 48GB (Nvidia's biggest dGPU memory pool). Even with multiple cards, GPU rendering is still limited to the memory of one card.

Somewhere in the mists of this thread were benchmarks of the Ultra beating the 3090 on Disney's Moana test scene. On large-scale scenes it seems like that large memory pool will pan out. The problem is that 90% of benchmarks are on very simple scenes that don't take advantage of the large memory.

I was giving the Axiom solver for Houdini a go the other day with some sims taking up around 28GB, and it's crazy fast (faster than a 3090 at the time). An Nvidia card with that amount of memory alone would cost two-thirds the price of the laptop.

When AS gets hardware ray tracing, the GPU will be pretty awesome for heavy shots. We're also approaching the tipping point where you can actually use GPU sims for production shots rather than as a fast preview (I had a 16GB Vega, and it could only really handle low-res smoke and fire sims due to memory constraints).
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
I've read the conversation, and this is my experience with 3D rendering on Apple Silicon (CPU & GPU) and 3D workflows on M1 Macs. …
Finally, one of the few users here who actually works in 3D. Unfortunately, here you will mostly find disappointed gamers regurgitating meaningless benchmarks without any real professional experience in 3D.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
I have been backtracking through a few pages, and lots of people are throwing CUDA around. 3D applications do not *require* CUDA to work. CUDA support may make things faster, but it isn't a requirement for the software to actually work.
People who believe that obviously have no clue what a 3D workflow is; otherwise they would know that Nvidia/CUDA is not required by any 3D DCC software. As stated many times before, most of the posters here are just gamers, tech nerds in love with benchmarks, or at best 3D hobbyists.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
CUDA is a language. If software you wish to use is coded in CUDA, then you will find it doesn't work on the Mac until someone rewrites it in Metal (or in something that can be cross-compiled).

For example, Adobe's Substance suite has a bunch of AI stuff that is specifically written in CUDA, and a bunch of features literally say "This template requires specific hardware. It is available only on Windows/Linux and with an Nvidia GPU."

There are parts of Houdini that are written in CUDA and don't work on the Mac, and while I haven't touched KeyShot in a while, I'm pretty sure its entire GPU renderer is written in CUDA.
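Since "written in CUDA" can sound abstract, here is a toy kernel (made-up names, not from any shipping package) showing what the lock-in looks like at the source level. The __global__ qualifier, the triple-angle-bracket launch syntax, and the CUDA runtime all belong to Nvidia's toolchain; porting this to the Mac means rewriting the kernel in Metal Shading Language and redoing the launch code around Metal's compute encoder API.

```cpp
// brighten.cu -- toy CUDA example (illustrative only).
#include <cuda_runtime.h>

// Device kernel: scale each pixel and clamp to 1.0. The __global__
// qualifier only means something to Nvidia's nvcc compiler.
__global__ void brighten(float* pixels, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] = fminf(pixels[i] * gain, 1.0f);
}

int main() {
    const int n = 1 << 16;
    float* devPixels = nullptr;
    cudaMalloc((void**)&devPixels, n * sizeof(float));  // GPU-side buffer
    cudaMemset(devPixels, 0, n * sizeof(float));

    // The <<<blocks, threads>>> launch is CUDA-only syntax; there is no
    // equivalent in Metal, where you encode a compute command instead.
    int threads = 256;
    brighten<<<(n + threads - 1) / threads, threads>>>(devPixels, n, 1.2f);
    cudaDeviceSynchronize();

    cudaFree(devPixels);
    return 0;
}
```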
 

aytan

macrumors regular
Dec 20, 2022
161
110
Very in-depth; too bad you do not have a 2019 Mac Pro with a W6800 Duo or W6900. I think those would be very comparable to the M1 when it comes to viewport rendering, and would flat out beat it at GPU rendering. Several of us on here have done benchmarks with a W6800, a W6800 Duo, and 2x W6800 Duos, and Octane is a beast with these setups for GPU rendering. Of course we are still stuck with 32GB on AMD, so, as you said, with high poly counts and massive texture maps Apple silicon is going to have the advantage.
Your setup looks really powerful for video workloads and also for Maya, C4D, Octane, and Houdini. I was never happy with Octane; it seems too complicated, and its lighting options don't suit me.
Mainly I'm a fan of Arnold and familiar with it. Learning new things is not easy when you are old :) But Redshift is easy to handle for lights and other aspects.
As I mentioned before, V-Ray 6.5 works really well on M1 Macs; I hadn't considered V-Ray until I tried the latest release.
I wish I could have gotten a 2019 Mac Pro, but at that time it was quite expensive for me; even in 2021 I came really close to getting one for a reasonable price, mostly for C4D and ProRender for daily still frames. Then suddenly ProRender flew away from the C4D interface, into the mist of memories.
 

jujoje

macrumors regular
May 17, 2009
247
288
People who believe that obviously have no clue what a 3D workflow is; otherwise they would know that Nvidia/CUDA is not required by any 3D DCC software.

I see the same arguments in pretty much every article on the Mac Pro: here, Ars Technica, Reddit, etc. I guess Nvidia's marketing has been really effective. I can't think of much 3D software these days that is CUDA-specific, particularly now that most of the GPU renderers have Metal support. EmberGen, perhaps.

There are parts of Houdini that are written in CUDA and don't work on the Mac, and while I haven't touched KeyShot in a while, I'm pretty sure its entire GPU renderer is written in CUDA.

I don't think there's much in Houdini that requires CUDA, and nothing I can think of that doesn't work. Vellum match constraints and pressure constraints are like molasses on macOS, but they still work. Karma XPU wants OptiX but still works (that's the main one, really). The rest of the nodes use OpenCL. Unless I've missed something?
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
I don't think there's much in Houdini that requires CUDA, and nothing I can think of that doesn't work. Vellum match constraints and pressure constraints are like molasses on macOS, but they still work. Karma XPU wants OptiX but still works (that's the main one, really). The rest of the nodes use OpenCL. Unless I've missed something?
Yeah, you're absolutely correct. It was OpenCL, not CUDA. When I tried coding an OpenCL node in Houdini, it was incredibly buggy because OpenCL is deprecated on macOS.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Unfortunately, neither the hardware nor the software for 3D on macOS is as mature as most would like, but it's good enough for some people. Clichés like "macOS is not good for 3D" are hard to change, so the value of this thread should be to help others understand why the current situation is fine for some 3D projects and how the software is improving its support on macOS.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
This, and gamers who pretend to be 3D experts fell for it.
That, and CUDA did come before OpenCL, and it (CUDA) "appears" to be better supported than OpenCL. It is also faster on the majority of GPUs in the (desktop) market.

But yes, in the scheme of things, Apple's hardware is more than adequate for 3D rendering work.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
This, and gamers who pretend to be 3D experts fell for it.
Speaking as a professional 3D programmer: 3D artists and gamers generally tend to be equally wrong about how these things actually work under the hood. :D

The thing about 3D is that it's such an incredibly huge topic that almost nobody can be an expert in everything, so we all have to be a little bit humble.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
Great to see some real lived experience from people with Studio Ultras. As we have seen here and elsewhere, there are cases where the Ultra chip really is great, and the main issues most of us "sceptics" have brought up are GPU rendering performance and the scaling problems. Especially for people who need to render sequences, rather than a single frame every other week or so, the Ultra has been a disappointment and has raised worry about how a Mac Pro will scale. There are also other pro-level issues with Metal, like not even supporting doubles (float64), that are worrisome going forward. 2023 will obviously be the year we finally get the big picture and can do a real evaluation, when both the hardware and software are there for us to use. Maybe everything will be fine. Or maybe we'll be stuck at Mac Studio-level hardware and missing key software. We're in limbo now, and it feels like it could go either way.
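On the float64 point, here is a small CUDA sketch (toy data, and a deliberately naive one-thread kernel for clarity) of the kind of double-precision accumulation that Nvidia GPUs run natively. Metal Shading Language has no double type, so on Apple GPUs work like this has to be demoted to float32, losing precision, or emulated in software, which is exactly the worry for simulation-heavy pipelines.

```cpp
// fp64_dot.cu -- illustrative sketch of native float64 GPU work.
#include <cstdio>
#include <cuda_runtime.h>

// Naive single-thread kernel for clarity; real code would use a parallel
// reduction. The point is simply that 'double' is a first-class type on
// CUDA hardware and has no counterpart in Metal shaders.
__global__ void dotProduct(const double* a, const double* b,
                           double* out, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += a[i] * b[i];
    *out = sum;
}

int main() {
    const int n = 1 << 20;
    double *a, *b, *out;
    cudaMallocManaged((void**)&a, n * sizeof(double));
    cudaMallocManaged((void**)&b, n * sizeof(double));
    cudaMallocManaged((void**)&out, sizeof(double));
    for (int i = 0; i < n; ++i) { a[i] = 1.0 / (i + 1); b[i] = 1.0; }

    dotProduct<<<1, 1>>>(a, b, out, n);
    cudaDeviceSynchronize();
    std::printf("fp64 dot product: %.15f\n", *out);

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```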
 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
If there's a benchmark scene kicking around, I'm willing to grab the trial and give it a go (not on an Ultra, but an M1 Max, so it should give you some idea of how well it runs). I haven't used Lightwave since v7, though, so I'm not sure how far I'll get. IIRC the Mac version of Lightwave used Wine to run, so it wasn't a native port to start with (although that was back in the day, so it might be different now). …
Yeah, a benchmark scene for Lightwave might be tough if you do not have at least 2019; 2018 is about when NewTek actually started really fixing things on the Mac side. I'll probably skip 2020, since buying into something that appears to be dead isn't a good plan. I really resonate with the feeling about learning new things. I've dabbled with Lightwave since Video Toaster days, so other 3D packages tend to irritate me. Lol.
 

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Another reason CUDA is very good and very fast is that it has been around for a very long time and Nvidia *really* supports it. It is easy for things to be fast and good when the hardware and the software/libraries are made, and supported, by the same company.

At least Apple has convinced most of the developers to move to Metal; that will improve things on the Mac side but it will take time. Now all we need to worry about is Apple deciding next year to not use Metal and move to something else.
 

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
Another reason CUDA is very good and very fast is that it has been around for a very long time and Nvidia *really* supports it. It is easy for things to be fast and good when the hardware and the software/libraries are made, and supported, by the same company.

At least Apple has convinced most of the developers to move to Metal; that will improve things on the Mac side but it will take time. Now all we need to worry about is Apple deciding next year to not use Metal and move to something else.
Agreed. Hopefully they can stick with Metal compute for long enough, and provide enough support to the industry, that they start to see gains similar to what Nvidia saw with CUDA.
 

iBug2

macrumors 601
Jun 12, 2005
4,540
863
Who really compares their computers to the previous gen? We compare computers within a similar generation of CPUs and GPUs, and at that time the RTX 30 series was the latest.

Besides, it's Apple who compared their computers to the RTX series, AND Nvidia also compared their GPUs to Apple Silicon Macs. So what's the point of NOT comparing them?
It's OK to compare, but people act like, because of the release of Apple Silicon, we somehow now have to live with inferior GPUs compared to the Intel years. We do not. We have much faster GPUs than before, which should be the main focus.
 

iBug2

macrumors 601
Jun 12, 2005
4,540
863
Yes, it would be very cool, yes Apple probably could do it, and 100000% yes Apple should do it.

The M1 Max and below are incredible value, but who is going to make the software for your M1 Max? Do you think game developers want to work on a Mac Studio? Do you think the people working on high-end light transport simulations want to work on a Mac Studio?

If Apple doesn't have a machine that can trade blows with a 4090, then absolutely every developer who is writing high-end software will use a 4090. This is why we have the situation where 50% of the software the people in this thread want to use is written in CUDA.

Apple Silicon will live or die based on software compatibility, and without a good-value high-end machine, Apple Silicon is just going to have to survive on scraps, shoddy ports, and compatibility layers.
Not true at all. Before Apple Silicon, Apple never had a top-of-the-line CUDA GPU in its Macs either, and things were fine. What you are complaining about has nothing to do with Apple Silicon.
 