I have a MacBook Pro M1 Max and a MacBook Pro M4 Max, both with 64GB.
I am running Maya 2025.3 and rendering the same 125-frame animation on both.

The M1 rendered in 55 mins.
The M4 rendered in 24 mins.
1) Certainly anything that involves shading, 3D, or the like should improve with a later Apple SoC like the M4 vs. the M1. Apple has made specific hardware improvements aimed at ray tracing, etc.

2) Given the intensity of Maya's hardware demands, IMO anyone intending to run an app like Maya should pay for the maximum available RAM, not just 64 GB. Apple's Unified Memory Architecture uses RAM quite intensively.

3) [For me] rendering is not a primary performance characteristic. Once one commits to a render, one's creative process is by definition fully interrupted for minutes, so one will inevitably switch to some other activity. The things I care about are essentially real-time changes: fast enough to maintain uninterrupted design thinking versus slow enough to break the thought process. The relevant timings are very fast, seconds or preferably milliseconds. Testing an image blur or color change might be examples.
 
So we are talking rendering with Arnold, I take it? That only uses the CPU, so the uplift is in line with the multi-core scores in other CPU renderers like Cycles (going from roughly 250 "points" to 450 "points"). If you switch to Redshift, the uplift will be much higher for simple projects that benefit from the GPU advancements. But for more complex scenes, CPU is still king. (Try 3Delight!)
 
As someone who doesn't do rendering at all, what advantages does CPU rendering still have over GPU rendering?
I guess it depends on who you ask, but in general, for one, CPU renderers are more stable and usually support more features. I don't do much in this field right now (so this might have changed lately), but I have had great experiences with both, for different tasks. For small things (like the anim in this thread) you would benefit massively from going with Redshift or Octane, while a scene for a blockbuster movie would be a worse fit.
Also, if you use the GPU, the scene has to be transferred to the GPU (etc., like kernel compiles), which slows you down when doing interactive rendering, that is, for example, tweaking the scene in some way.
 
1300 36 70 70!

Those ads will forever be etched into my brain.

Impressive performance increase on the M4, especially considering the M1 is still a reasonable chip 4 years later.
 
Now YouTubers can render videos about rendering times twice as fast :p

Jokes aside, for people who do rendering as part of their work, this can have a huge impact. For example, one month a year spent rendering instead of two. The saved time can be used on something productive.
 
As someone who doesn't do rendering at all, what advantages does CPU rendering still have over GPU rendering?

It's generally easier to code and maintain. The infrastructure (compilers/tooling) is generally more robust. The skills and competencies are more readily available. And the environment itself is more stable (you never know what weird optimizations a GPU or a driver might pull on you).
 
I think the OP meant the project is 125 frames total, not that either M-chip is rendering at 125 FPS.

Going from 55 min to 24 min is a pretty sizeable improvement to me. That reduces your work time by more than half!
I agree that it's a good gain, but in reality it isn't working time; it's wait time. In practice, you just start the render when the day is finished, or when going for lunch, etc.
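For the curious, the arithmetic behind "more than half" checks out; a quick sketch using the render times from this thread:

```python
# Render times reported earlier in the thread (minutes)
m1_minutes = 55
m4_minutes = 24

speedup = m1_minutes / m4_minutes         # how many times faster the M4 is
time_saved = 1 - m4_minutes / m1_minutes  # fraction of the wait eliminated

print(f"Speedup: {speedup:.2f}x")                # Speedup: 2.29x
print(f"Wait time reduced by {time_saved:.0%}")  # Wait time reduced by 56%
```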
 
Render time per frame is dependent on the complexity of the scene. I think a 100%+ increase in speed from M1 to M4 is impressive.
I'd like to see the comparison between an M4 and a loaded iMac Pro. I'm sure an M4 would be 10x faster or more!
 
I’m not sure which render engine was used in Maya, but if the rendering was done on GPU, the M3 and M4 processors would be significantly faster than the M1 due to the new ray tracing cores introduced with the M3. Here are some benchmarks (GPU-based) using Blender’s render engine Cycles and Maxon Cinebench (with Redshift), both of which are optimized for Apple Silicon. I’ve also seen similar results with Otoy’s Octane Render.

The latest M4 Max is in some cases as fast as an RTX 3090 or a 4070 Super in Cycles, which is pretty cool! :)

[Attached screenshots (2025-01-17): Blender Cycles and Cinebench GPU benchmark charts]
 
The M4 Max boasts a clock speed of 4.4 GHz, while the M1 Max clocks in at 3.2 GHz. This difference alone yields a 40% performance boost, which could have been achieved for free if Apple had permitted overclocking.

Furthermore, the M4 Max features 16 cores, compared to the M1 Max’s 10 cores. This disparity accounts for approximately 60% of the performance difference.

In essence, the M4 Max offers double the speed, achieved through both overclocking and the addition of six more cores. This approach mirrors Intel’s strategy of simply creating larger and hotter chips.

If the M4 Max had a clock speed of 3.2 GHz and 10 cores, it would be comparable to the M1 Max.
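The back-of-the-envelope model in this post (performance scaling as clock × core count, multipliers treated as independent) can be written out explicitly; the longer reply further down disputes both multipliers, so treat this as the naive estimate only:

```python
# Naive scaling model: performance ∝ max clock × total core count
m1_clock_ghz, m4_clock_ghz = 3.2, 4.4
m1_cores, m4_cores = 10, 16

clock_gain = m4_clock_ghz / m1_clock_ghz  # 1.375, i.e. ~40%
core_gain = m4_cores / m1_cores           # 1.6,   i.e. ~60%
combined = clock_gain * core_gain         # multiplied together, ~2.2x

print(f"Clock: {clock_gain:.3f}x, cores: {core_gain:.2f}x, combined: {combined:.2f}x")
# Clock: 1.375x, cores: 1.60x, combined: 2.20x
```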
 
The M4 Max boasts a clock speed of 4.4 GHz, while the M1 Max clocks in at 3.2 GHz. This difference alone yields a 40% performance boost, which could have been achieved for free if Apple had permitted overclocking.

Furthermore, the M4 Max features 16 cores, compared to the M1 Max’s 10 cores. This disparity accounts for approximately 60% of the performance difference.

In essence, the M4 Max offers double the speed, achieved through both overclocking and the addition of six more cores. This approach mirrors Intel’s strategy of simply creating larger and hotter chips.

If the M4 Max had a clock speed of 3.2GHz and 10 cores, it would be comparable to the M1 Max.

Sure, if we take those factors at face value they would net a 2.2x increase vs. the 2.3x we actually see (in this renderer at least; CB R24 shows a 2.55x increase from the M1 Max, ~800, to the M4 Max, ~2000, which still outpaces 2.2x by quite a fair margin). However, there are several things wrong with your analysis:

1) Max clocks are not the clocks used during MT workloads. The latter may or may not have increased by 40%. Judging by the results below, they almost certainly didn't.*

2) The M1 Max had an 8+2 core configuration with the two efficiency cores "double boosted" for 2x core clocks. From the M2 onwards the Max got 4 efficiency cores with the same clocks (as did the M2 and M4 Pro, with the M3 Pro being a very different architecture). As such, the relevant metric is 50% more P-cores rather than 60% more cores.

3) As AMD has said in their description of Zen 5 vs Zen 5c, a core architecture has to be designed specifically to enable much higher boost clocks. The main difference between the two Zen cores, in Strix Point at least (desktop/Halo/Fire Range also have full AVX-512), is the circuitry required to enable those higher clocks ... and even with it, the Zen 5 Strix Point mobile cores are no more overclockable than Apple Silicon is (especially not in practice; more on that below). Also, to repeat point 1, overclocking for ST and overclocking for MT are very different paradigms. Further, overclocking for either requires substantial binning; only a small percentage of dies are fit for that purpose, and even then only a few, if any, can get to 40% ST increases. Selling such binned chips is the business model of a chip-seller, not a device-seller like Apple. Finally, it's not "for free". Even if it were possible, which is unlikely (again, see below), it would require massive energy for very little gain, especially given the likely design of Apple's chips.

So rather than hypothesizing, let's take a look at actual data, using a larger version of a graph that I put in a different thread in response to a post under a similar misapprehension that Apple's CPU design had stagnated. The below plots Redshift rendering results from CB R24 against power. Inside the bubbles is efficiency (Pts per W). As a point of comparison to the M4 Pro/Max, we'll use the M2 Pro, which had the same 4-E-core layout as all later Max chips and shared the same CPU as the M2 Max:

[Attached chart: CB R24 Redshift score vs. power, efficiency (Pts/W) inside each bubble]


Comparing ST efficiency of the M4 Pro to the M2 Pro, we see that it has increased by 22%. Meanwhile, the 8+4 M4 Pro with the same configuration as the M2 Pro has increased its MT efficiency by about 34%. TSMC rates N3E at 18% better performance at the same power vs. N5, but the M2 Pro was on N5P, which was already 5% better performance than N5. Again, this is highly dependent on the chip and on where on the clock/voltage curve a particular core already sits, so it shouldn't be taken exactly, but Apple is very much outpacing the gains TSMC claims for its nodes. Further, it's important to note that the 14-core and 16-core variants of the M4 Pro/Max increase their performance linearly along with the power, such that they all have the same efficiency.

To look at what happens when one tries to overclock a chip not designed for it, we can turn to AMD's HX 370 chips. The Strix Point CPU is not user overclockable (according to TechPowerUp the multiplier is locked), nor do I think anyone, OEM or otherwise, overclocks its single core. BUT Notebookcheck used a version of the HX 370 whose MT performance was allowed to be pushed past AMD's recommended TDP range ("54W") to "65W" and "80W", which actually used close to 90W (rightmost AMD HX 370 point in the quoted graph) and over 100W (not pictured) respectively, for little to no gain in performance. At the "54W" TDP (actually using 74W) it gets a score of 1166; at "65W" (88W) it gets about 1200; at "80W" (109W) it still only gets 1216 (not pictured). Basically, going from AMD's recommended TDP setting of "54W" (74W) to "80W" (109W), the chip burns ~50% more power for only 4% more performance, and almost all of that is gained before it hits "65W" (88W). You can see why AMD lists the Strix Point HX 370's max TDP as "54W". This is also par for the course for Intel chips: pushing "bigger and hotter" chips, as you put it, yields not linear power increases as with Apple Silicon, but exponential ones. Since Apple does not design their cores to be pushed like this, it is highly unlikely that user-overclockable Apple Silicon would have achieved noticeable MT performance gains.
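A quick sketch of the power-scaling arithmetic, using the Notebookcheck HX 370 numbers quoted above (actual measured wall power, not AMD's nominal TDP labels):

```python
# (actual power draw in W, CB R24 MT score) at AMD's "54W", "65W", "80W" settings
readings = [(74, 1166), (88, 1200), (109, 1216)]

base_w, base_pts = readings[0]
for watts, pts in readings[1:]:
    extra_power = watts / base_w - 1    # power increase over the "54W" setting
    extra_perf = pts / base_pts - 1     # performance increase over the "54W" setting
    print(f"{watts} W: +{extra_power:.0%} power for +{extra_perf:.1%} performance")
# 88 W: +19% power for +2.9% performance
# 109 W: +47% power for +4.3% performance
```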

In conclusion, we can see that, in at least this one renderer's case, the M4 Max has improved efficiency over the M2 Max by about 40%, in addition to having its performance increase by 2.55x over the M1 Max (and 2x over the M2 Pro/Max). Sadly I don't have efficiency figures for the M1 Max, but the improvement is probably better than 40%. Of course, not every renderer will show as good results; Redshift for CB R24 seems particularly well optimized for Apple Silicon. Lastly, depending on the design of the core, overclocking does not necessarily yield better performance, and certainly not for free.

*EDIT: Based on Geekerwan and AnandTech, the all-core clocks appear to have gone up by just under 28% from the M1 Max to the M4 Max. Based on the P-cores alone, we would expect a 1.28 × 1.5 = 1.92x gain in performance, while the actual CB R24 performance gain is 2.55x, giving a ratio of 2.55/1.92 ≈ 1.33; the observed, measured efficiency gain in CB R24 is 34% for the M4 Max since the M2 Max. The efficiency gain since the M1 may be lower/the same/higher, but any discrepancy is likely due to not taking the E-cores or their improvement into account.
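The footnote's arithmetic, spelled out (all numbers are taken from this post; the 12-P-core count for the M4 Max follows from point 2's "50% more P-cores" on the M1 Max's 8):

```python
# How much of the M1 Max -> M4 Max CB R24 gain is explained by clocks and
# P-core count alone, and how much is left over for IPC/E-core improvements
clock_gain = 1.28      # all-core clock uplift (Geekerwan/AnandTech figures)
pcore_gain = 12 / 8    # M1 Max 8 P-cores -> M4 Max 12 P-cores

expected = clock_gain * pcore_gain  # 1.92x from clocks + P-cores alone
measured = 2.55                     # CB R24 gain cited earlier in the post
leftover = measured / expected      # ~1.33x unexplained by clocks/cores

print(f"Expected: {expected:.2f}x, measured: {measured}x, leftover: {leftover:.2f}x")
# Expected: 1.92x, measured: 2.55x, leftover: 1.33x
```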
 
I’m not sure which render engine was used in Maya, but if the rendering was done on GPU, the M3 and M4 processors would be significantly faster than the M1 due to the new ray tracing cores introduced with the M3. Here are some benchmarks (GPU-based) using Blender’s render engine Cycles and Maxon Cinebench (with Redshift) both of which are optimized for Apple Silicon. I’ve also seen similar results with Otoy’s Octane Render.

The latest M4 Max is in some cases as fast as an RTX 3090 or a 4070 Super in Cycles, which is pretty cool! :)

The GPU improvements for rendering (especially since the M3 added ray tracing cores) have indeed been impressive, but this was a CPU render.
 
I wish Apple would make modifications to macOS to allow for external GPU boxes connected via Thunderbolt 4 or 5.

It would give a big boost to Maya and also gaming.

I'd love to connect an AMD 9070 XT or an Nvidia 5090.
 
I’m not sure which render engine was used in Maya, but if the rendering was done on GPU, the M3 and M4 processors would be significantly faster than the M1 due to the new ray tracing cores introduced with the M3. Here are some benchmarks (GPU-based) using Blender’s render engine Cycles and Maxon Cinebench (with Redshift) both of which are optimized for Apple Silicon. I’ve also seen similar results with Otoy’s Octane Render.

The latest M4 Max is in some cases as fast as an RTX 3090 or a 4070 Super in Cycles, which is pretty cool! :)

Are the Windows PCs laptops?
 
I wish Apple would make modifications to macOS to allow for external GPU boxes connected via Thunderbolt 4 or 5.

It would give a big boost to Maya and also gaming.

I'd love to connect an AMD 9070 XT or an Nvidia 5090.
I suspect that improving performance is not as simple as adding external GPU boxes connected via TB cables. The speeds at which M4 hardware computes in the real world are fast. Fast enough that physical distance and controller latency become a very big deal. Hence external GPUs are inefficient, and Apple is working hard to engineer away from inefficiency.

Sure, it could be done, but it would be hot and inefficient, and it would endorse the PC world's approach of building ever-stronger/hotter chip solutions. I doubt Apple will go there.
 
We have computers millions of times more powerful than the one we used to land on the moon and it’s being used to render dancing walruses.
I know that it's a joke, but they're also being used for protein folding, neural networks, and aerodynamic simulations. You don't actually need that much computing power for orbital calculations; it's the big-ass rocket engines that make things happen (now and then).
 