Comparing these two apps, I have to say LRCC is actually better optimized for the M3 Max right now; its CPU/GPU utilization is better than it is under LRC.
I did some more extensive testing over the past few days. The "sine wave" behavior (where the temp and CPU/GPU ramp up quickly, then drop drastically, before finally leveling off a few minutes later) is inconsistent... sometimes I can get it to happen, other times not. Still trying to figure out exactly what triggers it, but I have an idea (which I'll mention below). But I'll set that aside for a moment and concentrate on LRC's performance when it's NOT doing that.
LRCC performs MUCH better at exporting, not just on the M3 Max but also on the M1 Max, though the difference is significantly greater on the M3 Max. For example, exporting 1000 images:
LRC M1 Max – 26:58
LRCC M1 Max – 14:54
LRC M3 Max – 20:16
LRCC M3 Max – 9:57
So, LRCC is twice as fast on the M3 Max, and faster (still significantly, but to a lesser degree) on the M1 Max.
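For anyone who wants to sanity-check the "twice as fast" claim, here's a quick way to turn those times into speedup ratios (the numbers are just the ones listed above):

```python
# Convert the export times above (mm:ss) to seconds and compare LRC vs LRCC.
def secs(t):
    m, s = t.split(":")
    return int(m) * 60 + int(s)

print(secs("20:16") / secs("9:57"))   # M3 Max: ~2.04x faster in LRCC
print(secs("26:58") / secs("14:54"))  # M1 Max: ~1.81x faster in LRCC
```

So roughly 2.0x on the M3 Max versus about 1.8x on the M1 Max.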
Here's a screenshot that shows the last several minutes of the LRC export and the first several minutes of the LRCC export on the M3 Max:
The difference in power and CPU/GPU usage is quite stark. The clock-speed drop and fluctuation at the beginning of the LRCC export is, I believe, the actual OS-level throttling in action. The fans started screaming at this point (the machine was in "high power" mode), and the clock speed stabilized once an equilibrium was reached.
Regardless, in all the testing I've done over the past few days, LRC seems to behave like a somewhat tentative driver, on a wide open highway out in the middle of Texas with no cops in sight but still afraid to even slightly exceed the speed limit, while LRCC simply floors the gas pedal.
I have to wonder... is this poor optimization in LRC, or is it intentional (to keep fan noise down and/or leave more resources free so the user can keep working while exporting)? The observations I've made lead me to believe it might very well be the latter. In particular, I don't think LRC's behavior is OS-level throttling, because if it were, the system would respond the same way during LRCC exports, and it doesn't. With LRCC, the clock speed is temporarily reduced a bit as the fans catch up to the surge in activity, but the CPU stays pegged... that's clearly OS-level throttling. With LRC, it looks like the app itself is reducing the workload it places on the CPU.
Also, LRC's performance (at least initially) seems to be heavily dependent on the CPU's temperature when the export starts. If the computer is well-rested and not hot at all, the "sine wave" behavior is more likely to occur...
my theory is that LRC is monitoring the computer's temperature. It sees "great, no heat here, step on the gas!", but then it overshoots as the temperature rises quickly, and it slams on the brakes. After a while, it tentatively starts to accelerate again, reaching what appears to be its target of keeping the CPU at around 80-85C. Similarly, if the temperature is already in the 80-90C range when I begin an export, LRC's CPU usage is moderated from the very beginning. All this makes me think it's deliberate... because if it were just poor optimization of the export code, it wouldn't matter what the CPU temperature was (as long as it wasn't so hot that the OS itself stepped in to throttle)... whether it was 50C or 90C, I would expect to see identical resource usage when exporting the same batch of images over and over.
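To make the theory a little more concrete, here's a rough sketch of the kind of feedback loop I'm imagining. To be clear, this is purely illustrative: the fake sensor, the worker counts, and the thresholds are all made up by me, and I have no idea how (or whether) LRC actually does anything like this.

```python
TARGET_TEMP_C = 82          # the ~80-85C ceiling LRC appears to aim for
MAX_WORKERS = 12            # hypothetical: say, one per performance core
MIN_WORKERS = 2

class FakeSensor:
    """Toy stand-in for a real temperature sensor: temperature drifts
    toward a level determined by how many workers are busy."""
    def __init__(self):
        self.temp = 45.0                       # start "well-rested"
    def read(self, workers):
        equilibrium = 50 + 4 * workers         # more workers -> hotter
        self.temp += 0.3 * (equilibrium - self.temp)
        return self.temp

def pick_worker_count(temp, current):
    if temp < TARGET_TEMP_C - 10:
        return MAX_WORKERS                     # "no heat here, step on the gas"
    if temp > TARGET_TEMP_C + 3:
        return max(MIN_WORKERS, current // 2)  # overshot: slam on the brakes
    return min(MAX_WORKERS, current + 1)       # creep back toward the target

sensor = FakeSensor()
workers = MIN_WORKERS
for step in range(25):                         # one "step" = a slice of the export
    temp = sensor.read(workers)
    workers = pick_worker_count(temp, workers)
    print(f"step {step:2d}: temp {temp:5.1f}C -> {workers} workers")
```

The point is just that a loop like this would naturally produce the overshoot-then-settle pattern I'm seeing, and would also explain why a hot start skips the initial burst entirely.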
I'm no programmer, and I have no idea if this kind of active resource management based on temperature is something that would be incorporated into an app like this. But it's the best explanation I can come up with at present. On the other hand, if it IS the case that Adobe is intentionally limiting LRC's use of resources while exporting, why would Adobe not also choose to apply this same "throttling" (for lack of a better term) to LRCC?