This is a completely irrelevant workflow, because high-end productions like "Moana" will always go to a render farm. There will be no Mac Studio GPU render farms unless your company buys a Studio for every cubicle and runs overnight render sessions. Users with a Mac Studio or a single/dual RTX card will never be put in such an intensive situation, so, much like synthetic benchmarks, this is useless.
I'm going to have to vaguely disagree on this one as well. The Moana data set is useful because it's indicative of the kind of data set found in the film and, increasingly, TV industry. Totally agree that there would be no Mac Studio GPU render farms, but that's not what makes the Moana dataset / benchmark interesting from a high-end production point of view.
While it is primarily designed as an offline render test, it's interesting to see how performant it is for interactive rendering, and that's an area the Mac Studio is more designed for, and a workflow that film studios seem to be heading towards. The more representative and accurate a scene you can load locally, the better it is for lighting, shading and general lookdev and layout tasks on your workstation. This is very much where the discussion on that Ars Technica thread was leaning.
Take for example the train Coco scene that Pixar use to sell the idea of XPU and USD; you can load the entire set, do set dressing, define shot cameras, do lookdev and lighting, and switch to different render delegates all in one file (with no need to split out sets, worry about continuity or publish things to multiple shots). This seems to be where things are heading, and having a GPU in your workstation that can handle that sort of workflow is obviously going to be a massive benefit, particularly as using GPU render delegates for lookdev seems to be the goal, at least as far as RenderMan and Karma XPU go.
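To make that concrete, here's a minimal sketch of what that single-file workflow can look like with the USD Python API. The asset paths and prim names (CocoTrainSet.usd, /World/TrainSet and so on) are hypothetical placeholders, not Pixar's actual asset, and picking a render delegate is something you'd normally do in usdview or your DCC's viewport settings rather than in the file itself:

```python
# Minimal sketch: assembling a shot in a single USD file.
# Asset paths and prim names are hypothetical placeholders.
from pxr import Usd, UsdGeom

# Create the shot stage.
stage = Usd.Stage.CreateNew("shot_010.usda")

# Reference the entire set as-is; no need to split it out per shot.
set_prim = stage.DefinePrim("/World/TrainSet", "Xform")
set_prim.GetReferences().AddReference("assets/CocoTrainSet.usd")

# Set dressing lives as overrides in this shot layer, leaving the
# referenced asset untouched.
prop = stage.OverridePrim("/World/TrainSet/Props/Lantern_01")
UsdGeom.XformCommonAPI(prop).SetTranslate((1.2, 0.0, -3.5))

# Define a shot camera alongside the set.
cam = UsdGeom.Camera.Define(stage, "/World/Cameras/shotCam")
cam.CreateFocalLengthAttr(35.0)

# Lighting and lookdev can come in as sublayers on the same stage.
stage.GetRootLayer().subLayerPaths.append("lighting/shot_010_lights.usda")

stage.GetRootLayer().Save()

# The same file can then be viewed through whichever Hydra render
# delegate you have installed, e.g. via usdview's --renderer flag
# (Storm for a fast preview, a RenderMan or Karma delegate for lookdev).
```

The point being that layout, set dressing, camera and lighting opinions all live in one stage, and swapping between a fast rasteriser and a production path tracer is just a choice of delegate, not a scene rebuild.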
But even if you're not aboard the USD / Hydra delegate hype train, take a typical FX shot where you have, say, some high-resolution explosions, some destruction and a high-res set. Your scene data is going to be, what, 10GB a frame, and that's just geometry caches; then you've got dicing, displacement, subframe motion blur and so forth, so let's say it's around 40GB to render. With a Mac Studio you can load that data onto the GPU and do lookdev, getting feedback in real time, with minimal scene prep and no out-of-core caching, because unified memory. Final frames can go to the farm, because farm time is cheaper than artist time, but you're maximising your artist time and getting faster time to first pixel and quicker iteration.
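For what it's worth, here's the back-of-envelope arithmetic behind a number like 40GB; every multiplier below is an illustrative assumption, not a measurement from any real show:

```python
# Back-of-envelope render-time memory estimate for an FX shot.
# All numbers are illustrative assumptions.

geo_cache_per_frame_gb = 10.0      # explosions + destruction + set, per frame

dicing_displacement_factor = 2.0   # assumed tessellation/displacement blow-up
motion_blur_samples = 2            # assumed extra geometry samples for subframe blur
textures_and_misc_gb = 5.0         # assumed textures, volumes, BVH, framebuffers

render_time_gb = (geo_cache_per_frame_gb
                  * dicing_displacement_factor
                  * motion_blur_samples
                  + textures_and_misc_gb)
print(f"Rough render-time footprint: ~{render_time_gb:.0f} GB")   # ~45 GB here

# A 24 GB discrete card needs out-of-core tricks or scene pruning at that size;
# a large unified-memory configuration can simply hold it next to the CPU data.
unified_memory_gb = 128  # e.g. a maxed-out Mac Studio
print("Fits in unified memory:", render_time_gb <= unified_memory_gb)
```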
To a certain extent, all these benchmarks of how long it takes to get to a final frame on the Blender BMW scene are somewhat missing the point in terms of this kind of workflow. You're not going to have artists waiting 20 minutes staring at a render bar for the last 10% of a render - it's time to first pixel, interactivity and the first 10-20% of the render that matter.
In terms of pro 3D workflows, the GPU architecture seems to me to be a bet on a vision of the future, not entirely dissimilar to the vision of the trash can Mac Pro, with its dual graphics cards and compute power. Let's hope it pans out better.