But what workflow? Video work with Final Cut? Sure, that's what the Mx is optimized for, among a few other things.
I'm not so sure about tools. What tools would I need once I've arrived at inference and left model development behind? At that point the model is just running. Integration? Sure. But for inference itself, I can't see Apple providing cloud solutions for pure inference tasks, or who would even use them. On some on-site machine, maybe..., but if I need a large A100 cluster for inference, something else is wrong.
But even if that's the case, the M1 Ultra with 64 GB is still beaten by a V100 (not even an A100!) with 16 GB when it comes to inference (https://github.com/lucadiliello/pytorch-apple-silicon-benchmarks). The old V100 is in turn already beaten by the 3080, except when memory is an issue or multiple GPUs are used (https://bizon-tech.com/gpu-benchmarks/NVIDIA-Tesla-V100-vs-NVIDIA-RTX-3080/535vs578). And even the 1080 Ti is faster than the Ultra for inference (https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html). For every single task? Maybe not, but for general models I wouldn't trade a 4090 for anything Apple has right now. Time will tell whether that changes one day.
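For what it's worth, the linked benchmarks boil down to something like the sketch below: pick whichever PyTorch backend is available (CUDA, Apple's MPS, or CPU) and time repeated forward passes. The model and tensor sizes here are purely illustrative, not what the benchmark repos actually run.

```python
# Hedged sketch of a cross-backend inference timing loop.
# Model architecture and batch size are made up for illustration.
import time
import torch

def pick_device() -> torch.device:
    """Prefer CUDA, then Apple's MPS backend, then fall back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

@torch.no_grad()
def time_inference(model, batch, iters=20):
    """Return average seconds per forward pass over `iters` runs."""
    model(batch)  # warm-up so one-time kernel setup doesn't skew the timing
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    if batch.device.type == "cuda":
        torch.cuda.synchronize()  # CUDA is async; wait before stopping the clock
    return (time.perf_counter() - start) / iters

device = pick_device()
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 10)
).to(device).eval()
batch = torch.randn(64, 512, device=device)
print(f"{device}: {time_inference(model, batch) * 1e3:.2f} ms/batch")
```

Note that for async backends you have to synchronize before reading the clock, otherwise you're timing kernel launches rather than actual compute, which is a classic way these GPU-vs-GPU comparisons go wrong.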
If we look at production applications (robotics, digital twins, etc.), wouldn't an Nvidia Jetson, with its dedicated hardware and pipelines that aren't even available in GPGPUs, be a better option than a general-purpose computer?
In the end, I don't think that's what Apple wants anyway. I still hope we'll see a more powerful Mac Pro in the future.
I think that's just Nvidia's response to the ChatGPT hype. We already have cards with 80 GB each and can put up to eight of them into a workstation or server, with SXM fast enough to utilize the full 640 GB very well. It's "good enough". And bluntly speaking: when someone scans the whole internet and throws attention at it to create a model that understands input and produces the best-fitting average answer, which is often not fully correct, or in other cases simply wrong (because the internet as a source contains only truth and facts...), then yes, more memory is always better. Or in other words, they bumped the memory a little from 80 to 94 GB and doubled the specs to create a tighter "package".
One thing is certain: the upcoming Mac Pro will give us a good idea of where Apple is heading, even if it falls short of what Nvidia is offering.