Couldn't agree more with this, although since I develop ML algorithms I might be biased. It's a sad state of affairs that Nvidia basically has a monopoly through CUDA and we are all forced to use it. Not that CUDA is bad to develop with, but being stuck with essentially one platform is never a good position to be in. Apple might be caught off guard by the current explosion of ML interest.
They have worked slowly and steadily on the user side of ML, but not the developer side. The inability to add dedicated number-crunching hardware might be the most important omission in the Mac Pro. It can of course be rectified; I'm curious whether they will.
It's a shame ROCm and Apple/CoreML haven't gotten more traction on the ML developer side, but my thinking is that Nvidia had such a tremendous head start with developers, engineers, and scientists that they are the ones reaping the rewards now.
Someone please correct me if this is wrong, but as far as I know, to port any PyTorch/TensorFlow model to CoreML you have to go through ONNX. ONNX is a useful tool, but it isn't updated as frequently as PyTorch, so it won't support the latest and greatest PyTorch functions in your model, which makes it hard to port to an Apple device. The only workaround I have found is implementing that specialized function manually with onnxscript. I might have to do this later, so I'll see how it goes, but it seems like a pain in the neck.