Clarification on Metal/ANE and PT/TF for ML on Apple Silicon
Reading various Apple pages and other blogs, here's what I understand.
PyTorch
To accelerate the training of ML models, PT takes advantage of the ANE's hardware acceleration, but any model you use needs to be translated/compiled to a Core ML version of the model.
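For context, here is a minimal sketch of what I understand that conversion step to look like with coremltools; the tiny model and input shape are hypothetical placeholders, not from any Apple documentation:

```python
import torch
import coremltools as ct

# Hypothetical stand-in model; any torch.nn.Module is traced the same way.
model = torch.nn.Linear(128, 10).eval()
example_input = torch.rand(1, 128)

# Core ML conversion wants a traced (or scripted) model plus example inputs.
traced = torch.jit.trace(model, example_input)

# ComputeUnit.ALL lets Core ML schedule across CPU, GPU, and ANE at inference time.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example_input.shape)],
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("model.mlpackage")
```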
PT also supports GPU-accelerated training on Mac via Metal (the MPS backend).
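From what I've read, the training path looks roughly like this (a sketch assuming the MPS backend; the model and data are toy placeholders):

```python
import torch

# Pick the Metal (MPS) device when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Toy model and batch, just to show where training runs.
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.rand(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```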
If we don't intend to deploy transformers/models on Apple devices, can we ignore how many ANE cores a machine has when choosing a Mac for training? In that case, can we just compare the number of GPU cores?
Without deploying transformers/models on Apple devices, can we still use the ANE for training purposes only? And can we use both the ANE and Metal together for maximum training performance?
TensorFlow
TF uses the tensorflow-metal PluggableDevice to accelerate the training of ML models on Apple Silicon via Metal.
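As I understand it, once tensorflow-metal is installed (pip install tensorflow-metal), ordinary Keras code picks up the GPU automatically; a small sketch with placeholder data:

```python
import tensorflow as tf

# With tensorflow-metal installed, the Apple GPU appears as a 'GPU' device.
print(tf.config.list_physical_devices("GPU"))

# A toy Keras model; fit() then runs on the Metal device without extra code.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.compile(
    optimizer="sgd",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

x = tf.random.uniform((32, 128))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1)
```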
TF models converted/compiled to Core ML also use the ANE.
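And the conversion step I mean, again as a sketch, assuming coremltools accepts the tf.keras model directly (names and shapes are hypothetical):

```python
import tensorflow as tf
import coremltools as ct

# Hypothetical trained Keras model standing in for a real TF model.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
model.build(input_shape=(1, 128))

# The unified converter takes tf.keras models; ComputeUnit.ALL
# allows the ANE to be used at inference time.
mlmodel = ct.convert(
    model,
    inputs=[ct.TensorType(shape=(1, 128))],
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("tf_model.mlpackage")
```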
Is this a correct summary?