Yeah, it's just getting petty at this point.
Petty, or just collateral damage from a 'war' outside the Mac space? Nvidia isn't going to shed any tears if OpenCL and Metal lose ground against CUDA. Nvidia can engage in an "embrace, extend, extinguish" campaign against both of those and make more money in the global space by extending their moat.
On the other side, if Apple is forced to choose between a subset of the Mac Pro space and all of the Metal space (iOS gaming, Apple Arcade, etc.), it again comes down to a collateral damage assessment.
If this is a battle over who can 'out-leverage' the other side the most, both sides have a largely non-overlapping set of weapons to fight with. Both are also getting much larger sums from other areas, so they can 'fund' the 'war' for quite a long time.
Apple pulling the signature from Nvidia drivers because they don't comply with Apple's rules and requirements for inclusion in the kernel wouldn't be petty.
I can (kind of) understand if Apple does not want to support internal (on-board) nVidia SKUs, but if nVidia is willing to do the work to make a reliable eGPU driver that Apple just needs to approve, they bloody well should be doing it.
Two issues there.
If there is more than one graphics card in the Mac system, then "on-board" and "eGPU" don't make much material difference (e.g., one Apple default display GPU plus a second-slot Nvidia GPU vs. Apple default display GPU(s) plus an Nvidia GPU in an eGPU enclosure; both sit on the same PCIe bus network once the OS is up and running). Inside vs. outside-on-the-Thunderbolt-bus, there isn't much difference. If anything, the bar is a bit higher for the outside option, since it probably needs a small amount of underlying support for disconnection failover.
Second, 'willing to do the work' is key. Before Apple stopped signing them, the Nvidia drivers would "stop and catch fire" any time Apple released a minor dot ( _ . _ . X ) update. If their code isn't playing fast and loose with API boundaries, interacts inside the 'lines' with all other kernel components, stays away from heavy reliance on deprecated areas, and does tightly integrated development against the alpha and small-batch beta releases... then what is the 'stop and catch fire' code really doing? Either a super-duper abundance of caution, or significantly uncoordinated development. The latter isn't necessarily "working hard".
If AMD and Intel are participating in Apple's GPU driver decathlon (adding Metal features to newer families/generations, satisfying 'works well with others' criteria, sharing info/workload in Apple's dev rollouts in a timely fashion, etc.), then they'll get the wins and, at the least, driver signatures.
You don't see Nvidia proclaiming that they have some of the best Metal compliance and performance available and that Apple is stopping it from getting to market. Likewise, they aren't claiming they have the most stable, best-cohabitating drivers available and that Apple is keeping that greater stability from getting to market. It is mainly an implicit "well, Apple is keeping you from our proprietary stack".
Putting in hard work would be delivering on both: the foundation that Apple wants, and whatever value-add Nvidia wants to put on top. Any amount of back-burner (or extinguish) re-prioritization of that workload will probably run into problems with Apple. (If Nvidia significantly screwed with MS on DirectX, issues would pop up there too.)
[doublepost=1556904644][/doublepost]
Clearly you are unfamiliar with CoreML (well, maybe not), but that's what we've got on macOS. It uses Metal, naturally.
https://developer.apple.com/machine-learning/
From that link.
"... Because it’s built on top of low level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. .. "
....
And if you want to cry, read
https://github.com/tf-coreml/tf-coreml - a tool to convert TensorFlow to Apple's proprietary API. Just what I want to work (and debug) with - using translators to convert millions of lines of open source code to Apple's proprietary API.
You don't have to convert TensorFlow to CoreML. TensorFlow runs on macOS. Apple and Google have done work to hook TensorFlow up with Metal (WWDC 2018 session: Metal for Accelerated Machine Learning).
CoreML isn't really the issue. CoreML sits on top of Metal (and the Metal Performance Shaders, MPS); a sketch of what the conversion path from the linked repo looks like is below.
If CUDA happens to be faster than an optimized MPS path, then fine. However, if Nvidia is out to kneecap MPS so that it isn't, then they will probably run into a problem with Apple.
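To put the earlier tf-coreml link in concrete terms: the 'translator' is a thin conversion step, not a rewrite of millions of lines. A minimal sketch, assuming the tfcoreml and coremltools Python packages from that era; the frozen-graph file name, tensor names, and shapes are hypothetical, and the renamed input key after conversion is an assumption. Core ML then decides on its own whether to run the result on the CPU or the GPU via Metal/MPS.
[CODE]
# Hypothetical sketch: convert a frozen TF 1.x graph to Core ML, then run it.
# File names, tensor names, and shapes below are made up for illustration.
import numpy as np
import tfcoreml
import coremltools

# Convert a frozen TensorFlow graph (.pb) into a Core ML model (.mlmodel).
tfcoreml.convert(
    tf_model_path='frozen_classifier.pb',            # hypothetical frozen graph
    mlmodel_path='classifier.mlmodel',
    input_name_shape_dict={'input:0': [1, 224, 224, 3]},
    output_feature_names=['softmax:0'],
)

# Load the converted model and run a prediction (predict() works on macOS,
# where Core ML schedules the work on CPU or GPU through Metal/MPS).
model = coremltools.models.MLModel('classifier.mlmodel')
dummy_image = np.random.rand(1, 224, 224, 3).astype(np.float32)
# Assumption: tf-coreml rewrites tensor names like 'input:0' to 'input__0'.
result = model.predict({'input__0': dummy_image})
print(result.keys())
[/CODE]
Where it gets painful is less this call and more when the converter hits ops it doesn't support, which is the debugging headache the quoted post is alluding to.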
[doublepost=1556904955][/doublepost]
Nobody is using proprietary tools for ML. Even MS has open-sourced their ML framework.
MS's ML framework is designed to integrate tightly with .NET. (Pragmatically, .NET is pretty narrow as a "multiplatform" technology.)
CoreML is fairly similar. If you are building an application on the core Apple foundation libraries/frameworks, then it has the tradeoff of being more integrated. If you are trying to put your ML functionality into a subsystem of a larger application, there are significant upsides here (as opposed to some cloud, back-end, "faceless" inference/training job in a machine room / server farm).
Also, it's good for a chuckle when the whole CUDA stack flies by and doesn't get tagged as a proprietary tool for ML. Most of the grandstanding about Nvidia GPUs being "essential" for any reasonable ML is really advocacy for a proprietary ML stack.