And their iCloud datacenters all use non-Apple hardware running Linux and Windows instead of Macs running macOS. They could have developed custom Xserves and server versions of macOS, but why spend resources (monetary, time and human) to reinvent the wheel when perfectly-serviceable alternatives exist already that can support the core business objectives (like serving customers data and performing machine learning)?

That analogy isn't even close to comparable. The core implementation of iCloud is irrelevant to users and developers, as it should be. And using Linux is the industry standard for security and performance. Apple made the right choice. Using macOS would probably result in worse performance anyway...

On the other hand, CUDA is the industry standard for machine learning. Apple provides an entire machine learning product and service stack of its own to developers and customers, where knowing the implementation is very important. Why do they need to re-invent the wheel then, as you say? And if their product and service stack was so good, why don't they use it completely themselves? When Turi Create first came out (a Python machine learning API for Mac and Linux to create models for iOS/macOS), the only GPU acceleration it supported at the start was CUDA/Nvidia...
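To make the Turi Create point concrete, here is a minimal sketch of that kind of workflow; the image folder, labels and file names are hypothetical, and the GPU configuration call is where the CUDA/Nvidia dependency mentioned above showed up in early releases:

```python
# Minimal Turi Create sketch: train an image classifier and export it to Core ML.
# The image folder, labels and file names here are hypothetical.
import turicreate as tc

# Use all available GPUs; in early Turi Create releases this effectively
# meant CUDA/Nvidia hardware.
tc.config.set_num_gpus(-1)

data = tc.image_analysis.load_images('training_images/', with_path=True)
data['label'] = data['path'].apply(lambda p: 'dog' if 'dog' in p else 'cat')

model = tc.image_classifier.create(data, target='label', max_iterations=25)
model.export_coreml('PetClassifier.mlmodel')  # deployable to iOS/macOS via Core ML
```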
 
Why do they need to re-invent the wheel then as you say?

I expect it is because Apple's ML focus is not PCs (Macs) but devices running iOS (the A Series CPUs are adding onboard ML hardware) so they need an ML language optimized for iOS - and CUDA isn't.
 
In fact Google Cloud AI - you know, the cloud offering from the company that created TensorFlow - doesn't run on Nvidia GPUs.
However, GCP (Google Cloud Platform) supports VM instances with up to eight Volta V100 GPUs, as well as Pascal and Kepler CUDA instances. https://cloud.google.com/compute/gpus-pricing

No support for ATI GPUs.

Google realizes that CUDA is vital, even if they also offer TPU instances.
 
I expect it is because Apple's ML focus is not PCs (Macs) but devices running iOS (the A Series CPUs are adding onboard ML hardware) so they need an ML language optimized for iOS - and CUDA isn't.

Apple's machine learning platform runs on both iOS and the Mac. But the difference between Google and Apple is that Apple is almost entirely focused on local, on-device machine learning, and Google is focused on cloud machine learning.

Those differences mean Google can focus more on big iron solutions based on CUDA, whereas Apple concentrates on focused, real-time learning models on the device.

The platform strategies are why Apple and Google are in different places.
 
Apple's machine learning platform runs on both iOS and the Mac. But the difference between Google and Apple is that Apple is almost entirely focused on local, on-device machine learning, and Google is focused on cloud machine learning.

Those differences mean Google can focus more on big iron solutions based on CUDA, whereas Apple concentrates on focused, real-time learning models on the device.

The platform strategies are why Apple and Google are in different places.
This ignores the simple fact that "ML/AI" often has completely different compute loads between ML training and ML inference. "Training" is back end big iron (often major CUDA) work to create models.

"Inference" is end device lighter weight work to apply the training to real-time inputs. "Training" is in the server room - "Inference" is on the device.

For example, you can spend thousands of CPU/GPU hours in training to define what a "STOP" sign is. Once you've trained the model, however, the inference engine can quickly identify the "STOP" sign and apply the brakes.

Apple is going to do training in the cloud, and inference on the handheld device. That can be CUDA/Linux training in the cloud, and A-series inference on the handheld device.
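To make that split concrete, here is a hedged sketch of the cloud-side half: a tiny Keras/TensorFlow classifier of the kind that would be trained on CUDA/Linux servers and then handed off for on-device inference. The shapes, labels and file names are purely illustrative:

```python
# Hedged sketch of "training in the cloud": a small Keras classifier that would
# typically be trained on CUDA/Linux servers. Shapes, labels and file names are
# illustrative only; the real training data would be a large labeled dataset.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax'),  # "stop sign" vs "not a stop sign"
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(train_images, train_labels, epochs=10)  # the thousands of GPU hours live here

model.save('stop_sign_classifier.h5')  # handed off for conversion to an on-device format
```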
 
It's 20% slower than the cheap consumer 9900K in single-core and a little slower in multi-core.

Apples and oranges. One is Xeon and the other is not. Differences in amount of RAM supported, number of channels of RAM, ECC support, duty cycle, PCIe lanes, etc...

Cheap consumer stuff is cheap consumer. If you're looking for something to overclock and play Fortnite on, the Mac Pro (or any workstation-class computer) is not the right choice.
 
Apples and oranges. One is Xeon and the other is not. Differences in amount of RAM supported, number of channels of RAM, ECC support, duty cycle, PCIe lanes, etc...

Cheap consumer stuff is cheap consumer. If you're looking for something to overclock and play Fortnite on, the Mac Pro (or any workstation-class computer) is not the right choice.

"Cheap consumer stuff is cheap consumer."

That's ridiculous, and considering your past posts it's not a surprise to hear it from you. If a director like me or my partners heard you say things like this in any of our studios across four countries, I would fire you on the spot for being so technically inept and maladroit toward the other users.

99% of pro users are happy with 32-64GB of memory and the fastest CPU. They aren't going out looking for a Xeon or ECC memory. You only have to look at the benchmark threads here. Nobody is talking about having 256GB of RAM and ECC. They are all asking for high performance numbers in apps and games. The i9 or Ryzen is perfect for them, especially now that prices have been cut in half and core counts are increasing so much.
 
Apple is going to do training in the cloud, and inference on the handheld device. That can be CUDA/Linux training in the cloud, and A-series inference on the handheld device.

Apple can't easily train in the cloud. It's against their corporate policy on privacy. They only collect anonymized data, which is difficult to train on in a personal way. That's why Siri is, uhhh... Siri. Its ability to personalize is much more limited than Google's products.

Apple will do things like pre-bake training based on anonymized data and patterns. But they aren't doing any live training in the cloud based on user data. You see this with classifiers that are already baked into the device and are immutable.

That's why Apple's ML platform is so device-focused and not at all cloud-focused. A cloud-focused API would require collecting data, against their own corporate policy and the App Store policy for developers. That's also why Apple's ML platform supports on-device training. They do support a training architecture, but only in situations where the data doesn't leave the device.
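As a rough illustration of that on-device training architecture, here is a hedged sketch of marking an existing Core ML classifier as updatable with coremltools so personalization happens entirely on the device. The model and layer names are hypothetical, and the builder calls follow the coremltools 3-era updatable-model API as I understand it:

```python
# Hedged sketch: mark the last layer of an existing Core ML classifier as
# updatable, so retraining runs on the device and the data never leaves it.
# 'PetClassifier.mlmodel', 'dense_out' and 'labelProbability' are hypothetical names.
import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

spec = coremltools.utils.load_spec('PetClassifier.mlmodel')
builder = NeuralNetworkBuilder(spec=spec)

builder.make_updatable(['dense_out'])  # only the final layer is retrained on device
builder.set_categorical_cross_entropy_loss(name='loss', input='labelProbability')
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
builder.set_epochs(10)

coremltools.utils.save_spec(builder.spec, 'PetClassifierUpdatable.mlmodel')
```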
 
Apple can't easily train in the cloud. It's against their corporate policy on privacy. They only collect anonymized data, which is difficult to train on in a personal way.
Apple still "collects" metadata, but they "anonymize" it before it is sent to Apple (or China).
That's why Siri is, uhhh... Siri. Its ability to personalize is much more limited than Google's products.

No, Siri was the first victim of the Nvidia exit. They spent a year trying to move to ROCm (AMD) and even Xeon Phi (bare OpenCL); then Apple ousted its AI chief and poached Google's, who reinstated the CUDA training workflow and got a macOS/iOS version of TensorFlow Lite where you can run CUDA-trained models.

That's also why Apple's ML platform supports on-device training. They do support a training architecture, but only in situations where the data doesn't leave the device.

On-device training is almost a joke unless you're training very trivial models.

Apple spent a lot of money on Create ML and Core ML to give macOS/iOS something to toy with, but they're useless for creativity or research. Google is doing the same thing for Android, and it included an NPU/TPU before Apple did: the first was the original Pixel, which had it on a discrete chip.
 
Apple is running macOS on servers in datacenters. They have a lot of things that rely on macOS, such as QuickTime video/audio encoding, iOS/Mac App Store services and other things. They're not running the regular macOS GUI and stuff, but they do run a stripped down version of macOS on Supermicro hardware, IIRC.
 
Apple is running macOS on servers in datacenters. They have a lot of things that rely on macOS, such as QuickTime video/audio encoding, iOS/Mac App Store services and other things. They're not running the regular macOS GUI and stuff, but they do run a stripped down version of macOS on Supermicro hardware, IIRC.
Apple is rumoured to have an R&D build that allows an "official" Hackintosh, but it's for hardware R&D, not production. The tasks you name were among the first moved to CentOS Linux servers. Apple keeps a macOS "Server" app just to enable the very few features not available in regular macOS; they departed from the Xserve long ago as Linux took over almost everything (and it seems even Microsoft will soon ditch Windows Server and keep just .NET Core on regular Linux).

Apple's comeback to the server market is very limited. Notwithstanding that the new Mac Pro will be available for 19" racks, I doubt it will be used for data applications rather than render farming, which is where Apple aimed its GPU push while overlooking ML/AI. Apple means almost nothing to the ML/AI community, and I say almost nothing because Swift for TensorFlow (S4TF) is gaining support, but S4TF curiously can't run on macOS; it's Linux/Windows only since it requires CUDA... you know, CUDA... Like it or not, Nvidia's ghost will be haunting Cupertino for a while...
 
Apple is running macOS on servers in datacenters. They have a lot of things that rely on macOS, such as QuickTime video/audio encoding, iOS/Mac App Store services and other things. They're not running the regular macOS GUI and stuff, but they do run a stripped down version of macOS on Supermicro hardware, IIRC.

Do you have any links? QuickTime streaming has not been dependent on macOS for a while, and Apple has moved off of QTSS to H.264 live streaming which is available on many platforms. iOS and Mac App Store services don't really require the Mac. Serving downloads doesn't need a Mac server. Even code signing on developer upload probably doesn't really require a Mac host.
Apple still "collects" metadata, but they "anonymize" it before it is sent to Apple (or China).

That's the issue.

Anonymized metadata is ok for training the onboard models, but it's not useful for training live models based on user behavior.

So Apple doesn't need giant cloud farms for machine learning because they're not doing very much in real time.

No, Siri was the first victim of the Nvidia exit. They spent a year trying to move to ROCm (AMD) and even Xeon Phi (bare OpenCL); then Apple ousted its AI chief and poached Google's, who reinstated the CUDA training workflow and got a macOS/iOS version of TensorFlow Lite where you can run CUDA-trained models.

I don't think there is any debate that Apple allows importing of outside models. Core ML includes TensorFlow support.
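As a concrete (and hedged) illustration of that import path, here is a minimal sketch using the unified coremltools converter (coremltools 5 or later) on an externally trained TensorFlow/Keras model; the file names are the hypothetical ones from the training sketch earlier in the thread:

```python
# Hedged sketch: bring an externally trained TensorFlow model into Core ML
# with the unified coremltools converter. File names are hypothetical.
import coremltools as ct
import tensorflow as tf

keras_model = tf.keras.models.load_model('stop_sign_classifier.h5')

# Convert to the classic neural-network format so it can be saved as .mlmodel.
mlmodel = ct.convert(keras_model, convert_to='neuralnetwork')
mlmodel.save('StopSignClassifier.mlmodel')  # ready to bundle into an iOS app
```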

But again, that's very different than Apple having any need for massive server farms for machine learning.

On-device training is almost a joke unless you're training very trivial models.

Maybe, but it's what Apple is doing.

The sort of stuff Apple wants to do is figure out where you are going to drive your car next, or what app you're going to open next. Or what your cat's face looks like. It's not exactly complicated stuff.

Apple spent a lot of money on Create ML and Core ML to give macOS/iOS something to toy with, but they're useless for creativity or research. Google is doing the same thing for Android, and it included an NPU/TPU before Apple did: the first was the original Pixel, which had it on a discrete chip.

Apple isn't doing research oriented tasks with machine learning (as in, they're not building products for the research industry.)

The Mac Pro might be a machine aimed at researchers, but Apple is going to leave the software stack to other people. We could talk through the whole Apple and CUDA thing again but I think we'd just be going in circles about something that's already been talked about to death...

But Apple is just fine hanging out waiting to see if TensorFlow will add a Metal backend. And I think they might do more around multiple Afterburner cards and these sorts of tasks. It's really hard to see Apple becoming more CUDA-centric with all their hardware going in directions away from CUDA.
 
I think a lot of commenters on this forum who use CUDA, or are CUDA proponents, take the view that CUDA is a de facto necessity for AI/ML in every industry, and forget that Apple is going to go its own way regardless of what anyone says or comments here, which flies in the face of those commenters' worldview.

Apple is determined to be THE gatekeeper to AI/ML on iOS devices and given the number of devices in the wild, I don't blame them. They are not going to let NVIDIA tap into those devices for their own purposes and exploit them to further CUDA's dominance by trying to insinuate it into the product stack at Apple in any way, shape or form.

Apple using CUDA on servers in some datacenter is nothing compared to it gaining a foothold on iOS devices. AI/ML is core tech, as evidenced by the A11, A12 and A13 Bionic SoCs and the C-level suite position held by John Giannandrea.

Commenters in this thread keep looking at it from a technology perspective and that is not the issue. It's a strategic business issue for Apple and the gates keeping NVIDIA away from Apple's cash cow are HIGH, WIDE, THICK and VERY well guarded.

I am sure I will hear people tell me that I don't know what I am talking about, that I'm paranoid and that NVIDIA cannot gain that sort of control, et al. So is Apple crazy, then? For what reason has NVIDIA been a complete non-starter and persona non grata on macOS for the past two OS releases? NVIDIA is an existential threat to Apple; Apple knows something we do not know and they are not telling. Ever.
 
I think a lot of commenters on this forum who use CUDA, or are CUDA proponents, take the view that CUDA is a de facto necessity for AI/ML in every industry, and forget that Apple is going to go its own way regardless of what anyone says or comments here, which flies in the face of those commenters' worldview.

Yeah, I didn't really want to dig into that again because then we're just going around in circles.

Apple is determined to be THE gatekeeper to AI/ML on iOS devices and given the number of devices in the wild, I don't blame them. They are not going to let NVIDIA tap into those devices for their own purposes and exploit them to further CUDA's dominance by trying to insinuate it into the product stack at Apple in any way, shape or form.

And there is another problem: CUDA can only run on a subset of Apple devices. CUDA doesn't run on the iPhone. It doesn't run on the iPad. It won't run on Macs with AMD GPUs. It won't run on Macs that don't have a discrete GPU. And it won't run on future ARM SoC Macs.

Nvidia uses CUDA to push Tegra and their GPUs. Apple won't endorse CUDA because Nvidia is using it to try to push the iPhone off the market through Tegra-based Android competitors. CUDA is just a giant conflict of interest that, like it or not, Apple is not going to play ball with.

Apple's trying to encourage an ecosystem that gives them the most flexibility without Nvidia calling the shots. If Nvidia had a version of CUDA that ran on the A series, things might be different. But they don't.
 
If I could like this post 1000X, I would. It cuts to the heart of the issue. Apple will not be dictated to by NVIDIA in any manner, and besides, only a few Macs would run a dGPU, so why cater to that fairly tiny market, even their own?
 
If I could like this post 1000X, I would. It cuts to the heart of the issue. Apple will not be dictated to by NVIDIA in any manner, and besides, only a few Macs would run a dGPU, so why cater to that fairly tiny market, even their own?

Well, and Nvidia won't do a version of CUDA for the A series because Apple is correct: Nvidia wants to use CUDA as a way to push Tegra-based Android devices and take share from the iPhone (and from others like Qualcomm).

So we're deadlocked.
 
I think a lot of commenters on this forum who use CUDA, or are CUDA proponents, take the view that CUDA is a de facto necessity for AI/ML in every industry, and forget that Apple is going to go its own way regardless of what anyone says or comments here, which flies in the face of those commenters' worldview.

Apple is determined to be THE gatekeeper to AI/ML on iOS devices and given the number of devices in the wild, I don't blame them. They are not going to let NVIDIA tap into those devices for their own purposes and exploit them to further CUDA's dominance by trying to insinuate it into the product stack at Apple in any way, shape or form.

Apple using CUDA on servers in some datacenter is nothing compared to it gaining a foothold on iOS devices. AI/ML is core tech, as evidenced by the A11, A12 and A13 Bionic SoCs and the C-level suite position held by John Giannandrea.

Commenters in this thread keep looking at it from a technology perspective and that is not the issue. It's a strategic business issue for Apple and the gates keeping NVIDIA away from Apple's cash cow are HIGH, WIDE, THICK and VERY well guarded.

I am sure I will hear people tell me that I don't know what I am talking about, that I'm paranoid and that NVIDIA cannot gain that sort of control, et al. So is Apple crazy, then? For what reason has NVIDIA been a complete non-starter and persona non grata on macOS for the past two OS releases? NVIDIA is an existential threat to Apple; Apple knows something we do not know and they are not telling. Ever.

I don't think anybody wants CUDA to be the de facto standard, but it is... and Apple's solution is garbage. Have you ever tried implementing machine learning on iOS?

The issue I see right now: let's say you want to develop an iOS app with real machine learning. You can't build that model on iOS. You can't build it on macOS... therefore you have to move everything (training, data, etc.) to Linux/Nvidia/cloud. Then you have to go through the painful process of getting that model back to iOS: pruning weights, quantization to shrink the model size to fit on device (which can drop accuracy), removing layers that aren't supported in Core ML (which again drops accuracy). The AI field is cut-throat, and having state-of-the-art technology can be a massive game changer. Core ML majorly lacks this. Linux/CUDA does not. All the machine learning demos at WWDC might look cool, but they are using ancient technology in terms of what's actually possible.
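For the shrink-to-fit step described above, here is a hedged sketch of post-training weight quantization with coremltools, assuming it runs on a Mac (where quantize_weights returns an MLModel that can be saved directly); the file names are hypothetical, and as noted, aggressive quantization can cost accuracy:

```python
# Hedged sketch: post-training weight quantization to shrink a Core ML model
# so it fits on device. File names are hypothetical.
import coremltools
from coremltools.models.neural_network import quantization_utils

model = coremltools.models.MLModel('StopSignClassifier.mlmodel')

# Quantize 32-bit float weights down to 8 bits (roughly a 4x size reduction,
# sometimes at the cost of accuracy).
quantized = quantization_utils.quantize_weights(model, nbits=8)
quantized.save('StopSignClassifier_8bit.mlmodel')
```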

All for what? Data privacy on device? Reducing the cost of inference in the cloud? Preventing Nvidia from getting into mobile?

The cost of inference in the cloud is dropping quickly. Companies already have everything in the cloud and access to the latest and greatest technology. You don't have to worry about deploying new models to millions of devices; the models reside in the cloud. You don't have to worry about competitors accessing models stored on device. You don't have to hire niche iOS developers just to do Core ML when you can support every platform in the cloud using a reliable technology.
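The cloud-inference pattern described above is basically this; the endpoint URL and JSON schema are entirely hypothetical, just to show that the model never has to leave the server:

```python
# Hedged sketch of cloud-side inference: the trained model stays on the server,
# and the client only sends inputs to a (hypothetical) prediction endpoint.
import requests

resp = requests.post(
    'https://ml.example.com/v1/models/stop-sign:predict',   # hypothetical endpoint
    json={'instances': [{'image_url': 'https://example.com/frame0042.jpg'}]},
    timeout=5,
)
resp.raise_for_status()
print(resp.json()['predictions'])
```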

Once the cost of inference is dirt cheap and 5G becomes ubiquitous, there will be little reason to do anything on device besides trivial models. I don't think Apple can win the ML game against Nvidia.
 
This thread is so interesting to me. To the guys battling it out in here, what exactly do you use the workstation for? Mac Pro users usually buy the workstation for the power under the hood, and are usually video editors, composers, graphic artists, and the like, with a general understanding of which configurations are the most bang for the buck. It sounds like you guys can speak to Apple's technical roadmap to a philosophical degree. Again, what do you guys use the workstation for?
 
This thread is so interesting to me. To the guys battling it out in here, what exactly do you use the workstation for? Mac Pro users usually buy the workstation for the power under the hood, and are usually video editors, composers, graphic artists, and the like, with a general understanding of which configurations are the most bang for the buck. It sounds like you guys can speak to Apple's technical roadmap to a philosophical degree. Again, what do you guys use the workstation for?
I am a part-time web/iOS developer and music producer, which makes the Mac Pro the only true computer to have other than a Hackintosh. I've been running a Hackintosh since 2012 and I'm sick of it (boot problems, USB problems, audio problems). Although I will probably never make back the 6000+ € I will invest in this machine, I don't see an alternative solution for having two Retina displays, enough ports, 10 Gbit Ethernet and a proper GPU. Maybe a new MBP with a proper keyboard, but I still want a proper workstation.
 
Sorry, but unless you work for free, how is it possible that you'd never make back the money? Over a lifespan of 5 years, a 6000€ machine costs about 3€ a day; my maid earns 10€/h, so even she could afford one. This machine should pay for itself tens of times over, otherwise it's a bad investment.
 
I use it for creative stuff, like many. I bet that very, very few will use the Mac Pro for AI/ML things (it's a rather small market in general), but since those are things that don't perform great on the Mac, it's a good way to bash the platform.

I would lump AI/ML into software development. It still requires a lot of software to be written and maintained. And software development is a huge segment of the macOS user base, arguably one of the most important. Irritating developers is not a good business strategy.

Not everything is bad about the platform. Some of their developer tools are great, so good in fact that developers use Macs to write software for other platforms, myself included. Everybody wants software that just works, is fast, efficient and gets the job done. My gripe, and that of many other developers, is that Apple has purposely made this extremely painful, costly and anti-developer.

There seems to be a bubble around some Mac users because they aren't aware of the amazing technology outside the ecosystem. Go outside and then you realize things like Nvidia's hardware encoding & decoding is amazing. Vulkan is fantastic. You can still use OpenGL & OpenCL. Machine learning is actually doable.

And then think to yourself: why can't this happen on macOS... oh, because Apple is petty.
 
I would lump AI/ML into software development. It still requires a lot of software to be written and maintained. And software development is a huge segment of the macOS user base, arguably one of the most important. Irritating developers is not a good business strategy.

Not everything is bad about the platform. Some of their developer tools are great, so good in fact that developers use Macs to write software for other platforms, myself included. Everybody wants software that just works, is fast, efficient and gets the job done. My gripe, and that of many other developers, is that Apple has purposely made this extremely painful, costly and anti-developer.

There seems to be a bubble around some Mac users because they aren't aware of the amazing technology outside the ecosystem. Go outside and then you realize things like Nvidia's hardware encoding & decoding is amazing. Vulkan is fantastic. You can still use OpenGL & OpenCL. Machine learning is actually doable.

And then think to yourself: why can't this happen on macOS... oh, because Apple is petty.

Being a software developer, I would expect you to excel at logical constructs, yet somehow you've identified AI/ML as a subset of software development and then jumped to the conclusion that, because software development in general is a huge (possibly the most important) segment of the macOS user base, AI/ML support specifically through CUDA should now be a major concern of Apple's. Perhaps we should draw a Venn diagram?
 