Or those connectors plug into the MPX cartridge, ahem, module....

How would the MPX module be able to connect to them? The MPX module plugs into the slot with a horizontal motion, but the 8 pins connect with a vertical motion. You can see what looks like a cutout slot at the top of them, which I assume is for the locking tab on the 8-pin connector.

Unless you think there'll be loose wire connectors inside the MPX shroud, that are long enough to plug into the power feeds, before inserting the MPX into the slot? That seems somewhat inelegant.
 
All three photos are oriented in the same direction.

Trying to 'snake' a ribbon cable under the full-size MPX modules is unlikely to leave it flat. First, the cable has to traverse a distance of two standard slot widths to get out from under the full-size module. Second, after that the ribbon most likely has to turn upward, roughly perpendicular to the motherboard. Those two factors combine into a likely situation where the cable is bowed rather than flat, and where it comes into contact with one or both of the logic board and the MPX shroud. The only real 'noise' here is whether that is a good place, in a practical context, to put that cable.

The half-width MPX module really isn't much better. The cable has to go vertical even more quickly if it is snaking up between the cards, or it has to pass under and then up past the card in the second MPX bay slot.

Whether you should stick something into a socket matters more than whether you possibly can. "It will work because it is physically possible to jam something in there" is not a well-grounded interpretation of what is visible in those pictures.

Hopefully Apple distributes some clear support documents (and maybe even writes a user manual ... what a concept? *cough*) before they ship these systems. If Apple is solely relying on common sense, then they may have some problems. [Although I'm sure they won't mind applying a 'Darwin' tax on those who want to fry their systems on their own dime and need replacement parts. The motherboard and power supply are going to cost way more than the "I could build this with my trusty screwdriver and thermal paste" bill-of-materials lists tossed around on these forums.]



Conceptually, someone could build a low-heat (non-MPX-connector-powered) MPX module with ribbon-cable guides to control the bowing. That wouldn't necessarily conflict with Apple's likely "only use one of these options" constraint. What Apple has actually built is extremely unlikely to be in that category, though. Did the Engadget commentary account for that? No. Have any of the MPX module builders so far accounted for it? I'd be very surprised.

That is your speculation. Physically, snaking the cable will work. The plug will be thicker than any wires going into the plug. I don’t see any reason for power to be killed to those plugs just because an MPX module is plugged in adjacent to them. Then again, crazier things have happened and you could be right. I’d bet you a dollar that they continue to produce juice even with the MPX module slotted over them.
 
think there'll be loose wire connectors inside the MPX shroud, that are long enough to plug into the power feeds,
No need for loose wires, just a good 3D plug. The "tab" is just a cover; you open it by hand, not by sliding the plug in, and then you vertically insert the MPX cartridge, ahem, module.
 
No need for loose wires, just a good 3D plug. The "tab" is just a cover; you open it by hand, not by sliding the plug in, and then you vertically insert the MPX cartridge, ahem, module.

That seems like a janky idea, as opposed to just having the power delivered to the MPX module over its second tab. Unless the plugs have nothing locking them into the power sockets and they just sit with almost zero friction, you wouldn't be able to unplug an MPX device without putting strain directly on the connector on the motherboard. Getting a PCI card in and out of a slot needs affordance for wiggling to seat / unseat it.
 
Let me contest your analysis, which is biased by media articles rather than reality:

1. Apple doesn't control, nor can it hope to control, AI development. AI by now belongs to TensorFlow (released by Google, community controlled; Apple supports TensorFlow Lite).

2. TensorFlow is a framework for AI. With TF you can train a model and run it (inference); you can train a model on a CUDA cluster and then do inference with that model on any other platform that can run TF or TF Lite (see the sketch after point 3 below).

3. TensorFlow does not strictly require GPUs, but GPUs, TPUs, and FPGAs accelerate inference and training by two orders of magnitude. What is CUDA's advantage? Many things: a single Titan RTX GPU is more powerful than both Radeon Pro Vega II Duo GPUs in the cgMP, at just 280 W and on a 14 nm process. Beyond that, TensorFlow still does not support Metal; only TF Lite supports Metal (for inference only). NVIDIA positioned its AI solution by developing its GPUs around CUDA instead of repurposing existing hardware (as AMD did with Radeon). This gives CUDA a lot of programming flexibility along with superior performance (even though gaming doesn't get big benefits from it). And it doesn't end there: Metal's GPU offloading features seem like a rebranding of OpenCL and fall short of the feature set present in CUDA. It's not just inferior to CUDA; it can't replace CUDA efficiently beyond the few similar features. People like the DaVinci Resolve developers, among many others, are aware of this: it isn't a few lines of code to port, and some things need to be fully rewritten just to work properly (going from CUDA to HIP).
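To make point 2 concrete, here is a minimal sketch of the train-anywhere/infer-anywhere flow, assuming the stock TensorFlow 2.x Python API; the toy model, layer sizes, and random data are made up purely for illustration:

```python
# Sketch: train where CUDA happens to be available, then ship the trained
# model as plain data and run inference through TF Lite on another platform.
import numpy as np
import tensorflow as tf

# Tiny stand-in model and data; nothing CUDA-specific ends up in the artifact.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(256, 4), np.random.rand(256, 1), epochs=1, verbose=0)

# Convert the trained model to TF Lite; the resulting bytes can be copied to
# any platform with a TF Lite runtime (macOS, iOS, Android, ...).
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Inference on the target device needs only the interpreter, no CUDA.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```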

Your assumption about an Apple crusade to control AI development is bogus; the best evidence of that is Swift for TensorFlow (Swift-TF), which only supports CUDA (read: Swift, SWIFT, I didn't say Python).

Apple, or Cook, is controlled by both shareholder meetings and reality, and the truth is that with CUDA you get better, more efficient AI and you can deploy the latest TensorFlow features.

Today, GPU-accelerated AI on macOS relies only on:
NVIDIA (macOS earlier than Mojave): TensorFlow, Swift-TF, TF Lite; or
Metal: TF Lite, Keras (through PlaidML, which Intel is discontinuing; see the sketch below).
That's all; the rest don't deserve a mention.
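For the Metal line above, the usual PlaidML pattern (a sketch based on PlaidML's published Keras backend, not anything Apple ships; it assumes plaidml-setup has already been run once to choose a Metal device) looks roughly like this:

```python
# Sketch: route Keras through PlaidML so a Metal-capable GPU does the work.
# install_backend() must run before Keras is imported anywhere in the process.
import numpy as np
import plaidml.keras
plaidml.keras.install_backend()

import keras  # now backed by PlaidML rather than TensorFlow
from keras.layers import Dense

model = keras.models.Sequential([
    Dense(8, activation="relu", input_shape=(4,)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(64, 4), np.random.rand(64, 1), epochs=1, verbose=0)
print(keras.backend.backend())  # should name the PlaidML backend
```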

With this panorama, after two years of trying to impose Metal over CUDA and failing so miserably, what can Apple do? Even their money couldn't buy full Metal support in TensorFlow (impossible, since Metal lacks CUDA's flexibility). Even with the money to develop custom hardware (using ARM's IP), they have to choose between losing 2-3 years only to still be behind CUDA, or simply allowing Mac users to use CUDA, at least until someone develops a macOS-compatible TPU (like Google's Edge (Coral) TPU).

Did you know that even Siri relies on CUDA/Linux servers?

Sorry, with the deserved respect, you're wrong. You remind me of a guy arguing that Apple's wireless charging was delayed because Apple would implement long-range wireless charging tech; he had only read Apple-apologetic blogs, ignoring what the laws of physics say about wireless power. Apple got to the wireless party late because their executives were distracted, not because they planned something superior. It's the same here with AI: three years ago some idiot decided not only to retreat from Vulkan (and even Vulkan isn't in a better position than CUDA) but also to close the door on NVIDIA. This cost Apple the AI race; now AI developers think of a Linux workstation (even a Windows machine is better suited) for TensorFlow development.

I think I should be more succinct...Apple's Machine Learning future lies with the Neural Engine built-in to the A11 and A12 Bionic, and they have opened that up to developers using the CoreML platform, not NVIDIA or CUDA. Sure, older iOS devices can benefit from CoreML, but Apple's flagship iPhones (XS, XS Max and XR) and almost all iPad models use the A11 or A12 Bionic. Where does NVIDIA fit into this picture at all? Answer: They don't...the GPU inside every single iOS device is not CUDA compatible, nor will it ever be. Apple's market is end-user on-device Machine Learning, which is very different from NVIDIA's goals.

NVIDIA has its market in AI and Machine Learning with big iron and on Windows and Linux computers, but seemingly nothing with Android devices, and it just makes no sense for Apple to allow NVIDIA GPUs on macOS and Macs considering that Apple has no interest in utilizing CUDA anyways and its largest device base does not support CUDA. If it did, then CUDA and NVIDIA would be able to take a dominant position in determining on-device machine learning and Apple would no longer be driving that on their own devices, as developers would simply seek to support one standard. NVIDIA is interested in selling more GPUs, not selling more iOS devices that do not use NVIDIA GPUs.
This is not something that Apple is going to let any company do, because it would be detrimental to Apple's core hardware business.

How does allowing NVIDIA GPUs at this point in time make any sense to Apple given their AI and Machine Learning focus with on-device Machine Learning? Again, this has nothing to do with actual GPU power and everything to do with project strategy, philosophy and Apple determining its own fate.
 
I should be more succinct...Apple's Machine Learning future lies with the Neural Engine built-in to the A11 and A12 Bionic, and they have opened that up to developers using the CoreML platform, not NVIDIA or CUDA. Sure, older iOS devices can benefit from CoreML, but Apple's flagship iPhones (XS, XS Max and XR) and almost all iPad models use the A11 or A12 Bionic. Where does NVIDIA fit into this picture at all?

You have a gross misconception about ML.

First, you can train a model with any GPU. Trained models are just data; a model trained with an NVIDIA GPU will run wherever you load it for inference. iOS programmers do not need to include CUDA to work with models trained on CUDA GPUs.
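As a rough illustration of "models are just data", assuming a recent coremltools unified converter (ct.convert) and a hypothetical saved-model file name:

```python
# Sketch: a network trained on an NVIDIA box is just a file on disk; convert
# it once and the app runs it through Core ML with no CUDA dependency at all.
import coremltools as ct
import tensorflow as tf

trained = tf.keras.models.load_model("trained_on_cuda.h5")  # hypothetical file
mlmodel = ct.convert(trained)            # Core ML model (CPU/GPU/Neural Engine)
mlmodel.save("TrainedOnCUDA.mlmodel")    # ship this inside the app bundle
```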

Apple's market is end-user on-device Machine Learning, which is very different from NVIDIA's goals.
On-device ML (Core ML 2) is very limited for training and better suited to inference. For inference with a well-trained model it can do much better things, notwithstanding that it has to rely on externally trained models. IMHO on-device ML would be almost useless for training, given the low power available, even if the devices had CUDA GPUs on board.
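And on the inference side, a minimal sketch of running an externally trained model on-device; the model file and input name are hypothetical, and coremltools' predict() only works on macOS:

```python
# Sketch: pure on-device inference against an externally trained model --
# no training step and no CUDA anywhere in the pipeline.
import numpy as np
import coremltools as ct

clf = ct.models.MLModel("ExternallyTrained.mlmodel")    # hypothetical file
features = np.random.rand(1, 2048).astype(np.float32)   # stand-in input
print(clf.predict({"features": features}))              # hypothetical input name
```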
considering that Apple has no interest in utilizing CUDA

Apple's business isn't their users' business; Apple should just sell me the tools I need for my business. E.g., there are people connecting 3D printers to their Mac minis; Apple has no 3D printer business, so should Apple block them from using 3D printers?

With all due respect, Sir. :cool:
Getting a PCI card in and out of a slot needs affordance for wiggling to seat / unseat it.
Certainly not Jony's style... but that's how it seems to me.
 
I wonder if we could plug in an NVIDIA card as a complement to one of the AMD cards and only use CUDA on it? No need for Metal drivers for graphics, just use the NVIDIA card as a CUDA add-on card? Maybe a feasible compromise?
 
I wonder if we could plug in an NVIDIA card as a complement to one of the AMD cards and only use CUDA on it? No need for Metal drivers for graphics, just use the NVIDIA card as a CUDA add-on card? Maybe a feasible compromise?

No, because you still have to have a driver for the GPU, whether it's Apple's or NVIDIA's Web Driver, and Apple's GPU drivers only support a limited number of GPUs, while Apple has not approved NVIDIA's Web Drivers for use under Mojave and Catalina.
 
NVidia also has a vested interest in Apple not writing their own driver for NVidia cards. NVidia's business model depends on a distinction between GeForce and Quadro cards running on very similar hardware at vastly different prices (so does AMD's with what used to be called FirePro, but Apple is a big enough fish to AMD that AMD was willing to let Apple get away with it). If Apple were allowed to write their own GeForce driver, they'd eventually get it stable enough that it would compete with the Quadro driver. Apple would gain a huge price advantage over HP and other workstation vendors who pay big bucks for the more stable driver, and HP or someone might be encouraged to write their own more stable, but less feature-laden (than the standard GeForce gaming driver) driver that is effectively "GeForce as Quadro". Apple would leave some gaming features out, and devote more energy to squashing bugs instead of overclocking, compared to the gaming driver. It doesn't work for every application due to certification, but there are many applications where you don't need a certified driver, although you want something better than a gaming driver
 
No, because you still have to have a driver for the GPU, whether it's Apple's or NVIDIA's Web Driver, and Apple's GPU drivers only support a limited number of GPUs, while Apple has not approved NVIDIA's Web Drivers for use under Mojave and Catalina.

I know that. As I wrote, forget about the graphics driver (GeForce drivers normally), I’m talking about pure CUDA drivers that can only use the Nvidia card as a CUDA compute card. It will require some hack of course. Let’s see what happens.
 
I know that. As I wrote, forget about the graphics driver (GeForce drivers normally), I’m talking about pure CUDA drivers that can only use the Nvidia card as a CUDA compute card. It will require some hack of course. Let’s see what happens.

When I upgraded from HS to Mojave, both the NVIDIA Web Drivers and CUDA were deactivated and Mojave directed me to uninstall them. I can always try to reinstall the CUDA drivers on my iMac, but it looks like they are only updated for compatibility up to 10.13.6, which is not really a practical idea for this new Mac Pro. It seems it would be easier to simply move to a cheaper Windows or Linux box than to try and force it on a $6K Mac Pro with a “hack”. Just my 2¢.
 
I know that. As I wrote, forget about the graphics driver (GeForce drivers normally), I’m talking about pure CUDA drivers that can only use the Nvidia card as a CUDA compute card.

All drivers need to be signed: GPU, USB, etc. It is not a special corner case that GPU drivers have to be signed. If drivers aren't being signed because they violate Apple's policies, then it is not particularly material what the driver is hooked to. It is the policy violation that is the core issue.

Interfering with Metal, modifying/overwriting parts of the kernel that Metal has interfaces with, or not being a good kernel citizen all probably land as policy violations for kernel drivers. If a FireWire driver did something along those lines, it probably wouldn't get signed either. Code that is going to get merged into the kernel's operating space is going to have to pass some authentication/diligence test.

Secondly, the above presumes 100% decoupling of the CUDA stack from the OpenGL/Vulkan/Metal GPU display stack. CUDA isn't 100% decoupled from data structures like textures and other GPU data structures. So spinning the notion that it is a completely different thing just because a monitor isn't actively hooked to the card is probably itself decoupled from reality. Pragmatically, many applications of CUDA are highly coupled to the display (that's why CUDA and the display workload work out of a shared data space).

If the CUDA "driver" could do all of its work outside the kernel, then it would be in the same ballpark as "just add another item to the /Library/Application Support/ directory and run". If the CUDA driver needs to inject itself into the kernel, then it has to play by the rules of the kernel's owner (which is basically Apple).

It might be easier for Apple and Nvidia to work out a truce on a narrow subset of the GPU stack where most of this CUDA stuff comes off as a "non-GPU" device; e.g., more as a "hardware accelerator". The catch-22 is that Metal and CUDA both have characteristics of GPU and computation devices. It is muddled in different ways in each (which makes getting to a workaround even messier).

Apple's long-term "device" set for kernel/system extensions should have a class that is something like "hardware accelerator" that doesn't have to be a GPU. Apple's Afterburner should be just the start of the ability to put in a card with a chip (FPGA, ASIC, or custom) that just crunches data. That kind of kernel nexus doesn't need to be the same as the GPU's. Apple's current kernel extension model was created 15+ years ago, and I doubt it is a good match.



It will require some hack of course. Let’s see what happens.

There are some hacks from previous macOS releases that may tap-dance around this (turning off SIP, mutating code, skipping signature checks on kernel elements), but that isn't going to fly well going forward. Apple's system files are off in their own volume tree in 10.15. The T2 validates the basic firmware (hack-insertion free). ....

Any huge attack vector exposed by a backdoor hack that enables unsigned drivers to get in will probably be closed down by Apple in some future security update. Apple will probably keep around a highly diminished security mode, but that won't be a 'normative' place to put commercial software.

P.S. Nvidia's tactic with CUDA has been to use it to get the camel's nose into the tent so they can tilt the rest of the subsystems around the GPU. They haven't gone out of their way to make it highly modular from the rest of their graphics stack either.
 
All drivers need to be signed: GPU, USB, etc. It is not a special corner case that GPU drivers have to be signed. If drivers aren't being signed because they violate Apple's policies, then it is not particularly material what the driver is hooked to. It is the policy violation that is the core issue.

Interfering with Metal, modifying/overwriting parts of the kernel that Metal has interfaces with, or not being a good kernel citizen all probably land as policy violations for kernel drivers. If a FireWire driver did something along those lines, it probably wouldn't get signed either. Code that is going to get merged into the kernel's operating space is going to have to pass some authentication/diligence test.

Secondly, the above presumes 100% decoupling of the CUDA stack from the OpenGL/Vulkan/Metal GPU display stack. CUDA isn't 100% decoupled from data structures like textures and other GPU data structures. So spinning the notion that it is a completely different thing just because a monitor isn't actively hooked to the card is probably itself decoupled from reality. Pragmatically, many applications of CUDA are highly coupled to the display (that's why CUDA and the display workload work out of a shared data space).

If the CUDA "driver" could do all of its work outside the kernel, then it would be in the same ballpark as "just add another item to the /Library/Application Support/ directory and run". If the CUDA driver needs to inject itself into the kernel, then it has to play by the rules of the kernel's owner (which is basically Apple).

It might be easier for Apple and Nvidia to work out a truce on a narrow subset of the GPU stack where most of this CUDA stuff comes off as a "non-GPU" device; e.g., more as a "hardware accelerator". The catch-22 is that Metal and CUDA both have characteristics of GPU and computation devices. It is muddled in different ways in each (which makes getting to a workaround even messier).

Apple's long-term "device" set for kernel/system extensions should have a class that is something like "hardware accelerator" that doesn't have to be a GPU. Apple's Afterburner should be just the start of the ability to put in a card with a chip (FPGA, ASIC, or custom) that just crunches data. That kind of kernel nexus doesn't need to be the same as the GPU's. Apple's current kernel extension model was created 15+ years ago, and I doubt it is a good match.





There are some hacks from previous macOS releases that may tap-dance around this (turning off SIP, mutating code, skipping signature checks on kernel elements), but that isn't going to fly well going forward. Apple's system files are off in their own volume tree in 10.15. The T2 validates the basic firmware (hack-insertion free). ....

Any huge attack vector exposed by a backdoor hack that enables unsigned drivers to get in will probably be closed down by Apple in some future security update. Apple will probably keep around a highly diminished security mode, but that won't be a 'normative' place to put commercial software.

P.S. Nvidia's tactic with CUDA has been to use it to get the camel's nose into the tent so they can tilt the rest of the subsystems around the GPU. They haven't gone out of their way to make it highly modular from the rest of their graphics stack either.

This has always been my argument (in different threads): NVIDIA truly sees their GPUs as the center of the PC, as opposed to the CPU being the “brain” and the GPU being a part of the overall system, which is naturally how AMD and Intel see it along with most PC OEMs. This is the arrogance and bluster part of the “NVIDIA/Apple Divide” rumors that I tend to believe more than anything else. Even though Apple uses the GPU to accelerate the UI, I think NVIDIA proposed Apple rewriting the UI to use only NVIDIA proprietary tech that would lock them into NVIDIA, and that might have been the final straw for Apple. Sounds a bit over the top, but I can almost see NVIDIA touting how important it would be to Apple’s future, et al. Just my 2¢.
 
Ask Aiden if he knows anybody waiting to buy an ncgMP with 4 Titan or Quadro RTX cards on board. About DNG, you're right that this is speculation, but there are a few facts: CUDA's repositories have been getting more macOS-related patches in the past 6 months than in the last 2 years.


Disagree; there are N ways to fit 2 Titan RTX cards within the MPX real estate.


With all due respect, neither does Apple belong to Federighi, Cook, etc., nor does NVIDIA belong to Huang; both belong to their stockholders and their markets, period.

If you believe those stories about a multi-billion-dollar corporation being managed like a garage grocery, you also believe in Santa.

Name the last time the stockholders were able to change the direction of Apple. Yeah, I can't think of one either.

I take it you haven't worked at any large organization at a high level. The higher up you get, the more it turns into high school.

The issue with Apple/Nvidia is all of those Nvidia GeForce 8600M GT chips that died - Apple took the reputational hit on them, not Nvidia (who really should have).
 
Apple's business isn't their users' business; Apple should just sell me the tools I need for my business. E.g., there are people connecting 3D printers to their Mac minis; Apple has no 3D printer business, so should Apple block them from using 3D printers?

Apple is going to decide what they want to sell to their customers in alignment with their business goals. There is no SVP position for 3D printers, but there is an SVP position for AI and Machine Learning, which is one of 14 total SVP positions at Apple, which elevates it to a CORE TECHNOLOGY.

Apple won’t sell something that would undercut its core businesses, plain and simple...they are completely willing to let you and others move to a different platform and they consider it an acceptable loss in their minds, because protecting a Core Technology and their hardware business is too important to risk versus a small percentage of users who want Apple to offer certain technology that does not align with its own interests.

You’re arguing it from a technology perspective and your own pure self-interest, while ignoring Apple’s business motives for excluding NVIDIA.
 
Apple is going to decide what they want to sell to their customers in alignment with their business goals. There is no SVP position for 3D printers, but there is an SVP position for AI and Machine Learning, which is one of 14 total SVP positions at Apple, which elevates it to a CORE TECHNOLOGY.

Apple won’t sell something that would undercut its core businesses, plain and simple...they are completely willing to let you and others move to a different platform and they consider it an acceptable loss in their minds, because protecting a Core Technology and their hardware business is too important to risk versus a small percentage of users who want Apple to offer certain technology that does not align with its own interests.

You’re arguing it from a technology perspective and your own pure self-interest, while ignoring Apple’s business motives for excluding NVIDIA.

LOL, you too are ignoring Apple's motivation to reinstate it.

Oh, and MS also had an SVP for smartphones (Paul Allen, no less): same situation, too late, too little, too many errors. Sorry, Giannandrea, tough cookies.
Name the last time the stockholders were able to change the direction of Apple. Yeah, I can't think of one either.

I take it you haven't worked at any large organization at a high level. The higher up you get, the more it turns into high school.

The issue with Apple/Nvidia is all of those Nvidia GeForce 8600M GT chips that died - Apple took the reputational hit on them, not Nvidia (who really should have).
Personally, I've worked closely with management at one S&P 500 corporation and a foreign oil corporation. Even the most corrupt one has ways to tightly control its managers, either directly or after the fact. Cook isn't someone I feel sorry for; I think he is an opportunist backed by (and backing) a politically motivated lobby, but he is very careful not to cross certain lines, one of which is letting his personal bias drive decisions. He has to justify his decisions technically to the board; it's not just "this sucks, fire them." He has tons of enemies looking for these kinds of errors in order to put him on trial and remove him from Apple (the SEC, for example, has the power to remove and jail him without asking stockholders). You need to read about the laws governing publicly owned corporations; even people like Elon Musk have had to step down over these kinds of situations.
 
Only time will tell which one of us is correct...
Why and when did Apple hire Giannandrea? Less than a year ago. Why? Siri's AI was a huge failure. What changed? He borrowed TensorFlow Lite (he failed to get full TF because of Metal's poor feature set). Everything suggests he found the solution: quietly reinstate NVIDIA support and fully run TensorFlow, at least until AMD GPUs and Metal are up to running it by themselves. A good way is to enable third-party PCIe GPUs and NVIDIA drivers; NVIDIA won't pass up this business, and Apple doesn't need to bless or sell NVIDIA.
 
No Short Cuts!

I answered an ad below a "Missing Dog" ad stapled to a tree. It read:
"Disgruntled Apple employee has access to several base model 2019 MP's."
"Willing to sell @ $4000" "Only 1 per customer" "Text me @ 555-****"

So I sent a text and received a link to a website.
On the site I was told to bring $4000 in twenties in a paper bag to Pier #8, Bldg 32 @ 2:56am!
All this seemed so krypted because these were the specs of the computer, 8 cores, 32GB ram and 256 GB storage! This meant it was meant to be!

I got there with the paper bag and was shown the computer. I handed the guy the bag, turned around to take the panel off the MP. This is when I heard the door slam and a car burning rubber outside!
When I got the cover off this is what I saw!

The question is what am I going to do with a used tcMP?

A better question is what is he going to do with a bag full of newspaper clippings!


View attachment 841865
Yes, you got the better deal.

I could use the same deal.

Where do you get the paper clippings?

 
Apple's current kernel extension model was created 15+ years ago

Uh, maybe they should look at updating it, after 15 years?
Apple still includes NVIDIA driver support in macOS Catalina for older model iMacs that came from the factory with NVIDIA GPUs. With a little effort, I bet they could figure out a way to support a 2080Ti video card in the new Mac Pro 2019.
They need to assemble a few select engineer-types from both sides of the Apple/NVidia fence and "smoke a peace pipe", so to speak.
 
Uh, maybe they should look at updating it, after 15 years?
Apple still includes NVIDIA driver support in macOS Catalina for older model iMacs that came from the factory with NVIDIA GPUs.

Those are 2013 iMacs. There is a pretty good chance that when the OpenGL stack dies in a future macOS, those iMacs will be gone along with it. Those iMacs will be dropped in the next year or two as they fall onto the Vintage/Obsolete lists. I'm pretty sure those old iMacs are Metal feature set family 1 and have made zero progress. The other Nvidia GPUs are likely in the same boat.

The driver model is in the process of getting a major revamp now. The GPU stack is also going through major changes (OpenGL is going to get dropped). It could be a total coincidence that just as Apple ramps up to make major changes, Nvidia's new drivers have issues in them that Apple doesn't want to sign off on ... or maybe it isn't.


With a little effort, I bet they could figure out a way to support a 2080Ti video card in the new Mac Pro 2019.
They need to assemble a few select engineer-types from both sides of the Apple/NVidia fence and "smoke a peace pipe", so to speak.

The software/hardware engineers and the technical folks probably aren't the root cause of the issue. There are probably some business process conflicts (on both sides) that are most likely the primary root causes. Lots of Nvidia fanboys try to paint Nvidia as the "Boy/Girl Scout Troopers" here, handing out free cookies and helping little old ladies across the road. Nvidia's "embrace, extend, extinguish" tactics of digging deeper moats around CUDA are a problem. (Doing OpenCL 1.1 and then going into Rip Van Winkle mode is an example of the "extend, extinguish". Squatting on a relatively old version of a standard isn't a true embrace, but it gives an illusion of support.) The twist of putting older UGA support into the RTX cards for the Mac 5,1 is just as likely to be interpreted as an "F you" by Apple as the free-cookies-gift connotation being spun by some in these forums. (Apple is looking for more help to go forward, not backwards. There is little indication that Nvidia has tried to be the best Metal partner Apple could possibly have.) As long as Nvidia holds on to the notion that 'Metal has to lose for CUDA to win', they'll probably find themselves on the outside looking in. Nvidia pointing fingers at Apple in public, saying "We're done, it is all Apple's fault", isn't going to get them the "good partner looking to increase business with" award either.

The macOS graphics stack has historically been a split shop where Apple did a substantive portion of the OpenGL stack and the GPU vendors did the rest. Metal is going to change that collaborative mix, and Apple probably wants more low-level intellectual property info than they've needed in the past. Nvidia could be increasingly balking at that because Apple is now a GPU implementer (Apple might see some 'secret sauce' and put it into a competing GPU).


I don't think Apple is trying to completely exclude CUDA, but GPU partners can't make Metal a "we'll get to it in our copious spare time" priority either.
 
If one is wedded to CUDA for ML, why would the Mac Pro 7,1 be on your radar even if it did support nVidia? You can get less powerful, but significantly cheaper base hardware with Windows and Linux and you can also scale far beyond what a Mac Pro can do on those platforms if you have the cash.




Honestly, Apple should have shown the 7,1 off at the National Association of Broadcasters keynote, not WWDC. The crowd would have been weeping in the aisles at "only" having to pay $6000 (with stand) for a display that can actually hold its color for more than 30 seconds, and they would only want a fully-tricked-out Mac Pro, so the entry price and configuration would not even register with them. :D
 
This has always been my argument (in different threads): NVIDIA truly sees their GPUs as the center of the PC, as opposed to the CPU being the “brain” and the GPU being a part of the overall system, which is naturally how AMD and Intel see it along with most PC OEMs. This is the arrogance and bluster part of the “NVIDIA/Apple Divide” rumors that I tend to believe more than anything else.

Arrogance may be too strong an adjective, but pragmatically, what else does Nvidia have? They aren't a significant CPU player. They tried to jump into the ARM SoC space but largely failed to become a dominant player there. They tried to pivot to "huge mobile" (cars with relatively humongous batteries), but traction there too isn't really in the dominant phase.

The problem with Apple in the Mac context is twofold. One, Apple is a competing GPU implementor. (It's some of the same basic problem Nvidia has in the vast majority of the PC space, where Intel and AMD own x86 systems, and where Qualcomm, Apple, and the other smartphone SoC implementors who ran Nvidia out will open up Windows on ARM, and possibly macOS on ARM, in the mobile space over the intermediate timeframe.) If Apple goes with an Apple SoC on some fraction of the Mac lineup, then Nvidia is completely 'out' (unless eGPUs somehow come into play).

Second, the software moat around their GPUs is pragmatically much smaller on the macOS side. Metal on macOS being extremely coupled to Metal on iOS/iPadOS means it is probably more "powerfully" important than DirectX is on Windows. (And Nvidia doesn't pick huge fights with DirectX on Windows either. It is an even bigger losing fight on macOS.)

Even though Apple uses the GPU to accelerate the UI, I think NVIDIA proposed Apple rewriting the UI to use only NVIDIA proprietary tech that would lock them into NVIDIA and that might have been the final straw for Apple.

While I think Nvidia is peephole viewing the strategic impact ( limiting themselves to just Mac pro and maybe eGPU expansion boxes) , I extremely doubt they were drinking that much kool-aid. I don't think they were dictating how to write a GPU stack to Apple or acting like they are the primary owners of the Operating System. They may be proposing something that makes Metal 'second class' or that Apple hand over more GPU stack implementation to them on a "trust us" basis. How the shared work is allocated is more likely a root cause problem.


Sounds a bit over the top, but I can almost see NVIDIA touting how important it would be to Apple’s future, et al. Just my 2¢.

Metal is coupled to iOS/iPadOS. They would have to be crazy high on crack to think that. It just can't possibly be true. Even if Nvidia completely took over a huge chunk of all Mac revenue, it would still be a narrow fraction of Apple's revenues. There is no way Nvidia becomes the sole primary subcontractor that Apple has. I know folks in the Mac desktop subforums on this site try to make it out that these are the most strategically critical products that Apple has, but that is pure self-delusion. What Nvidia represents is enabling some "nice to have" revenue Apple could get, not strategic revenue.

Far more likely is that Nvidia knows they don't have huge leverage here. They aren't putting their best effort into being a good partner with Apple for the future. They are putting effort into milking the Mac 5,1 cash cow (and the smaller Hackintosh cash cow that goes along with it). Apple pricing the new Mac Pro at $6K has the side effect of probably extending the life of that cash cow for a couple more years, while some folks just sit and squat on the infrastructure they have and increasingly look to bumping the GPU as a rationalization to squat on it (and a frozen-in-time macOS) even longer. The number of 2019 Mac Pros sold is going to be smaller than the 2010 ones. (I suspect the eGPU market hasn't done much to suggest there will be much of an offset there.)

To some extent it won't matter if they burn bridges with Apple, because they don't have an "interesting enough" future there anyway. When Apple gets to having two external discrete GPU vendors (AMD and Intel) who are willing to jump out of their chairs to support Metal, what is the likelihood that Nvidia would win any future "bake-off" design contest for new Mac products? Similarly, when Apple puts its own CPU+GPU on the table for the design "bake-off", what chance is Nvidia going to have on that subset of products? The Mac Pro GPU card space is a small enough market that it isn't going to support three players.

Even Nvidia's growth market in ML faces some long-term issues. The "half" of ML that is inference is going to increasingly move out of the cloud and into systems that folks own anyway (which Nvidia does NOT dominate now, and where it will have even less traction in the future). For expensive ML, sure, they have a good moat. However, for more affordable ML their "moat" isn't the colossal dominator that some of the hand-wavers here make it out to be. ML primarily only has value if you get inferences out of it that are valuable. Learning without inference isn't a huge value proposition.
 