Or those connectors plug into the MPX cartridge module... Personally, I think it's likely that a sensor in the MPX slot will cut power to the 8-pin connectors when an MPX card is inserted.
All three photos are oriented in the same direction.
Trying to 'snake' a ribbon cable under the full-size MPX modules is unlikely to leave it flat. First, you have to traverse a distance of two standard slot widths to get out from under the full-size module. Second, after that the ribbon connection is highly likely to run vertically, tangent to the motherboard. Both of those pragmatically combine into a likely situation where the cable is bowed, not flat, and comes into contact with one or both of the logic board and the MPX shroud. The only open question here is whether that is a good environment, in a practical context, to put that cable into.
The half-width MPX module really isn't that much better. The cable needs to go vertical even more quickly if it is snaking up between the cards, or it needs to pass under and then up past the card in the second MPX bay slot.
Whether you should stick something into a socket matters more than whether you physically can. "It will work because it is physically possible to jam something in there" is not a well-grounded interpretation of what is visible in those pictures.
Hopefully Apple distributes some clear support documents (and maybe even writes a user manual... what a concept? *cough*) before they ship these systems. If Apple is solely relying on common sense, then they may have some problems. [Although I'm sure they won't mind applying a 'Darwin' tax on those who want to fry their systems on their own dime and need replacement parts. The motherboard and power supply are going to cost way more than the "I could build this with my trusty screwdriver and thermal paste" bills of materials tossed around on these forums.]
Conceptually, someone could build a low-heat-radiating (non-MPX-connector power) MPX module with ribbon cable guides to control the bowing. However, that wouldn't be in conflict with Apple's likely "only use one of these options" constraints. What Apple has built is extremely likely not in that category, though. Did the Engadget commentary account for that? No. Has any of the MPX module builders so far accounted for that yet? I'd be very surprised.
No need for loose wires, just a good 3D plug. The "tab" is just a cover; you open it by hand, not by sliding the plug in, and then you vertically insert the MPX cartridge module.
Let me contest your analysis, which is biased by media articles, not by reality:
1st: Apple doesn't control, nor can it hope to control, AI development. AI by now belongs to TensorFlow (released by Google, community controlled; Apple supports TensorFlow Lite).
2nd: TensorFlow is a framework for AI. With TF you can train a model and run it (inference); you can train a model on a CUDA cluster and then do inference with that model on any other platform that can run TF or TF Lite.
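The train-anywhere/infer-anywhere split claimed above can be sketched without TensorFlow at all. This is a toy pure-Python illustration (my own stand-in, not TF code): the "training" step stands in for an expensive cluster run, the model is serialized to a portable format, and the "inference" step uses only the serialized weights.

```python
import json

# --- "Training" step (stands in for a CUDA cluster running TensorFlow) ---
# Fit y = w*x + b to toy data with plain gradient descent.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

# Export the trained model; in TensorFlow this would be a SavedModel or TF Lite file.
model_blob = json.dumps({"w": w, "b": b})

# --- "Inference" step (any platform that can parse the model format) ---
model = json.loads(model_blob)
predict = lambda x: model["w"] * x + model["b"]
print(round(predict(4.0)))  # → 9
```

The point is that nothing in the inference half depends on the hardware the training half ran on; only the serialized weights cross the boundary.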
3rd: TensorFlow strictly does not require GPUs, but GPUs, TPUs, and FPGAs accelerate inference and training by two orders of magnitude. What is CUDA's advantage? Many things. A single Titan RTX GPU is more powerful than both Radeon Pro Vega II Duos in the cgMP, at just 280 W and on a 14 nm process. Beyond that, TensorFlow still does not support Metal; only TF Lite supports Metal (for inference only). NVIDIA positioned its AI solution by developing its GPUs around CUDA instead of repurposing existing hardware (as with Radeon); this enables a lot of programming flexibility with CUDA along with superior performance (while gaming doesn't get big benefits from it). And it doesn't end there: Metal's GPU offloading features seem like a rebranding of OpenCL and fall short of the feature array present in CUDA. It's not just inferior to CUDA; it can't replace CUDA efficiently beyond the few similar features. People like the DaVinci Resolve developers, among many others, are aware of this: it isn't a few lines of code to port; some things need to be fully rewritten just to work properly (going from CUDA to HIP).
Your assumption about an Apple crusade to control AI development is bogus; the best evidence of that is Swift for TensorFlow (Swift-TF), which only supports CUDA (read: Swift, SWIFT, I didn't say Python).
Apple, or Cook, is controlled by both shareholder meetings and reality, and the truth is that with CUDA you have better, more efficient AI and you can deploy the latest TensorFlow features.
Today, the only GPU-accelerated AI on macOS relies on:
NVIDIA (macOS < Mojave): TensorFlow, Swift-TF, TF Lite; or
Metal: TF Lite, Keras (through PlaidML, which is being discontinued by Intel).
That's all; the rest don't deserve to be named.
With this panorama, after 2 years of trying to impose Metal over CUDA and failing so miserably, what can Apple do? Even their money couldn't buy full support for Metal in TensorFlow (impossible, since it lacks CUDA's flexibility). Even with the money to develop a custom TPU (using ARM's IP), they have to choose between losing 2-3 years and still being behind CUDA, or simply allowing Mac users to use CUDA at least until someone develops a macOS-compatible TPU (like Google's Edge (Coral) TPU).
Did you know that even Siri relies on CUDA/Linux servers?
Sorry, with all due respect, you're wrong. You remind me of a guy arguing that Apple's wireless charging was delayed because Apple was going to implement long-range wireless charging tech; he had only read Apple-apologist blogs, ignoring what the laws of physics say about wireless power. Apple got to the wireless party late because their executives were distracted, not because they planned something superior. Same here with AI: 3 years ago someone foolishly decided not only to retreat from Vulkan (and even Vulkan is not in a better position than CUDA) but to close NVIDIA's doors entirely. This cost Apple the AI race; now AI developers think of a Linux workstation (even a Windows machine is better suited) for TensorFlow development.
I should be more succinct... Apple's machine learning future lies with the Neural Engine built into the A11 and A12 Bionic, and they have opened that up to developers via the Core ML platform, not NVIDIA or CUDA. Sure, older iOS devices can benefit from Core ML, but Apple's flagship iPhones (XS, XS Max and XR) and almost all iPad models use the A11 or A12 Bionic. Where does NVIDIA fit into this picture at all?
On-device ML (Core ML 2) is very limited for training and better suited for inference; with a well-trained model it can do much better things for inference, though it must rely on externally trained models. IMHO, on-device training would be almost useless given the low power, even with CUDA GPUs on board.

Apple's market is end-user, on-device machine learning, which is very different from NVIDIA's goals.
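The inference-only division of labor being described can be sketched in plain Python (a toy stand-in, not Core ML): the weights below are hypothetical values standing in for an externally trained model, and the on-device step is nothing but a cheap forward pass.

```python
import math

# Hypothetical weights delivered by an externally trained model
# (e.g. exported from a server-side training run and converted for on-device use).
WEIGHTS = [0.8, -0.4]
BIAS = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(features):
    """Forward pass only: no gradients, no optimizer, no training loop."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return sigmoid(z)

score = predict([1.0, 2.0])
print(0.0 < score < 1.0)  # sigmoid output is always in (0, 1)
```

A forward pass like this is a few multiply-adds per input, which is why low-power hardware handles inference fine while training (which repeats forward and backward passes over a large dataset) stays off-device.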
considering that Apple has no interest in utilizing CUDA
Certainly not Jony's style... But it seems to me: getting a PCI card into and out of a slot needs affordance for wiggling to seat/unseat it.
I wonder if we could plug in an NVIDIA card as a complement to one of the AMD cards and only use CUDA on that? No need for Metal drivers for graphics; just use the NVIDIA card as a CUDA add-on card. Maybe a feasible compromise?
No, because you still have to have a driver for the GPU, whether it's Apple's or NVIDIA's Web Driver, and Apple's GPU drivers only support a limited set of GPUs, while Apple has not approved NVIDIA's Web Drivers for use under Mojave and Catalina.
I know that. As I wrote, forget about the graphics driver (normally the GeForce drivers); I’m talking about pure CUDA drivers that use the NVIDIA card only as a CUDA compute card. It will require some hack, of course. Let’s see what happens.
All drivers need to be signed: GPU, USB, etc. It is not a special corner case that GPU drivers have to be signed. If drivers aren't being signed because they violate Apple's policies, then it is not particularly material what the driver is hooked to. The policy violation is the core issue.
Interfering with Metal, modifying/overwriting parts of the kernel that Metal has interfaces with, or not being a good kernel citizen would all probably land as policy violations for kernel drivers. If a FireWire driver did something along those lines, it probably wouldn't get signed either. Any code that is going to get merged into the kernel operating space is going to have to pass some authentication/diligence test.
Secondly, the above carries the presumption that there is 100% complete decoupling of the CUDA stack from the OpenGL/Vulkan/Metal GPU display stack. CUDA isn't 100% decoupled from textures and other GPU data structures. So spinning the notion that it is a completely different thing just because a monitor isn't actively hooked to the card is probably decoupled from reality. Pragmatically, many applications of CUDA are highly coupled to the display (that's why CUDA and the display workload work out of a shared data space).
If the CUDA "driver" could do all of its work outside the kernel... then it would be in the same ballpark as "just add another add-on to the /Library/Application Support/ directory and run." If the CUDA driver needs to inject itself into the kernel, then it has to play by the rules of the kernel's owner (which is basically Apple).
It might be easier for Apple and NVIDIA to work out a truce on a narrow subset of the GPU stack where most of this CUDA stuff comes off as a "non-GPU" device, e.g. more of a "hardware accelerator". The catch-22 is that Metal and CUDA each have characteristics of both GPU and computation devices. It is muddled in different ways in each (which makes getting to a workaround even messier).
Apple's long-term "device" set for kernel/system extensions should have a class that is something like "hardware accelerator" and doesn't have to be a GPU. Apple's Afterburner should be just the start of the ability to put in a card with a chip (FPGA, ASIC, or custom) that just crunches data. That kind of kernel nexus doesn't need to be the same as the GPU's. Apple's current kernel extension model was created 15+ years ago, and I doubt it is a good match.
There are some hacks on previous macOS versions that may tap-dance around this (turning off SIP, mutating code, skipping signature checks on kernel elements), but that isn't going to fly going forward. Apple's system files are off in their own volume tree in 10.15, and the T2 validates the basic firmware (hack-insertion free)...
Any huge attack vector that lets unsigned drivers get in through a backdoor hack will probably be closed down by Apple in some future security update. Apple will probably keep around a highly diminished security mode, but that won't be a 'normative' place to put commercial software.
P.S. NVIDIA's tactic with CUDA has been to use it to get the camel's nose into the tent so they can tilt the rest of the subsystems around the GPU. They haven't gone out of their way to make it highly modular from the rest of their graphics stack either.
Ask Aiden if he knows somebody waiting to buy an ncgMP with 4 Titan or Quadro RTX cards on board. About DNG, you're right about that speculation, but there are a few facts: CUDA's repositories have gotten more macOS-related patches in the past 6 months than in the last 2 years.
Disagree; there are N ways to fit 2 Titan RTX cards within the MPX real estate.
With all due respect, Apple doesn't belong to Federighi, Cook, etc., nor does NVIDIA belong to Huang; both belong to their stockholders and their markets, period.
If you believe those stories about a multi-billion-dollar corporation being managed like a garage grocery, you also believe in Santa.
Apple's business isn't its users' business; Apple should only sell me the tools I need for my business. E.g., there are people connecting 3D printers to their Mac minis; Apple has no 3D printer business, so should Apple block them from using 3D printers?
Apple is going to decide what they want to sell to their customers in line with their business goals. There is no SVP position for 3D printers, but there is an SVP of AI and Machine Learning, one of 14 total SVP positions at Apple, which elevates it to a CORE TECHNOLOGY.
Apple won’t sell something that would undercut its core businesses, plain and simple... they are completely willing to let you and others move to a different platform, and they consider it an acceptable loss, because protecting a core technology and their hardware business is too important to risk versus a small percentage of users who want Apple to offer certain technology that does not align with its own interests.
You’re arguing it from a technology perspective and your own pure self-interest, while ignoring Apple’s business motives for excluding NVIDIA.
Personally, I've worked closely with management at one S&P 500 corporation and a foreign oil corporation. Even the most corrupt one has ways to tightly control its managers, either directly or after the fact. Cook isn't someone I feel sorry for; I think he is an opportunist backed by (and backing) a politically motivated lobby, but he is very careful not to cross certain lines, one of which is injecting his personal bias. He has to technically justify his decisions to the board; it's not just "they suck, fire them." He has tons of enemies watching for such errors (the SEC has the power to remove and jail him without asking stockholders, for example) in order to put him on trial and remove him from Apple. You need to read about the laws governing publicly owned corporations; even people like Elon Musk have had to step down over these kinds of situations.

Name the last time the stockholders were able to change the direction of Apple. Yeah, I can't think of one either.
I take it you haven't worked at any large organization at a high level. The higher up you get, the more it turns into high school.
The issue with Apple/Nvidia is all of those Nvidia GeForce 8600M GT chips that died - Apple took the reputational hit on them, not Nvidia (who really should have).
LOL, you too are ignoring Apple's motivation to reinstate it.
Why and when did Apple hire Giannandrea? Less than a year ago. Why? Siri's AI was a huge failure. What changed? He borrowed TensorFlow Lite (he failed to get full TF because of Metal's poor feature set). Everything suggests he found the solution: quietly reinstate NVIDIA support and run full TensorFlow, at least until AMD GPUs and Metal are up to running it by themselves. A good way to do that is to enable standard PCIe GPUs and NVIDIA drivers; NVIDIA won't miss this business, and Apple doesn't need to bless or sell NVIDIA.

Only time will tell which one of us is correct...
Yes, you got the better deal. No Short Cuts!
I answered an ad below a "Missing Dog" ad stapled to a tree. It read:
"Disgruntled Apple employee has access to several base model 2019 MP's."
"Willing to sell @ $4000" "Only 1 per customer" "Text me @ 555-****"
So I sent a text and received a link to a website.
On the site I was told to bring $4000 in twenties in a paper bag to Pier #8, Bldg 32 @ 2:56am!
All this seemed so cryptic, because those were the specs of the computer: 8 cores, 32 GB RAM and 256 GB storage! This meant it was meant to be!
I got there with the paper bag and was shown the computer. I handed the guy the bag and turned around to take the panel off the MP. That's when I heard the door slam and a car burning rubber outside!
When I got the cover off this is what I saw!
The question is what am I going to do with a used tcMP?
A better question is what is he going to do with a bag full of newspaper clippings!
View attachment 841865
Apple's current kernel extension model was created 15+ years ago
Certainly not Jony's style.... But it seems to me.
Uh, shouldn't they maybe look at updating it, after 15 years?
Apple still includes NVIDIA driver support in macOS Catalina for older iMac models that came from the factory with NVIDIA GPUs.
With a little effort, I bet they could figure out a way to support a 2080 Ti video card in the new Mac Pro 2019.
They need to assemble a few select engineer-types from both sides of the Apple/NVidia fence and "smoke a peace pipe", so to speak.
This has always been my argument (in different threads): NVIDIA truly sees their GPUs as the center of the PC, as opposed to the CPU being the "brain" and the GPU being a part of the overall system, which is naturally how AMD and Intel see it, along with most PC OEMs. This is the part of the arrogance-and-bluster "NVIDIA/Apple divide" rumors I tend to believe more than anything else.
Even though Apple uses the GPU to accelerate the UI, I think NVIDIA proposed that Apple rewrite the UI to use only NVIDIA proprietary tech, which would lock them into NVIDIA, and that might have been the final straw for Apple.
Sounds a bit over the top, but I can almost see NVIDIA touting how important it would be to Apple’s future, et al. Just my 2¢.