
deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
the current Mac Pro Design makes no sense for Apple Silicon imo

If the point of Apple Silicon were to pull the Mac Pro toward being more like a laptop, then yes. If Apple were going to do to the Mac lineup what they do with the iPhone and iPad Pro (use one SoC die for the whole lineup), then yes.


But that isn't necessarily what the Mac Pro (and iMac Pro and 27" iMac) models really need. (Probably can throw in the 21-24" model too. Maybe the Mac mini also.) Supposedly, Apple does its own SoCs for leading-edge products because it can adapt the SoC to what that specific product "needs". But if you look at the big picture of Apple's whole product lineup, there is far more "hand me down" of old iPhone/iPad Pro SoCs to other products than there is tailoring something maximally suited to each product.


The Mac Pro use cases that drive the requirement for numerous PCI-e slots aren't going to disappear with Apple Silicon (AS). The iMac Pro and 27" iMac will probably still need a discrete GPU with x16 PCI-e v4 bandwidth capabilities.
MPX modules probably aren't going away on the Mac Pro. Neither are ECC RAM and triple- (or quad-) digit GB capacities.


There are some synergies in the Mac Pro for the bundle of function units that the AS SoC carries. First, at this point Thunderbolt 4 requires that there be video out on every port. Four ports on the current design means a baseline need for four video streams. An iGPU present in the main SoC means that even if a user pulls every MPX GPU module from the system, the TBv4 ports would still meet certification.

Second, for the video vertical: having built-in support for the widespread consumption video codecs (e.g., H.265, H.264, over time AV1, etc.) means it is just there regardless. For those workloads it is like having a built-in Afterburner card for those codecs without 'using up' a slot (especially if the fixed-function video IP and software stack are essentially 'paid for' by the volume usage on the other AS/A-series variants).
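
A minimal Swift sketch of checking for those fixed-function blocks from software, using the existing VideoToolbox API (AV1 is omitted because there is no public codec constant for it yet; no claim about what the AS Mac Pro variant would actually report):

[CODE]
import CoreMedia
import VideoToolbox

// Ask VideoToolbox whether hardware decode exists for the common
// "consumption" codecs mentioned above.
let codecs: [(name: String, type: CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("H.265/HEVC", kCMVideoCodecType_HEVC),
]

for codec in codecs {
    let hw = VTIsHardwareDecodeSupported(codec.type)
    print("\(codec.name): hardware decode \(hw ? "available" : "not available")")
}
[/CODE]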

Apple doesn't have to put all of the GPU "horsepower" into the Mac Pro variant of AS. But going "far enough" for some minimal baseline GPU performance actually allows more configuration options, because those extra slots could be used for something else.

Third, the Mac Pro is already at the maximum standard wall-socket power draw. If Apple can lower the CPU's max-load power budget, then they could handle bigger 'power hog' GPUs in a future system. If Apple can drop 40-80W there, it can be allocated somewhere else, since pragmatically this is a 'zero sum' context at this point. (They may also be able to pull the SSD modules closer, since the CPU would no longer be raining max-temp air on them.)
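
Back-of-the-envelope only, and apart from the ~1.4 kW supply rating in the 2019 Mac Pro every number below is a made-up placeholder, but it shows the 'zero sum' trade:

[CODE]
// Illustrative zero-sum power budget; only the 1.4 kW supply rating is a
// real spec, the other figures are placeholders for the sake of argument.
let chassisBudgetW    = 1400.0  // rated power supply in the 2019 Mac Pro
let intelCPUPackageW  = 205.0   // ballpark TDP of a high-core-count Xeon W
let assumedASPackageW = 130.0   // hypothetical Apple Silicon package draw
let everythingElseW   = 400.0   // RAM, SSDs, fans, I/O, board (guess)

let gpuHeadroomToday = chassisBudgetW - intelCPUPackageW - everythingElseW
let gpuHeadroomAS    = chassisBudgetW - assumedASPackageW - everythingElseW

print("GPU/slot headroom today:      \(gpuHeadroomToday) W")
print("GPU/slot headroom with AS:    \(gpuHeadroomAS) W")
print("Watts freed for bigger GPUs:  \(gpuHeadroomAS - gpuHeadroomToday) W")
[/CODE]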


The Mac Pro AS SoC doesn't have to chase the "max core count" holy war into the >32-64 core zone. It just has to be a better Mac Pro than the current one (and incrementally better again on the next iteration). If you're off in the zone where the Mac Pro has to have 64, 72, or 128 cores to be "any good" as a workstation, then the current Mac Pro design is not on that course at all. Shifting the "embarrassingly parallel" workloads to the GPU is primarily where the future direction is pointing. Dropping in 'bigger/better' and multiple GPU modules/cards is primarily how that is addressed. Apple's general AS design lines work OK with that. It would need a different I/O subsystem, but the general application core subsystem plus lowest-common-denominator GPU, Secure Enclave, fixed-function video, and AI/ML tensor core combination isn't disconnected from the current Mac Pro at all.

It would need a much different I/O subsystem and much beefier cache / "system memory". But that is somewhat of an add-on (along with, probably, an enhanced internal communication bus or buses).

P.S. Would that future Mac Pro 'detach' from more of the general-purpose workstation market? Yes. But it seems likely that all Macs with AS will 'detach' from the general-purpose markets they inhabit. It is not going to be limited to the Mac Pro. (The average selling-price gap is likely to increase, along with after-sale configuration options and even raw booting of other OS options.)
 
Last edited:

Sarajiel

macrumors newbie
Aug 12, 2020
18
10
Now that dual-socket logic board, that was interesting, as was the water cooling. A logic board like that, extended for PCIe slots, would need the space of the Cheesegrater V2.0 chassis. And it could still use the same cooling: high static pressure fans up front pushing through massive 'wide wale' heatsinks...

The problem with that kind of CPU design is that it is highly specialized for certain massively parallel workloads, compared to the general-purpose workloads most "personal" devices are running. This dilemma has been described by Amdahl's Law for quite some time.
Outside of gaming and video editing there are not that many mathematical problems that benefit much from highly optimized hardware and software on personal devices. If one wanted to be cynical, one could even go so far as to blame the "fast" innovation cycles for CPUs for the lack of optimized software in the personal general-purpose computing space.
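
As a quick refresher, Amdahl's Law just says the serial fraction of a workload caps the achievable speedup no matter how many cores you add; a tiny Swift sketch:

[CODE]
import Foundation

// Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n)
// where p is the parallelizable fraction and n the number of cores.
func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
    1.0 / ((1.0 - p) + p / n)
}

// Even a workload that is 90% parallel can never exceed a 10x speedup.
for cores in [4.0, 16.0, 64.0, 1024.0] {
    let s = amdahlSpeedup(parallelFraction: 0.90, cores: cores)
    print("\(Int(cores)) cores -> \(String(format: "%.2f", s))x")
}
[/CODE]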

Btw, for those with some spare cash, a server with 2x A64FX CPUs is available from Fujitsu starting at ~$40k. Apple could probably design something similar with various core counts, but there is basically no way they could deliver such a system at a consumer or prosumer price level.

Apple is looking for 'silent' partners. Not those looking to make their own 'noise'.

Apple is obviously going the route of full vertical integration and most likely full hardware and software lockdown. If you look at the way Apple treats some of their suppliers it's also obvious that they aren't looking for any real partners. They need dependent subcontractors that have to accept Apple's demands both legally and technically.
While I don't want to sound all doom and gloom, it's highly likely that Tim Cook believes he can replicate the closed iPhone/iDevice ecosystem in today's personal computing space of the (mobile) desktop. How that will pan out with a non-consumer device like the Mac Pro will be very interesting.

Would that future Mac Pro 'detach' from more of the general-purpose workstation market? Yes. But it seems likely that all Macs with AS will 'detach' from the general-purpose markets they inhabit. It is not going to be limited to the Mac Pro. (The average selling-price gap is likely to increase, along with after-sale configuration options and even raw booting of other OS options.)

Looking back at the likes of Silicon Graphics, Digital Equipment or Sun Microsystems, not being part of the mainstream general-purpose computing market could end badly for Apple. Not that they will go bankrupt like those companies, but who knows what happens down the road in a decade or two? Maybe the world will completely move from x86-64 to AArch64 (or something more advanced) in the near future, but so far Tim Cook's Apple struggles a bit to articulate a corporate vision that is more than meaningless or contradictory marketing statements. If there is more to him than just penny-pinching, now would be the time to show us his vision, since the old guard from the Jobs era is mostly gone.
 
  • Like
Reactions: Brazzan

t0mat0

macrumors 603
Aug 29, 2006
5,473
284
Home
Given AMD is the nearest thing to Apple's graphics partner, and presuming the Mac Pro gets GPU updates, if AMD is going with:
- data-centre-style GPUs with CDNA, 2nd Gen Infinity Architecture, 8-way GPU connectivity - and 7nm GPUs coming this year
- gaming and client-side RDNA 2 aka NAVI 2X GPUs later this year, RDNA 3 aka NAVI 3X by 2022.

Does Apple offer both? If we're assuming the current Mac Pro is going to get Apple Silicon towards the end of the two-year transition (reveal at WWDC 2022, on sale for December 2022?), that's long enough to be able to give GPU updates as MPX modules.

How does Apple handle the PCIe 4.0 connect to GPUs with 2nd Gen Infinity Architecture, and then 3rd Gen Infinity Architecture GPUs that'd need a new connect?
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
Given AMD is the nearest thing to Apple's graphics partner, and presuming the Mac Pro gets GPU updates, if AMD is going with:
- data-centre-style GPUs with CDNA, 2nd Gen Infinity Architecture, 8-way GPU connectivity - and 7nm GPUs coming this year
- gaming and client-side RDNA 2 aka NAVI 2X GPUs later this year, RDNA 3 aka NAVI 3X by 2022.

Does Apple offer both? If we're assuming the current Mac Pro is going to get Apple Silicon towards the end of the two-year transition (reveal at WWDC 2022, on sale for December 2022?), that's long enough to be able to give GPU updates as MPX modules.

How does Apple handle the PCIe 4.0 connect to GPUs with 2nd Gen Infinity Architecture, and then 3rd Gen Infinity Architecture GPUs that'd need a new connect?

Apple will do pretty much what they do now.

The AS Mac Pros, and possibly the 30" iMac get MPX dGPUs. They use the latest available when they make the new AS Macs available for sale. There will not be gaming and client side dGPUs.

Apple will implement the appropriate PCIe version for the dGPU they decide to use. If the dGPU requires a PCIe 3 connection, they will build that into the MPX board. If the dGPU requires a PCIe 4 connection, they do the same.

Infinity Fabric is an on-chip interconnect, meant to connect things within a package. It is not visible outside a package, so it isn't a consideration. Whatever generation of dGPU is used, it will be a version of PCIe that goes out to the rest of the motherboard. Only AMD should have any concern for the version of Infinity Fabric that is used; it is invisible to anybody else.
 

t0mat0

macrumors 603
Aug 29, 2006
5,473
284
Home
Big Sur is showing Navi 31 and others, so at least we know AMD could be around for another round of GPU updates.
The 2nd Gen Infinity Architecture seems to be between GPUs, but AMD is showing 3rd gen as also covering AMD CPU to GPU.
If Apple keeps using AMD for GPUs, then wouldn't it concern them too? (After all, won't Apple be meeting the same issue - how to connect their Apple Silicon CPU to a potential Apple Silicon dGPU if they go that route?) [attached AMD Infinity Architecture slide]
 

macjustin

macrumors newbie
Apr 28, 2008
24
19
Remember buying Mac towers for like $1500? I'd buy a beefed up Mini "tower" for $2k. I am just not a fan of AIOs and I am ready to move on to modular.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
....
The 2nd Gen Infinity Architecture seems to be between GPUs, but AMD is showing 3rd gen as also covering AMD CPU to GPU.
If Apple keeps using AMD for GPUs, then wouldn't it concern them too? (After all, won't Apple be meeting the same issue - how to connect their Apple Silicon CPU to a potential Apple Silicon dGPU if they go that route?) [attached AMD Infinity Architecture slide]

AMD is more than several years away from taking over the dominant share of the x86 market in terms of volume of CPUs sold. It is also more than several years away from taking the bulk of the "big iron" server market. AMD has a good chance of carving out a hefty chunk of the "supercomputer" market; that slide is far more aimed at that. (i.e., fewer and fewer folks are going to buy 8-way GPU setups. Those few may buy lots of units, but that is increasingly going to be further away from "mere mortal" users. It is the same basic factor that is driving down the number of >2-socket servers. It is going to hit GPUs too.)


There are specialized subsets of the APU/laptop (at the low end) and HPC/ML compute markets where AMD could get decent traction using a CPU-GPU Infinity Fabric link, but that isn't going to be the entire scope of AMD's future GPU market. There are still going to be PCI-e v4 and v5 add-in cards for non-AMD CPU packages, especially in the generic desktop and lower/mid-range workstation market. 3rd gen Infinity Architecture may help bring GPU chiplets to APU products, but probably not for all of them (at the lower-power end it is simpler to just do one die).

Pro Vega II cards have Infinity Fabric Link connections on them. The lower-end MPX modules (580X, W5500X and W5700X) do not. There is nothing in AMD's roadmap saying that "Infinity" links will be on every single GPU instance in the future.

As for Apple needing a CPU-to-GPU cache-coherent interconnect, something like CCIX or CXL (layered on top of PCI-e v5 or better) may come along if there is enough consensus built up among the "big card" GPU vendors.

There is no reason why Apple has to build "every GPU for everybody" in the Mac Pro space. That isn't going to happen in the rest of the workstation market either.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
....
Infinity Fabric is an on-chip interconnect, meant to connect things within a package. It is not visible outside a package, so it isn't a consideration.

Not true. In the GPU space, all the "generation 1" Infinity Fabric connections are between packages. The Gen 3 CPU-to-GPU connections are also likely mostly inter-package connections (chiplet cluster to chiplet cluster at least as much as intra-chiplet comms). Infinity Fabric is used for both modes (intra- and inter-package traffic); it isn't exclusive to just one.


Whatever generation of dGPU is used, it will be a version of PCIe that goes out to the rest of the motherboard. Only AMD should have any concern for the version of Infinity Fabric that is used; it is invisible to anybody else.

Again, rather ungrounded assertions. Links that don't need PCI-e v4 or v5 aren't going to get them. USB 3 or 4 needs PCI-e v5 like another hole in the head. A Bluetooth/Wi-Fi chip doesn't need PCI-e v4 at all. If Apple continues with 1GbE sockets on desktops, same thing. The bleeding-edge high-bandwidth interconnects will move up, but basic I/O is not likely to move up.

PCI-e v4 (and up) also has trace-length issues. You can get around them with re-drivers, but that is just more stuff on the board.
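
To put some rough numbers on that (nominal per-lane throughput after encoding overhead, and nominal spec maxima for the devices; not a claim about how Apple actually wires anything up):

[CODE]
// Rough, nominal figures only: usable GB/s per PCIe lane after encoding
// overhead, versus the peak needs of the "basic I/O" mentioned above.
let perLaneGBs = ["PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938]

let deviceNeedsGBs: [(name: String, need: Double)] = [
    ("1GbE NIC",           0.125),  // 1 Gb/s
    ("Wi-Fi/BT combo",     0.3),    // generous ceiling
    ("USB 3.2 Gen 2",      1.25),   // 10 Gb/s
    ("USB4",               5.0),    // 40 Gb/s
]

for (name, need) in deviceNeedsGBs {
    let g3 = Int((need / perLaneGBs["PCIe 3.0"]!).rounded(.up))
    let g4 = Int((need / perLaneGBs["PCIe 4.0"]!).rounded(.up))
    let g5 = Int((need / perLaneGBs["PCIe 5.0"]!).rounded(.up))
    print("\(name): x\(g3) lanes @ 3.0, x\(g4) @ 4.0, x\(g5) @ 5.0")
}
[/CODE]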
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
I mean I’m not going to give you a name and title, but I’ll leave it at I have a couple of friends at NVIDIA. I also have some friends at Apple, but I don’t even know what the Apple friends do. I just won’t ask.

Late to the party, but this is not true.

AMD and Intel both have/had engineers detached to Apple. They worked in Cupertino, but they were employed by AMD/Intel.

There were issues with Nvidia because they stationed few to no engineers on the Apple campus. And they had very long response times to issues.

Apple does not write the Mac drivers. There have even been job listings from AMD and Intel before (and notably there have been no GPU driver job listings from Apple.)

Nvidia not dedicating engineers in Cupertino to the drivers was a break from the status quo. It's why they got dumped. I also know for a fact that, even though they weren't in Cupertino, Nvidia did have engineers employed to work on the Mac drivers.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Late to the party, but this is not true.

AMD and Intel both have/had engineers detached to Apple. They worked in Cupertino, but they were employed by AMD/Intel.

There were issues with Nvidia because they stationed few to no engineers on the Apple campus. And they had very long response times to issues.

Apple does not write the Mac drivers. There have even been job listings from AMD and Intel before (and notably there have been no GPU driver job listings from Apple.)

I’m just going off of what I’ve been told by several NVidia employees. It could very well be that the drivers are being done at Apple by Intel and AMD employees. At that point, it kinda seems like semantics.

It’s quite possible that there are nuances that make both true. I just know what I’ve been told *shrug*
 
  • Like
Reactions: 2Stepfan

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
Late to the party, but this is not true.

AMD and Intel both have/had engineers detached to Apple. They worked in Cupertino, but they were employed by AMD/Intel.

There were issues with Nvidia because they stationed few to no engineers on the Apple campus. And they had very long response times to issues.

Apple does not write the Mac drivers. There have even been job listings from AMD and Intel before (and notably there have been no GPU driver job listings from Apple.)

Nvidia not dedicating engineers in Cupertino to the drivers was a break from the status quo. It's why they got dumped. I also know for a fact that, even though they weren't in Cupertino, Nvidia did have engineers employed to work on the Mac drivers.
This may have been an aspect of it, but from what I've read it seems there's more to it than just that:


 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
This may have been an aspect of it, but from what I've read it seems there's more to it than just that:



I definitely hear you on all of this. I definitely hear goMac as well. I’m definitely not saying that goMac is wrong by any means either.

I think part of it is semantics; part of it could very well be legal.

What I’ve heard echoes what Rene Ritchie has said in regards to NVidia vs. AMD: Apple wants to control the stack, and NVidia wasn’t willing to go far enough. That’s what Rene Ritchie has said, I believe; I can go look for a quote. What I’ve heard twice, from two different people at NVidia, is that Apple wanted to have more control of the stack than NVidia was willing to give. Honestly, I can read what goMac said, and I can see not having the resources on campus being an issue. The impression I definitely got from several conversations was that Apple wanted to write their own firmware.

I think a larger conclusion that we can all come to, no matter what anyone says, just based on what’s happening, is that Apple wants control of their own destiny. At least to me, it doesn’t matter if they are Intel, AMD, or NVidia badged employees working at Apple. I’ve got employees at the company I work for that are badged by my company but are 100% funded by someone we do a lot of business with. I get the semantics of how some of this can be confusing, and it really is kind of a matter of how things are “portrayed”.

NVidia is, by all rights, a force to be reckoned with. If we take gaming completely out of the picture and just focus on the data center, NVidia probably didn’t want to focus too many resources on Apple when AWS and Microsoft are willing to charge people a boatload by the hour for their expertise. That makes complete sense, especially since sometimes we have to explain to people why the NVidia option is so much more expensive. You get what you pay for. AMD GPUs aren’t a thing at all in the cloud space, the best that I can tell. They could really disrupt things if they were.

No matter what, at the end of the day, Apple decided to just do everything on their own. Apple does what it does, and it gets frustrated when it wants to do something it can’t do. Without going into too many details, I know Apple also farms things out to companies with brilliant developers. Apple having Intel and AMD badged employees on site doesn’t surprise me in the least. I have more than a few friends at Apple corporate. Some may have accidentally disclosed things broadly, but they are friends. Not even work friends, just friends. I make it a point to not ask them what they do over there.

NVidia is a brilliant company. I hope they do buy ARM, with the stipulation that they have to keep allowing other companies to license ARM designs. NVidia has been wanting to get into the processor world for as far back as I can remember, but just kept hitting roadblocks. I’d provide links, but this was probably back in the magazine days.

One thing to note, that I’ve heard other people say: the topic of Apple and NVidia wanting to push CUDA instead of Metal has never come up. It wouldn’t surprise me if AMD was willing to let Apple do whatever it wants for Metal, whether they be AMD-badged or Apple-badged, on the Apple campus.
 
Last edited:

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
This may have been an aspect of it, but from what I've read it seems there's more to it than just that:



There definitely is more. A lot of things about the Nvidia relationship made Apple unhappy. The lack of engineers assigned to Cupertino and the long response times definitely contributed. I think the drivers getting blocked angle is more about Apple being tired of dealing with Nvidia bugs, so they just locked them out completely.

That's actually the other proof Nvidia wrote the drivers. Apple locked them out of the driver specifications for Mojave. Why would Nvidia have the driver specifications to begin with for any version of macOS if they weren't writing the drivers?

What I’ve heard echoes what Rene Ritchie has said in regards to NVidia vs. AMD: Apple wants to control the stack, and NVidia wasn’t willing to go far enough. That’s what Rene Ritchie has said, I believe; I can go look for a quote. What I’ve heard twice, from two different people at NVidia, is that Apple wanted to have more control of the stack than NVidia was willing to give. Honestly, I can read what goMac said, and I can see not having the resources on campus being an issue. The impression I definitely got from several conversations was that Apple wanted to write their own firmware.

Apple wants to produce their own special versions of GPUs. There are a lot of indications that AMD is giving Apple access to their designs to produce custom versions. Nvidia doesn't want to give Apple access. Nvidia's Tegra and Apple's A series architectures compete. Nvidia doesn't want Apple stealing design secrets and putting them into the A series. Meanwhile that's not something AMD worries about because AMD isn't really into ARM chips. And conversely Apple doesn't want something they share with Nvidia to be used against them in Tegra.

It's not firmware or drivers or that sort of thing. It's access to GPU designs and collaboration on customized versions.

Nvidia and Apple went off the rails when Tegra started to be a big thing. It was far from the only thing that soured the relationship. But it seemed like the final nail in the coffin.

One thing to note, that I’ve heard other people say: the topic of Apple and NVidia wanting to push CUDA instead of Metal has never come up. It wouldn’t surprise me if AMD was willing to let Apple do whatever it wants for Metal, whether they be AMD-badged or Apple-badged, on the Apple campus.

Nvidia likes to twist things around in ways that... uhhhh... make them seem like the victim. To a lot of people (like Apple), CUDA seems like Nvidia wanting to have full control over the stack. Whether Metal or CUDA is better is its own debate, but Nvidia likes to complain about others doing the exact same thing they are.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
There are a lot of indications that AMD is giving Apple access to their designs to produce custom versions.

Can you elaborate? Apple GPUs are based on PowerVR designs, not AMD, and I haven’t seen any evidence that newer Apple GPUs are borrowing stuff from AMD. I am also not sure how much relevant technology AMD can offer here (maybe the infinity fabric and cache designs for higher-end Macs).

By the way, I am not sure one can claim that A Series and Tegra compete with each other. Apple does not sell their chips to anyone.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Can you elaborate? Apple GPUs are based on PowerVR designs, not AMD,

Mac context not iPhone/iPad.

The W5700X doesn't exist as a mainstream AMD product (different, larger VRAM configuration). Same with the Pro Vega 16 (& 20) in the MBP 15" 2019. Not completely radical custom GPUs, but different. Much of the "Pro Radeon/Vega/etc." line that Apple uses isn't used by any other vendor.


[ Although Apple may have signed a "let's not sue each other on GPU tech" agreement with AMD also .. Pay "extra' for some custom stuff and let's not sue. ]


By the way, I am not sure one can claim that A Series and Tegra compete with each other. Apple does not sell their chips to anyone.

The products they are embedded in compete (or, more so, Nvidia wished they had gotten traction in phones).

Nvidia was certainly suing Samsung as though they competed at one point.

https://blogs.nvidia.com/blog/2014/09/04/nvidia-launches-patent-suits/

(If that had panned out better (and they hadn't found more lucrative revenue streams to 'mine'), then they likely would have trained the patent-lawyer cannons on Apple next.)


P.S.

...I am also not sure how much relevant technology AMD can offer here (maybe the infinity fabric and cache designs for higher-end Macs). ...

Every major GPU implementor probably steps on the other players' patents. That is one reason why companies develop large (even bloated) patent portfolios. It becomes somewhat of a "mutually assured destruction" situation to start a lawsuit, because you will be counter-sued. Both sides come to the table with a giant stack of stuff and the spectre that something in the stack will stick.

Apple isn't doing everything by itself entirely from scratch; it isn't reinventing every wheel.



Much of the commentary at the time was about Apple playing "catch up" on real-time ray tracing here. It may have actually been about GPU virtualization tech as well (since Apple has revealed that AS Macs won't boot anything other than macOS).
Either way, it is still an indication that Apple creating a broadly useful GPU without stepping on any tech from the other implementors is unlikely.
 
Last edited:

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
By the way, I am not sure one can claim that A Series and Tegra compete with each other. Apple does not sell their chips to anyone.

Tegra is used in phones, tablets, and laptops, all of which compete with Apple's devices directly.

So far this has only extended to the iPad and iPhone, but with Apple moving Macs to Apple silicon, they will be competing with Tegra in the PC space as well.

GeForce GPU cores are also a part of Tegra, and a direct competitor to Apple's GPUs. That gives Apple a lot of reason to freeze out GeForce GPUs.

Nvidia is a very active competitor and not their friend.

[ Although Apple may have signed a "let's not sue each other on GPU tech" agreement with AMD also .. Pay "extra' for some custom stuff and let's not sue. ]

^ this ^

An agreement between Apple and Nvidia would likely have required a patent indemnification agreement, which, when you're competing against Nvidia with Tegra, could be a very bad idea. Apple needs to keep its options open in case it needs to sue Nvidia over Tegra at some point, if it feels Nvidia is lifting too much of its work.

Again, Tegra is not the only beef between Apple and Nvidia. But it's the most concrete issue that keeps them from working together.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
There definitely is more. A lot of things about the Nvidia relationship made Apple unhappy. The lack of engineers assigned to Cupertino and the long response times definitely contributed. I think the drivers getting blocked angle is more about Apple being tired of dealing with Nvidia bugs, so they just locked them out completely.

That's actually the other proof Nvidia wrote the drivers. Apple locked them out of the driver specifications for Mojave. Why would Nvidia have the driver specifications to begin with for any version of macOS if they weren't writing the drivers?

Apple wants to produce their own special versions of GPUs. There are a lot of indications that AMD is giving Apple access to their designs to produce custom versions. Nvidia doesn't want to give Apple access. Nvidia's Tegra and Apple's A series architectures compete. Nvidia doesn't want Apple stealing design secrets and putting them into the A series. Meanwhile that's not something AMD worries about because AMD isn't really into ARM chips. And conversely Apple doesn't want something they share with Nvidia to be used against them in Tegra.

It's not firmware or drivers or that sort of thing. It's access to GPU designs and collaboration on customized versions.

I thought both hardware and software were issues when it came to Apple ending its partnership with NVIDIA.

We can all (I assume) agree that Apple wants as much control as possible.

When it comes to the hardware component, Apple prefers AMD because they are much more willing than NVIDIA to work with Apple to produce custom solutions. Indeed, IIUC, AMD has a department devoted to creating custom solutions for customers. This is consistent with what you wrote.

But the same applies to the software stack, including firmware/drivers. Apple wants to be able to control that as well, which means Apple would want access to NVIDIA's source code, so it could modify and customize it itself, making it its own as much as possible. As I understand it, NVIDIA refused.

You are using the fact that NVIDIA, not Apple, wrote the drivers to support your contention that the issue wasn't about firmware or drivers. But it seems it's telling us the opposite: If NVIDIA is only supplying the finished binaries, it means it's not giving Apple the access it wants to their drivers/firmware.
 
Last edited:

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
I thought both hardware and software were issues when it came to Apple ending its partnership with NVIDIA.

We can all (I assume) agree that Apple wants as much control as possible.

When it comes to the hardware component, Apple prefers AMD because they are much more willing than NVIDIA to work with Apple to produce custom solutions. Indeed, IIUC, AMD has a department devoted to creating custom solutions for customers. This is consistent with what you wrote.

But the same applies to the software stack, including firmware/drivers. Apple wants to be able to control that as well, which means Apple would want access to NVIDIA's source code, so it could modify and customize it itself, making it its own as much as possible. As I understand it, NVIDIA refused.

You are using the fact that NVIDIA, not Apple, wrote the drivers to support your contention that the issue wasn't about firmware or drivers. But it seems it's telling us the opposite: If NVIDIA is only supplying the finished binaries, it means it's not giving Apple the access it wants to their drivers/firmware.

I feel like you're trying to aim at something that's not the case, so I'll simplify:

Apple does not want to write drivers for Nvidia hardware. They can solve that by having Nvidia write quality drivers, or by eliminating Nvidia from the lineup. They chose the second option, because there was wide agreement Nvidia was failing at the first.

They also don't write AMD's GPU drivers. And they don't write Intel's. They can make similar choices there as well.

Apple GPUs are one way to solve the problem. But not in the way you're implying. Apple does not want to write drivers for someone else's very complex and badly documented hardware.

It wouldn't surprise me if there were people on the Nvidia end making excuses because they botched things with Apple and they're trying to save face. Apple never wanted to take over driver development. They most certainly were unhappy with the quality of the Nvidia drivers and the speed at which they were developed, but I've never heard "Apple wanted to bring the drivers in house" offered as the explanation. Nvidia was doing too much else wrong; it wasn't just a software problem.

I've never heard anything about firmware. Out of everything I've heard, firmware was never mentioned as good or bad.
 
Last edited:

Macbookprodude

Suspended
Jan 1, 2018
3,306
898
This may have been an aspect of it, but from what I've read it seems there's more to it than just that:



MacBook Pro 2012 with Nvidia and Mojave + Catalina works great. No issues. Fake news.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Tom's Hardware has a rumor about an in-house Apple dGPU project. If true, this is probably the solution for a Mac Pro with Apple Silicon.

They are talking about a "GPU", not about a "dGPU". Apple's rhetoric so far indicates that their GPUs will use unified memory. How exactly this will be achieved technically is not clear. But then again, it all depends on how you use the terminology and what meaning you attach to it. Personally, I think that terms like "dGPU/iGPU" are overused and not informative. We should be talking about architectural details rather than trying to hide them behind vague terms.
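
As a small illustration of the terminology point: on today's machines, Metal just exposes a per-device flag rather than labelling anything an iGPU or a dGPU. A minimal sketch using the existing API (no claim about what an Apple Silicon Mac Pro would report):

[CODE]
import Metal

// List every GPU macOS can see and whether its memory is unified with
// the CPU's. On current Intel Macs the integrated GPU reports true and
// MPX/eGPU cards report false; what a future AS Mac Pro reports is
// exactly the open question here.
for device in MTLCopyAllDevices() {
    print(device.name,
          "| unified memory:", device.hasUnifiedMemory,
          "| removable:", device.isRemovable,
          "| low power:", device.isLowPower)
}
[/CODE]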
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
They are talking about a "GPU", not about a "dGPU". Apple's rhetoric so far indicates that their GPUs will use unified memory. How exactly this will be achieved technically is not clear. But then again, it all depends on how you use the terminology and what meaning you attach to it. Personally, I think that terms like "dGPU/iGPU" are overused and not informative. We should be talking about architectural details rather than trying to hide them behind vague terms.

We need either some white paper type info regarding future consumer-targeted Apple silicon Macs and/or some actual hands-on teardown action of actual shipping Apple silicon Macs first...! ;^p
 

Yebubbleman

macrumors 603
Original poster
May 20, 2010
6,024
2,616
Los Angeles, CA
Apple will do pretty much what they do now.

The AS Mac Pros, and possibly the 30" iMac get MPX dGPUs. They use the latest available when they make the new AS Macs available for sale. There will not be gaming and client side dGPUs.

Apple will implement the appropriate PCIe version for the dGPU they decide to use. If the dGPU requires a PCIe 3 connection, they will build that into the MPX board. If the dGPU requires a PCIe 4 connection, they do the same.

Infinity Fabric is an on-chip interconnect, meant to connect things within a package. It is not visible outside a package, so it isn't a consideration. Whatever generation of dGPU is used, it will be a version of PCIe that goes out to the rest of the motherboard. Only AMD should have any concern for the version of Infinity Fabric that is used; it is invisible to anybody else.

Honestly, I don't know that their graphics architecture will allow traditional graphics processors from AMD, considering that Apple's own GPUs function very differently on a fundamental level.

What seems way more likely is an expansion of the Afterburner family into additional co-processors designed to assist the Apple Silicon iGPU on specific tasks (such as ray tracing, advanced video encoding, etc.). You'll buy the Afterburner card that is designed to assist your particular workflow, and then the iGPU will be properly augmented for the task at hand.

Big Sur is showing Navi 31 and others, so at least we know AMD could be around for another round of GPU updates.
The 2nd Gen Infinity Architecture seems to be between GPUs, but AMD is showing 3rd gen as also covering AMD CPU to GPU.
If Apple keeps using AMD for GPUs, then wouldn't it concern them too? (After all, won't Apple be meeting the same issue - how to connect their Apple Silicon CPU to a potential Apple Silicon dGPU if they go that route?) [attached AMD Infinity Architecture slide]

The 2020 27" iMac may have accounted for some of those. But there are only four GPU options spread across that line right now and that's at least six GPUs. I suspect we have one last Intel 16" MacBook Pro in the pipeline. I don't know that Apple has enough to produce a whole new Mac Pro (not sure if there are new Xeons to put in them even), though, maybe one or two extra GPU options is likely. I'm still leaning toward a newer 16" MacBook Pro though, especially with the multi-monitor issues that they haven't been able to resolve (beyond the current highest end option there).

This may have been an aspect of it, but from what I've read it seems there's more to it than just that:



I've read the whole "no NVIDIA drivers in Mojave" song and dance in multiple places. However, I'm confused by it as the higher-end Late 2013 and Mid 2014 15" MacBook Pros came with the NVIDIA GeForce GT 750M and they're still supported for Big Sur (and possibly Intel macOS releases past it as well), let alone Catalina, let alone Mojave.

Mac Mini Pro would be nice

A 6-core Mac mini will easily best a trash can Mac Pro; add Thunderbolt 3-to-PCIe breakout boxes and eGPUs and you basically have a Mac mini Pro.
 