
casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
Do you think Apple could begin rolling out specialty MPX Modules in the future? The Afterburner card, which accelerates ProRes for video editing, is one example. Could they offer an accelerator MPX module for gaming, and another for number crunching?

I highly doubt Apple would market it as a gaming GPU, but yes, I find it likely they'd offer both a "3D modeling" GPU and an FP16/32/64 accelerator. Any Mac Pro would still come with a video-output-equipped GPU, though, so if CDNA fully strips out video acceleration I only see it as a secondary card, similar in nature to Afterburner.

Also agreed. But the way I'd frame it is that the Mac Pro, because of its multiple-GPU capability, is particularly well-suited to accommodate speciality (secondary) cards that lack a video out.

Given this, I think the main barrier to making it appealing for number-crunching applications isn't that Apple would refuse to offer non-video CDNA GPU options per se; I don't see any reason they'd be reluctant to offer that as a secondary GPU. Indeed, they've at least acknowledged scientific use of the machine in their marketing (the published benchmarks mention its Matlab performance for "simulation of dynamical systems"); and offering a CDNA GPU would bolster that. Rather, the barrier is the limitation that Apple would place on the nature of those cards -- specifically that Apple (at least currently) refuses to partner with NVIDIA (or maybe it's mutual--I don't know).

A partnership with Nvidia would make no difference for AMD GPUs. A CDNA GPU would not be able to run CUDA regardless. In any case, I'm personally opposed to Nvidia and think the best path forward for everyone would actually be for the academic community to move away from CUDA and towards open standards. AMD (mostly) champions open source technologies that will run on any accelerator. Nvidia makes their software stack proprietary. It's a shame that the scientific community largely went for a closed software stack that only runs on a single GPU manufacturer's hardware, when free alternatives exist.
I don't think the best path for Apple is to partner with Nvidia to get access to their hardware and software stack. To quote Linus Torvalds, "**** Nvidia". That said, I do think there is an issue in Apple betting entirely on Metal as their accelerator API: OpenCL support never went further than 1.2, and Vulkan is only supported through Khronos' MoltenVK translation layer. Supporting AMD's Radeon Open Compute platform, ROCm, would also greatly improve the development ecosystem for scalable GPU compute programming. In fact, extend that to all of AMD's open GPU projects.
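To make the "open standard that runs on any accelerator" point concrete, here is a minimal, hypothetical sketch of a vector add written against OpenCL from Python via pyopencl. It assumes pyopencl and some OpenCL runtime are installed (on a Mac that means Apple's old OpenCL 1.2 implementation); the kernel itself is vendor-neutral and would run unchanged on AMD, Nvidia or Intel devices.

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()       # picks whatever OpenCL device is available (AMD, Nvidia, Intel, ...)
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""
program = cl.Program(ctx, kernel_src).build()

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)   # enqueue the kernel
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)                     # read the result back
assert np.allclose(result, a + b)
```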
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
...aside from the whole "Apple pro GPUs are just gaming cards with extra VRAM and a "Pro" sticker" thing.

That really isn't up to Apple, since AMD's dies were segmented the same way. Can't get blood from a turnip. Now that AMD is forking its die offerings, which way will Apple go? Consider a couple of things Apple did at WWDC last year:

1. Introduced support in Metal for Infinity Fabric. Infinity Fabric is an essential part of CDNA, as AMD's roadmap slide below shows.


[Image: AMD CDNA roadmap slide - CDNA_Roadmap_Master_575px.jpg]



One of the major features of this first 7nm CDNA implementation listed above is "2nd Gen AMD Infinity Architecture". That is not a feature of Navi. (Perhaps there is a forked Navi "Pro" that has it, but it's pretty likely Infinity Fabric is thrown out of Navi so that something else can be put in, e.g. real-time ray tracing focused computation.)

So did Apple put in support for Infinity Fabric just to "walk away" a year or two later? I suspect not.


2. Introduced some support for more efficient rendering with Metal. Doing ray tracing with less resource pressure plays extremely well on iOS devices that run on battery 99% of the time. Will Apple be interested in variable-rate shading in Metal? Probably, since that translates well to the mobile context. Will Apple be interested in some proprietary hardware real-time ray tracing subsystem? Probably not with Metal in the intermediate term (because it would "blow out" the transistor budget on their mobile GPUs); not any more than they're interested in double floats in Metal (another "blind spot", because Apple's mobile GPUs drive most of Metal's coverage).



The core issue is that some of this stuff (large tensor allocation, large ray-trace-specific computation, etc.) pushes the larger dies into a "rob Peter to pay Paul" context, where other functions get dropped to free up transistor budget. That's not just an AMD "problem": Nvidia has die variants where they dumped the real-time ray tracing system to hit lower price points. That probably isn't going away when they get to 7nm across the whole lineup (though it may be narrowed to a smaller range).


If you go back to the slide in post 18 above, RDNA's "bottom line" feature there is maximizing "frames per second". Apple has not been on that path at all. Your hand-waving at the similarities to the mainstream die configurations that Apple used is way off point. Apple's graphics drivers have been far more like the "Pro" drivers for AMD/Nvidia/Intel than the "frame rate at all costs" ones. It isn't simply a matter of pointing at the dies: the hardware plus driver is a more complete system, and Apple hasn't pushed to do best on tech-porn frame rate marks before and probably won't in the future either.

If Metal doesn't "do" the proprietary ray tracing of the "big Navi" implementations, there is a pretty high chance that Apple will take any option they have access to that drops it (for a better price and/or thermals, or features that are higher on their priority list).
 
  • Like
Reactions: high heaven

OkiRun

macrumors 65816
Oct 25, 2019
1,005
585
Japan
There is nothing substantive to do a "mild refresh" to. Bumping the SSD to a capacity you can already buy is a bit of a stretch to label 'mild'. (Yes, Apple is engaged in more than suspect behavior in labeling the Mac Mini as 'new' when it is not. The last time they tried to do that for the Mac Pro, some folks reported them to the FTC.) A new GPU card? Pragmatically they already did that with the W5700X. A new module isn't a new system; again, 'mild' is an overstatement. The system could already take a new module.

CPU-wise there is nothing. There is some chance of a new CPU in this class later in the year, but all of the alternatives require a new logic board. That would probably span past "mild" (unless anchored on the case externals as the primary aspect of the system). A new CPU is more likely heading to the iMac Pro first (decent chance either Intel cranks out a Xeon W-2300 (Ice Lake) or they switch to AMD later in the year). Probably 2021 (or later) for something for the Mac Pro as an overall system upgrade.

Navi + HBM + CPU with PCI-e v4 would be a substantive bump for the iMac Pro.
I think those upgrades to the iMac Pro would starve the 7,1 market a bit....:rolleyes:
 

eflx

macrumors regular
May 14, 2020
192
207
There is nothing substantive to do a "mild refresh" to. Bumping the SSD to a capacity you can already buy is a bit of a stretch to label 'mild'. (Yes, Apple is engaged in more than suspect behavior in labeling the Mac Mini as 'new' when it is not. The last time they tried to do that for the Mac Pro, some folks reported them to the FTC.) A new GPU card? Pragmatically they already did that with the W5700X. A new module isn't a new system; again, 'mild' is an overstatement. The system could already take a new module.

CPU-wise there is nothing. There is some chance of a new CPU in this class later in the year, but all of the alternatives require a new logic board. That would probably span past "mild" (unless anchored on the case externals as the primary aspect of the system). A new CPU is more likely heading to the iMac Pro first (decent chance either Intel cranks out a Xeon W-2300 (Ice Lake) or they switch to AMD later in the year). Probably 2021 (or later) for something for the Mac Pro as an overall system upgrade.

Navi + HBM + CPU with PCI-e v4 would be a substantive bump for the iMac Pro.

There's nothing "suspect" about a company applying a "new" tag on a product listed on its website to attract attention to the fact there was an update. After all, none of these companies would survive long term without consistent sales #'s.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
There's nothing "suspect" about a company applying a "new" tag on a product listed on its website to attract attention to the fact there was an update.

Yes, there is. Deceptive advertising isn't legal. There is a decent chance that Apple is only getting away with this because the FTC is mostly hobbled due to virus-related limitations on personnel and requests.

It is as plain as a turd in a punch bowl that it isn't new if you take anything more than a superficial look at Apple's website. Go to the support section and look up the tech specs on the Mini.

https://support.apple.com/en_US/specs/macmini

Any 2020 entry there? Nope. That's because this is entirely the 2018 system with some "new" prices attached to the same configurations that were there before. That doesn't meet FTC guidelines for "New". It is exactly what someone could have bought before, just at a different price.

There is nothing hardware-wise currently on https://www.apple.com/mac-mini/specs/ that isn't also on https://support.apple.com/kb/SP782?locale=en_US (macOS Catalina is on the current page, but that would ship with any new unit after the macOS major release in the fall).


After all, none of these companies would survive long term without consistent sales #'s.

That doesn't mean they can lie about the status of something. I know lots of folks excuse salesfolks ("How do you know salesmen are lying? Their lips are moving."), but technically that isn't legal. So therefore suspect (at best).



Truthful would have been "new value" or "lower mini prices" not that the Mini itself is "new" ... because it isn't.


Same scam .... just in 2020 nobody is calling them on it.




P.S. It's more than a little lame that MacRumors hit the "reset" button on the Buyer's Guide timeline for the Mini. (Similar to the reset of the timeline for the Mac Pro 2013 in 2017 with price changes, and temporarily for the Mac Pro 2012, before they were shamed, like Apple, into backtracking.)
 
Last edited:

eflx

macrumors regular
May 14, 2020
192
207
Sure, I won't argue with you on those grounds to that level of detail. If Apple made an update where a new component or option was available for the Mac Mini, I don't think I'd go so far as to deem it "illegal". If they slapped "new", as you're saying, on something exactly the same at a different price point, then I agree - that's extremely deceptive marketing.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
Yes, there is. Deceptive advertising isn't legal. There is a decent chance that Apple is only getting away with this because the FTC is mostly hobbled due to virus-related limitations on personnel and requests.

It is as plain as a turd in a punch bowl that it isn't new if you take anything more than a superficial look at Apple's website. Go to the support section and look up the tech specs on the Mini.

https://support.apple.com/en_US/specs/macmini

Any 2020 entry there? Nope. That's because this is entirely the 2018 system with some "new" prices attached to the same configurations that were there before. That doesn't meet FTC guidelines for "New". It is exactly what someone could have bought before, just at a different price.

There is nothing hardware-wise currently on https://www.apple.com/mac-mini/specs/ that isn't also on https://support.apple.com/kb/SP782?locale=en_US (macOS Catalina is on the current page, but that would ship with any new unit after the macOS major release in the fall).




That doesn't mean they can lie about the status of something. I know lots of folks excuse salesfolks ("How do you know salesmen are lying? Their lips are moving."), but technically that isn't legal. So therefore suspect (at best).



Truthful would have been "new value" or "lower mini prices" not that the Mini itself is "new" ... because it isn't.


Same scam .... just in 2020 nobody is calling them on it.




P.S. It's more than a little lame that MacRumors hit the "reset" button on the Buyer's Guide timeline for the Mini. (Similar to the reset of the timeline for the Mac Pro 2013 in 2017 with price changes, and temporarily for the Mac Pro 2012, before they were shamed, like Apple, into backtracking.)

Apple has done this many times before and has always put the "new" tag on it. I do not disagree with you that it's a bit disingenuous, but the FTC wouldn't have reacted no matter the situation; they never have in the past when Apple has done this. And arguably, it's a new base configuration.
 

MisterAndrew

macrumors 68030
Sep 15, 2015
2,895
2,390
Portland, Ore.
There is nothing substantive to do a "mild refresh" to. Bumping the SSD to a capacity you can already buy is a bit of a stretch to label 'mild'. (Yes, Apple is engaged in more than suspect behavior in labeling the Mac Mini as 'new' when it is not. The last time they tried to do that for the Mac Pro, some folks reported them to the FTC.) A new GPU card? Pragmatically they already did that with the W5700X. A new module isn't a new system; again, 'mild' is an overstatement. The system could already take a new module.

CPU-wise there is nothing. There is some chance of a new CPU in this class later in the year, but all of the alternatives require a new logic board. That would probably span past "mild" (unless anchored on the case externals as the primary aspect of the system). A new CPU is more likely heading to the iMac Pro first (decent chance either Intel cranks out a Xeon W-2300 (Ice Lake) or they switch to AMD later in the year). Probably 2021 (or later) for something for the Mac Pro as an overall system upgrade.

Navi + HBM + CPU with PCI-e v4 would be a substantive bump for the iMac Pro.

What a weird detail to focus on. "Mild" is entirely subjective and encompasses a variable range of differentiation. When I think of mild, I think of bumping some of the internal hardware while keeping the larger design intact. It's speculation what those details are and how exactly "mild" the refresh is.
 

theorist9

macrumors 68040
May 28, 2015
3,883
3,064
A partnership with Nvidia would make no difference for AMD GPUs. A CDNA GPU would not be able to run CUDA regardless.

I misspoke. What I meant to write was that "I think the main barrier to making it appealing for number-crunching applications isn't that Apple would refuse to offer non-video GPGPU options per se".


It’s a shame that the scientific community largely went for a closed software stack that only runs on a single GPU manufacturer‘s hardware, when free alternatives exist.

These decisions were made in the late 00's, I believe, and wasn't the reason the scientific community went for NVIDIA in large part because of CUDA, which made programming GPU's for scientific computing far easier? At the time (~2006), I don't believe there were any good free alternatives to that. And if the scientific community was able to be more productive in its research by choosing a proprietary rather than an open-source tool, I think the social good of the former outweighs that of the latter.


AMD (mostly) champions open source technologies that will run on any accelerator. Nvidia makes their software stack proprietary....I don’t think the best path for Apple is to partner with Nvidia to get access to their hardware and software stack. To quote Linus Torvalds, “**** Nvidia”.

Here we differ philosophically. While I think open source is a social good, I think consumer choice is a stronger social good.

Thus I'd prefer a world in which I had the option to choose either AMD or NVIDIA, rather than AMD only. I.e., consumer choice is improved if I can choose between an open source solution and a proprietary solution, rather than being offered the former option only. I likewise prefer a world in which I can choose from *nix, MacOS, and Windows, rather than *nix only. And I prefer a world in which I can choose from R, MAXIMA, Octave, SciPy, Mathematica, Matlab, Maple, and SAS, rather than one in which I only have access to the first four. And so on.

Indeed, open source and proprietary need each other. MacOS wouldn't exist without *nix, and many open source projects owe their existence to voluntary work by programmers who are able to pay the rent only because they have day jobs with companies whose income comes in part from proprietary software.

And as far as Torvalds-related quotes go, I suspect an even more common one, at least from those who've worked with him, would be: “****Torvalds." :D :
https://thehackernews.com/2018/09/linus-torvalds-jerk.html
 
Last edited:

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
These decisions were made in the late 00's, I believe, and wasn't the reason the scientific community went for NVIDIA in large part because of CUDA, which made programming GPU's for scientific computing far easier? At the time (~2006), I don't believe there were any good free alternatives to that. And if the scientific community was able to be more productive in its research by choosing a proprietary rather than an open-source tool, I think the social good of the former outweighs that of the latter.

Indeed you are correct, and I fully agree that if the best tool for the job is closed source, it is the tool that should be used. I would, however, argue that CUDA is no longer better than the open source alternatives and only maintains its market share as a result of historical significance and a reluctance to leave the environment that has been established. By that I don't mean to **** on CUDA, because CUDA is still good. I'd just argue that it's just as fast and easy to write with any of the open standards that will work on any GPU not just Nvidia's offerings, which means that if Intel, AMD or a brand-new manufacturer develops something much faster and cheaper than Nvidia's offerings, we can switch without having to redo the software stack. CUDA's success is limiting consumer choice, to skip ahead a bit to the next part of this post. The more we get stuck in a CUDA-only ecosystem, the harder it will be to leave Nvidia, no matter how much they overcharge or under-deliver.

Here we differ philosophically. While I think open source is a social good, I think consumer choice is a stronger social good.

Thus I'd prefer a world in which I had the option to choose either AMD or NVIDIA, rather than AMD only. I.e., consumer choice is improved if I can choose between an open source solution and a proprietary solution, rather than being offered the former option only. I likewise prefer a world in which I can choose from *nix, MacOS, and Windows, rather than *nix only. And I prefer a world in which I can choose from R, MAXIMA, Octave, SciPy, Mathematica, Matlab, Maple, and SAS, rather than one in which I only have access to the first four. And so on.

Indeed, open source and proprietary need each other. MacOS wouldn't exist without *nix, and many open source projects owe their existence to voluntary work by programmers who are able to pay the rent only because they have day jobs with companies whose income comes in part from proprietary software.

And I think an even more common quote, at least from those who've worked with him, is: “****Torvalds." :D :

Oh you misunderstood my main point; perhaps I presented it poorly. I am by no means a Stallman. In fact I publish under either MIT or BSD licenses, not GPL. If I put code on GitHub, it’s free to be incorporated into closed source proprietary projects as well.

When I said I don't think the best path for Apple is in negotiating with Nvidia, it has absolutely nothing to do with what benefits it might or might not bring for the consumer. I was speaking from the point of view of Apple. The fact that AMD is willing to work with open source and allow others to develop or modify drivers for their GPUs, and to control the software stack in general, means that Apple can use AMD hardware but maintain a damn lot of control over the software stack, and we both know Apple likes that a lot. I find it close to impossible that Nvidia would agree to give Apple full control of the software stack for their GPUs. They'd send them a binary blob and say "Just put that in the kernel". So from the perspective of Apple, the best path is ignoring Nvidia and focusing on AMD.

Now, I also think that Nvidia is a pain in the arse in many ways; they engage in many anti-consumer behaviors and are generally just *****. Hence the "**** Nvidia" bit. They make fine, though price-pinching, hardware, but their PR, marketing and general behavior as a company often seem abhorrent, and their software stack seems very focused on locking in users.

Now, I am obviously an Apple user, which means I can't exactly stand here championing full open source and never using software to lock users into an ecosystem. I gladly use iCloud, Apple Music, etc., and I'd also happily take a Mac with an RTX GPU. But if I were representing Apple, I wouldn't bother negotiating with Nvidia, because all I've heard, read and seen from them is that they have quite unreasonable terms and abuse their position of power in the market.

It's all about the hat: from the user's perspective I'm all for having all options on the table, as long as none of it compromises any of the options and the user thus gets the best-fitted tools for whatever they need. From a company perspective, I would argue it's not worth it for Apple to look at Nvidia.

So yeah, that's what I meant :). I don't think we're that much in disagreement. And frankly, yes, F Torvalds as well. I like him to an extent, but his abusive comments against some of the people he's worked with push me away completely, and I think it's really good for Linux that he was removed. It creates a toxic community to have a front figure like him say some of the terrible things he did.
 

theorist9

macrumors 68040
May 28, 2015
3,883
3,064
Indeed you are correct, and I fully agree that if the best tool for the job is closed source, it is the tool that should be used. I would, however, argue that CUDA is no longer better than the open source alternatives and only maintains its market share as a result of historical significance and a reluctance to leave the environment that has been established. By that I don't mean to **** on CUDA, because CUDA is still good. I'd just argue that it's just as fast and easy to write with any of the open standards that will work on any GPU not just Nvidia's offerings, which means that if Intel, AMD or a brand-new manufacturer develops something much faster and cheaper than Nvidia's offerings, we can switch without having to redo the software stack. CUDA's success is limiting consumer choice, to skip ahead a bit to the next part of this post. The more we get stuck in a CUDA-only ecosystem, the harder it will be to leave Nvidia, no matter how much they overcharge or under-deliver.

Agreed, other software tools are now available that are comparable to CUDA. But what about hardware? Does AMD make anything to compete with Volta? Or perhaps a better question is: are there use cases where NVIDIA's offerings are clearly superior?

Also, how do the AMD graphics cards offered on the Mac Pro compare with AMD's Instinct (which I've read doesn't work on the Mac Pro) for GPGPU computing?
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,127
2,707
Agreed, other software tools are now available that are comparable to CUDA.
I think you guys are missing the point; it's not about CUDA itself. It's all the software and support that NVIDIA offers. AMD is more than a decade behind, and they're never going to catch up. Let's say I want to do research on autonomous driving: I'll simply throw $1 million (or get it cheaper with an academic discount) at NVIDIA and I get a full system, simulation, real sensor models, hardware to connect real sensors... I can start within hours. Want to do genetics? Physical simulation? Astrophysics? Climate modelling? <insert anything here>? Same thing: place the order for hardware or download the software package of your choice and just start. Having trouble getting things to work or porting them to GPUs? Visit an NVIDIA supercomputer center for free, bring your students, and NVIDIA will help. Need more power? NVIDIA, Dell, Lenovo and others will happily supply GPU clusters. 500, 1000 and more GPUs, no problem, including same-day service.

And AMD? (Imagine chirping cricket sounds here.) Nothing! They don't have the software, they don't have the support, they can't supply the hardware in numbers, and they don't have the cluster solutions that NVIDIA has. And no, porting software and maintaining it yourself is not an option; it's way too time-consuming. Researchers want to get work done, so it's up to AMD to supply the necessary tools, both hardware and software.

I bought a Radeon VII for my MBP, just to play around a little here and there. But the real work? Done on a Titan RTX for playing around, an RTX 8000 in the workstation under the desk for digging a little deeper, and V100s in the server cluster for real number crunching. AMD doesn't even try; otherwise they'd hire a few thousand people and put a few billion dollars into this. That ship has sailed.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
Agreed, other software tools are now available that are comparable to CUDA. But what about hardware? Does AMD make anything to compete with Volta? Or perhaps a better question is: are there use cases where NVIDIA's offerings are clearly superior?

Nvidia currently still, mostly, has the superior offerings. There are certain fields where AMD has an edge, or has developed very specialised cards, but overall Nvidia has the strongest hardware. That very well could change in the not-too-distant future, though, and with a programming interface that works on both Nvidia and AMD - and even Intel; don't forget they're working on big accelerators too now (again) - you'd have an easier time switching should that time come. That would also put more pressure on Nvidia to continue making better products and keeping prices down, since their position would be more fragile. If everyone is locked to CUDA, Nvidia wouldn't care if their GPUs were 10% slower than competing solutions at everything - they could still charge more, since it's not just the cost of the hardware, but the cost of rewriting things if you've already started in CUDA.

And on that point, while Nvidia is generally superior on performance and perf/watt in most cases, they often cost a lot more than AMD cards in the performance segments where AMD has competing products.

Also, how do the AMD graphics cards offered on the Mac Pro compare with AMD's Instinct (which I've read doesn't work on the Mac Pro) for GPGPU computing?

There are several Instinct cards, and to be entirely honest I'm not very familiar with any of them. I know there are specialised Instinct cards that very heavily focus on certain compute needs, like FP64 or INT8. For FP64, the Vega cards in the Mac Pro aren't actually that great, so if you need high precision you're a bit out of luck - well, at least unless Apple has done something special. On AMD's website it's listed as having rather slow FP64, though I can't remember the specific ratio. It's a fact that Vega 20 GPUs can have 1:2 FP64 performance, so if Apple has permission to unlock the capability it would be good, but I doubt it, since it's not listed on AMD's site under the Vega II, and I think only Apple has that GPU. The equivalent Instinct card does have FP64 enabled, though.
But for FP32, the Vega II is actually slightly faster than the equivalent Instinct card, owing to slightly faster clocks and an additional 4 CUs, with the Instinct being similar to the Radeon VII in having 4 CUs disabled (though I think there was also a variant of the Instinct with all 64 CUs, but it's basically been limited stock).

In total, it depends on what computations you do, but if what we're talking about is traditional FP32, the Vega II will battle with any Instinct card.
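For a rough sense of where those peak FP32/FP64 numbers come from, here's a back-of-the-envelope sketch; the CU counts, clocks and FP64 ratios below are approximate, illustrative values I've plugged in, not spec-sheet figures.

```python
# Very rough peak throughput for a GCN/Vega-class GPU:
#   FP32 peak (TFLOPS) = CUs * 64 shader lanes per CU * 2 ops per clock (FMA) * clock (GHz) / 1000
#   FP64 peak          = FP32 peak * the part's FP64 ratio (1:2, 1:4, 1:16, ...)
def peak_tflops(cus, clock_ghz, fp64_ratio):
    fp32 = cus * 64 * 2 * clock_ghz / 1000.0
    return fp32, fp32 * fp64_ratio

print(peak_tflops(64, 1.7, 1 / 2))   # a full 64-CU Vega 20 part with 1:2 FP64 unlocked
print(peak_tflops(60, 1.7, 1 / 8))   # a hypothetical 60-CU part with FP64 capped at 1:8
```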

In entirely other and unrelated strands of thought, AMD also made, quite a while ago now, the Radeon SSG - Solid State Graphics, where they put an entire SSD directly on the GPU, so that data could go straight from the drive into the VRAM without having to go through the processor. I believe that if you work with huge GPU datasets; Like hundreds and hundreds of Gigabytes that need to be processed on a GPU, that card is still one of the fastest options out there, even though the GPU itself isn't as fast as most of Nvidia's or even newer AMD offerings. - While an SSD is of course many times slower than VRAM or system memory, once the dataset becomes large enough, being able to skip going through the CPU helps.
I think you guys are missing the point; it's not about CUDA itself. It's all the software and support that NVIDIA offers. AMD is more than a decade behind, and they're never going to catch up. Let's say I want to do research on autonomous driving: I'll simply throw $1 million (or get it cheaper with an academic discount) at NVIDIA and I get a full system, simulation, real sensor models, hardware to connect real sensors... I can start within hours. Want to do genetics? Physical simulation? Astrophysics? Climate modelling? <insert anything here>? Same thing: place the order for hardware or download the software package of your choice and just start. Having trouble getting things to work or porting them to GPUs? Visit an NVIDIA supercomputer center for free, bring your students, and NVIDIA will help. Need more power? NVIDIA, Dell, Lenovo and others will happily supply GPU clusters. 500, 1000 and more GPUs, no problem, including same-day service.

And AMD? (Imagine chirping cricket sounds here.) Nothing! They don't have the software, they don't have the support, they can't supply the hardware in numbers, and they don't have the cluster solutions that NVIDIA has. And no, porting software and maintaining it yourself is not an option; it's way too time-consuming. Researchers want to get work done, so it's up to AMD to supply the necessary tools, both hardware and software.

I bought a Radeon VII for my MBP, just to play around a little here and there. But the real work? Done on a Titan RTX for playing around, an RTX 8000 in the workstation under the desk for digging a little deeper, and V100s in the server cluster for real number crunching. AMD doesn't even try; otherwise they'd hire a few thousand people and put a few billion dollars into this. That ship has sailed.

You are absolutely right. Nvidia has a great stack of software on top of CUDA, like Drive and all of what's packaged under CUDA-X. It's similar to Apple's MPS (Metal Performance Shaders) - essentially a library of already-written functionality that works with the GPU - but of course much broader and more extensive.
And if you're researching fluid dynamics or doing genetic modelling or something, go for whatever's easiest to get working fast - I get that entirely.

What I'm talking about is much smaller scale than that, but still in academia. For example, one of my ex-TAs is currently doing a Ph.D. project on a way of sort of tricking GPUs into doing MIMD instead of just SIMD; in other words, being able not just to do one instruction on a whole bunch of data across a GPU core, but to perform several independent instructions on each dataset, by moving around the instruction pointer during execution. It's a bit beyond me at this point and there's no working code yet that fully does it, only theory, but I've asked her to send me sample code when it's working. In any case, it's research on the nature of GPUs. It doesn't require a lot of horsepower, or a massive cluster. The university paid for a laptop for her with an Nvidia GPU in it, and it was all but demanded that the research be conducted with CUDA, because "that's the academic standard, and anybody reading the paper will expect CUDA and not care to understand OpenCL or anything else - make it work in CUDA". That's what I'm opposed to: the rigidity in the academic community, of which I am part. There may also be many performance-sensitive, but not large-budget or huge, projects where you could get much more for the available resources by going with an open standard and an AMD-based solution, but "academic tradition" forces CUDA on you.
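To make the SIMD constraint she's working around concrete, here's a tiny NumPy illustration of the idea (my own sketch, nothing to do with her actual code): in a lockstep, data-parallel model, a data-dependent branch is typically handled by evaluating both paths for every element and selecting afterwards, whereas per-element control flow - what a MIMD-style execution model would allow - follows only one path per element.

```python
import numpy as np

x = np.random.rand(10)

# SIMD-style: one instruction stream for all "lanes". np.where evaluates BOTH
# branches for every element and then selects, much like divergent branches
# cost a GPU wavefront both paths.
simd_like = np.where(x > 0.5, np.sin(x), np.cos(x))

# MIMD-style: each element follows its own control flow, one path per element.
mimd_like = np.array([np.sin(v) if v > 0.5 else np.cos(v) for v in x])

assert np.allclose(simd_like, mimd_like)
```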

CUDA locks people in. Use it when it's appropriate, but I don't like how it's gotten the status of "industry standard" in so many circles, to the point where other tools aren't even considered.

EDIT:
Could you imagine a world in which all universities needed to do everything in C# and no project would ever be allowed to be written in C++, Java, Python, whatever, because some large infrastructure existed around C#?
Again, when you need the scale that Nvidia can offer you - take advantage of it. When you don't, at least consider your options. All I ask for :)
 
Last edited:
  • Like
Reactions: theorist9

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
I’d just argue that it’s just as fast and easy to write with any of the open standards that will work on any GPU not just Nvidia’s offerings
To pick one phrase out of your response - this one simply misses the point.

In my work (AI/ML) it has nothing to do with writing to open standards vs proprietary APIs.

We never touch CUDA or OpenCL or that new Apple stuff - at all. However, the number of frameworks available for CUDA is far, far larger - and we write to frameworks.
 
Last edited:

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
To pick one phrase out of your response - this one simply misses the point.

In my work (AI/ML) it has nothing to do with writing to open standards vs proprietary APIs.

We never touch CUDA or OpenCL or that new Apple stuff - at all. However, the number of frameworks available for CUDA is far, far larger - and we write to frameworks.

In my other posts here I elaborated on my points, and I feel I addressed this point in those paragraphs.

But to go a step further: yes - and I don't think that needs to change at all. But perhaps the people who maintain those frameworks could consider compatibility with other lower-level APIs, to support more than just CUDA.
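For what it's worth, some frameworks already abstract the backend away; the snippet below is a small sketch of backend-agnostic PyTorch (whether the CUDA, ROCm or Metal/MPS backend exists depends entirely on how the particular PyTorch build was compiled, and the ROCm builds expose themselves through the same "cuda" device name). That's exactly the kind of lower-level flexibility a framework can hide from its users.

```python
import torch

# Pick whichever accelerator this PyTorch build supports, falling back to the CPU.
if torch.cuda.is_available():            # NVIDIA CUDA, or AMD ROCm builds of PyTorch
    device = torch.device("cuda")
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    device = torch.device("mps")         # Apple GPUs via Metal
else:
    device = torch.device("cpu")

x = torch.randn(2048, 2048, device=device)
w = torch.randn(2048, 2048, device=device)
y = x @ w   # identical framework-level code regardless of which vendor's GPU runs it
print(device, y.shape)
```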
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
But to go a step further: yes - and I don't think that needs to change at all. But perhaps the people who maintain those frameworks could consider compatibility with other lower-level APIs, to support more than just CUDA.
So, instead of offering more features and performance for your mainstream users, you waste time adding platform support for a platform where you have very few users?

Not even the pointy-haired boss would go for that.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
So, instead of offering more features and performance for your mainstream users, you waste time adding platform support for a platform where you have very few users?

Not even the pointy-haired boss would go for that.

Which works fine as an argument for what I talked about earlier, about why I don’t think Apple should bother negotiating with Nvidia when things are working fine with AMD - from the perspective of Apple.

But there are many frameworks out there for AI and other GPU-accelerated work, and if the vast majority of them run exclusively on CUDA, supporting another acceleration API would give you a good USP, especially come a day when Nvidia drops the ball but continues to charge $1,300 for GPUs that perform worse than $600 alternatives from their competitors.

I don't get why everyone is somehow making it out to be a good thing that Nvidia has created a software ecosystem that locks their customers into only using their hardware, no matter what competitors may come forth with, all but creating a monopoly.

Many, many years before the switch happened, Apple also developed OS X for Intel, because having OS X only work on PowerPC limited their options for hardware suppliers; once Intel could deliver better products on better terms than IBM could, they had the option to switch over their platform. Are you arguing that was a bad decision and that they should instead have put all that energy exclusively into the PowerPC version of OS X?
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
You've completely twisted the argument - many of the frameworks are maintained by volunteer or research groups. They don't have the resources or the demand for Apple support.
Many many years before the switch happened, Apple also developed OS X for Intel
So completely wrong. When Apple acquired NeXT, NeXTSTEP already ran on Intel.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
You've completely twisted the argument - many of the frameworks are maintained by volunteer or research groups. They don't have the resources or the demand for Apple support.

I really don't feel strongly enough about this to continue on it much longer; I acknowledge your arguments and their validity, but I continue to believe that there is immense value in supporting more than just CUDA. On the performance side of things - relating to how you earlier mentioned putting development effort into improving performance and features for existing users of the frameworks - I believe there are more performance gains to be had by showing Nvidia that they are not The Almighty and that they need to be competitive. If frameworks adopt more alternatives to CUDA that will run on non-Nvidia hardware (as well as Nvidia's), Nvidia will have to stay more competitive hardware-wise than they might otherwise have to. If we lock everything into CUDA, Nvidia does not have to make better products than their competition to succeed. They can make slightly worse-performing GPUs at higher prices and still win, because it costs more to rewrite the code to work with the competitor than to just stick with Nvidia, even when they offer inferior products. I am not saying that's the case now; I'm saying that's how things can evolve. Without any reason to compete, things end up like what we had with Intel while AMD was making products that didn't compete. Look what happened to the CPU market when Zen came around.
GPUs are in a trickier spot, because both Core and Zen can run x86_64. If all HPC GPU code is CUDA, it's not enough for a competitor to offer a far superior product.

That's the core of why I don't think it's good for everything to get locked into CUDA. You've all made very valid arguments, but I also believe that what I've just written is argument enough in itself to not want to put all our proverbial eggs in the CUDA basket.
Cheers.

So completely wrong. When Apple acquired NeXT, NeXTSTEP already ran on Intel.

Indeed. But I'm sure we can agree that it required an active effort to continue support from that point onwards, and that there were aspects of OS X that were still different from NeXTSTEP, even though NeXTSTEP obviously laid the foundation. Though actually, I think it was only OpenSTEP that ran on Intel? (Same difference, of course.) In any case, whilst they did have a very good starting point for Intel support, it still didn't come for free.
 

theorist9

macrumors 68040
May 28, 2015
3,883
3,064
When I said I don't think the best path for Apple is in negotiating with Nvidia, it has absolutely nothing to do with what benefits it might or might not bring for the consumer. I was speaking from the point of view of Apple. The fact that AMD is willing to work with open source and allow others to develop or modify drivers for their GPUs, and to control the software stack in general, means that Apple can use AMD hardware but maintain a damn lot of control over the software stack, and we both know Apple likes that a lot. I find it close to impossible that Nvidia would agree to give Apple full control of the software stack for their GPUs. They'd send them a binary blob and say "Just put that in the kernel". So from the perspective of Apple, the best path is ignoring Nvidia and focusing on AMD.

In reading this again, a couple more questions occurred to me:

1) Given how much Apple wants control, as much as it likes working with AMD, wouldn't it ultimately prefer to have its own in-house ARM-based discrete GPU's? Or is Apple so happy with how much control AMD gives it (and with the performance of AMD's product) that there's not the same motivation to switch to in-house GPU's as there is for in-house CPU's?

My own thinking is that Apple's main concern with discrete AMD (or NVIDIA) GPU's in an ARM-powered MBP is that their TDP's are so high that they might make the energy savings from switching to an ARM CPU much less consequential. [Going from a 45W TDP CPU to, say, a 25W TDP CPU doesn't have as much of an impact if you're running an 85W TDP discrete graphics card.] Of course, Apple may not be able to produce an ARM-based discrete GPU with higher performance/watt than what AMD currently offers. I haven't found enough on ARM GPU's to assess this.
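A quick back-of-the-envelope on that point, using the hypothetical TDP figures from the bracketed example above (not real part specs):

```python
cpu_old, cpu_new, dgpu = 45, 25, 85   # watts; hypothetical figures from the example above

cpu_only_saving = (cpu_old - cpu_new) / cpu_old                            # ~44%
system_saving = ((cpu_old + dgpu) - (cpu_new + dgpu)) / (cpu_old + dgpu)   # ~15%
print(f"{cpu_only_saving:.0%} CPU-only saving vs {system_saving:.0%} once an 85 W dGPU is in the mix")
```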

2) Relatedly, with regard to the considerations you just described, where does Intel fall? I.e., with its CPU's, is Intel more AMD-like, NVIDIA-like, or somewhere in between? Does Intel allow Apple to develop/modify its CPU drivers and control the software stack (like AMD does for its GPU's), or does Intel just give Apple "binary blobs"?


In entirely other and unrelated strands of thought, AMD also made, quite a while ago now, the Radeon SSG - Solid State Graphics, where they put an entire SSD directly on the GPU, so that data could go straight from the drive into the VRAM without having to go through the processor. I believe that if you work with huge GPU datasets; Like hundreds and hundreds of Gigabytes that need to be processed on a GPU, that card is still one of the fastest options out there, even though the GPU itself isn't as fast as most of Nvidia's or even newer AMD offerings. - While an SSD is of course many times slower than VRAM or system memory, once the dataset becomes large enough, being able to skip going through the CPU helps.

Interesting!

Perhaps related: NVIDIA is working on embedding ARM CPUs in its GPU packages. Given that the CPU is an embedded device, I wonder if these offer direct SSD --> GPU data flow.
I tried Googling whether AMD is also doing this, but wasn't able to find any info.
 
Last edited:

OkiRun

macrumors 65816
Oct 25, 2019
1,005
585
Japan
Apple is probably wondering what it has to do to break the strangle hold of Windows PCs on city, state, and federal governments.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
1) Given how much Apple wants control, as much as it likes working with AMD, wouldn't it ultimately prefer to have its own in-house ARM-based discrete GPU's? Or is Apple so happy with how much control AMD gives it (and with the performance of AMD's product) that there's not the same motivation to switch to in-house GPU's as there is for in-house CPU's?

Just to be clear, ARM is a CPU architecture, not relating to graphics. The GPUs you'd find on various mobile ARM SoCs are not ARM GPUs, just associated with an ARM SoC. Closest thing to an ARM GPU you'd get is "Mali" which is the GPU family owned by the company ARM. :)

In any case, I don't think discrete GPUs are in Apple's near or medium term future. Perhaps one day in the far future, but I think they're fairly happy continuing on as things are now on that side.
They may very well, however, take their integrated GPUs from iOS over to their future ARM CPUs for Macs, but I doubt dedicated graphics will be in the cards. I'll elaborate a bit more on this further down this post, when I get to some of your other questions, but on the aspect of control: AMD has a semi-custom department for tailoring hardware to clients' needs, as well as allowing Apple to have full control over the software stack. That seems to be enough, at least for the medium term.

My own thinking is that Apple's main concern with discrete AMD (or NVIDIA) GPU's in an ARM-powered MBP is that their TDP's are so high that they might make the energy savings from switching to an ARM CPU much less consequential. [Going from a 45W TDP CPU to, say, a 25W TDP CPU doesn't have as much of an impact if you're running an 85W TDP discrete graphics card.] Of course, Apple may not be able to produce an ARM-based discrete GPU with higher performance/watt than what AMD currently offers. I haven't found enough on ARM GPU's to assess this.

I can't definitively say that Apple couldn't make a more efficient GPU, but graphics are generally about relatively large-scale parallelism and many compute blocks. GPUs generally draw more power than CPUs, and AMD's Navi GPUs actually seem rather efficient.

This is speculation of course but I'd almost find it more likely for Apple to work with AMD's custom department, to combine an Apple CPU and AMD GPU on a single package, similar to what Intel and AMD did with Kaby Lake G.

2) Relatedly, with regard to the considerations you just described, where does Intel fall? I.e., with its CPU's, is Intel more AMD-like, NVIDIA-like, or somewhere in between? Does Intel allow Apple to develop/modify its CPU drivers and control the software stack (like AMD does for its GPU's), or does Intel just give Apple "binary blobs"?

CPUs don't per se have "drivers". But when it comes to things like chipset firmware and microcode, I honestly don't know how the relationship between Apple and Intel is structured. On a hardware level, though, Apple has in the past been shown to have a fairly decent deal with Intel, getting some chips before everyone else and even some entirely exclusive, custom-made chips - like what was in the original MacBook Air (or perhaps one of its redesigns - I think it was the original, though). AMD as a company, however, has more of a focus on custom and semi-custom hardware for their customers, whereas with Intel it seems to be more something they are persuaded to do; but I don't know the ins and outs of what their business terms really are like :)

Perhaps related: NVIDIA is working on embedding ARM CPUs in its GPU packages. Given that the CPU is an embedded device, I wonder if these offer direct SSD --> GPU data flow.

I have absolutely no clue how things are wired inside those chips, but I doubt it. In any case, the SSD itself would not be on-package, I imagine, so data would still have to go through PCIe or whatever it's connected to.
But to be honest, I also don't entirely understand how AMD's SSG works - having the SSD physically with the GPU on the same PCIe device is one thing, and the physical layer in that aspect makes sense, but how the software side of it would work is beyond me. I haven't really looked into it that much.

I tried Googling whether AMD is also doing this, but wasn't able to find any info.

I believe there have been rumours of the sort, but never anything concrete. I think the closest thing is that AMD is making Radeon GPUs for Samsung SoCs, pairing an AMD GPU with a Samsung-made ARM chip.
 
  • Like
Reactions: theorist9