
Romain_H

macrumors 6502a
Sep 20, 2021
Suppose Apple released a Mac tomorrow that's almost as powerful in compute as a 4090 (on paper), with 192GB of unified memory. Would you be in the target market for such a solution?

I used to do all my work on an MP 5.1 with a GTX 1070/CUDA till Apple banned Nvidia - that was High Sierra, IIRC.

If Apple offered an MP in the tradition of the MP 5.1 - then the answer is yes.
 

Joe Dohn

macrumors 6502a
Jul 6, 2020
If Apple can deliver the performance and the features, interest will come. I certainly see a future where Nvidia dominates cloud and cluster GPU computing and Apple dominates mobile (+ partially desktop).

There is zero chance Apple will "partially dominate" the desktop with their current strategy. Their pricing alone deters price-conscious users.

There's nothing wrong with that strategy, of course. But it's foolish to believe they will dominate a large slice of the market with it, especially outside the US.


They are still essentially on their first-generation hardware, and even M2 is not much more than a replicated iPhone core (with a new fabric). I think your conclusions are a bit premature.

I think their "mercy period" has expired. Even more so because they seem to have isolated themselves, making few moves to either engage with the x86 market or offer exclusive products. Sure, they have released their Game Porting Toolkit to make it easier to play x86 games, but it was meant for developers and does little to make things easier for end users.

Their whole strategy of ignoring the competition assumed that their product was so much better that developers and users wouldn't be able to ignore the performance difference and would migrate to an Apple product. But what is happening is quite the opposite: the gap is turning against Apple.

Yes, if by "not staying still" you mean compensating for the lack of architectural progress by packing in more processing cores and cranking up the power consumption. Because this is such a scalable strategy... Less than a decade ago, 65 watts was the TDP of an enthusiast-class desktop processor. Today it is the TDP of an enthusiast-class laptop processor.

AMD made some great progress though, and I am looking forward to Zen 5. They managed to reach parity with Intel in some key metrics and overtake them on energy efficiency. Still, Zen 4 is not yet at the M1 level, and AMD's GPU tech is focused on cost reduction more than anything, so I wouldn't overemphasize their progress.

While the competition is not improving efficiency at the same speed the Mx architecture is, they are still moving in that direction. You mentioned it yourself with AMD. Today, we definitely have handheld PCs that can manage 6-8 hours of light-to-medium usage (e.g., office work). Maybe it's not as good as Apple, but they are always more flexible and sometimes cheaper. What happens if AMD manages to reach the same manufacturing node Apple uses before Apple can scale up their solution?
 

quarkysg

macrumors 65816
Oct 12, 2019
I used to do all my work on an MP 5.1 with a GTX 1070/CUDA till Apple banned Nvidia - that was High Sierra, IIRC.

If Apple offered an MP in the tradition of the MP 5.1 - then the answer is yes.
But how many 4090s can you install to match >100GB of VRAM ... 5? ... and it's not even a contiguous 100GB of memory. Which PSU can take that?
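(Back-of-the-envelope, if I have the numbers right: matching the 192GB the OP mentioned would take 192 / 24 = 8 cards, and at roughly 450W apiece that's ~3.6kW of GPU board power alone, before the CPU and the rest of the box.)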

It looks to me that your intention is just to bash Apple.

My take is that folks are too hung up on the past and cannot see that tech is evolving. I think Apple is heading in the right direction. PCIe will become a bottleneck sooner or later.
 

Joe Dohn

macrumors 6502a
Jul 6, 2020
PCIe will become a bottleneck sooner or later.

It may be so, but what really matters is: does Apple offer anything better than PCIe?
If they don't, then PCIe becoming a bottleneck is irrelevant.
Sure, they have their unified memory model, but so far their PRACTICAL implementations have not proven to be faster than PCIe.
 

quarkysg

macrumors 65816
Oct 12, 2019
It may be so, but what really matters is: does Apple offer anything better than PCIe?
If they don't, then PCIe becoming a bottleneck is irrelevant.
Sure, they have their unified memory model, but so far their PRACTICAL implementations have not proven to be faster than PCIe.
Doesn't look like Apple cares about PCIe for high-speed data transfers. They are all-in with UMA.

Edit: I think you mistakenly attribute the compute power of nVidia's GPUs to PCIe. In fact, I would say PCIe is holding back nVidia GPUs' potential.
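To put it concretely, here's a rough, untested sketch in plain C++ against the CUDA runtime API of what "going over PCIe" means for a discrete card; the sizes and bandwidth figures in the comments are ballpark:

```cpp
// Untested sketch: with a discrete GPU, every byte the card computes on
// must first be staged across the PCIe bus with an explicit copy.
#include <cuda_runtime.h>
#include <vector>

int main() {
    const size_t n = 1 << 28;                 // 2^28 floats = 1 GiB
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));      // allocate VRAM on the card

    // This copy crosses the bus: PCIe 4.0 x16 tops out around 32 GB/s,
    // so 1 GiB costs ~30 ms -- while the 4090's own GDDR6X moves roughly
    // 1 TB/s internally. On a UMA machine the copy simply doesn't exist:
    // CPU and GPU address the same memory.
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // ... kernel launches would go here ...

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return 0;
}
```

That roughly 30x gap between the bus and the on-card memory is what I mean by PCIe holding the GPU back.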
 

Romain_H

macrumors 6502a
Sep 20, 2021
But how many 4090s can you install to match >100GB of VRAM ... 5? ... and it's not even a contiguous 100GB of memory. Which PSU can take that?

It looks to me that your intention is just to bash Apple.

My take is that folks are too hung up on the past and cannot see that tech is evolving. I think Apple is heading in the right direction. PCIe will become a bottleneck sooner or later.
There is no need to install 5 4090s if one is way faster than the Ultra.

Why bash Apple? I had to leave the platform because Apple chose not to offer one that suits my needs. That's not bashing at all; that's a fact.

By your own logic, one could claim you are just an apologist.

Apple's platform did not evolve at all; if it had, I could still use an MP, after all.
 

quarkysg

macrumors 65816
Oct 12, 2019
There is no need to install 5 4090s if one is way faster than the Ultra.

Why bash Apple? I had to leave the platform because Apple chose not to offer one that suits my needs. By your own logic, one could claim you are just an apologist.

Apple's platform did not evolve at all; if it had, I could still use an MP, after all.
Don't think you got my point.

Anyway, it just means that the way Apple is heading is not for you. And it's good that you found your solution.

The world is big enough for many different solutions to exist.
 

leman

macrumors Core
Oct 14, 2008
I'd prefer a GPGPU solution that's not vendor/OS locked. Can you name any that are still relevant?

Since OpenCL is dead for practical purposes, CUDA is the obvious choice. In comparison to Metal, using the given metaphor, CUDA is not an island but the sea.

Yes, I don't like the situation, but it is what it is.

It is simply an objective fact that Nvidia is currently the gold standard of GPGPU computing. If your work involves ML or GPU compute, Nvidia is an obvious choice, and you need very good reasons not to consider them very seriously. There is no point in contesting any of this.

But I am not talking about any of that. There are plenty of tools on the market today; one can pick whatever best suits their needs. What I am interested in is what the situation is going to be tomorrow. And I think this is also the standpoint from which one should evaluate the business strategy of GPU companies.

Apple and Metal are now where Nvidia and CUDA were around 2017-2018, and the low market share and high prices don't necessarily help. But Apple has tremendous innovation momentum, and they are closing in very fast. I think it is likely that the market situation will change significantly in the next couple of years, and that there is a good chance that Apple will establish themselves as the standard local development platform for parallel/ML codes, while Nvidia will further monopolise the cloud.
 

Romain_H

macrumors 6502a
Sep 20, 2021
Don't think you got my point.

Anyway, it just means that the way Apple is heading is not for you. And it's good that you found your solution.

The world is big enough for many different solutions to exist.
Sure, I don't dispute that.

Still kinda sad - I prefer macOS over other platforms. I cannot see a good reason why Apple's flagship product of 10 years ago can do what the brand-spanking-new one can't.
I cannot see how this is "evolving". (Oh, the irony: my field of research/development is genetic/evolutionary algorithms, which are all about "evolving".)
 

leman

macrumors Core
Oct 14, 2008
Apple's platform did not evolve at all; if it had, I could still use an MP, after all.

Just because they are not offering a tool that's suitable for your needs does not mean that they are not evolving. Staying with the old paradigm would likely have spelled the end of the Mac. Apple is playing the long game. They are carefully playing to their strengths to generate enough sales (currently targeting photo/video professionals and software developers) while working on their weak points. There is good reason to believe that their next GPU will be an extremely capable raytracing machine, for example, which would let them take the production rendering market as well. Next is in-memory compute, which will massively speed up ML inference. These are all things at which Apple is uniquely poised to succeed, simply because of their vertical integration strategy.*

* By the way, this is also why Nvidia was able to succeed — they integrated ML capabilities into a GPU that already had high-bandwidth memory. But Nvidia is standing in front of a performance plateau as problem sizes outpace the technology. Hence their aggressive push into high-bandwidth CPU/GPU and NUMA interfaces. If you look at it carefully, both Apple Silicon and Grace/Hopper solve the same problem — only the first focuses on local/handheld devices and the second on large-scale datacenters.
 

bcortens

macrumors 65816
Aug 16, 2007
I'd prefer a GPGPU solution that's not vendor/OS locked. Can you name any that are still relevant?

Since OpenCL is dead for practical purposes, CUDA is the obvious choice. In comparison to Metal, using the given metaphor, CUDA is not an island but the sea.

Yes, I don't like the situation, but it is what it is.
How is CUDA not vendor locked? Can you run it on Intel GPUs? AMD GPUs?
 

bcortens

macrumors 65816
Aug 16, 2007
Doesn't look like Apple cares about PCIe for high-speed data transfers. They are all-in with UMA.

Edit: I think you mistakenly attribute the compute power of nVidia's GPUs to PCIe. In fact, I would say PCIe is holding back nVidia GPUs' potential.
Agreed. Even NVIDIA thinks PCIe is too slow: the Grace Hopper superchip doesn't use generic PCIe to communicate, but their proprietary NVLink.
 

leman

macrumors Core
Oct 14, 2008
There is zero chance Apple will "partially dominate" the desktop with their current strategy. Their pricing alone deters price-conscious users.

There's nothing wrong with that strategy, of course. But it's foolish to believe they will dominate a large slice of the market with it, especially outside the US.

I am talking about the relevant portion of the market: users who set and drive trends. I think it is possible that in a couple of years a $4K Studio will outperform a large tower PC in rendering tasks. You are simply not going to get a fast RT-capable GPU with 64GB of RAM any time soon.


I think their "mercy period" has expired. Even more so because they seem to have isolated themselves, making few moves to either engage with the x86 market or offer exclusive products. Sure, they have released their Game Porting Toolkit to make it easier to play x86 games, but it was meant for developers and does little to make things easier for end users.

Their whole strategy of ignoring the competition assumed that their product was so much better that developers and users wouldn't be able to ignore the performance difference and would migrate to an Apple product. But what is happening is quite the opposite: the gap is turning against Apple.

It seems that our evaluations of the current situation differ dramatically. I see Apple Silicon successfully establishing itself in photo and video work, for example, with Apple laptops receiving better usability marks than larger and (at least on paper) more capable desktop alternatives. Software adoption of Apple's ARM hardware and technology stack has been moving forward at extreme speed. Sure, it's lagging in gaming, but gaming is hardly indicative of overall progress.

While the competition is not improving efficiency at the same speed the Mx architecture is, they are still moving in that direction. You mentioned it yourself with AMD.

Intel is certainly not moving in that direction; they are doing exactly what they've been doing for the last 8 years — increasing power consumption for more performance in the hopes that this will drive sales. This strategy won't work much longer, and it doesn't seem like they have an alternative plan (although they could have a dramatic comeback with their next-gen node). AMD's main concern is manufacturing cost, and that alone makes it difficult for them to meaningfully innovate, IMO.

What happens if AMD manages to reach the same manufacturing node Apple uses before Apple can scale up their solution?

AMD has been on the same node as Apple for a while now. As has Nvidia, by the way. And I doubt that Apple will simply scale up their current solution. The M1/M2 family is just the first experiment, a direct application of smartphone technology to a desktop platform. The second step will be qualitatively different.
 

bcortens

macrumors 65816
Aug 16, 2007
I nowhere claimed it isn't.
You said you wanted a GPGPU solution that wasn't vendor locked, then called CUDA the sea (not an island), which implies that CUDA is some sort of standard. And yes, it can be a de facto standard, but that doesn't change the fact that as long as it isn't a real standard adopted by all vendors, it won't be all that useful outside its niche (not saying its niche isn't important, but it is still just a niche).
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
And how well are those supported on AMD/Intel/Nvidia/Apple GPUs? How well are they supported on Linux/Windows/macOS/BSD?
How do they compare performance-wise?
Intel is the company that is driving the development of SYCL the most.

It works very well on Intel GPUs, and well on AMD and Nvidia GPUs.

It also works on FPGAs and other accelerators.
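For a flavour of the portability, here's a minimal, untested SYCL 2020 sketch; the same source should compile for Intel, AMD and Nvidia GPUs given a toolchain like oneAPI DPC++ or AdaptiveCpp:

```cpp
// Untested sketch: one SYCL source file, whichever backend is present.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // default selector picks whatever device is available
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        // Buffers hand the data to the runtime for the scope below.
        sycl::buffer<float> bufA(a), bufB(b), bufC(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(1024),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffer destructors copy the results back into the vectors
    std::cout << "c[0] = " << c[0] << "\n";  // prints 3
    return 0;
}
```

The buffer/accessor model lets the runtime schedule the transfers; SYCL 2020's USM pointers are the alternative if you want more CUDA-like code.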
 

Romain_H

macrumors 6502a
Sep 20, 2021
You said you wanted a GPGPU solution that wasn't vendor locked, then called CUDA the sea (not an island), which implies that CUDA is some sort of standard.
Doesn't imply that.
It just means that, for lack of an open standard working on all kinds of devices and operating systems, one has to make a choice.
The obvious one would be CUDA.

Again: if there's a competitive open multi-platform solution, I'd like to hear about it.
 

Romain_H

macrumors 6502a
Sep 20, 2021
Intel is the company that is driving the development of SYCL the most.

It works very well on Intel GPUs, and well on AMD and Nvidia GPUs.

It also works on FPGAs and other accelerators.
Interesting... Apple seems to be missing from the list?
 

leman

macrumors Core
Oct 14, 2008
You said you wanted a GPGPU solution that wasn't vendor locked, then called CUDA the sea (not an island), which implies that CUDA is some sort of standard. And yes, it can be a de facto standard, but that doesn't change the fact that as long as it isn't a real standard adopted by all vendors, it won't be all that useful outside its niche (not saying its niche isn't important, but it is still just a niche).

I think you might have misread what @Romain_H was trying to say. I understand it as "pretty much all viable solutions are proprietary, and CUDA is by far the most mature and best supported".
 

TechnoMonk

macrumors 68030
Oct 15, 2022
I'd prefer a GPGPU solution that's not vendor/OS locked. Can you name any that are still relevant?

Since OpenCL is dead for practical purposes, CUDA is the obvious choice. In comparison to Metal, using the given metaphor, CUDA is not an island but the sea.

Yes, I don't like the situation, but it is what it is.
CUDA should either die or morph into an open solution.
 

JouniS

macrumors 6502a
Nov 22, 2020
I think it is likely that the market situation will change significantly in the next couple of years, and that there is a good chance that Apple will establish themselves as the standard local development platform for parallel/ML codes, while Nvidia will further monopolise the cloud.
I don't see that happening. Developers like using hardware that is similar to the production environment because it makes things easier. Why waste time supporting weird hardware with weird interfaces if those don't exist in production?
 