
deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Depends. Apple might just put 128 cores on an Apple GPU. It would be efficient, but it would be tied to Metal the way NVIDIA cards are tied to CUDA. They could also do their own custom dGPU like the "Lifuka" rumor was talking about.

Lifuka isn't necessarily a dGPU. Lifuka is an island that is part of Tonga. It isn't an independent place.
Lifuka could be the chiplet used to expand GPU core count (if the code words relate to the actual places in a similar way). It probably isn't hooked to the main SoC with plain PCIe links at all. Pragmatically, it wouldn't be a discrete GPU if there is some proprietary inter-chip connection between them.

It would be somewhat like a discrete GPU in that there is a decent chance the memory controllers are on the chiplet. But that memory is likely coherently cached and accessible from the other chiplet as well.

Chiplets usually aren't really functional all by themselves. They have to go into a package with something else that "completes" the system on a chip. So applying "discrete" to them is not quite right.

If not a chiplet, Lifuka could easily be a major area on a monolithic die (it's easier to 'glue' things together when they're on the same die).




It seems like Apple is not going back to AMD for pro GPUs.

They just released RX 6800-6900 drivers, so they do not have to "go back" because they haven't left yet.

M-series not getting Intel or AMD drivers? That's probably as much Apple herding developers into optimizing for the 128-core Apple GPU (which makes some substantive trade-offs) as it is about necessarily leaving AMD forever. There is also the new objective for M-series Macs to run native iPhone apps as fast as possible (an even bigger optimization mismatch between the iPhone app code base and AMD/Intel GPUs).

Apple has a substantive issue to uncork (or it can just ban native iPhone apps from screens driven by non-Apple GPUs). They aren't in a hurry to resolve it, but eventually there will probably be pressure to "do something". Integrated GPUs don't scale to higher core counts. Apple is going to scale higher than anyone has before, but there are scaling problems once the workload shifts more toward GPGPU. Running iPhone games faster probably isn't going to be a big motivator for the largest Mac Pro option (and definitely not now on the Mac Pro 2019).
 
  • Like
Reactions: jdb8167

Hexley

Suspended
Jun 10, 2009
1,641
505
I think people will hate you for telling them to wait for what they actually need.

I call these people... people who have no friends.
 
  • Like
Reactions: JMacHack

09872738

Cancelled
Feb 12, 2005
1,270
2,125
They will simply scale up their existing architecture and add CPU, GPU, and ML cores along with more cache and RAM. They have been crystal clear that their model is an SoC with unified memory going forward, and they have no reason to break that model.
How can this work in a Mac Pro scenario? I agree that what you describe may be the target. However, the Mac Pro audience might want/demand swappable GPUs and/or may require multi-GPU setups.

I doubt the M1 model can scale up that high. If it can, I wonder how that feat would be accomplished.
 
  • Like
Reactions: Flint Ironstag

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Multi-GPU has no future in the consumer space, as GPUs are more than fast enough. But it is still important in the professional workstation world.

But is a $7K+ priced Mac Pro in the "consumer space"? The $12K variant?

If Apple pigeon-holed the non-iGPU as a GPGPU compute card (not complete Metal "draw" support, but Metal compute), that would make more sense than cutting it off completely forever. At some point integrated runs into a scaling wall. At some point a single GPU programming model that spans from watches to the top end of the GPU spectrum will have a mismatch.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
But is a $7K+ priced Mac Pro in the "consumer space"? The $12K variant?

If Apple pigeon-holed the non-iGPU as a GPGPU compute card (not complete Metal "draw" support, but Metal compute), that would make more sense than cutting it off completely forever. At some point integrated runs into a scaling wall. At some point a single GPU programming model that spans from watches to the top end of the GPU spectrum will have a mismatch.

Not if the modular compute board includes CPU, RAM, etc. A dedicated GPU is a dead concept on Apple platforms. They can't ask people to rewrite their pro apps to take advantage of Apple's GPU tech only to say, oh, sorry, you have to rewrite them back for the new Mac Pro.
 
  • Like
Reactions: alex00100

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Not if the modular compute board includes CPU, RAM, etc. A dedicated GPU is a dead concept on Apple platforms. They can't ask people to rewrite their pro apps to take advantage of Apple's GPU tech only to say, oh, sorry, you have to rewrite them back for the new Mac Pro.

Err, a couple of years ago Apple told folks to throw away their 32-bit code. Apple "spends" developer dollars (other people's money) on a regular basis. They asked folks to write for three GPU implementations before. It happens on Windows and Linux all the time. Yet the sky will fall if developers have to write more than one optimization pipeline on M-series? Probably not. The ones for Intel and AMD are already there. That's the bigger issue Apple has: changing inertia, not that multiple models were too hard.

A detached compute board doesn't match their security model at all.

The farther apart these compute cards get, the more latency will creep in, and Apple's wide pipelines will hiccup on data swaps.

There is no free lunch here.

Apple needs folks to do major optimizations to tiptoe around substantive trade-offs they have made that impact several workloads. Short term, they want developer focus there. DaVinci Resolve's developers aren't going to forget that they need to put dGPU scaling into their Windows port. That option will be there long term, and it is a stick that Windows vendors will use to move customers over time if Apple blocks it forever.


There is a decent chance Apple will publicly squat on letting dGPUs back in until macOS 13 (or 14). There is a loophole for writing kernel drivers that they support now, and it will close for PCIe cards in one or two iterations. Apple probably doesn't want GPU drivers that will disappear in 1-2 years. AMD (and Intel) probably don't want to dump money into that short-term hack either.

But after that, all Apple is doing is pissing away Thunderbolt functionality, which over the long term is deeply misguided competitively. (Apple laptops may have a graphics edge now, but that edge isn't going to last 3-6 years.) The same goes for whatever Mac Pro they do and its GPGPU workloads 2-5 years down the road.


There was a WWDC session on Friday where Apple was crowing about how great it is that their TensorFlow port can see and use a GPU (now, in 2021). As if folks weren't asking for that 2-4 years ago. That's the kind of competitive timeliness you get when Apple goes down the Apple-only hardware rabbit hole.
 
  • Like
Reactions: alex00100

Flint Ironstag

macrumors 65816
Dec 1, 2013
1,334
744
Houston, TX USA

You keep saying that, but it is a facially false statement. Show me *any* task that you allege requires multiple GPUs.

And then I will explain to you that a *single* GPU with equivalent performance to those 4 GPUs will do the job just as well. And you won't be able to prove me wrong.
@cmaier, please download hashcat, crack some hashes, and get back to us. We eagerly await your findings.
But that's assuming a false premise. Why do you think that if you have one GPU you can only do one attack at a time? Once again, it's perfectly possible for one Apple GPU to have identical performance characteristics to 4 [fill in blank] GPUs, including the ability to process in parallel.

As for the rest, that's a non sequitur. I admit that for *any* GPU, if 1 is good, 4 is likely better (putting aside power usage/heat). But that's never been what this discussion is about.
Yes, yes, please try hashcat and let me know.
Who is limiting you to running a single attack at a time? I don't want to sound condescending, but have you ever programmed a GPU? These are massively parallel processors, and it's exceedingly difficult (and inefficient) to run a single task on them to begin with. If you want good performance, you will be running tens of thousands of hash attempts at once on a single large GPU.
@leman, condescending isn't the word I'd use. You should also try hashcat and report your findings here, in public.
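(An aside on what "massively parallel" means in practice: below is a toy Swift/Metal sketch that launches roughly a million independent hash-like attempts in a single dispatch on one GPU. The kernel is a stand-in, not a real hash, and names like toy_hash are purely illustrative.)

```swift
import Metal

// A throwaway kernel: each GPU thread scrambles its own candidate index.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void toy_hash(device uint *out [[buffer(0)]],
                     uint id [[thread_position_in_grid]]) {
    uint h = id;
    for (uint i = 0; i < 16; ++i) { h = h * 2654435761u + i; }
    out[id] = h;
}
"""

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "toy_hash")!)

let count = 1 << 20                                   // ~1M candidates in one dispatch
let results = device.makeBuffer(length: count * MemoryLayout<UInt32>.stride,
                                options: .storageModeShared)!

let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(results, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
```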
I want my original thought for a Mac Pro Cube...

Mac mini footprint, and extend the height to the same dims as said footprint; you know, so it's a Cube...

"Beefy" (for ASi needs) integrated PSU, basic Mx? APU on main logic board with SSD(s), MLB connects to ultra-high-speed backplane; options for add-in cards (CPU core cards, GPU core cards, Neural Engine core cards, SSD cards, maybe even an A/V I/O card)...

The new personal workstation, customized to meet your assorted professional needs...! ;^p
You want a NeXT Cube - heh, I like that form factor as well. The add-in boards at the time were pretty high end. Good times.
It could be PCIe electrically and signal wise, but not physically. Apple could create a proprietary slot.

They could also make their own bus.
Please no...
Multi-GPU has no future in the consumer space, as GPUs are more than fast enough. But it is still important in the professional workstation world.
640k ought to be enough for anybody! :D
 

Blue Quark

macrumors regular
Oct 25, 2020
196
147
Probabilistic
Vis-à-vis NVIDIA, Apple wouldn't be the only one with a long memory. I still love Linus Torvalds' well-documented "reaction" to them.

Apple's going their own direction here, as well they should, particularly if they can do just as well without them.

When I eventually build my next desktop, it's highly unlikely to have anything but an AMD graphics card in it.
 
  • Like
Reactions: JMacHack

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Err, a couple of years ago Apple told folks to throw away their 32-bit code. Apple "spends" developer dollars (other people's money) on a regular basis. They asked folks to write for three GPU implementations before. It happens on Windows and Linux all the time. Yet the sky will fall if developers have to write more than one optimization pipeline on M-series? Probably not. The ones for Intel and AMD are already there. That's the bigger issue Apple has: changing inertia, not that multiple models were too hard.

You are confusing changes that are necessary (or at least motivated) with changes that are arbitrary. Removing 32-bit support was long overdue and enabled better hardware and software. Some rewriting of professional apps is required to take better advantage of Apple GPUs. And in the case of the Mac Pro, if Apple decides to implement a NUMA architecture, some API adoption will be required as well in order to take advantage of that aspect of the system.

Watch the WWDC session on optimizing image-processing apps for Apple Silicon. It would be completely unreasonable for Apple to ask the devs to implement these changes now and then say next year, "oh, by the way, it was all a joke, now you have to change your apps back to work with a dGPU". One of the advantages of Apple Silicon is the streamlined programming model, and Apple would be complete fools if they didn't see it through. Now compare that with "our Mac Pro features multiple compute boards, so if you want to take advantage of all that extra processing power, please make this simple change to your app that will allow it to redistribute threads and compute kernels across multiple boards using the same API you were using before"; that's something else entirely.

A detached compute board doesn't match their security model at all.

The farther apart these compute cards get, the more latency will creep in, and Apple's wide pipelines will hiccup on data swaps.

I really don't see it that way. An explicit NUMA API would task the developer with planning the costs appropriately. Look at how multi-GPU is currently implemented in Metal for an idea.
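For anyone curious, here is a minimal Swift sketch of how Metal already exposes multiple GPUs on macOS: the app enumerates the devices, creates a queue per device, and explicitly decides how the work is split. The partitioning below is purely illustrative, and the compute pipeline setup is omitted.

```swift
import Metal

// Enumerate every GPU Metal can see on this Mac (integrated, discrete, eGPU).
let devices = MTLCopyAllDevices()
for device in devices {
    print("\(device.name), peer group \(device.peerGroupID)")
}

// One command queue per GPU; the app, not the OS, decides how work is split.
let queues = devices.compactMap { $0.makeCommandQueue() }

// Illustrative partitioning of N work items across the available GPUs.
let totalItems = 1_000_000
let chunk = totalItems / max(queues.count, 1)
for (index, queue) in queues.enumerated() {
    let start = index * chunk
    let count = (index == queues.count - 1) ? totalItems - start : chunk
    guard let commandBuffer = queue.makeCommandBuffer() else { continue }
    // ... encode a compute pass here covering items start..<(start + count) ...
    _ = (start, count)   // placeholder; pipeline state and buffers omitted
    commandBuffer.commit()
}
```

A NUMA-style Mac Pro would presumably layer something similar on top: the API names the boards, and the developer places the work and the data explicitly.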

There was a WWDC session on Friday where Apple was crowing about how great it is that their TensorFlow port can see and use a GPU (now, in 2021). As if folks weren't asking for that 2-4 years ago. That's the kind of competitive timeliness you get when Apple goes down the Apple-only hardware rabbit hole.

2-4 years ago they didn't have their own hardware on the Mac side. I hope that things will speed up from here. Besides, the APIs are there; anyone can implement a backend for their favorite ML framework now.

@cmaier, please download hashcat, crack some hashes, and get back to us. We eagerly await your findings.

Yes, yes, please try hashcat and let me know.

@leman, condescending isn't the word I'd use. You should also try hashcat and report your findings here, in public.

I am sorry, what exactly do you want us to try and report? I don't have a bunch of high-end GPUs and a multi-GPU workstation lying around. You are the one making outlandish claims about multi-GPU scaling; why don't you "report"?

640k ought to be enough for anybody! :D

There is no free lunch; everything is a tradeoff. If Apple can theoretically ship a single-GPU system that offers performance competitive with most multi-GPU workstations on the market, what else do you want?


I think the GPU cores in the M1 are dedicated. They are just located on the same die.

The industry-standard definition of a dedicated GPU is a GPU that is a separate device (that is, it has its own physical memory pool and connects to the rest of the system via an expansion bus). So any PCIe GPU with its own RAM = dedicated; the M1, Intel/AMD APUs, and modern gaming consoles = integrated.

Frankly, this definition is not helpful as it has nothing to do with performance.
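To make that definition concrete, here is a rough Swift sketch using real MTLDevice properties; the labelling logic is just an illustration of the definition above, not an official classification.

```swift
import Metal

for device in MTLCopyAllDevices() {
    let kind: String
    if device.hasUnifiedMemory {
        // Shares one memory pool with the CPU (e.g. M1, Intel iGPUs).
        kind = "integrated"
    } else if device.isRemovable {
        // Has its own VRAM and hangs off Thunderbolt (eGPU).
        kind = "dedicated (external)"
    } else {
        // Has its own VRAM on an internal expansion bus.
        kind = "dedicated"
    }
    print("\(device.name): \(kind)")
}
```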
 
  • Like
Reactions: JMacHack

09872738

Cancelled
Feb 12, 2005
1,270
2,125
Yes, yes, please try hashcat and let me know.
I used hashcat on 2 GTX 1070s the other day, and yes, as far as I remember hashcat scales close to linearly.
However, a single GPU twice as fast as a 1070 would have matched the pair just as well. Maybe I am missing your point; could you clarify?
 

Bug-Creator

macrumors 68000
May 30, 2011
1,785
4,717
Germany
There will always be some who need more than an M3xXx could offer and who would like to run either two of them or something 3rd-party (aka NVIDIA/AMD), but that number is dwindling.

Tasks that really required a Mac Pro 10 years ago can now (and not just "now", but also with the last generation of Intel machines) be done with an iMac, MBP, or Mini. That trend is going to continue, and hence the question is: at what point does it make sense for Apple to create hardware (which they can't reuse anywhere else, top to bottom) for a smaller and smaller clientele?
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
No display output, no need for ROPs/TMUs or anything else that helps put graphics on a display.

Well, Apple just last week released a WWDC talk where they tell developers of professional image-manipulation apps to use the graphics pipeline for better performance and efficiency, so probably not :)
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,665
OBX
Well, Apple just last week released a WWDC talk where they tell developers of professional image-manipulation apps to use the graphics pipeline for better performance and efficiency, so probably not :)
So those apps don't use compute? Or they use both, where having the compute part of their app offload to an ACE would help the graphics part?
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
So those apps don't use compute? Or they use both, where having the compute part of their app offload to an ACE would help the graphics part?

Well, the difference between graphics and compute is academic at best. The point Apple was making in the talk is that using the graphics pipeline on their TBDR hardware will automatically utilize the on-chip cache for pixel processing, giving you major efficiency and performance benefits in many cases.

In the end, making a distinction between graphics and compute hardware only complicates the programming model and does not have any advantages. Apple GPUs are heavily compute-oriented, but they also come with some unique advantages in the graphics department. One should just take advantage of whatever makes your job as a developer simpler and whatever makes your app run better. This dualism between compute and graphics is the main reason why I don't see Apple introducing compute-only GPUs: giving up the graphics pipeline kind of means giving up half of your compute functionality.
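One concrete example of that on-chip (tile) memory: on Apple's TBDR GPUs an intermediate render target can be declared memoryless, so it lives only in tile memory and never gets written out to RAM. A minimal Swift sketch, assuming an Apple GPU (resolution and pixel format are arbitrary):

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }

// Intermediate attachment that exists only in on-chip tile memory.
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                    width: 1920,
                                                    height: 1080,
                                                    mipmapped: false)
desc.usage = [.renderTarget]
desc.storageMode = .memoryless          // valid only on TBDR (Apple) GPUs

let intermediate = device.makeTexture(descriptor: desc)

let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = intermediate
pass.colorAttachments[0].loadAction = .clear
pass.colorAttachments[0].storeAction = .dontCare    // result never leaves the chip
```

A compute-only card would give up exactly this kind of render-pass plumbing, which is the trade-off being described.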
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
@cmaier, please download hashcat, crack some hashes, and get back to us. We eagerly await your findings.

Yes, yes, please try hashcat and let me know.

@leman, condescending isn't the word I'd use. You should also try hashcat and report your findings here, in public.

You want a NeXT Cube - heh, I like that form factor as well. The add-in boards at the time were pretty high end. Good times.

Please no...

640k ought to be enough for anybody! :D

I don't understand your point re hashcat. If I have four GPUs, it runs 4x as fast. But if I have a single GPU that is 4x the speed, it also runs 4x as fast. So how does this not prove my point?
 

Maconplasma

Cancelled
Sep 15, 2020
2,489
2,215
Interesting thread, because as an owner of the 16" MBP I was wondering how Apple will create the Apple Silicon version with a high-end mobile GPU.

I will say it's unfortunate that this forum got duped by the OP. Notice how the OP started a new account, wrote a one-liner that has caused 5 pages of discussion, and never returned. I think the forum should recognize when new members start threads and never return. It's obvious they are attempting to strike a nerve, especially when the OP even mentioned he's a Windows fanboy and threw his new gaming PC, with the make, model, and GPU type, in the face of the forum. SMH.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Interesting thread, because as an owner of the 16" MBP I was wondering how Apple will create the Apple Silicon version with a high-end mobile GPU.

That I have little doubt about. The GPUs in the upcoming MacBook Pros will be very competitive and much faster than what you usually find in laptops of this size and battery life.
 
  • Like
Reactions: JMacHack

hans1972

Suspended
Apr 5, 2010
3,759
3,399
Can't explain it to others? I'm done then. Good luck.

Let's say your GPU has a performance of x.

Performance of 4 such GPUs: x + x + x + x = 4x

cmaier's imagined GPU has four times the power of one such GPU: x * 4 = 4x

Since 4x = 4x it doesn't matter if you have 4 GPUs or one great GPU.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Let's say your GPU has a performance of x.

Performance of 4 such GPUs: x + x + x + x = 4x

cmaier's imagined GPU has four times the power of one such GPU: x * 4 = 4x

Since 4x = 4x it doesn't matter if you have 4 GPUs or one great GPU.
Oh, no, don’t start this again. It’s a pointless battle.
 
  • Like
Reactions: JMacHack