
crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
What makes you think the SoC isn't as powerful as an external GPU? Apple could very well target the 3090 and if they can do it, game over. There's still the market of data centers, but that's never been interesting for Apple.

Apple could bring back a lot of things, including OpenGL and Vulkan. I doubt they will.
Sure, future SoC or in-package GPUs may very well be that powerful (or more!), but as good as it is for its weight class, the M1's isn't. An eGPU could be powerful enough to be worth it for M1 Macs as well as future generations of Air, mini, and small pro machines. That doesn't necessarily mean that Apple will enable that, but I don't understand Unregistered 4U's point about why an eGPU would necessarily have a bigger performance hit on Apple Silicon than on a comparable Intel Mac.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Not specifically UMA, as external GPUs COULD have access to the memory; it would be more that Thunderbolt, regardless of how fast, won't come anywhere near the access speed that the SoC provides. It actually has more to do with the rendering technology, TBDR vs. IMR. Apple has stated that rendering for Apple Silicon will be TBDR, which comes with a lot of assumptions, including that there must be a very fast connection to a common pool of memory.

Apple COULD, at any time, change this stance, of course. BUT, as long as they stick with TBDR, then certain potential options would be far less likely due to the performance hit.

Strictly speaking, TBDR is orthogonal to whether memory is shared or not. It is a rendering algorithm, not a CPU/GPU communication algorithm. There is no reason why a dGPU with a dedicated memory pool could not be a TBDR renderer.

Thunderbolt is a limiting factor of course, but then again there are applications like gaming where this is not that crucial.

But I digress. I completely agree that Apple won't offer eGPU support on Apple Silicon, just for a different reason.

I would guess that the lack of eGPU support was more of a business than a technical decision. There may be too few users using eGPUs to make supporting them in the first release worthwhile. Apple may not want to continue tasking engineers with writing the drivers and supporting them. Everyone tries to get their features into each release, but it just isn't possible to include everything and maintain quality. Tough decisions get made and features get bumped.

I think that Apple simply wants Mac developers to design software that utilizes their proprietary technology. Apple GPUs have a lot of properties that third-party GPUs don't have, like low-latency CPU/GPU communication, zero-copy data sharing, TBDR, persistent shader memory allocations, an actually usable sparse memory model, a unique implementation of variable-rate shading, etc. If they allow third-party GPUs, they make the programming model more complicated. In the end, this is about software quality and getting the most out of the system. A streamlined GPU programming model means that you can develop and test on one machine and deploy to the entire ecosystem without worrying that you will run into weird driver or hardware behavior. One can criticize Apple's walled garden, but there are definite advantages to vertical integration.
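To make the zero-copy point a bit more concrete, here is a minimal Swift sketch of my own (nothing beyond the standard Metal API; the buffer contents are placeholders). With unified memory, a shared-storage MTLBuffer is one and the same allocation for the CPU and the GPU, so nothing ever has to be copied across a bus:

```swift
import Metal

// Minimal sketch of zero-copy CPU/GPU sharing on Apple Silicon.
// A .storageModeShared buffer is visible to both the CPU and the GPU
// without any staging buffer or blit step.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

var input: [Float] = (0..<1024).map { Float($0) }

// The GPU reads this allocation directly once it is bound to a shader.
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU reads through the same pointer (after any GPU work using it completes).
let contents = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print(contents[0])
```

On a discrete GPU you would typically use a private buffer plus an explicit blit to get the data into VRAM; the unified-memory model is exactly what removes that step.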
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Strictly speaking, TBDR is orthogonal to whether memory is shared or not. It is a rendering algorithm, not a CPU/GPU communication algorithm. There is no reason why a dGPU with a dedicated memory pool could not be a TBDR renderer.

Thunderbolt is a limiting factor of course, but then again there are applications like gaming where this is not that crucial.

But I digress. I completely agree that Apple won't offer eGPU support on Apple Silicon, just for a different reason.



I think that Apple simply wants Mac developers to design software that utilizes their proprietary technology. Apple GPUs have a lot of properties that third-party GPUs don't have, like low-latency CPU/GPU communication, zero-copy data sharing, TBDR, persistent shader memory allocations, an actually usable sparse memory model, a unique implementation of variable-rate shading, etc. If they allow third-party GPUs, they make the programming model more complicated. In the end, this is about software quality and getting the most out of the system. A streamlined GPU programming model means that you can develop and test on one machine and deploy to the entire ecosystem without worrying that you will run into weird driver or hardware behavior. One can criticize Apple's walled garden, but there are definite advantages to vertical integration.

I agree with much of what you say, but some of these intricacies should be hidden by the API, no? And I mean right now developers have to develop for both Intel Macs and AS Macs if they want to reach all Mac users, so for the foreseeable future they'll have to keep much of what is not dealt with by the API in mind anyway. (Unless they target only AS Macs, which some will, especially those porting iDevice apps.)
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I agree with much of what you say, but some of these intricacies should be hidden by the API, no?

Yes, of course, and that's how Metal works right now. But if you want to get the most out of the Apple GPUs you have to design your rendering algorithms with their unique properties in mind. You can get major performance and quality benefits that way, but it requires explicit effort.
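For example, one TBDR-specific trick (a hypothetical sketch using only the standard Metal API, with placeholder sizes): an intermediate attachment such as a depth buffer can be declared memoryless, so it lives entirely in on-chip tile memory and is never allocated in, or written back to, system memory.

```swift
import Metal

// Sketch: a depth attachment that exists only in tile memory on Apple GPUs.
// The memoryless storage mode means no full-size depth texture is ever
// allocated in (or written back to) system memory.
let device = MTLCreateSystemDefaultDevice()!

let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                         width: 1920,
                                                         height: 1080,
                                                         mipmapped: false)
depthDesc.usage = .renderTarget
depthDesc.storageMode = .memoryless                // tile memory only, no backing allocation

let depthTexture = device.makeTexture(descriptor: depthDesc)!

let passDesc = MTLRenderPassDescriptor()
passDesc.depthAttachment.texture = depthTexture
passDesc.depthAttachment.loadAction = .clear
passDesc.depthAttachment.storeAction = .dontCare   // required for memoryless attachments
```

You can't do that on an IMR card, where the depth buffer has to live in VRAM, and it's the kind of win you only get by explicitly targeting Apple's hardware.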

And I mean right now developers have to develop for both Intel Macs and AS Macs if they want to reach all Mac users, so for the foreseeable future they'll have to keep much of what is not dealt with by the API in mind anyway. (Unless they target only AS Macs, which some will, especially those porting iDevice apps.)

True, and that's why graphical stuff runs slower on M1 than it could and why developing graphical applications is more complicated than it should be.

A TBDR GPU is mostly API-compatible with a forward-rendering GPU, so if you write software for both, it is sufficient to write it assuming that you have a good old Nvidia or AMD GPU running your game or whatever. But Intel Macs are a dying breed. A couple of years from now they will be rare creatures, Apple Silicon Macs are going to be much more performant, and Metal will give you even more access to the hardware (now that Apple no longer has to support third-party GPUs, they can really go crazy and potentially offer the devs console-level control). They absolutely will want you to code for their hardware, using approaches that their hardware does best, and what better way to do it than to promise that all hardware will have unified capabilities?
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,629
I'm not quite sure what you're trying to say. Metal works with both TBDR and IMR. While of course an eGPU suffers from having to connect over TB versus being on the SoC, a powerful enough GPU will still be worth it just as it was on Intel Macs. As @leman says, there may be other reasons why Apple would want to discourage eGPUs, but I don't see why an eGPU would necessarily be too slow to be useful on an M1 Mac as compared to an Intel Mac. Could you explain further?
Metal on Apple Silicon specifically only uses TBDR, so IMR doesn't factor into Apple Silicon unless and until Apple indicates otherwise. TBDR, specifically as Apple has implemented it (where the GPU has only a small amount of on-chip tile memory and otherwise shares main memory with the CPU), requires a fast interface. Thunderbolt doesn't come close to the bandwidth of the SoC solution.

Again, Apple could potentially change the way they’ve implemented TBDR (perhaps at WWDC). But, as it’s currently implemented, Apple’s TBDR solution would not work well at all over an external bus.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Metal on Apple Silicon specifically only uses TBDR, so IMR doesn't factor into Apple Silicon unless and until Apple indicates otherwise. TBDR, specifically as Apple has implemented it (where the GPU has only a small amount of on-chip tile memory and otherwise shares main memory with the CPU), requires a fast interface. Thunderbolt doesn't come close to the bandwidth of the SoC solution.

Again, Apple could potentially change the way they’ve implemented TBDR (perhaps at WWDC). But, as it’s currently implemented, Apple’s TBDR solution would not work well at all over an external bus.

I don't think anyone is talking about the possibility of using an Apple GPU via Thunderbolt. This is about using third-party GPUs as eGPUs.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
Also for home use, where downtime is acceptable and playing some games is the primary use, none of this matters either.

I dunno, I do play Destiny 2 from time to time, and my build was done when the Ryzen 3600 came out. I got bit and had to wait out a multi-month finger-pointing and troubleshooting period before a fix was released. The USB issues with the 5000 series and certain mobos have taken ~4 months to resolve as well.

That sort of turnaround isn't great in this use case either, IMO.

now that Apple no longer has to support third-party GPUs, they can really go crazy and potentially offer the devs console-level control

This would be a case of Apple out-Microsofting Microsoft if they pull that off. :)
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Yes, of course, and that's how Metal works right now. But if you want to get the most out of the Apple GPUs you have to design your rendering algorithms with their unique properties in mind. You can get major performance and quality benefits that way, but it requires explicit effort.



True, and that's why graphical stuff runs slower on M1 than it could and why developing graphical applications is more complicated than it should be.

A TBDR GPU is mostly API-compatible with a forward-rendering GPU, so if you write software for both, it is sufficient to write it assuming that you have a good old Nvidia or AMD GPU running your game or whatever. But Intel Macs are a dying breed. A couple of years from now they will be rare creatures, Apple Silicon Macs are going to be much more performant, and Metal will give you even more access to the hardware (now that Apple no longer has to support third-party GPUs, they can really go crazy and potentially offer the devs console-level control). They absolutely will want you to code for their hardware, using approaches that their hardware does best, and what better way to do it than to promise that all hardware will have unified capabilities?

Absolutely ... "compatible with" vs. "optimized for", and I agree with you that Apple would like to make sure that graphics are optimized for their internal GPUs. I'm not 100% certain that third-party eGPUs would break that, as they would be a rarity, but I can agree with the logic that Apple won't be rushing to support them either.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
This would be a case of Apple out-Microsofting Microsoft if they pull that off. :)

Microsoft does not build their own GPUs ;) DirectX still has to take into account a wide range of hardware (same for Vulkan), and you can already see it reflected in the API differences.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Microsoft does not build their own GPUs ;) DirectX still has to take into account a wide range of hardware (same for Vulkan), and you can already see it reflected in the API differences.

Aye, which is why for Xbox- (and PS-) optimized apps devs will go beyond the standard APIs. Of course, under your proposal they wouldn't have to go beyond the API to optimize for Apple Silicon (though there is still a small range in hardware capabilities, obviously, so not quite console level, but close).

I wonder if that means, unlike Swift, Apple won't open-source Metal? Or is that orthogonal, as Apple can always ensure AS optimizations are part of that standard?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I wonder if that means, unlike Swift, Apple won't open-source Metal? Or is that orthogonal, as Apple can always ensure AS optimizations are part of that standard?

Why would they ever want to open source Metal? What is the benefit to Apple or someone else?
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Why would they ever want to open source Metal? What is the benefit to Apple or someone else?

Like with Swift, I was thinking greater potential support for it, more people using it, etc. But maybe for a graphics API that isn't as much of a benefit.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
Like with Swift, I was thinking greater potential support for it, more people using it, etc. But maybe for a graphics API that isn't as much of a benefit.
Since the recent US Supreme Court ruling on Oracle v. Google, anyone can take the Metal APIs and write their own compatible stack if there is some call for such a thing. I can perhaps see, if someone had an iOS game on Metal and wanted to port it to DirectX, having a Metal wrapper around DirectX. I have no idea what kind of effort that would be, though.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
OpenGL remains supported on Apple Silicon. I don’t think Apple ever supported Vulkan. Vulkan is supported via MoltenVK.
Sorry, bad choice of words. What I meant was bring back OpenGL with full support, since it's deprecated, and introduce Vulkan.
Sure, future SoC or in-package GPUs may very well be that powerful (or more!), but as good as it is for its weight class, the M1's isn't. An eGPU could be powerful enough to be worth it for M1 Macs as well as future generations of Air, mini, and small pro machines.
Well, the M1 is already "dead". It's a 1st gen product (a very impressive one) to show everyone they can do it. I expect the next generation to be much more powerful and the M1 to suffer the same fate as the first MBA.
I dunno, I do play Destiny 2 from time to time, and my build was done when the Ryzen 3600 came out. I got bit and had to wait out a multi-month finger-pointing and troubleshooting period before a fix was released. The USB issues with the 5000 series and certain mobos have taken ~4 months to resolve as well.

That sort of turnaround isn't great in this use case either, IMO.
It sucks, no doubt. Still, I think there's a difference between not being able to play a game and burning money... six figures and above for downtime. There are other issues, such as work that can't be done. In research this is not acceptable: another research group might be faster because that AMD workstation/server isn't working, you can't publish, and you therefore miss out on future funding, etc.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Like with Swift, I was thinking greater potential support for it, more people using it, etc. But maybe for a graphics API that isn't as much of a benefit.

I don't think Metal would make much sense as a cross-platform API. It's too integrated into the Apple ecosystem. There would be very little reason for anyone outside the Apple garden to adopt it, since the interesting parts of the API are tied to the unique aspects of Apple GPUs anyway...

But the new WebGPU API is partially based on Metal, even though Apple didn’t manage to get as much traction there as they initially wanted. Their model was apparently not the best fit for some vendors (I guess Nvidia...)

Since the recent US Supreme Court ruling on Oracle v. Google, anyone can take the Metal APIs and write their own compatible stack if there is some call for such a thing. I can perhaps see, if someone had an iOS game on Metal and wanted to port it to DirectX, having a Metal wrapper around DirectX. I have no idea what kind of effort that would be, though.

A basic subset, e.g. Metal 1, is trivial; the only problem is shader compilation. Full contemporary Metal is probably impossible, since there are basic things you can do in Metal that you can't do in DX12, IMO.

For my game I have two backends: an Apple Silicon one using Metal and a cross-platform one via Vulkan.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
I don't think Metal would make much sense as a cross-platform API. It's too integrated into the Apple ecosystem. There would be very little reason for anyone outside the Apple garden to adopt it, since the interesting parts of the API are tied to the unique aspects of Apple GPUs anyway...

But the new WebGPU API is partially based on Metal, even though Apple didn’t manage to get as much traction there as they initially wanted. Their model was apparently not the best fit for some vendors (I guess Nvidia...)



A basic subset, e.g. Metal 1, is trivial; the only problem is shader compilation. Full contemporary Metal is probably impossible, since there are basic things you can do in Metal that you can't do in DX12, IMO.

For my game I have two backends: an Apple Silicon one using Metal and a cross-platform one via Vulkan.
How about Vulkan? Could you wrap Vulkan with a Metal API? This is interesting. With software it seems like you should be able to wrap anything with enough work, though it might not perform particularly well. I don't know enough about modern graphics programming to be confident of what I'm talking about. Any hints on the difficult/impossible stuff?
 

Maconplasma

Cancelled
Sep 15, 2020
2,489
2,215
This isn't fairy dust--it's an actual watershed moment in computing. You can rage against it if you want to--doesn't change anything.
Agreed! Some people just can't stand to see Apple succeed, and it burns them up when other companies with many years of experience in the field (namely Intel) are embarrassed that they unexpectedly got upstaged by Apple. Diehard Windows users like the OP hate that Apple hit a huge home run with the M1, which is stomping on Intel-based laptops with twice the specs and great GPUs. That's why such a ridiculous thread was created. Sit back and enjoy the haters' raging.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
It sucks, no doubt. Still, I think there's a difference between not being able to play a game and burning money... six figures and above for downtime. There are other issues, such as work that can't be done. In research this is not acceptable: another research group might be faster because that AMD workstation/server isn't working, you can't publish, and you therefore miss out on future funding, etc.

I'm not disagreeing that there is a difference, but I was pushing back against the idea that "none of this matters" when talking about this sort of home use. If someone spends a grand or more on a gaming rig only to be bitten by this, it doesn't change their impressions or the fact that they are stuck in a "what do I do now" situation. Some of the folks building these rigs are using them for their job (service economy, baby), and so downtime is a problem there as well. Let alone when talking about thousands of people hitting that issue. That's cumulatively a non-trivial amount of money on the line, looking at it from the larger picture. I don't think we need to diminish one situation to include the other, really. Neither is good.

How about Vulkan? Could you wrap Vulkan with a Metal API? This is interesting. With software it seems like you should be able to wrap anything with enough work, though it might not perform particularly well. I don't know enough about modern graphics programming to be confident of what I'm talking about. Any hints on the difficult/impossible stuff?

You could, although I'm not sure you'd want to. Part of that is because I've grown to really dislike middleware layers and "making some platform's API feel like something you are more comfortable with". It's partly how we got React Native and Electron.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
Microsoft does not build their own GPUs ;) DirectX still has to take into account a wide range of hardware (same for Vulkan), and you can already see it reflected in the API differences.

But they did manage to get a bunch of devs to adopt DirectX at the expense of something more open.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
How about Vulkan? Could you wrap Vulkan with a Metal API? This is interesting. With software it seems like you should be able to wrap anything with enough work, though it might not perform particularly well. I don't know enough about modern graphics programming to be confident of what I'm talking about. Any hints on the difficult/impossible stuff?

Vulkan and DX12 are kind of similar here; Vulkan might be even more limited since it has to support a wider range of devices.

A big issue I see is with the resource binding model: communicating which textures, data buffers, etc. the shaders should use. All modern APIs are kind of similar here: they let you define "descriptors" that refer to resources (textures, data buffers, etc.). A very simplified version: let's say, in your shader, you define a texture variable and annotate it with "descriptor slot 1". Then, in your application code, you can set descriptor slot 1 to a specific texture. When the shader runs, it will be able to access the texture.
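In Metal terms, that simplified version looks roughly like this (a sketch with made-up names; the [[texture(1)]] attribute on the shader side is the "descriptor slot", and the application binds a concrete texture to the same index):

```swift
import Metal

// The shader declares a texture at slot 1; the host binds a texture to index 1.
let device = MTLCreateSystemDefaultDevice()!

let source = """
#include <metal_stdlib>
using namespace metal;

struct VOut { float4 pos [[position]]; float2 uv; };

fragment float4 shadeFrag(VOut in [[stage_in]],
                          texture2d<float> albedo [[texture(1)]]) {
    constexpr sampler s(filter::linear);
    return albedo.sample(s, in.uv);
}
"""
let library = try! device.makeLibrary(source: source, options: nil)
// `shadeFrag` would go into an MTLRenderPipelineDescriptor to build the pipeline state.
let fragFn = library.makeFunction(name: "shadeFrag")!

// Later, while encoding a render pass with an MTLRenderCommandEncoder:
// encoder.setFragmentTexture(someAlbedoTexture, index: 1)   // fills "slot 1"
```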

Where APIs differ, though, is how these descriptors are organized. DX12 and Vulkan use abstract descriptor "tables": lists of resource descriptors that have to be specially allocated and set up. These lists of descriptors are then bound to slots, and that's how shaders get access to them. The real story is much more complicated, as you can mix resource descriptors with constant data, and there are restrictions on which resources can be put in the same table, but that's about the gist of it.

In Metal, however, resource descriptors are stored in regular data buffers. You simply define a struct in your Metal shader and bind a buffer as a pointer to that struct (some examples). That alone is very convenient and much simpler compared to other APIs, but there is more. In Metal, these structs can contain pointers to other structs, so you can create fairly complex nested data structures. As far as I can tell, the last bit is not possible with DX12 or Vulkan (I might be wrong about this, though; I am not an expert on those APIs).
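A sketch of what that buys you (names like Material are made up, nothing from a real codebase): the argument buffer is literally a struct, and one of its members can be a pointer into another buffer.

```swift
import Metal

// Illustration of Metal argument buffers: a struct in a data buffer whose
// members are a texture and a pointer to another buffer.
let device = MTLCreateSystemDefaultDevice()!

let source = """
#include <metal_stdlib>
using namespace metal;

struct Material {
    texture2d<float> albedo     [[id(0)]];
    device const float4 *params [[id(1)]];   // pointer to another buffer
};

kernel void shadeMaterial(device const Material &mat [[buffer(0)]],
                          device float4 *out         [[buffer(1)]],
                          uint tid [[thread_position_in_grid]]) {
    out[tid] = mat.params[0];
}
"""
let library = try! device.makeLibrary(source: source, options: nil)
let fn = library.makeFunction(name: "shadeMaterial")!

// Resources the argument buffer will point at.
let texDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                       width: 4, height: 4,
                                                       mipmapped: false)
let albedoTexture = device.makeTexture(descriptor: texDesc)!
let paramsBuffer = device.makeBuffer(length: MemoryLayout<SIMD4<Float>>.stride,
                                     options: .storageModeShared)!

// Encode the texture and the nested pointer into one argument buffer.
let argEncoder = fn.makeArgumentEncoder(bufferIndex: 0)
let argBuffer = device.makeBuffer(length: argEncoder.encodedLength, options: [])!
argEncoder.setArgumentBuffer(argBuffer, offset: 0)
argEncoder.setTexture(albedoTexture, index: 0)            // Material.albedo
argEncoder.setBuffer(paramsBuffer, offset: 0, index: 1)   // Material.params
// At dispatch time: bind argBuffer at buffer index 0 and call useResource(_:usage:)
// for albedoTexture and paramsBuffer, since they are only referenced indirectly.
```

Because the nested pointer is just data in a buffer, you can keep whole trees of materials or scene data resident and let the shader pick what it needs.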

So you have a problem in matching these two data models. Any DX12 or Vulkan data layout can be easily mimicked using Metal: you simply set up the structs that mirror the respective descriptor tables and be done with it. But the reverse is not true: if you have an app using more advanced Metal resource binding patterns, there is no obvious way to represent them using the other model.

Interesting! What kind of game are you working on? Are you using your own engine?

Slowly trying to build a prototype of a strategy/simulation game using side-scrolling 2D graphics and fully destructible environments. Yes, I am making my own engine because I love to make my life difficult. Also, I don't think the usual engines can do what I want to do. My game world is represented as a planar graph that is dynamically rendered using compute shaders.
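For anyone curious what "dynamically rendered using compute shaders" means mechanically, here is a generic Swift illustration of a compute dispatch (the kernel name and buffer layout are made up for illustration; the real engine obviously does far more than this):

```swift
import Metal

// Generic compute dispatch: run a kernel over a 1D grid and read the result back.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let source = """
#include <metal_stdlib>
using namespace metal;

// Toy kernel: write one value per grid position into an output buffer.
kernel void fillValues(device float *out [[buffer(0)]],
                       uint tid [[thread_position_in_grid]]) {
    out[tid] = float(tid) * 0.5;
}
"""
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "fillValues")!)

let count = 4096
let output = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(output, offset: 0, index: 0)
enc.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// Thanks to shared storage, the CPU can read the results directly.
let results = output.contents().bindMemory(to: Float.self, capacity: count)
print(results[10])   // 5.0
```

In an actual renderer the output would be image or vertex data consumed by a later render pass rather than read back on the CPU.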
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
This isn't fairy dust--it's an actual watershed moment in computing. You can rage against it if you want to--doesn't change anything.
I declared the WINTEL monopoly obsolete two years ago and got downvoted here on these forums. Everyone knew this moment was coming! Independence from Intel's pricing and schedule of delays would've been reason enough to go it alone. But adding a Neural Engine and a GPU optimized for Metal makes Apple Silicon a game changer. The only way to compete is with another optimized ARM architecture. The entire x86 era has come to an end. It's like selling dumbphones in 2007. ☎️
 

spiderman0616

Suspended
Aug 1, 2010
5,670
7,499
I declared the WINTEL monopoly obsolete two years ago and got downvoted here on these forums. Everyone knew this moment was coming! Independence from Intel's pricing and schedule of delays would've been reason enough to go it alone. But adding a Neural Engine and a GPU optimized for Metal makes Apple Silicon a game changer. The only way to compete is with another optimized ARM architecture. The entire x86 era has come to an end. It's like selling dumbphones in 2007. ☎️
Normally when people say things like this, it's hyperbole. In this moment in time, with this new chip architecture, it is definitely not. We are always going to look back at 2020 and think of a LOT of bad stuff, but one of the good things we're going to remember is when the M1 chip finally launched to the public, and in the most mainstream of all the Mac models. Apple knew this was a slam dunk--if they thought it was not, they would not have put it in their most popular Mac models.

What's amazing to me about this change is that if you weren't on board with it and forward-thinking enough to start looking at it 10 or 12 years ago, you're already way behind and have probably been caught flat-footed. And Intel is just ONE of the competitors it's affecting.

I'm so convinced that this is one of the most important developments in computing history that, when my son was looking to buy a computer, I persuaded him to get the M1 Mac mini. I told him he's going to remember this moment for a long time because he's getting into the Mac at a wonderful point in history.
 