> OpenGL remains supported on Apple Silicon. I don’t think Apple ever supported Vulkan. Vulkan is supported via MoltenVK.

Apple could bring back a lot of things, including OpenGL and Vulkan. I doubt they will.
> Sure, future SoCs or in-package GPUs may very well be that powerful (or more!), but as good as it is for its weight class, the M1’s isn’t. An eGPU could be powerful enough to be worth it for M1 Macs as well as future generations of Air, mini, and small pro machines. That doesn’t necessarily mean that Apple will enable that, but I don’t understand Unregistered’s point about why an eGPU would necessarily have a bigger performance hit on Apple Silicon than on a comparable Intel Mac.

What makes you think the SoC isn’t as powerful as an external GPU? Apple could very well target the 3090, and if they can do it, game over. There’s still the data-center market, but that’s never been interesting to Apple.
Not specifically UMA, as external GPUs COULD have access to the memory; it’s more that Thunderbolt, regardless of how fast, won’t come anywhere near the access speed that the SoC provides. It actually has more to do with the rendering technology: TBDR (tile-based deferred rendering) vs. IMR (immediate-mode rendering). Apple has stated that rendering for Apple Silicon will be TBDR, which comes with a lot of assumptions, including that there must be a very fast connection to a common pool of memory.
Apple COULD, at any time, change this stance, of course. BUT, as long as they stick with TBDR, certain potential options would be far less likely due to the performance hit.
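The tiling half of TBDR described above can be sketched in a few lines of Python (a toy model only; no real GPU works like this in software, and all names here are invented): geometry for the whole frame is binned into screen tiles first, and each tile is then shaded out of fast local memory, which is why the full scene needs to live in one fast, commonly accessible pool.

```python
# Toy illustration of tile-based binning (the "TB" in TBDR).
# All names are invented for this sketch; a real GPU does this in hardware.

TILE = 32  # tile size in pixels

def bin_triangles(triangles):
    """Assign each triangle (a list of (x, y) vertices) to every screen
    tile its bounding box overlaps. A TBDR GPU builds such bins for the
    whole frame before shading anything, which is why it wants fast
    access to all of the scene geometry in a common memory pool."""
    bins = {}
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        tx0, tx1 = int(min(xs)) // TILE, int(max(xs)) // TILE
        ty0, ty1 = int(min(ys)) // TILE, int(max(ys)) // TILE
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

# One triangle spanning two horizontally adjacent tiles:
tri = [(10, 10), (50, 10), (30, 20)]
bins = bin_triangles([tri])
print(sorted(bins))  # [(0, 0), (1, 0)]
```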
I would guess that the lack of eGPU support was more of a business than a technical decision. There may be too few users of eGPUs to make supporting them in the first release worthwhile. Apple may not want to keep tasking engineers with writing and supporting the drivers. Everyone tries to get their features into each release, but it just isn’t possible to include everything and maintain quality. Tough decisions get made and features get bumped.
Strictly speaking, TBDR is orthogonal to whether memory is shared or not. It is a rendering algorithm, not a CPU/GPU communication algorithm. There is no reason why a dGPU with a dedicated memory pool could not be a TBDR renderer.
Thunderbolt is a limiting factor of course, but then again there are applications like gaming where this is not that crucial.
But I digress. I completely agree that Apple won't offer eGPU support on Apple Silicon, just for a different reason.
I think that Apple simply wants Mac developers to design software that utilizes their proprietary technology. Apple GPUs have a lot of properties that third-party GPUs don’t have: low-latency CPU/GPU communication, zero-copy data sharing, TBDR, persistent shader memory allocations, an actually usable sparse memory model, a unique implementation of variable shading, etc. If they allow third-party GPUs, they make the programming model more complicated. In the end, this is about software quality and getting the most out of the system. A streamlined GPU programming model means that you can develop and test on one machine and deploy to the entire ecosystem without worrying that you will run into weird driver or hardware behavior. One can criticize Apple’s walled garden, but there are definite advantages to vertical integration.
I agree with much of what you say, but some of these intricacies should be hidden by the API, no?
And right now developers have to target both Intel Macs and AS Macs if they want to reach all Mac users, so for the foreseeable future they’ll have to keep much of what is not handled by the API in mind anyway. (Unless they target only AS Macs, which some will, especially those porting iDevice apps.)
> Metal on Apple Silicon specifically only uses TBDR, so IMR doesn’t factor into Apple Silicon unless and until Apple indicates otherwise. TBDR, specifically as Apple has implemented it (where the graphics card has a tiny amount of memory but otherwise shares main memory with the CPU), requires a fast interface. Thunderbolt doesn’t come close to the bandwidth of the SoC solution.

I’m not quite sure what you’re trying to say. Metal works with both TBDR and IMR. While of course an eGPU suffers from having to connect over Thunderbolt versus being on the SoC, a powerful enough GPU will still be worth it, just as it was on Intel Macs. As @leman says, there may be other reasons why Apple would want to discourage eGPUs, but I don’t see why an eGPU would necessarily be too slow to be useful on an M1 Mac as compared to an Intel Mac. Could you explain further?
Metal on Apple Silicon specifically only uses TBDR, so IMR doesn’t factor into Apple Silicon unless and until Apple indicates otherwise. TBDR, specifically as Apple has implemented it (where the graphics card has a tiny amount of memory but otherwise shares main memory with the CPU) requires a fast interface. Thunderbolt doesn’t come close to the bandwidth of the SoC solution.
Again, Apple could potentially change the way they’ve implemented TBDR (perhaps at WWDC). But, as it’s currently implemented, Apple’s TBDR solution would not work well at all over an external bus.
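To put rough numbers on the bandwidth gap (back-of-envelope only: the Thunderbolt figure is the raw 40 Gbit/s link rate of Thunderbolt 3/4, before protocol overhead, and the M1 figure is the commonly quoted LPDDR4X memory bandwidth):

```python
# Back-of-envelope bandwidth comparison (approximate public figures).
tb_gbit_per_s = 40                   # Thunderbolt 3/4 raw link rate
tb_gbyte_per_s = tb_gbit_per_s / 8   # = 5.0 GB/s upper bound
m1_gbyte_per_s = 68.25               # often-quoted M1 unified memory bandwidth

print(f"Thunderbolt: ~{tb_gbyte_per_s:.1f} GB/s")
print(f"M1 unified memory: ~{m1_gbyte_per_s:.1f} GB/s")
print(f"ratio: ~{m1_gbyte_per_s / tb_gbyte_per_s:.0f}x")  # ~14x
```

So even before protocol overhead, the external bus is an order of magnitude behind the on-package memory a tiler expects to lean on.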
Also for home use, where downtime is acceptable and playing some games is the primary use, none of this matters either.
> now that Apple does not have to support third-party GPUs anymore going forward, they can really go crazy and potentially offer the devs console-level control
Yes, of course, and that's how Metal works right now. But if you want to get the most out of the Apple GPUs you have to design your rendering algorithms with their unique properties in mind. You can get major performance and quality benefits that way, but it requires explicit effort.
True, and that's why graphical stuff runs slower on M1 than it could and why developing graphical applications is more complicated than it should be.
A TBDR GPU is mostly API-compatible with a forward-rendering GPU, so if you write software for both, it is sufficient to write it assuming that you have a good old Nvidia or AMD GPU running your game or whatever. But Intel Macs are a dying breed. A couple of years from now they will be rare creatures, Apple Silicon Macs are going to be much more performant, and Metal will give you even more access to the hardware (now that Apple does not have to support third-party GPUs going forward, they can really go crazy and potentially offer the devs console-level control). They absolutely will want you to code for their hardware, using approaches that their hardware does best, and what better way to do it than to promise that all hardware will have unified capabilities.
This would be a case of Apple out-Microsofting Microsoft if they pull that off.
Microsoft does not build their own GPUs; DirectX still has to take into account a wide range of hardware (same for Vulkan), and you can already see that reflected in the API differences.
I wonder if that means, unlike Swift, Apple won’t open-source Metal? Or is that orthogonal, as Apple can always ensure AS optimizations are part of the standard?
Why would they ever want to open source Metal? What is the benefit to Apple or someone else?
> Since the recent US Supreme Court ruling on Oracle v. Google, anyone can take the Metal APIs and write their own compatible stack if there is some call for such a thing. I can perhaps see someone who had an iOS game on Metal and wanted to port it to DirectX using a Metal wrapper around DirectX. I have no idea what kind of effort that would be, though.

Like Swift, I was thinking greater potential support for it, more people using it, etc... But maybe for a graphics API that isn’t as much of a benefit.
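As a toy illustration of what such a wrapper involves (every class and method name below is invented for this sketch; a real Metal-over-DirectX layer would be enormously more complex): the front-end keeps the shape of the API being reimplemented while translating each call into the foreign backend’s vocabulary.

```python
# Minimal illustration of an API-translation layer: a Metal-like
# front-end forwarding to a different backend. All names are hypothetical.

class FakeBackend:
    """Stands in for the 'other' API (e.g. a DirectX-like target)."""
    def __init__(self):
        self.log = []

    def submit(self, op):
        self.log.append(op)

class WrappedCommandBuffer:
    """Mimics the shape of a Metal command buffer, but records
    translated calls into the foreign backend instead of a GPU queue."""
    def __init__(self, backend):
        self._backend = backend

    def draw_primitives(self, kind, vertex_count):
        # Translate the front-end call into the backend's vocabulary.
        self._backend.submit(("draw", kind, vertex_count))

    def commit(self):
        self._backend.submit(("commit",))

backend = FakeBackend()
cmd = WrappedCommandBuffer(backend)
cmd.draw_primitives("triangle", 3)
cmd.commit()
print(backend.log)  # [('draw', 'triangle', 3), ('commit',)]
```

The hard part, as noted later in the thread, is not this plumbing but shader compilation and the features that have no counterpart on the target API.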
> Sorry, bad choice of words. What I meant was bring back OpenGL with full support as it's deprecated and introduce Vulkan.

OpenGL remains supported on Apple Silicon. I don’t think Apple ever supported Vulkan. Vulkan is supported via MoltenVK.
> Well, the M1 is already "dead". It's a 1st-gen product (a very impressive one) to show everyone they can do it. I expect the next generation to be much more powerful and the M1 to suffer the same fate as the first MBA.

Sure, future SoCs or in-package GPUs may very well be that powerful (or more!), but as good as it is for its weight class, the M1’s isn’t. An eGPU could be powerful enough to be worth it for M1 Macs as well as future generations of Air, mini, and small pro machines.
> It sucks, no doubt. Still, I think there's a difference between not being able to play a game and burning money... six figures and above for downtime. There are other issues, such as work that can't be done. In research this is not acceptable, because another research group might be faster while that AMD workstation/server isn't working; you can't publish, therefore missing out on future funding, etc.

I dunno, I do play Destiny 2 from time to time, and my build was done when the 3600 came out. I got bit and had to wait out a multi-month finger-pointing, troubleshooting period before a fix was released. The USB issues with the 5000 series and certain mobos have taken ~4 months to resolve as well.
That sort of turnaround isn't great in this use case either, IMO.
For my game I have two backends: an Apple Silicon one using Metal and a cross-platform one via Vulkan.
> How about Vulkan? Could you wrap Vulkan with a Metal API? This is interesting. It seems like you should be able to wrap anything with enough work, though it might not perform particularly well. I don't know enough about modern graphics programming to be confident of what I'm talking about. Any hints on the difficult/impossible stuff?

I don’t think Metal would make much sense as a cross-platform API. It’s too integrated into the Apple ecosystem. There would be very little reason for anyone outside the Apple garden to adopt it, since the interesting parts of the API are tied to the unique aspects of Apple GPUs anyway...
But the new WebGPU API is partially based on Metal, even though Apple didn’t manage to get as much traction there as they initially wanted. Their model was apparently not the best fit for some vendors (I guess Nvidia...)
A basic subset, e.g. Metal 1, is trivial; the only problem is shader compilation. Full contemporary Metal is probably impossible, since there are basic things you can do in Metal that you can’t do in DX12, IMO.
> Agreed! Some people just can't stand to see Apple succeed, and it burns them up when other companies with many years of experience in the field (namely Intel) are embarrassed that they unexpectedly got upstaged by Apple. Diehard Windows users like the OP hate that Apple hit a huge home run with the M1, which is stomping on Intel-based laptops with twice the specs with great GPUs. That's why such a ridiculous thread is created. Sit back and enjoy the haters' raging.

This isn't fairy dust--it's an actual watershed moment in computing. You can rage against it if you want to--doesn't change anything.
Interesting! What kind of game are you working on? Are you using your own engine?
> I declared the WINTEL monopoly obsolete two years ago and got downvoted here on these forums. Everyone knew this moment was coming! Independence from Intel's pricing and schedule of delays would’ve been reason enough to go it alone. But adding a Neural Engine and a GPU optimized for Metal makes Apple Silicon a game changer. The only way to compete is with another optimized ARM architecture. The entire x86 era has come to an end. It’s like selling dumbphones in 2007. ☎️

This isn't fairy dust--it's an actual watershed moment in computing. You can rage against it if you want to--doesn't change anything.
> Normally when people say things like this, it's hyperbole. In this moment in time, with this new chip architecture, it is definitely not. We are always going to look back at 2020 and think of a LOT of bad stuff, but one of the good things we're going to remember is when the M1 chip finally launched to the public, and in the most mainstream of all the Mac models. Apple knew this was a slam dunk--if they thought it was not, they would not have put it in their most popular Mac models.

I declared the WINTEL monopoly obsolete two years ago and got downvoted here on these forums. Everyone knew this moment was coming! Independence from Intel's pricing and schedule of delays would’ve been reason enough to go it alone. But adding a Neural Engine and a GPU optimized for Metal makes Apple Silicon a game changer. The only way to compete is with another optimized ARM architecture. The entire x86 era has come to an end. It’s like selling dumbphones in 2007. ☎️