
rmerpes

macrumors newbie
Oct 18, 2013
12
1
Europe
Hi!

Just want to know what OpenGL driver version those of you with an Intel Iris Pro (or HD 4000 / HD 5000) graphics card are seeing.

You can see it with OpenGL Extension Viewer (a free app from the Mac App Store).

(I've attached a screenshot from Mavericks 10.9.3 for reference.)
 

Attachments

  • Sin título.png (48.9 KB)

PsykX

macrumors 68030
Sep 16, 2006
2,714
3,883
It's really sad to see Apple praising iOS gaming and doing cool stuff like the Metal API for it, while completely ignoring gaming in OS X at the same time.

I never understood why Apple was never serious about gaming.

Remind me what Steve Jobs' first global success was? Oh yeah... a video game.
super_breakout.png


They even lost the Halo series to the Xbox...
 

Cougarcat

macrumors 604
Sep 19, 2003
7,766
2,553
Remind me what Steve Jobs' first global success was? Oh yeah... a video game.

Well, that was really Wozniak's thing. Jobs didn't actually design the game. Jobs never really liked games, despite working at Atari.

They even lost the Halo series to the Xbox ...

Because MS bought them...but we did get Halo 1.
 

wilycoder

macrumors 6502
Aug 4, 2008
337
0
OpenGL developer here.

The screenshots in this thread are from Intel hardware. Intel GPUs do not support OpenGL 4.4 yet on ANY platform. We need to see a screenshot from an Nvidia system.

Second, you will NEVER see Metal in OS X. Metal exists in iOS only because Apple designs the GPU themselves (they custom-design their GPU based on technology they license from Imagination Technologies). Apple does not design any of the GPUs found in the Mac platform.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Don't have a 3D background, but have worked in development for a while.

Correct me if I'm wrong, but from the look of it, OpenGL gets extended quite regularly by the vendors until OpenGL adds methods to the core API to support the new features exposed by the hardware.

If all of the hardware vendors are creating extensions exposed to the software, and the main features arrive in the major version updates, it seems like there's minimal benefit to updating to the newer versions. Going from 4.1 to 4.4 wouldn't realistically provide huge differences in performance, especially if specific hardware extensions exist to give more well-rounded performance updates.

This is my own general understanding; I could be wrong. I'd imagine that if OpenGL 5 were out, it would be a bigger deal. I'd imagine that most developers are targeting the primary 4.x release, and the frameworks they run on are more optimized towards the extensions.

Metal makes much more sense for iOS than OS X for the time being. As best I can tell, Metal gives much closer to bare-metal performance than OpenGL allows. From everything I've seen, OpenGL spends a notable amount of time doing function checks, before much of anything is run, to validate that the hardware supports what the software is trying to do. Metal, from a high level, looks to take more of an assumptive model, with the A7 as a baseline for what will run.

For gaming specifically, this isn't interesting on its own, since most games are written targeting middleware like Unity, Epic's Unreal Engine, or Frostbite. But since those engines are going to be able to target Metal, each function call, if updated right, should save a lot of latency.

Metal could benefit OS X, but it would be a much bigger initiative because of all the variation in hardware that Apple has. There's no guaranteed return on investment for that, at least in the meantime, since the developers of the major money-makers in 3D software would need to update their work as well.

Metal benefits from having fixed or limited hardware to support. OpenGL benefits from how many hardware vendors support it.

If there isn't a major point update in OpenGL, the benefits of Apple updating its OpenGL version seem minimal.
 

rossip

macrumors regular
Feb 13, 2011
183
0
OpenGL developer here.

The screenshots in this thread are from Intel hardware. Intel GPUs do not support OpenGL 4.4 yet on ANY platform. We need to see a screenshot from an Nvidia system.

Second, you will NEVER see Metal in OS X. Metal exists in iOS only because Apple designs the GPU themselves (they custom-design their GPU based on technology they license from Imagination Technologies). Apple does not design any of the GPUs found in the Mac platform.


That's a good point about chip design, one that I was thinking of bringing up myself.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
...which is made by Imagination Technologies.


Completely agree.

Seems like the GPU should be a fairly standard known quantity.

Though I wouldn't be surprised if Apple does far more advanced customizations longer term with what they've been doing.

----------

Looking at the differences between the APIs, it looks like OpenGL 4.2 would be a big update if the GPU supports it. I'm curious whether they are doing something more elaborate for the next big release, considering there are two GPUs in the Mac Pro.

Again, I don't know how these things scale on the OpenGL side.
 

wilycoder

macrumors 6502
Aug 4, 2008
337
0
Completely agree.

I don't know how you could disagree. Could you explain?


Looking at the differences between the APIs, it looks like OpenGL 4.2 would be a big update if the GPU supports it. I'm curious whether they are doing something more elaborate for the next big release, considering there are two GPUs in the Mac Pro.

Again, I don't know how these things scale on the OpenGL side.

For the work I do, the important update is GL 4.3, which introduces compute shaders and shader storage buffer objects into the core GL profile.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Yosemite OpenGL

I don't know how you could disagree. Could you explain?

For the work I do, the important update is GL 4.3, which introduces compute shaders and shader storage buffer objects into the core GL profile.


I was agreeing that Apple isn't building their own GPUs for iOS devices yet.

I don't think there's much incentive to build one from scratch.

I'd think there's more advantage in customizing the hell out of the GPUs the way they did with the CPUs. They may use the PowerVR IP to build something that is at least compatible, but I don't think there's much advantage in building the entire GPU stack themselves.

Again, for my own curiosity: is there an advantage to doing compute in OpenGL shaders that you can't get out of OpenCL? It seems to me that if there's something specific for compute, then there's more incentive to do it in OpenCL.
 

wilycoder

macrumors 6502
Aug 4, 2008
337
0
I was agreeing that Apple isn't building their own GPUs for iOS devices yet.

I don't think there's much incentive to build one from scratch.

I'd think there's more advantage in customizing the hell out of the GPUs the way they did with the CPUs. They may use the PowerVR IP to build something that is at least compatible, but I don't think there's much advantage in building the entire GPU stack themselves.

Apple starts by licensing a complete, working GPU implementation from Imagination Technologies and then does heavy customization to the design. Qualcomm did something similar with their Krait CPU: license the design, then tweak the heck out of it.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Apple starts by licensing a complete, working GPU implementation from Imagination Technologies and then does heavy customization to the design. Qualcomm did something similar with their Krait CPU: license the design, then tweak the heck out of it.


Very cool. I know Apple does this with the ARM CPUs. I'd imagine that going too far outside the reference implementation would require Apple to do much more work on the GPU side than on the CPU side.

Apple has complete control over the entire OS stack for its CPUs, so I figured the GPUs wouldn't be as easy to move over to completely internal development.
 

wilycoder

macrumors 6502
Aug 4, 2008
337
0
Again, for my own curiosity: is there an advantage to doing compute in OpenGL shaders that you can't get out of OpenCL? It seems to me that if there's something specific for compute, then there's more incentive to do it in OpenCL.

OpenCL performance on Nvidia is not as good as CUDA.

CL-GL interop requires a context switch on the GPU. If one is doing compute work that is rendering-related, it can be easier to drop in a quick compute shader than to integrate all of OpenCL.

OpenCL has more features and guaranteed precision, but it is also more boilerplate code to work with. It's also a different, but similar, shading language.

I've used both CL and compute shaders; they're two different tools to solve the same kind of problem (sort of).

----------

Very cool. I know Apple does this with the ARM CPUs. I'd imagine that going too far outside the reference implementation would require Apple to do much more work on the GPU side than on the CPU side.

Apple has complete control over the entire OS stack for its CPUs, so I figured the GPUs wouldn't be as easy to move over to completely internal development.

I'm starting to think Apple might ditch Intel and Nvidia in 5-10 years and do everything, CPU + GPU, completely in house for iOS and the Mac. Complete vertical integration...
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
OpenCL performance on Nvidia is not as good as CUDA.

CL-GL interop requires a context switch on the GPU. If one is doing compute work that is rendering-related, it can be easier to drop in a quick compute shader than to integrate all of OpenCL.

OpenCL has more features and guaranteed precision, but it is also more boilerplate code to work with. It's also a different, but similar, shading language.

I've used both CL and compute shaders; they're two different tools to solve the same kind of problem (sort of).

----------

I'm starting to think Apple might ditch Intel and Nvidia in 5-10 years and do everything, CPU + GPU, completely in house for iOS and the Mac. Complete vertical integration...


That makes sense. Are you saying the penalty comes mainly when sending things between CL and GL? I can see the advantages either way, just curious.

I also agree that in 5 to 10 years Apple could do everything internally. But I don't think it will happen fast. Five years seems feasible as a starting point, since Apple is so reliant on x86 for traditional users. But given the performance of ARM, and the ability to do it all themselves without worrying about power consumption as much as on iOS, I can see it happening, starting with the MacBook Air line.

Many things need to align for that to happen, but given the direction things are going, it makes a lot of sense a few years out.
 

Centsy

macrumors regular
Feb 9, 2011
108
14
OpenGL developer here.

The screenshots in this thread are from Intel hardware. Intel GPUs do not support OpenGL 4.4 yet on ANY platform. We need to see a screenshot from an Nvidia system.

Second, you will NEVER see Metal in OS X. Metal exists in iOS only because Apple designs the GPU themselves (they custom-design their GPU based on technology they license from Imagination Technologies). Apple does not design any of the GPUs found in the Mac platform.

Well, maybe not Metal, but we could see a vendor-specific API that does something similar. For example, AMD is developing the Mantle API for AMD cards (although it is intended as an open standard, so hypothetically it could be implemented for Nvidia as well).

http://en.wikipedia.org/wiki/Mantle_(API)
 

WallToWallMacs

macrumors regular
Jan 26, 2014
166
0
I would also like to know. The Yosemite page is surprisingly devoid of new under-the-hood improvements.

Look in the Mac developer preview library, because there are a lot of things listed. The OpenGL changes appear to be related to fixing bugs and making life easier for developers who want to support dual-GPU configurations such as the Mac Pro. Oh, and here are the files that have been changed.
 

leman

macrumors Core
Oct 14, 2008
19,494
19,631
Second, you will NEVER see Metal in OS X. Metal exists in iOS only because Apple designs the GPU themselves (they custom-design their GPU based on technology they license from Imagination Technologies). Apple does not design any of the GPUs found in the Mac platform.

We already had a different thread about this, and I never understood what Metal has even remotely to do with chip design. Metal is a modern GPU-agnostic graphics API designed for low call overhead, much like Mantle or DirectX 12. It has nothing that ties it to the PowerVR GPUs used in iPhones and iPads. Your argument is like saying 'you will never see DirectX on Windows because Microsoft does not design the GPUs for the PC platform'.

All Apple needs to do is ask the IHVs to write a pluggable driver for the Metal framework. Oh wait, they already do the same thing with their OpenGL framework. Adopting Metal on OS X would mean higher performance and fewer bugs. In fact, I'd be happy if Apple completely switched to Metal, leaving OpenGL 4.1 as a wrapper.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Yosemite OpenGL

We already had a different thread about this, and I never understood what Metal has even remotely to do with chip design. Metal is a modern GPU-agnostic graphics API designed for low call overhead, much like Mantle or DirectX 12. It has nothing that ties it to the PowerVR GPUs used in iPhones and iPads. Your argument is like saying 'you will never see DirectX on Windows because Microsoft does not design the GPUs for the PC platform'.

All Apple needs to do is ask the IHVs to write a pluggable driver for the Metal framework. Oh wait, they already do the same thing with their OpenGL framework. Adopting Metal on OS X would mean higher performance and fewer bugs. In fact, I'd be happy if Apple completely switched to Metal, leaving OpenGL 4.1 as a wrapper.


From what I understand, Apple writes all of the drivers internally for what's in OS X right now. They may get assistance from the IHVs, but from what I understand, especially with Retina Macs, they write the drivers themselves.

The advantage of Metal, as I understand it, is that you get closer to the "metal" in that your code doesn't iterate through all of the function checks to see what is available.

Ideally, you are right, they could just ask the vendors to provide graphics drivers.

I'd ASSUME that Apple has more incentive to provide support for it, since the IHVs would need to spend internal engineering resources to hit Apple's needs for the drivers for each release. Since IHVs don't get the opportunity to sell card upgrades for Macs like they do on the PC side, I'd think the priority isn't very high at Nvidia or AMD or even Intel to keep writing these for devices outside their current product pipeline.

I agree that Apple writing these things is far less than optimal. We've also seen Apple write their own stuff that has been great for the specific purposes they use it for. I'd have to go back and look, but I remember reading about how the original Retina MBPs had an extremely clean and efficient software stack to handle the Retina implementation, better than what was provided by Intel or Nvidia at the time.

For 3D work, from an outsider's perspective (I work with software developers every day in other areas), it seems like they are doing this stuff in house to keep things as close to their own closed "platform" needs as possible, unlike what Windows does.

Definitely not saying it's better. It's just a different approach. They sometimes come out ahead by doing this with the way they implement certain features into their hardware. Then they fall behind on things like this.
 

leman

macrumors Core
Oct 14, 2008
19,494
19,631
As best I can tell, Metal gives much closer to bare-metal performance than OpenGL allows. From everything I've seen, OpenGL spends a notable amount of time doing function checks, before much of anything is run, to validate that the hardware supports what the software is trying to do.

Precisely. The issue is that the API is designed in a way which does not allow the CPU to feed the GPU fast enough, which starves performance. Note that newer OpenGL versions have a bunch of things to help deal with this issue (there is a recent Nvidia presentation called 'Approaching Zero Driver Overhead in OpenGL'). Also, Nvidia has been providing a number of proprietary extensions (https://developer.nvidia.com/content/bindless-graphics) since as early as 2009. But unfortunately, the official OpenGL API is pretty much stale. It either needs to be redesigned or frozen for good.

Metal, from a high level, looks to take more of an assumptive model, with the A7 as a baseline for what will run.

Again, I don't see any indication of that. From what I've seen, the API is pretty much identical to OpenGL minus some pipeline stages.

Metal makes much more sense for iOS than OS X for the time being.

Only because it is a good testing platform. As many have mentioned, it is easier for Apple to provide and tweak the drivers there.

For gaming specifically, this isn't interesting on its own, since most games are written targeting middleware like Unity, Epic's Unreal Engine, or Frostbite. But since those engines are going to be able to target Metal, each function call, if updated right, should save a lot of latency.

Your last sentence is exactly what is very interesting for gaming ;)

Metal could benefit OS X, but it would be a much bigger initiative because of all the variation in hardware that Apple has. There's no guaranteed return on investment for that, at least in the meantime, since the developers of the major money-makers in 3D software would need to update their work as well.

Sure, it's a process. The API needs to be refined further before it can be pushed to the desktop. But again, Microsoft has been doing this for years. I have little doubt that the DirectX 12 driver model will be quite different from DirectX 9's or DirectX 11's, so the hardware vendors will have to rewrite their drivers in any case.

Metal benefits from having fixed or limited hardware to support. OpenGL benefits from how many hardware vendors support it.

This I honestly don't get. Both apply to any kind of graphics API.

----------

From what I understand, Apple writes all of the drivers internally for what's in OS X right now. They may get assistance from the IHVs, but from what I understand, especially with Retina Macs, they write the drivers themselves.

AFAIK, this is not the case. I don't have direct access to the industry, and this area in particular seems to be very discreet, but I've heard from multiple sources (e.g. big game developers publishing on OS X) that drivers are actually provided by the hardware vendors themselves. What Apple does is provide the base framework, and the IHVs implement the backend hooks. Apple might write the Intel drivers themselves, though, because Intel publishes the documentation for programming their GPUs.

It seems that the reason for the low performance of OS X drivers is a) that the Apple OpenGL framework adds an additional layer which slows things down, and b) that the IHVs are not optimising as well as they do for Windows. Metal could potentially fix that.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Sure, it's a process. The API needs to be refined further before it can be pushed to the desktop. But again, Microsoft has been doing this for years. I have little doubt that the DirectX 12 driver model will be quite different from DirectX 9's or DirectX 11's, so the hardware vendors will have to rewrite their drivers in any case.

This I honestly don't get. Both apply to any kind of graphics API.


The general point I was trying to make is that since Apple has much more control over what's in their hardware, and it's a limited set of hardware to support, the A7 is going to be the baseline for what happens with Metal going forward.

I do want to be careful to phrase this based on my own understanding. The way OpenGL seems to work is that it asks the OS what the hardware supports, validates that, gets the relevant functions, and runs what it can handle. This doesn't seem much better than the Java model of "write once, run everywhere". Yes, Java will run on everything that has the runtime, but it comes at the expense of having the bytecode run through the Java Virtual Machine before anything executes.

What Metal appears to do, at least to me, is far more assumptive. The A7 is the baseline. If you program to Metal, you know that the full graphics stack of the A7 is there. It runs with fewer intermediate steps. Once the calls are actually run, I'd imagine there is very, very little difference. But all of the other steps build up latency.

I can't stress enough that I'm thinking about this in terms of OpenGL running more like Java (though OpenGL doesn't suck as much as Java on most platforms), and Metal running more like native code compiled specifically against it.

----------

AFAIK, this is not the case. I don't have direct access to the industry and especially this area seems to be very discrete, but I've heard from multiple sources (e.g. big game developers publishing on OS X), that drivers are actually provided by the hardware vendors themselves. What Apple does is provide the base framework and the IHVs need to implement the backend hooks. Apple might write the Intel drivers themselves though, because Intel publishes the documentation for programming their GPUs.



It seems that the reason for low performance of OS X drivers is a) because the Apple OpenGL framework adds an additional layer which slows things down and b) because the IHVs are not optimising as well as they do for Windows. Metal could potentially fix that.


You could very well be right. It's far too outside of my domain.

From my perspective, Metal seems like the first step towards what DirectX does on Microsoft's consoles: familiar to developers, but with optimizations in the right places to make things run way better if the developer targets it explicitly. We shall see in a few months whether that is actually the case.

What Metal means long term is more exciting to me. From my point of view, it seems like Apple is opening up much deeper hooks for low-level coding that weren't available before. It's being done in a controlled way that still lets developers get more done than with current open standards.

The beautiful thing is, you have the choice. You can use an OpenGL stack and get something that is somewhat more portable to other platforms, or you can use Metal, target the hardware more explicitly, and reap those benefits. OpenGL ES isn't being dropped from iOS 8; Metal is just another method to draw pretty polygons on the screen.
 

leman

macrumors Core
Oct 14, 2008
19,494
19,631
I do want to be careful to phrase this based on my own understanding. The way OpenGL seems to work is that it asks the OS what the hardware supports, validates that, gets the relevant functions, and runs what it can handle. This doesn't seem much better than the Java model of "write once, run everywhere". Yes, Java will run on everything that has the runtime, but it comes at the expense of having the bytecode run through the Java Virtual Machine before anything executes.

My first reaction was to write 'no, that is not how it works', but it's also possible that your understanding is essentially correct. Let me elaborate. OpenGL has a specification, and everything exposed by the specification must be supported; it's actually rather strict about this. E.g. if I target OpenGL 4.0, I know that I can use certain things (like tessellation shaders). Same with the various extensions offered via OpenGL: I can test for an extension and, if it's supported, switch to a certain code path. These tests happen only once though, at the start of the application, so they do not have any performance impact.

A completely different question is whether certain functionality is actually supported by the underlying hardware, i.e. whether it is fast. E.g. a driver might support tessellation shaders but actually fall back to slow CPU-based emulation if you use them. This is a big problem, simply because there is no way to tell which features are fast and which are not.

This is not an OpenGL-specific problem; every API which aims to communicate with hardware has it. But OpenGL is quite prone to these problems because of its complexity. The specification is very big, and consequently it's quite difficult to write a driver which fully supports it and does it really well. This is why Microsoft has had such good success with DirectX: they provide most of the functionality (like the shading language compiler), and the hardware vendor just needs to implement a fairly simple interface to plug in their drivers. This makes things easier for both the driver developer (who doesn't need to care about a very complex specification) and the software developer (who has less stress trying to find out the idiosyncrasies of particular implementations).


What Metal appears to do, at least to me, is far more assumptive. The A7 is the baseline. If you program to Metal, you know that the full graphics stack of the A7 is there. It runs with fewer intermediate steps. Once the calls are actually run, I'd imagine there is very, very little difference. But all of the other steps build up latency.

I don't really see a principal difference here. Sure, Metal assumes some things; but so does OpenGL. They both assume that graphics processing happens in a pipeline (the pipeline model is essentially identical between the two), that the output is produced by rasterising triangles, that there are framebuffer operations, that there are certain areas of memory called 'textures' which have to be accessed in a specific way, and so on.

It is true that when using Metal you can assume the full A7 stack is available, but only because iOS on the A7 is the only platform where Metal is actually implemented ;) If someone were to write a Metal library which translates the calls into OpenGL, you could link your application against it and it would still work.

I can't stress enough that I'm thinking about this in terms of OpenGL running more like Java (though OpenGL doesn't suck as much as Java on most platforms), and Metal running more like native code compiled specifically against it.

While your Java analogy has some merit, it applies to any API in more or less the same degree. The purpose of an API is to provide an abstraction level over hardware, so there is always some translation involved. APIs like Metal and DirectX 12 aim to be 'thin' by avoiding design mistakes that can lead to reduced performance in certain scenarios. APIs like OpenGL do not really care about that, because they carry historical baggage from times when the hardware was different. Both kinds of API make assumptions about the underlying hardware, and neither has anything that makes it 'like native code'. The Metal shading language, for example, is in no way similar to the actual native code used by the GPU.
 