
NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
OpenGL 4.1 is quite limiting though. Off the top of my head, Cemu (Wii U emulator) requires OpenGL 4.5 and Yuzu (Nintendo Switch emulator) requires OpenGL 4.6. Can't imagine what else doesn't run because of the missing decade of OpenGL updates.
When was OpenGL last updated?
 

spaz8

macrumors 6502
Mar 3, 2007
492
91
Even when Apple did semi-care about OpenGL, it was always a full 1.0 version behind the leading edge on Windows. The Foundry threw in the towel trying to support a Mac version of Mari because the OpenGL versions between Windows and macOS were just too far apart, and they were not willing to rewrite the whole, very complex program in Metal... which makes me sad as a license owner. I pretty much bought my MP 6,1 to run Mari back in 2014.
 

mi7chy

macrumors G4
Oct 24, 2014
10,621
11,294
When was OpenGL last updated?

Makes sense now why a lot of software doesn't run, since, for example, Doom (2016) requires OpenGL 4.2.
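For context, here is a minimal sketch (not from any real title; names are illustrative) of how an engine typically discovers the context version at runtime, assuming a core-profile context already exists and a loader such as GLAD or GLEW provides the GL 3.0+ symbols:

```c
#include <stdio.h>
#include <GL/gl.h>   /* assumption: GL 3.0+ headers/loader; on macOS, <OpenGL/gl3.h> */

int meets_gl_version(int need_major, int need_minor)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   /* version queries exist since GL 3.0 */
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    printf("Context reports OpenGL %d.%d\n", major, minor);  /* macOS core profile: 4.1 */
    return (major > need_major) ||
           (major == need_major && minor >= need_minor);
}
/* meets_gl_version(4, 2) comes back false on macOS, so a title that
 * hard-requires 4.2 refuses to start. */
```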

https://en.wikipedia.org/wiki/OpenGL
[Attached screenshot: OpenGL version history table from Wikipedia]


Even Intel integrated graphics from 2013 could run it.
[Attached screenshot: OpenGL support on Intel integrated graphics]
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
When was OpenGL last updated?

2017



In part because Vulkan is effectively the successor:
“…. The Vulkan API was initially referred to as the "next generation OpenGL initiative", or "OpenGL next" by Khronos. …

… On March 7, 2018, Vulkan 1.1 was released by the Khronos Group …”


It's not the same scope, and technically the Khronos Group still lists OpenGL as an active standard.

However, pragmatically Vulkan is the replacement in the areas where Khronos is more interested and motivated. OpenGL 4.6 loops in support for SPIR-V shaders. That and other hooks into more active standards are a contributing reason why OpenGL still has active status. It may need to sync up with changes elsewhere, but major new features are unlikely. There are also many millions of lines of code out there whose owners don't want the standard terminated.
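For the SPIR-V hook mentioned above, a minimal sketch of the OpenGL 4.6 / ARB_gl_spirv path (names are illustrative, error checking omitted); instead of compiling GLSL source, the driver is handed a precompiled SPIR-V module:

```c
#include <GL/gl.h>   /* assumption: GL 4.6 headers/loader providing these entry points */

GLuint load_spirv_vertex_shader(const void *spirv_words, GLsizei spirv_size)
{
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    /* Hand the SPIR-V binary to the driver... */
    glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V,
                   spirv_words, spirv_size);
    /* ...then select the entry point; no GLSL compile step is involved. */
    glSpecializeShader(shader, "main", 0, NULL, NULL);
    return shader;   /* check GL_COMPILE_STATUS before linking in real code */
}
```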
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
I mean, if it was discontinued 5+ years ago, it's dead, isn't it? Isn't that what Windows is for (until they pull off the shift that they, too, are trying to do)?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Of course it’s limiting, it was released in 2010. It’s hardly a problem since Apple only provide OpenGL to support old legacy applications.

Chuckle. Apple did not ship OpenGL 4.x until 2013, three years later.


Of course Apple didn't get to 4.5, because they were already four versions behind by 2013.
Metal didn't come out until 2014 (the same year as 4.5), and not on Macs until 2015.

There were multiple years between when Apple quit and when they rolled out a supposed substitute. Khronos didn't do that for their transition; there was an OpenGL 4.6 to ease the move in the Vulkan 1.1 timeframe.


What Apple did was cheaper for them. It also opened the door for them to change the underlying 'plumbing' to be far more oriented to Metal's needs while the API was comatose (a non-moving target). What Apple was not doing was trying to provide Mac developers with a leading-edge graphics API for several years. They would have had a fig-leaf excuse if they had released a better alternative, but they did not for a long gap.

That is why it is puzzling that some folks think Apple is out to make the biggest, baddest GPU ever when they have a substantive track record of not putting much effort, for years at a time, into trying to stay at the leading edge of performance.
 
Last edited:
  • Like
Reactions: spaz8

leman

macrumors Core
Oct 14, 2008
19,521
19,674
For those interested in the obscure history of graphics: a long time ago (we are talking 2006-2007) the OpenGL committee was working on a next-gen version of OpenGL 3.0, codenamed "Longs Peak". It was supposed to be a forward-looking, clean-slate redesign of the entire API. Some slides were published which got the community very excited; I still remember how happy the devs were to finally get a proper modern API instead of working with the extremely messy and hard-to-debug OpenGL state machine. But then the committee went into a media blackout, and when they reemerged and announced "OpenGL 3.0" it was not what we expected. What we got was basically the same old OpenGL with a few minor features and a soft deprecation of some old functionality. Nobody really knows what happened back then and why the promising approach was abandoned; there was just a cryptic "we ran into issues" without much explanation. Most likely the committee either couldn't agree on some details or was actively sabotaged.

This was essentially the beginning of the end for OpenGL. Many devs, who had been sticking with the API because they believed in open standards and hoped that 3.0 would solve the issues people had complained about for a decade, just threw in the towel and switched to DirectX. The OpenGL community, once very strong and active, slowly fell apart, and the OpenGL committee became something of a running joke on the official GL forums.

I believe (and mind you, this is just speculation!) that Apple was among those frustrated by this development. They would have benefited immensely from the simplified API model due to their driver model (for example, they were the first implementor to drop the legacy OpenGL features and only implement the modern programmable-shading profile). The nature of OpenGL's abstractions makes performance unpredictable and requires a lot of software-specific driver-side optimisations to achieve good results, something Apple was not in the mood to do (unlike the hardware vendors, who could use this as a mechanism to promote their products). They probably started working on Metal around 2012, when it became abundantly clear that the open-standards approach wasn't producing any useful results (Apple submitted the initial OpenCL proposal to Khronos around 2008, if I remember correctly). Apple was also quick to join the initial Vulkan effort when the initiative was announced, but dropped out quickly as it became clear that the API was moving in a very different direction from what Apple would have liked. Well, at least they have managed to significantly influence the WebGPU spec to be conceptually similar to Metal.
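To make the "messy, hard-to-debug state machine" complaint concrete, here is a minimal, purely illustrative sketch (not from any real codebase) of classic OpenGL's bind-to-edit global state; every call mutates hidden context state, so the outcome of a draw depends on whatever happens to be bound at that moment:

```c
#include <GL/gl.h>   /* assumption: GL headers/loader; attribute setup omitted */

void draw_textured_quad(GLuint tex, GLuint vbo)
{
    glBindTexture(GL_TEXTURE_2D, tex);               /* mutates the global texture binding */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* edits whatever is bound */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);              /* mutates the global buffer binding */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);           /* implicitly reads dozens of other settings */
    /* If another function left blending, the active texture unit or the bound
     * program in an unexpected state, this draw silently misbehaves; the
     * driver, not the API, has to sort it out. */
}
```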
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
That is why it is puzzling that some folks think Apple is out to make the biggest, baddest GPU ever when they have a substantive track record of not putting much effort, for years at a time, into trying to stay at the leading edge of performance.

Read my post above for the historical context. And sure, I completely agree with you that OpenGL support was an extremely low priority for Apple. Its graphics division was pretty much frozen between 2009 and 2013.

But I find it even more puzzling that you would quote events that happened a decade ago and completely ignore what they are doing now. In the last few years Apple has developed a cutting-edge GPU API (and one which is actually a pleasure to work with), hardware that outperforms any other vendor in a similar thermal envelope, as well as state-of-the-art GPU debugging tools. They should still do more, and their developer relations group is pretty much non-functional, but they are investing a lot of time and effort into these things.
 

spaz8

macrumors 6502
Mar 3, 2007
492
91
The problem is Apple throws the baby out with the bathwater on developers every six or so years. So unless you are a devout Mac developer, or have large resources, the business case is often not there for devs to adopt whatever new novel tech Apple is championing that season. Do you remember how many years it took for devs to port their apps to 64-bit? Probably 5+ years, and that included 3D software that really would benefit from it. Apple had to basically castrate APIs to get devs to move to Carbon. Metal is a similar thing. I'm very happy that there is a Houdini port to Metal, but it is taking years, and I think most thought Mac support was gonna end when Apple Silicon showed up. You are basically asking software companies to 2-3x their effort for likely a fraction of their user base.
 
  • Like
Reactions: singhs.apps

jujoje

macrumors regular
May 17, 2009
247
288
I'm very happy that there is a Houdini port to Metal, but it is taking years, and I think most thought Mac support was gonna end when Apple Silicon showed up.
The Apple Silicon Houdini port is still OpenGL-over-Metal rather than native Metal, hence it's still stuck at the old OpenGL version feature-wise. It is more stable than the old AMD OpenGL driver and pretty performant, so it's not all bad, but it misses a few features. I strongly suspect that the new Houdini viewport will similarly be Vulkan/MoltenVK and not Metal (unless Apple steps up and provides a lot of support; developing an additional viewport just for Mac doesn't seem like a good use of time if MoltenVK is fast enough).

The Foundry threw in the towel trying to support a Mac version of Mari because the OpenGL versions between Windows and macOS were just too far apart, and they were not willing to rewrite the whole, very complex program in Metal...

Which is kind of ironic given that they showed off Mari on the 2013 Mac Pro and praised how revolutionary it was. Then again they also demonstrated a Metal viewport in Modo, and that never materialised either.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Yesterday, AMD claimed that its interconnect is 5.3 TB/s, twice as fast as UltraFusion.
[Attached slide: AMD interconnect bandwidth claim]


Is this a TSMC or AMD technology? Could Apple use it for M2 Ultra?
What are the implications of Mx Ultra doubling its interconnect speed?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Yesterday, AMD claimed that its interconnect is 5.3 TB/s, twice as fast as UltraFusion.

Is this a TSMC or AMD technology? Could Apple use it for M2 Ultra?
What are the implications of Mx Ultra doubling its interconnect speed?

It's a bit of a misnomer. The individual GCD-to-MCD connection is around 900 GB/s, so the claimed 5.3 TB/s is the aggregate bandwidth across the six MCD dies. It's kind of difficult to compare these things, especially since I couldn't find any information on the SLC (system-level cache) size or bandwidth of the M1 Ultra. I think overall it is fairly safe to assume that AMD's caches have always been faster than Apple's (you also have to consider the issue of power consumption). But then again, AMD's GPUs also run at much higher frequencies and need that extra bandwidth to keep the compute units fed. If you normalise for the compute throughput of each GPU there is not much difference; actually, Apple might even end up slightly ahead.
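Back-of-the-envelope, using only the marketing figures quoted in this thread (per-link ~900 GB/s, six MCDs) plus Apple's advertised ~2.5 TB/s for UltraFusion:

```c
#include <stdio.h>

int main(void)
{
    double per_link_gbs    = 900.0;  /* approximate GCD<->MCD link bandwidth from the post above */
    int    mcd_count       = 6;      /* MCD dies on the flagship part */
    double aggregate_tbs   = per_link_gbs * mcd_count / 1000.0;
    double ultrafusion_tbs = 2.5;    /* Apple's advertised UltraFusion figure for M1 Ultra */

    printf("AMD aggregate : ~%.1f TB/s\n", aggregate_tbs);                 /* ~5.4, close to the quoted 5.3 */
    printf("vs UltraFusion: ~%.1fx\n", aggregate_tbs / ultrafusion_tbs);   /* ~2.2x, hence "twice as fast" */
    return 0;
}
```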
 
Last edited:

tmoerel

Suspended
Jan 24, 2008
1,005
1,570
What are the odds that Apple will move away from integrated graphics for their Mac Pro and iMac Pro?

1 big GPU card is better than 4 M2 Max fused together in the end.

Combining multiple AMD or NVIDIA GPUs also wasn't very good using SLI or Crossfire. It is better to just have one big powerful one.
But Apple has a dedicated GPU and a dedicated CPU and dedicated RAM and dedicated Video and Audio accelerators......they are just all on the same piece of silicon :p
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
But Apple has a dedicated GPU and a dedicated CPU and dedicated RAM and dedicated Video and Audio accelerators......they are just all on the same piece of silicon :p
No.

We use dGPU to mean a card, typically non-unified memory, on PCIe. By contrast, the M1 has an iGPU, meaning integrated rather than dedicated. Most iGPUs rely on unified memory since they are on the same package as the CPU(s). Most or all Intel processors these days (e.g., Alder Lake) have some sort of integrated GPU on the package, in order to allow some customers to skip installing a graphics card in their product – the models likely to go into notebooks tend to have larger iGPU arrays than the workstation-class processors.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
No.

We use dGPU to mean a card, typically non-unified memory, on PCIe. By contrast, the M1 has an iGPU, meaning integrated rather than dedicated. Most iGPUs rely on unified memory since they are on the same package as the CPU(s). Most or all Intel processors these days (e.g., Alder Lake) have some sort of integrated GPU on the package, in order to allow some customers to skip installing a graphics card in their product – the models likely to go into notebooks tend to have larger iGPU arrays than the workstation-class processors.

I think you are kind of missing the joke of the post you have quoted ;)

That said, “dedicated” these days is mostly an emotional label. Folks use it as a synonym for “fast” or “powerful”. Well, Apple GPUs are plenty fast. We should just retire the term altogether. For the purpose of this thread it makes more sense to talk about a “modular” or “swappable” GPU, because that’s what people really mean.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,663
OBX
I think you are kind of missing the joke of the post you have quoted ;)

That said, “dedicated” these days is mostly an emotional label. Folks use it as a synonym for “fast” or “powerful”. Well, Apple GPUs are plenty fast. We should just retire the term altogether. For the purpose of this thread it makes more sense to talk about a “modular” or “swappable” GPU, because that’s what people really mean.
Which isn't a thing on the mobile side (well, not anymore, as MXM died a silent death). Even on the desktop side it really was just a GPU that wasn't on the same package as the CPU, since even modern-day GPUs can share memory with the CPU at a massive performance cost; see the RX 6400/6500.

At this stage of the game Apple, per how they have explained their API, really can't do a non-integrated GPU with display out on Apple Silicon. They probably could add a "dGPU" as a headless accelerator (think AMD's Instinct line), though.

I guess we can be surprised whenever they get around to showing us the new Mac Pro though.
 

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
I think you are kind of missing the joke of the post you have quoted ;)

That said, “dedicated” these days is mostly an emotional label. Folks use it as a synonym for “fast” or “powerful”. Well, Apple GPUs are plenty fast. We should just retire the term altogether. For the purpose of this thread it makes more sense to talk about a “modular” or “swappable” GPU, because that’s what people really mean.
Sure, let's change established terminology to placate overly sensitive Apple fans.
 
  • Like
Reactions: gwang73

tmoerel

Suspended
Jan 24, 2008
1,005
1,570
Sure, let's change established terminology to placate overly sensitive Apple fans.
I think the established terminology does need to change, as the world of computers is changing. The CPU/GPU paradigm is fading.
We now have a lot more processors all helping to solve computing problems: CPU, GPU, video processors, audio processors, encryption processors, ML processors, etc. Do you want each of these to be replaceable?
Keep in mind that communication between different parts suffers when you make things replaceable, and it also bumps prices up. Isn't it slowly time to admit that the computer world is changing towards something new that is more efficient, less power-hungry and more portable?
And keep in mind that power efficiency will become more and more important, as our current ways of generating power are polluting, finite and getting more and more expensive. When I see some of those external GPUs pulling 400W+ of power, it makes me cringe. This should not be allowed!
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
I think the established terminology does need to change, as the world of computers is changing.
The M1 has not changed what an integrated or dedicated GPU is, only our conception of them. Until the M1 Pro/Max, an integrated GPU meant slow, and a dedicated GPU meant fast.

To consider the M1 Pro, and especially the M1 Max, as having an integrated GPU is a bit misleading. If we consider the neural engine and the media engine as part of the GPU, as the PC world does, the M1 Pro/Max is more like a CPU integrated into a GPU than a GPU integrated into a CPU.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Sure, let's change established terminology to placate overly sensitive Apple fans.

Established terminology? What established terminology? Half of the posters on this forum consider the GeForce 320M a dedicated GPU just because it was made by Nvidia. Not to mention that this terminology is completely useless. Do you think people calling for a dGPU will be happy if Apple gives them a soldered-on GPU with soldered-on dedicated RAM? No, they are asking for "dedicated graphics" because they want a modular GPU card. But modularity is in no way part of the "dedicated" terminology.

Some time ago dGPU was a reasonably useful proxy for GPU capabilities. But now that we have integrated GPUs with a 512-bit DRAM bus and 10 TFLOPS of compute throughput, who cares whether the RAM is shared or dedicated? Even Intel is now shipping iGPUs that are faster than some dGPUs. These discussions would be much more constructive if people stopped relying on opportunistic labels and voiced their expectations instead.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
M1 Pro/Max is more like a CPU integrated into a GPU than a GPU integrated into a CPU.

It's a somewhat fitting characterization, but at the same time it is still trying to deform the technical reality into very narrow conventional notions, just because the PC market historically happened to operate within those notions. All of these difficulties disappear if one talks about the technical properties of the system: the memory hierarchy and the data interconnect.
 