
johngwheeler

macrumors 6502a
Original poster
Dec 30, 2010
640
212
I come from a land down-under...
At present, it seems that only a handful of specialized apps will make use of the nMP's GPUs for computational tasks. This makes it less attractive if you don't use one of these apps.

How long do you think it will take for the majority of desktop apps to be able to leverage the GPU to improve performance?

My own interest is Java; there is a current project to run Java code on the GPU (Sumatra), which may find its way into JDK 8 or 9 by 2015.
 
Not all desktop tasks are parallel in nature, meaning only specialized applications can take advantage of OpenCL.

What day-to-day software would you think needs highly parallel computing?
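For what it's worth, the parallel/serial split is easy to illustrate in a few lines of plain Python (no OpenCL involved, and both functions are made up for the example): an elementwise operation can be farmed out to thousands of GPU threads, while a computation where each step depends on the previous result cannot.

```python
# Embarrassingly parallel: every element is independent, so the work
# maps naturally onto thousands of GPU threads.
def brighten(pixels, delta):
    return [min(p + delta, 255) for p in pixels]

# Inherently serial: each step needs the previous total, so extra
# cores (or a GPU) can't help.
def running_balance(transactions, start=0):
    balances, total = [], start
    for t in transactions:
        total += t              # depends on the previous iteration
        balances.append(total)
    return balances

print(brighten([10, 250, 100], 20))   # [30, 255, 120]
print(running_balance([5, -2, 7]))    # [5, 3, 10]
```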
 
Never.

Welcome to the Apple Marketing Hype Machine.

There are many situations where OpenCL simply isn't applicable. Those situations aren't changing anytime soon. Furthermore, there are concerns about different GPU families returning slightly different results via OpenCL. This makes a lot of GPU processing even more unattractive if you need stability in your results across different machines.
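The cross-GPU result drift is mostly just floating-point arithmetic: addition isn't associative once rounding is involved, and different hardware may order a parallel reduction differently. A tiny Python sketch of the effect:

```python
# The same four numbers summed in two different groupings, the way a
# parallel reduction on two different GPU families might order them.
vals = [1e16, 1.0, 1.0, -1e16]

serial   = ((vals[0] + vals[1]) + vals[2]) + vals[3]   # left to right
pairwise = (vals[0] + vals[3]) + (vals[1] + vals[2])   # grouped in pairs

print(serial)    # 0.0 -- each 1.0 was absorbed by the huge 1e16
print(pairwise)  # 2.0 -- the small values were added together first
```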

WYSIWYG, basically. There are a few more high end applications that I'm sure will be supporting OpenCL in the future, but that's about it. Don't expect it to revolutionize non-niche tasks. You can blame Apple for making it sound like the be-all-end-all solution to everything.

-SC
 
I have the hex-core Late 2013 Mac Pro. Before that, my primary system was a Late 2012 Mac Mini 2.6GHz quad-core i7. And before that, I had the 2006 Mac Pro 1,1 quad-core 2.66GHz.

Even without OpenCL, it "feels" to me that Apple could still do more to take advantage of all the CPU cores.

Mavericks has Grand Central Dispatch, but just watching Activity Monitor, I'm surprised how often I see something sitting at 100% CPU for a long time while the box is otherwise essentially idle.
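The dispatch pattern GCD encourages can be sketched in Python with a thread pool (the work function is invented for the example; note that in CPython the GIL limits CPU-bound speedup, whereas GCD's native threads really do land on separate cores):

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(n):
    """Stand-in for one independent work item (e.g. exporting one photo)."""
    return sum(i * i for i in range(n))

jobs = [10_000, 20_000, 30_000, 40_000]

# Serial version: one core pegged at 100%, the rest of the box idle.
serial = [crunch(n) for n in jobs]

# Pooled version: the shape of dispatch_apply() under GCD -- hand the
# independent items to a queue and let the runtime spread them out.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(crunch, jobs))

assert serial == parallel   # same answers, independent scheduling
```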
--
 
About as long as it took everyone to rewrite their software to take advantage of Altivec.
 
How long do you think it will take for the majority of desktop apps to be able to leverage the GPU to improve performance?

Depends on what we're talking about.

Apps like Microsoft Word and mail clients have very little reason to use GPGPU under normal use, so there's no incentive to ever optimize them that way. If those count toward the "majority of desktop apps", it may never happen.

Core Animation under OS X already has some OpenCL support, so apps like iPhoto, Safari, and Maps should be seeing GPGPU gains, although they already used OpenGL anyway.
 
Is it possible we'll see any games take advantage of OpenCL or would it not be beneficial?
 
Nope.

Game engines target the lowest common denominator and try to run as well as they can on it (typically your Xbox and PlayStation consoles). Even if there were operations you could speed up using OpenCL (AI, physics, and audio processing spring to mind), there'd be no money in it, because it would require specialized hardware to take advantage of.

-SC
 
Is it possible we'll see any games take advantage of OpenCL or would it not be beneficial?

OpenCL is for computation; what you're referring to is the job of OpenGL/Mantle/Direct3D/DirectX.

Maybe there'll be some need for OpenCL in game development, but for the actual game itself, I doubt it.

And yes, it'll take a loooong time for software to make use of OpenCL. Heck, a lot of software doesn't even fully use multiple CPU cores.
 
I think HSA is one of the key technologies that will drive the adoption of OpenCL. Unfortunately, there is no counterpart from Intel yet.

Once using the GPU is as easy as using a floating-point coprocessor (FPU), we will see GPUs used everywhere. I assume by then it won't be so much OpenCL directly as libraries that use OpenCL under the hood.
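A sketch of what "OpenCL under the hood" could look like from the application's side (all the names here are hypothetical, and the GPU branch is only a stub):

```python
def gpu_available():
    """Stub: a real library would probe for an OpenCL device here."""
    return False

def _blur_gpu(samples):
    raise NotImplementedError("would enqueue an OpenCL kernel")

def _blur_cpu(samples):
    """Naive 3-point moving average on the CPU."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def blur(samples):
    # The caller never mentions OpenCL: the library dispatches to the
    # best backend, the way the FPU is used transparently today.
    if gpu_available():
        return _blur_gpu(samples)
    return _blur_cpu(samples)

print(blur([0.0, 3.0, 6.0, 9.0]))   # [1.5, 3.0, 6.0, 7.5]
```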
 
Pretty sure Iris Pro meets the requirements for HSA.

You're right. At least some parts are already there. From the Intel developer docs:

> 5.3
> Intel Iris Graphics Extension for Instant Access
> The 4th gen Intel Core processors use a unified memory architecture, so the
> GPU and the CPU share the same physical memory. Therefore, it is a waste of
> resources (bandwidth and power) to force memory copies when the CPU
> wants to write to or read from GPU resources.

Of course it would be much easier if AMD and Intel would agree on a common standard, like they did with the 64-bit extensions to x86 that AMD developed and Intel later adopted (after their Itanium IA-64 architecture did not take off as expected).

Anyway, I think AMD and Intel are moving in the right direction. Instead of copying huge amounts of data between RAM and VRAM, it's now just a pointer to a place in RAM.
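The copy-vs-share difference can be mimicked with Python's memoryview (just an analogy for unified memory, not actual VRAM):

```python
data = bytearray(b"frame-buffer-contents")

# Copy model: the "GPU" gets its own duplicate -- costs bandwidth/power.
copied = bytes(data)

# Shared model: hand over a zero-copy view into the same memory, the
# analogue of the CPU and GPU addressing one physical RAM.
view = memoryview(data)

data[0:5] = b"FRAME"
print(bytes(view[0:5]))   # b'FRAME' -- the view sees the update
print(copied[0:5])        # b'frame' -- the copy went stale
```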
 
About as long as it took everyone to rewrite their software to take advantage of Altivec.

I'm not entirely sure how similar to AltiVec OpenCL actually is. A lot of articles I've read point to it being next to useless for software synth processing and only theoretically useful for convolution reverb effects.

Back when I had a G4, the demise of a heavily optimised software synth that used AltiVec turned the machine into a calculator overnight when I tried doing the same kind of plugin-heavy sessions with other synths that just relied on brute-force CPU power. Even swapping the CPU for a dual one didn't make much difference.
 
Nope.

Game engines target the lowest common denominator and try to run as well as they can on it (typically your Xbox and PlayStation consoles). Even if there were operations you could speed up using OpenCL (AI, physics, and audio processing spring to mind), there'd be no money in it, because it would require specialized hardware to take advantage of.

-SC

OpenCL is for computation; what you're referring to is the job of OpenGL/Mantle/Direct3D/DirectX.

Maybe there'll be some need for OpenCL in game development, but for the actual game itself, I doubt it.

While all true, OpenCL is used by DICE for physics in the new Battlefield, and also by Havok physics and for Lara Croft's hair. So there are some uses, although it's far from being widely adopted.

I do wonder if AMD's new Audio tech might leverage OpenCL as well, considering how good their products are at it.
 
Of course it would be much easier if AMD and Intel would agree on a common standard, like they did with the 64-bit extensions to x86 that AMD developed and Intel later adopted (after their Itanium IA-64 architecture did not take off as expected).

They did, it's OpenCL. :)

The newest version of OpenCL adds support for HSA.

----------

I'm not entirely sure how similar to AltiVec OpenCL actually is. A lot of articles I've read point to it being next to useless for software synth processing and only theoretically useful for convolution reverb effects.

Back when I had a G4, the demise of a heavily optimised software synth that used AltiVec turned the machine into a calculator overnight when I tried doing the same kind of plugin-heavy sessions with other synths that just relied on brute-force CPU power. Even swapping the CPU for a dual one didn't make much difference.

Kind of? And kind of not. They're both used for processing large chunks of data, but AltiVec is more useful for time-sensitive operations.

AltiVec and Intel's SSE are actually very similar, and I think they're preferable for real-time operations. OpenCL is better for less time-sensitive operations and for things that are on the GPU anyway.
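One way to see the SIMD-vs-GPU split is as a fixed-overhead trade-off. The constants below are invented purely for illustration, but the shape is real: a GPU dispatch pays a launch/transfer cost up front, while SIMD instructions run inline with the rest of the code.

```python
SIMD_RATE  = 4       # elements per "cycle" (4-wide AltiVec/SSE)
GPU_RATE   = 64      # elements per "cycle" once the kernel is running
GPU_LAUNCH = 5_000   # fixed cycles to enqueue the kernel and move data

def simd_cycles(n):
    return n / SIMD_RATE

def gpu_cycles(n):
    return GPU_LAUNCH + n / GPU_RATE

# A short real-time audio buffer: the launch overhead dominates.
print(simd_cycles(512) < gpu_cycles(512))                  # True
# A huge offline batch: the GPU's throughput wins easily.
print(gpu_cycles(10_000_000) < simd_cycles(10_000_000))    # True
```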
 
It's also used by Havok physics, and for Lara Croft's hair. So it has some uses, although it's far from being widely adopted.

bleh....fiber bundles...

That is where gpu computation is strong, massively parallel computation.
 
Many, many years

Think about how long it took before it became the norm for software to take advantage of GPU hardware acceleration for graphics and video, and of multiple CPU cores. Even today we have software that could benefit from both but doesn't.

And I think it's fair to say that multiple CPU cores are more generically useful than GPGPU, so OpenCL will have an even longer hill to climb.

That being said, Apple seems to be pushing for it in both software and hardware, so if you have an Apple workflow you'll benefit from OpenCL more quickly than the norm.
 
Retina software happened quickly

I was pleasantly surprised by how quickly so many apps changed to use the Retina display on the rMBP, so perhaps that could happen with OpenCL too.

I have been wondering how Photoshop fits into the argument that "many apps don't need GPU processing". I think it only uses the GPU for a few things at the moment, although sharpening was added last week. Could using the GPU more massively speed up Photoshop?
 
I was pleasantly surprised by how quickly so many apps changed to use the Retina display on the rMBP, so perhaps that could happen with OpenCL too.

I have been wondering how Photoshop fits into the argument that "many apps don't need GPU processing". I think it only uses the GPU for a few things at the moment, although sharpening was added last week. Could using the GPU more massively speed up Photoshop?

Why?

There is a huge difference between "we should just scale up our UI icons, and let Mac OS X figure things out" and "we should completely re-write all our engine code to support OpenCL".

For a large application, you're talking about the difference in cost between tens of thousands of dollars, and hundreds of thousands of dollars (or millions).

-SC
 
Most important, really, would be for applications like Creative Cloud and the pro audio apps to get adapted.

Actually, today I tried video encoding in Adobe Media Encoder with software, CUDA, and OpenCL support on my Late 2012 iMac.

Guess what: OpenCL was already even a bit faster than CUDA!
I've read a lot on the Adobe blog and forums that they want to become hardware independent.
So I'm pretty sure they will absolutely embrace OpenCL dual graphics on the nMP.

Another post that fed my hope was a recent one on the Adobe After Effects blog. The developer asked what users would think if Adobe spent the entire year 2014 on just one thing: making After Effects blazing fast.
The positive response was overwhelming, and he then wrote that they got the message.

Sadly, he said it is not what Adobe has planned for 2014, but they are considering doing it in 2015 or maybe even changing their plans.
I guess the best chance to make a tool like After Effects real-time capable in the viewport would be through GPU acceleration.
 
I was pleasantly surprised by how quickly so many apps changed to use the Retina display on the rMBP, so perhaps that could happen with OpenCL too.

I have been wondering how Photoshop fits into the argument that "many apps don't need GPU processing". I think it only uses the GPU for a few things at the moment, although sharpening was added last week. Could using the GPU more massively speed up Photoshop?

Yes, it would massively speed up Photoshop, but only if they rewrite a lot of code.

In the end, it'll really depend on software developers to implement existing technologies. Between OpenCL and CUDA, my bet is on OpenCL, mainly because it's not tied to a single hardware manufacturer, and I can't see a pure software company placing its bets on one.
 