There is every reason in the world to think that Apple is thinking about more than just price, at least short-term price. Apple doesn't want to live in a world where only Intel and nVidia exist as parts suppliers. Even if Apple wants to use nVidia and Intel hardware, they want those companies to have competition to keep prices lower and performance gains going, long-term. Apple had enough trouble in the PPC days and they are not interested in getting into that same situation again.

I also think Apple very much does want to kill CUDA. They'd much prefer OpenCL be king so that they and their developers can make portable code. Apple doesn't want to get locked into CUDA any more than their customers do. If you've already gotten yourself locked into CUDA, I sympathize, but you should be pressuring your software suppliers to port over to OpenCL ASAP.

So, if Apple can send some money AMD's way to keep them alive, and help kill off CUDA in the process, that's a win-win for Apple, and a win for Apple's customers too. And why do you think AMD is so willing to do custom form factors for less, anyway? AMD is grateful for Apple's support.

Great points. Apple obviously doesn't want to restrict itself to a compute API for which most of its lineup lacks the necessary hardware (i.e., most Macs have no discrete GPU). And of course they don't want to limit themselves to Nvidia alone for discrete GPUs.

And yet for the first time in Blizzard's history they're releasing a new IP that doesn't support OS X. Not exactly comforting knowledge.

Give it some time. Metal has only been shipping for a few months. It's impressive that Blizzard already has something working.

For comparison, DirectX 12 has been shipping since last summer (with dev builds before that) and not a single title has been released for it.
 
Give it some time. Metal has only been shipping for a few months. It's impressive that Blizzard already has something working.

For comparison, DirectX 12 has been shipping since last summer (with dev builds before that) and not a single title has been released for it.

I'm not sure you're following. There is no OS X version of Overwatch whatsoever, and the developers have yet to indicate any plans to develop one.

That game is designed mainly for Xbox and PlayStation. There's a PC version, but not many folks will buy it.
They'll get it mainly from the Xbox/PlayStation store.

I'm not sure what makes you think that. I'm pretty sure the PC version will sell better than the Xbox and PlayStation versions combined.
 
That game is designed mainly for Xbox and PlayStation. There's a PC version, but not many folks will buy it.
They'll get it mainly from the Xbox/PlayStation store.

Are you trolling? That's some very inaccurate information. It's as if you were unaware that gaming PCs are overtaking consoles in both hardware and "app" sales. Why do you think every console out there is being marketed as ****** emachines?
 
I also think Apple very much does want to kill CUDA.

If Apple wanted to kill CUDA, they wouldn't do it by limiting themselves to a single (lower-performance, hotter, more power-hungry) GPU supplier. You can do that in software and still have your choice of graphics card vendor.

CUDA going away is a symptom, not a cause.
 
If Apple wanted to kill CUDA, they wouldn't do it by limiting themselves to a single (lower-performance, hotter, more power-hungry) GPU supplier. You can do that in software and still have your choice of graphics card vendor.

CUDA going away is a symptom, not a cause.

Apple's been pushing OpenCL for years, you can't deny that. It's been a major part of their pitch for OS X and their computers going back a long time. Wanting to kill CUDA is not the sole cause, and maybe not even the primary cause of Apple's GPU choices lately, but it's a cause, one of several factors that surely played into their decision making, and a significant part of their long-term strategy for both Mac OS X and iOS.

Besides, all I said was that Apple does, in fact, want to kill CUDA. Even if you disagree that Apple chose AMD GPUs in the nMP based on that desire, I don't think you can make a case for Apple wanting CUDA to exist, especially long-term, when OpenCL fits more into their computing strategy, and they absolutely do not want to get locked into any one vendor and lose control over their own destiny. Since Steve Jobs returned, Apple has been increasingly focused on maintaining as much control as they can, and ceding as little of it as they can get away with to other companies. Focusing on open standards has been a big part of that.
 
I don't think you can make a case for Apple wanting CUDA to exist, especially long-term, when OpenCL fits more into their computing strategy

I think you're confusing simply ignoring it with working to an active strategy to destroy it. That's the thing: I don't think Apple cares enough about it to spend the effort on being actively hostile. Nvidia isn't in the machines for cost reasons; given that fact, there's no need for any sort of strategy regarding CUDA.
 
I also think Apple very much does want to kill CUDA. They'd much prefer OpenCL be king so that they and their developers can make portable code. Apple doesn't want to get locked into CUDA any more than their customers do.

If Apple does prefer OpenCL so much, why don't we have OpenCL 2.0 yet? AMD offers OpenCL 2.0 support on Windows, and so does Intel. Only Nvidia is dragging its feet, stuck at OpenCL 1.2. And I don't think Apple stayed on OpenCL 1.2 just to avoid embarrassing Nvidia.

As with their OpenGL capabilities chart, they could support OpenCL 2.0 and 1.2 in a mixed fashion. I'd rather think they want graphics and compute to run on Metal. It's not so much about killing other platforms as about being in control.
 
Apple's been pushing OpenCL for years, you can't deny that. It's been a major part of their pitch for OS X and their computers going back a long time.

On the other hand, they seem to have lost interest in OpenCL recently in favour of their own proprietary API (Metal).
If Apple does prefer OpenCL so much, why don't we have OpenCL 2.0 yet? AMD offers OpenCL 2.0 support on Windows, and so does Intel. Only Nvidia is dragging its feet, stuck at OpenCL 1.2. And I don't think Apple stayed on OpenCL 1.2 just to avoid embarrassing Nvidia.

As with their OpenGL capabilities chart, they could support OpenCL 2.0 and 1.2 in a mixed fashion. I'd rather think they want graphics and compute to run on Metal. It's not so much about killing other platforms as about being in control.

I guess we had the same thought at more or less the same time :) You were quicker, though.

What you're saying (about being in control) is certainly true; on the other hand, Metal is a VERY nice API, even though it's still immature. I am saddened by Apple's apparent decision to abandon the open APIs here, but they do offer a decent (and smart) alternative.
 
What you're saying (about being in control) is certainly true; on the other hand, Metal is a VERY nice API, even though it's still immature. I am saddened by Apple's apparent decision to abandon the open APIs here, but they do offer a decent (and smart) alternative.

Are you talking about recent changes to 10.11.x? I have read about API changes affecting After Effects on several levels.
 
Are you talking about recent changes to 10.11.x? I have read about API changes affecting After Effects on several levels.

I am not sure what you mean. I am talking about Apple pushing Metal as an alternative for both OpenGL and OpenCL on their platforms.
 
Apple's been pushing OpenCL for years, you can't deny that. It's been a major part of their pitch for OS X and their computers going back a long time. Wanting to kill CUDA is not the sole cause, and maybe not even the primary cause of Apple's GPU choices lately, but it's a cause, one of several factors that surely played into their decision making, and a significant part of their long-term strategy for both Mac OS X and iOS.

Besides, all I said was that Apple does, in fact, want to kill CUDA. Even if you disagree that Apple chose AMD GPUs in the nMP based on that desire, I don't think you can make a case for Apple wanting CUDA to exist, especially long-term, when OpenCL fits more into their computing strategy, and they absolutely do not want to get locked into any one vendor and lose control over their own destiny. Since Steve Jobs returned, Apple has been increasingly focused on maintaining as much control as they can, and ceding as little of it as they can get away with to other companies. Focusing on open standards has been a big part of that.


OK, let's get REAL: who doesn't want to KILL every selfish, proprietary, "MINE MINE MINE" technology and provider out there? IF Nvidia wants to play ball, SUPPORT OpenCL. And while we're at it, if those of us who LOVE Apple were to admit it, we HATE it when Apple does the same thing. SO, I love my CUDA functionality. Sadly it's Nvidia-only, and so I left. Hate me if you want to; my OLD 7970 stomps the mess out of the 680 monster I used to drool over. Nvidia, are you listening? Where are your big bad CUDA machines on the awesome, ahead-of-its-time new Mac Pro form-factor cards, so those of us who want those glorious TUBEs could have some CUDA too? Sure, there are eGPU solutions coming. But Nvidia didn't give a flying rip about the Mac, so RIP, CUDA. At least in my world, anyway.
 
Nvidia does give a hoot about Apple, which is why they're still pumping out drivers for Apple computers.
You can't have an Nvidia GPU in your nMP because Apple won't let you have one; that's not really Nvidia's fault. If the nMP used standard MXM boards or plain old PCIe cards, you could have one today! And for your information, even Apple doesn't believe in OpenCL anymore, since it's being replaced by Metal.
 
Nvidia does give a hoot about Apple, which is why they're still pumping out drivers for Apple computers.
You can't have an Nvidia GPU in your nMP because Apple won't let you have one; that's not really Nvidia's fault. If the nMP used standard MXM boards or plain old PCIe cards, you could have one today! And for your information, even Apple doesn't believe in OpenCL anymore, since it's being replaced by Metal.

Nvidia cares about keeping its users locked into its proprietary CUDA and GameWorks.
 
Nvidia cares about keeping its users locked into its proprietary CUDA and GameWorks.

Better to use a proprietary solution that works than an open-source one that doesn't. Even Apple has realized this, since they're canning OpenCL to make way for their own proprietary solution...
 
Better to use a proprietary solution that works than an open-source one that doesn't. Even Apple has realized this, since they're canning OpenCL to make way for their own proprietary solution...

I'm not optimistic about the future of OpenCL these days. Nobody is supporting OpenCL 2.0 well, if at all, and for HPC GPGPU applications it's either Nvidia Tesla using CUDA or Intel Xeon Phi using whatever API Intel chooses to support this month, which may or may not be some version of OpenCL. I can totally understand Apple wanting to further develop Metal for everything, as OpenCL is very painful to use and support.
 
It would be a shame if the Panic guys didn't do an OS X version :)

That's Firewatch, not Overwatch.
Better to use a proprietary solution that works than an open-source one that doesn't. Even Apple has realized this, since they're canning OpenCL to make way for their own proprietary solution...

OK, this needs a big citation. Please explain why CUDA as an API is better than OpenCL as an API. What magical powers would Adobe lose if they ported from CUDA to OpenCL?

For that matter, why is Metal better than OpenCL? You do realize Metal's compute component is basically OpenCL, right?
I'm not optimistic about the future of OpenCL these days. Nobody is supporting OpenCL 2.0 well, if at all, and for HPC GPGPU applications it's either Nvidia Tesla using CUDA or Intel Xeon Phi using whatever API Intel chooses to support this month, which may or may not be some version of OpenCL. I can totally understand Apple wanting to further develop Metal for everything, as OpenCL is very painful to use and support.

OpenCL 2, at least on Windows and Linux, isn't doing well because one company has decided not to support it.

You can guess which company that is. It's not hard.
 
OK, this needs a big citation. Please explain why CUDA as an API is better than OpenCL as an API. What magical powers would Adobe lose if they ported from CUDA to OpenCL?

For that matter, why is Metal better than OpenCL? You do realize Metal's compute component is basically OpenCL, right?

It's simple: ease of development. OpenCL applications are difficult to debug and often tedious to set up. Both CUDA and Metal offer more flexible shading languages, less boilerplate code, a set of standard development tools, and superior debugging capabilities. I can kind of understand why CUDA is so popular with the scientific crowd: you can just write your device and host code in the same file and fire it off without much overhead, which means you can concentrate on what's important. With Metal, it's not that different, because Xcode takes care of all the tedious things for you. Especially if you want to mix graphics and general compute, Metal is bliss. You can have a Metal graphical app that doesn't use the graphics pipeline at all and instead produces its visual output with compute shaders, which is very beneficial in some situations (e.g., voxel rendering) and is so easy to set up that it's basically trivial. My first small Metal sample was up and running in literally 10 minutes and took about 40-50 lines of code (a third of it diagnostic output).

Now, many of these issues have been fixed with OpenCL 2.1 and SPIR-V. I still hope that Apple will support SPIR-V as a shader language, or at least that someone writes a bytecode converter. It shouldn't be too difficult to do, as Apple's compiled Metal shaders are simply LLVM bytecode from what I have seen.
 
Card spotted at Zauba; might be big Polaris. To be taken with the usual grain of salt, though.
Could indicate that it's almost ready. Wishful thinking, maybe.
But I'm not holding my breath anymore for the nMP.
Seen the Late 2013 nMP GPU repair program?
I can already see one person here coming up with the usual comments about AMD GPUs being rubbish and burning up all the time!!
It seems to be confined to the Tahiti GPUs.
 
With Metal, it's not that different, because Xcode takes care of all the tedious things for you. Especially if you want to mix graphics and general compute, Metal is bliss. You can have a Metal graphical app that doesn't use the graphics pipeline at all and instead produces its visual output with compute shaders, which is very beneficial in some situations (e.g., voxel rendering) and is so easy to set up that it's basically trivial. My first small Metal sample was up and running in literally 10 minutes and took about 40-50 lines of code (a third of it diagnostic output).

See, that's kind of the thing. On the Mac, CUDA's tooling is eh. It's not bad, but it's not great. OpenCL is a little more painful, but once you've laid your foundation, it doesn't really matter. Metal absolutely destroys them both in development tooling, but it's proprietary.

But once you get past that, compute is different from graphics in that it's just math. You can have a real discussion about the differences between OpenGL and DirectX based on features and capabilities. CUDA vs. Metal vs. OpenCL? They're tools for putting math onto a card and running it. It's not as if the laws of math are different in CUDA vs. OpenCL. Sure, the languages are different, but it's not as if CUDA has some new version of mathematics that doesn't exist in OpenCL and bends the laws of time and space.

It's also why, in general, CUDA, OpenCL, and Metal all perform pretty similarly. They're all just different ways of loading the same equations onto the same card, and the same equation is going to run pretty much the same no matter what API loaded it onto the card.

The tooling is interesting, but it's not worth the tradeoff of locking yourself into a closed standard. Nvidia put a lot of money and free stuff into academia and business to get them to adopt CUDA (free Teslas for everyone, yay), but the tooling differences aren't really that big of a deal. If it all came down to tooling, everyone would be really excited about Metal. But they aren't.
 
A little information on how Nvidia's software manages DX12:
http://forums.anandtech.com/showpost.php?p=38014058&postcount=207
http://forums.anandtech.com/showpost.php?p=38014138&postcount=213
http://forums.anandtech.com/showpost.php?p=38014146&postcount=216

Mahigan said:
AMD, hardware wise, are several years ahead of the competition. AMD software wise, are several years behind the competition. The exact opposite is true for NVIDIA.
What I have been saying for some time now. The real question is: which can catch up faster, software or hardware?
 
But once you get past that, compute is different from graphics in that it's just math. You can have a real discussion about the differences between OpenGL and DirectX based on features and capabilities. CUDA vs. Metal vs. OpenCL? They're tools for putting math onto a card and running it. It's not as if the laws of math are different in CUDA vs. OpenCL. Sure, the languages are different, but it's not as if CUDA has some new version of mathematics that doesn't exist

That's the difference between theory and practice. In theory, you're right (though you also need to take into account that CUDA and Metal have more flexible shading languages, which again is more convenient for development). In practice, there are subtle (and less subtle) differences between different IHVs. Your kernel might work great on one platform but fail or suffer bad performance on another. It's not as bad as what we have with OpenGL, but it's not optimal either.

Khronos has made a number of very smart moves lately, reducing the implementation pressure on IHVs by offering a standard set of frontend compilers and validation tools. OpenCL might still have a bright future ahead of it (eventually as part of Vulkan).
 