What is the conflict with "dead on Mac" when you are ruling out Macs because you need CUDA? Your own choice indicates that it has a future growth problem in at least part of the Mac space.
No, my choice ruled out the new Mac Pro, not all Macs. I bought a 5,1 Mac Pro and put an NVIDIA card in it. If I bought a new iMac or a new MacBook Pro, CUDA apps would work there as well. @Jasonvp said it properly: CUDA is dead on the nMP, for now. Maybe in a couple years it will be dead across the whole Mac line, but that's not the case now, only on the nMP. And should this upgradable nMP ever have an NVIDIA GPU replacement, even that model would regain CUDA support.

Look, this is all future speculation. But when someone says CUDA is now dead on the Mac, but all the Mac software options I have are CUDA only, I have a hard time reconciling the two.

CUDA isn't going to die overnight or even in a couple of years.
Wait, I thought it was already dead? :p
 
Look, this is all future speculation. But when someone says CUDA is now dead on the Mac, but all the Mac software options I have are CUDA only, I have a hard time reconciling the two.

it's a figure of speech and there were other words/posts which accompanied it..

let's try it this way-- i'll propose a challenge ;)

go find the developer who is, right now, saying "i am continuing to work on & improve my cuda implementation on the mac platform"

i don't think you'll find that person.. ie- applications which are currently cuda based on mac have peaked.. they aren't going to get any better and there aren't going to be new mac applications supporting cuda.. what you see now is what you get.. there are no future improvements.. it's dead.. it'd be like writing flash based apps for iPhones.


[edit] ha.. i get the problem here after re-reading that... i'm using too strong of words which when heard by someone with an opposing viewpoint, end up sounding extra-extra strong..
 
I also find the notion that Apple will be using AMD cards in the MBPr and iMac in future iterations HILARIOUS. AMD's laptop cards are so far behind Nvidia on power-to-heat it's not even funny. They've basically given up and devoted all their energies to the combined APUs that basically compete with Intel's Iris integrated graphics.
 
I also find the notion that Apple will be using AMD cards in the MBPr and iMac in future iterations HILARIOUS. AMD's laptop cards are so far behind Nvidia on power-to-heat it's not even funny. They've basically given up and devoted all their energies to the combined APUs that basically compete with Intel's Iris integrated graphics.

If Sony, Microsoft and Nintendo chose AMD GPUs for their next-gen consoles AND now Apple follows suit in their Pro line, then you can be sure that AMD are doing something right to convince them. The belief that Nvidia is king, which, if I'm honest, I always held too, having stuck with them since upgrading my Voodoo 3dfx card, is now a little harder to hold onto.
 
If Sony, Microsoft and Nintendo chose AMD GPUs for their next-gen consoles AND now Apple follows suit in their Pro line, then you can be sure that AMD are doing something right to convince them. The belief that Nvidia is king, which, if I'm honest, I always held too, having stuck with them since upgrading my Voodoo 3dfx card, is now a little harder to hold onto.

Consoles really have nothing to do with PC gaming graphics, and especially nothing to do with the professional environment. AMD was the cheapest option to manufacture the hardware, and unlike Nvidia it is not trying to ship its own gaming system; I am pretty sure Nvidia was not even considered for the silicon.

As soon as you start messing with their territory you get cut, just like Nvidia got cut by Apple on the Mac Pro a long time ago, and it will get cut from mobile systems as soon as Intel steps up with its GPUs. It's inevitable, as Apple doesn't want you to be able to run and utilize its competition's technology. That's also one of the reasons the GPUs inside the new Mac Pro are not replaceable, even though they could have been.
 
let's try it this way-- i'll propose a challenge ;)

go find the developer who is, right now, saying "i am continuing to work on & improve my cuda implementation on the mac platform"

i don't think you'll find that person..
Err, are you sure you want to propose that challenge? ;) Maybe like a hotdog eating challenge instead? I'd definitely lose that one.

Ok, I'll point to two big ones: Chaos Group (V-ray) and OTOY (Octane). V-ray is arguably the most widely used GPU-capable rendering engine, and it's working on V3 beta right now with CUDA enhancements. Octane just announced a huge partnership with Amazon, NVIDIA, and Mozilla that brings cloud rendering and application streaming access to the browser, as well as V1.5 beta of their GPU rendering application. CUDA only.

Do I get a cookie?
 
OpenCL roadmap according to an Adobe spokesperson

Several months ago at the Createsphere Blackmagic Design event in Burbank, I spoke to the Adobe speaker about the future of OpenCL/CUDA. I can't remember his name, but he was in charge of customer feedback. He believed that in about 18 months OpenCL would be on par with CUDA. So around early 2015, or one year into the reign of the new Mac Pro, perhaps we can expect to see them on an even footing.

Luckily for us, that would mean we still get at least 12 more months to complain about all this. Then that would be the perfect time for Apple to make the move from Intel back to PowerPC just to keep the Forum churning.
 
Or use the same style of SoC as the iPhone on the Mac, which is way more likely.
 
Consoles really have nothing to do with PC gaming graphics, and especially nothing to do with the professional environment.
I find that quite narrow-sighted. AMD uses OpenCL in its tech, which means that all the console developers will be using, tweaking and optimising their software and games to get the most from OpenCL for the next 8 years. That knowledge is transferable, and thus OpenCL suddenly gains a huge amount of industry support. Try not to think of this as the end product 'Game' but as lower-level knowledge, algorithms, documentation, bug fixes, optimisations, etc. When a standard is open, everybody benefits from it, unlike CUDA, which is closed: updates are closed, and you don't know what's going on with CUDA until Nvidia makes a release.

AMD was the cheapest option to manufacture the hardware, and unlike Nvidia it is not trying to ship its own gaming system; I am pretty sure Nvidia was not even considered for the silicon.
If AMD managed to bag ALL the billion-dollar console manufacturers because they were simply cheaper, then in that light Nvidia just lost a ton of business/opportunities/progression to their biggest competitor, wouldn't you agree?

I don't understand what you mean when you say 'Nvidia was not even considered for the silicon'.

It's inevitable, as Apple doesn't want you to be able to run and utilize its competition's technology. That's also one of the reasons the GPUs inside the new Mac Pro are not replaceable, even though they could have been.

Sorry for being ignorant again, but Apple's competitors' technology? Do you mean the change from the old Mac Pro to the new one? I would think that was more to squeeze performance/efficiency out of the new design than a "let's make it impossible to modify" standpoint. Although both can be true, and some of us are a little miffed that there are no Nvidia options. You can still get Thunderbolt PCIe enclosures. But yes, the GPU is custom. Maybe Nvidia will sell a Titan Mac Pro Edition at some point, as we don't know yet whether you can just pop these custom cards out. News that the CPU and RAM can be upgraded is surfacing already, and some people are betting the PCIe storage will be replaceable too. We can't be sure yet.

Anim
 
I find that quite narrow-sighted. AMD uses OpenCL in its tech, which means that all the console developers will be using, tweaking and optimising their software and games to get the most from OpenCL for the next 8 years.

Nope. Game makers don't generally need to do things with massively parallelized computations. AMD is releasing their own proprietary API (sound familiar?) for gaming called Mantle. That's what the console folks will be writing to, if said consoles stick with AMD.
 
Windows vs Linux, iOS vs Android, DirectX vs OpenGL, lol, even Flash vs HTML5. Open source's track record as a development platform is pretty shaky.
 
Err, are you sure you want to propose that challenge? ;) Maybe like a hotdog eating challenge instead? I'd definitely lose that one.

Ok, I'll point to two big ones: Chaos Group (V-ray) and OTOY (Octane). V-ray is arguably the most widely used GPU-capable rendering engine, and it's working on V3 beta right now with CUDA enhancements. Octane just announced a huge partnership with Amazon, NVIDIA, and Mozilla that brings cloud rendering and application streaming access to the browser, as well as V1.5 beta of their GPU rendering application. CUDA only.

Do I get a cookie?

i'll give you a cookie anyway.. there's a super sweet bakery nearby that i'm always hyping up to anybody that will listen

but the challenge was to quote a developer saying "i am continuing to work on & improve my cuda implementation on the mac platform".. not "i'm working on cuda for windows"

i'm internet friends with devin, one of the vray plugin writers.. i'll mail him today to see what he has to say about it.
 
Look, this is all future speculation. But when someone says CUDA is now dead on the Mac, but all the Mac software options I have are CUDA only, I have a hard time reconciling the two.


Wait, I thought it was already dead? :p

Another way to phrase it is that GPU compute on nMP is dead outside of some Apple-written applications and light Adobe users.

If you're a current 3D renderer developer, using CUDA as most are, is the nMP market big enough to develop for? Will you put precious resources toward supporting a small subset of the potential market?

The vast majority of 3D users are on Windows or Linux, and the majority of those are on Nvidia.
 
Windows vs Linux, iOS vs Android, DirectX vs OpenGL, lol, even Flash vs HTML5. Open source's track record as a development platform is pretty shaky.

that's a tricky game to play.
how about things like C/C++, python, java, javascript, ruby, etcetc..

i mean, most (if not all) of your applications are being coded using open source languages.


[edit]
10 most popular coding languages of 2013:

[Image]
 
Nope. Game makers don't generally need to do things with massively parallelized computations.

From what I understand, they are doing it.

All the latest gaming cards are showing more and more CUDA cores/streams. It is a selling point as much as the clock speed these days. And that's 'gaming' cards. I put that down to demand from the gaming industry.

The most obvious cases of use in gaming are realtime physics, fluids and effects (e.g. fur and turbulence, if we include COD Ghosts), all offloaded to the GPU, leaving the CPU for other tasks.
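To make the "offloaded to the GPU" part concrete, here's a minimal OpenCL sketch of that pattern: a toy per-particle position update, run as one work-item per particle while the CPU stays free for other work. It's purely illustrative; the kernel, the update_positions name and the buffer sizes are made up for this post rather than taken from any shipping engine, and error checking is omitted.

[CODE]
/* Toy GPU offload: one Euler step of a particle position update.
 * Build on OS X with:  cc particles.c -framework OpenCL
 * (or link against an OpenCL SDK elsewhere).  No error checking.
 */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

static const char *kSource =
    "__kernel void update_positions(__global float *pos,              \n"
    "                               __global const float *vel,        \n"
    "                               const float dt)                   \n"
    "{                                                                \n"
    "    int i = get_global_id(0);   /* one work-item per particle */ \n"
    "    pos[i] += vel[i] * dt;      /* simple Euler integration   */ \n"
    "}                                                                \n";

int main(void)
{
    enum { N = 1024 };
    float pos[N], vel[N], dt = 0.016f;            /* ~one 60 fps frame */
    for (int i = 0; i < N; i++) { pos[i] = 0.0f; vel[i] = (float)i; }

    /* Grab a GPU device and set up a context, queue, and program. */
    cl_device_id dev;
    clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "update_positions", NULL);

    /* Copy particle state to the device. */
    cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 sizeof(pos), pos, NULL);
    cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof(vel), vel, NULL);
    clSetKernelArg(k, 0, sizeof(dpos), &dpos);
    clSetKernelArg(k, 1, sizeof(dvel), &dvel);
    clSetKernelArg(k, 2, sizeof(dt), &dt);

    /* Launch N work-items and read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof(pos), pos, 0, NULL, NULL);

    printf("particle 512 moved to %f\n", pos[512]);
    return 0;
}
[/CODE]

The CUDA version of the same step would look almost identical, which is really the point of the whole OpenCL vs CUDA argument: the technique is the same, the question is only whose runtime you target.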

Also, the OpenCL statement says

OpenCL™ is the first open, royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers and handheld/embedded devices. OpenCL (Open Computing Language) greatly improves speed and responsiveness for a wide spectrum of applications in numerous market categories from gaming and entertainment to scientific and medical software.

That's entertainment and gaming, right there in their own mission statement :)

I agree it is still early days but people are leveraging the technology in different ways with huge results.

Anim

----------

Windows vs Linux, IOS vs Android, DirectX vs OpenGL, lol even Flash vs Html5. Open sources track record as a development platform is pretty shakey.

Hmm, you make a good point.
 
that's a tricky game to play.
how about things like C/C++, python, java, javascript, ruby, etcetc..

i mean, most (if not all) of your applications are being coded using open source languages.


[edit]
10 most popular coding languages of 2013:

Image
Is that a chart of languages learnt or developed for? Either way, you don't need either OpenGL or CUDA if you're willing to go super low level. Which, incidentally, is one of the problems with OpenCL.
 
Is that a chart of languages learnt or developed for? Either way, you don't need either OpenGL or CUDA if you're willing to go super low level. Which, incidentally, is one of the problems with OpenCL.

what exactly are you arguing about anyway?
my understanding is that you're saying a proprietary language is somehow better than an open source language.. is that your point? if not, what's your point?
 
what exactly are you arguing about anyway?
my understanding is that you're saying a proprietary language is somehow better than an open source language.. is that your point? if not, what's your point?
My point is that being "free and open" isn't a magic bullet that automatically makes a development environment the best. There is a lot more to consider.
 
My point is that being "free and open" isn't a magic bullet that automatically makes a development environment the best. There is a lot more to consider.

you're right.. free and open isn't a magic bullet..

but you also said "Open source's track record as a development platform is pretty shaky."
to me, that's a ridiculous statement.. or more likely, being said from a misinformed viewpoint.
 
My point is that being "free and open" isn't a magic bullet that automatically makes a development environment the best. There is a lot more to consider.

Can we also consider that this isn't a higher/broader-level SDK, as in the case of, say, DirectX vs OpenGL, but a smaller set of libraries to gain access to hardware, the GPU in this case? So hopefully history will not repeat itself in quite the same way.

Thoughts?
 
you're right.. free and open isn't a magic bullet..

but you also said "Open source's track record as a development platform is pretty shaky."
to me, that's a ridiculous statement.. or more likely, being said from a misinformed viewpoint.
I think that in this context, the statement was fair! But by all means feel free to derail with a lecture on why java and C are totally better than C# because they're free and open.
 
I think that in this context, the statement was fair! But by all means feel free to derail with a lecture on why java and C are totally better than C# because they're free and open.

hmm.. i'm really not trying to derail the thread..
it's just that you keep saying/implying that somehow open source languages are 'bad' or whatever.. or that cuda is somehow better than openCL because it's proprietary (and limited to a single manufacturer's hardware).
i think you'll have a very hard time finding someone who actually codes for a living to agree with that sentiment.
 
No, my choice ruled out the new Mac Pro, not all Macs. I bought a 5,1 Mac Pro and put an NVIDIA card in it. If I bought a new iMac or a new MacBook Pro, CUDA apps would work there as well.

There is a buried premise here that Nvidia won't get shut out. I'm sure AMD shows up for the "which x86 CPU goes in the next version" bake-off every time Apple has one. In that space, they aren't hitting Apple's specs and so haven't gotten a win.


@Jasonvp said it properly: CUDA is dead on the nMP, for now.

You're leaving out his last sentence in that response:
"... And don't be too surprised to see Apple push similar direction down to those platforms. ..." (where 'those platforms' was the MBP and iMac).

Nvidia lapsing on OpenCL support isn't speculative at all. They have. If they continue down that track there will be ramifications. There are indications that either Nvidia is drinking their own Kool-Aid ("CUDA won; the GPGPU API battle is over") or just playing dumb to incrementally juice their increasingly limited-scope market. Either way, it is highly suboptimal for them when it comes to winning new Apple GPU bake-offs for Macs.

Perhaps Nvidia is playing a "tortoise vs. hare" game where they only support OpenCL while there is an active window for an Apple design bake-off. Perhaps, but I doubt they are fooling anyone at Apple with that. It might work if AMD fumbles its upcoming updates badly.

Apple picks vendors who deliver what they ask for. Apple is asking for OpenCL (you are smoking something crazy strong if you want to label that as speculative). Nvidia is not delivering on OpenCL right now.


Maybe in a couple years it will be dead across the whole Mac line,

The big MBP/iMac design bake-offs are probably happening around now (or are already over for the 2014 designs), not a couple of years from now. There was no bake-off window for the 2013 designs, as both Nvidia and AMD basically had just speed-bump upgrades.

And should this upgradable nMP ever have an NVIDIA GPU replacement, even that model would regain CUDA support.

Who is speculating now? It is a custom card with an apparently custom connector and support for AMD CrossFire. I wouldn't hold my breath on that. By all appearances so far, these cards are going to be specific to this Mac Pro iteration and vice versa. Just like other Macs with embedded GPUs.

Look, this is all future speculation. But when someone says CUDA is now dead on the Mac, but all the Mac software options I have are CUDA only, I have a hard time reconciling the two.

"dead" is more so a reference to the future momentum. What you are focusing on is more so the past inertia ( past design wins and legacy software libraries ). Legacy inertia doesn't necessarily win in Apple design bake offs. There is a clear and established track record on that. That is hardly speculative.

Apple never said they were committed to supporting CUDA. It was always the case that Apple got to pick the GPUs in the sold configurations. There used to be a limited scope where developers could just say "Screw Apple's choices, I'm going to pick my customers' GPU". That's now gone. It is Apple's call across the whole lineup now. That isn't speculative at all. Mac developers who think they still call the shots on that have gone from denial to deep denial.


Wait, I thought it was already dead? :p

This isn't particularly about "dead" right now when talking about software developers' future plans. It is about where things are going. I fully understand developers who might say "we didn't do any planning for OpenCL 2-3 years ago, so we have nothing now on the Mac platform". They were ignoring lots of guidelines Apple was laying down, but AMD and Intel were not implementing them all that well either. At this point, though, Intel and AMD have turned the corner and, somewhat strangely from a Mac perspective, Nvidia is going in the opposite direction from Apple's guidelines. In that context, for developers to say they are doubling down on Nvidia is either denial or just clueless.
 
Another way to phrase it is that GPU compute on nMP is dead outside of some Apple-written applications and light Adobe users.

It could be phrased that way. It would have nothing to do with the truth, though.
Adobe isn't the only explicit adopter. Also, there are paths through standard OS X frameworks for Mac apps to invoke OpenCL, which dramatically widens the scope of apps and how fast the adoption path evolves.
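At the lowest level of those paths, OpenCL itself ships as a system framework on OS X, so any Mac app can link against it directly without installing a vendor SDK. The higher-level routes (system frameworks doing GPU work on the app's behalf) aren't shown here, but as a minimal sketch of how little it takes to reach the OpenCL runtime Apple ships, this just lists the compute devices the OS exposes:

[CODE]
/* Minimal sketch: enumerate OpenCL devices via the framework OS X ships.
 * Build with:  cc cldevices.c -framework OpenCL
 * Error handling omitted for brevity.
 */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_device_id devs[8];
    cl_uint n = 0;

    /* Ask for every OpenCL-capable device: the CPU plus whichever GPU
     * (AMD, Nvidia or Intel) happens to be in this particular Mac. */
    clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, 8, devs, &n);

    for (cl_uint i = 0; i < n; i++) {
        char name[256];
        clGetDeviceInfo(devs[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("OpenCL device %u: %s\n", i, name);
    }
    return 0;
}
[/CODE]

The runtime is simply there on every Mac regardless of whose GPU is inside, whereas CUDA only exists where an Nvidia GPU and Nvidia's driver happen to be present, which is the scope difference being argued here.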

If you're a current 3D renderer developer, using CUDA as most are, is the nMP market big enough to develop for? Will you put precious resources toward supporting a small subset of the potential market?

As I pointed out, there are developers leaving the Mac ecosystem all the time. Limited resources are quite a prevalent reason. However, the OpenCL space is not the Mac ecosystem. The number of deployed GPUs that can run OpenCL far outnumbers the number of CUDA-capable ones. If making a scope argument, the potential scope of OpenCL is FAR larger than CUDA's.

CUDA-only is far more a commitment by the software vendor to a smaller, narrowly scoped market, with what they probably expect to be much higher margins to offset the far lower unit count they are pragmatically targeting.

The vast majority of 3D users are on Windows or Linux, and the majority of those are on Nvidia.

And the vast majority of those folks buy Windows/Linux boxes from Windows/Linux system vendors. What that has to do with OS X software planning is at best a tangent. Software vendors who want to deliver cross-OS-platform software should be looking for cross-platform standards to leverage, not proprietary lock-in standards. If you throw lock-in APIs at your cross-platform software, you tend to get trapped in a tar pit after a while. Largely, trapping you in a tar pit is exactly one of that API's objectives. Especially when there is a viable, more open alternative.
 
The number of deployed GPUs that can run OpenCL far outnumbers the number of CUDA-capable ones. If making a scope argument, the potential scope of OpenCL is FAR larger than CUDA's.

You're aware of how small AMD's market share is in the DCC market, right? That's what I'm referring to here.

Don't have the most recent numbers but it's usually around 10-12%. Perhaps smaller right now because they haven't updated the FirePro line in so long.

People writing production-level 3D renderers don't really care that mom's Dell or your iPhone supports OpenCL.

Although technically, yes, the OpenCL-addressable market is larger, since Nvidia cards support it as well.
 