Status
Not open for further replies.

Bubba Satori

Suspended
Feb 15, 2008
4,726
3,756
B'ham
That is only his opinion. CUDA has much better documentation and support than OpenCL; that is not in question. But from a compute-capabilities perspective they are the same: there is nothing that CUDA allows that OpenCL could not allow, unless you believe in Nvidia magic. CUDA just makes implementing and running software easier.

P.S. Great to know that VMware praises Nvidia for their marketing efforts.

"There's nothing CUDA allows..."
Well, except a full line of products that perform much better than the equivalent AMD product.

I'm hoping the Polaris FirePros knock it out of the park, but I'm not holding my breath.

:cool:

big+lebowski.jpg
 
  • Like
Reactions: tuxon86

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
"There's nothing CUDA allows..."
Well, except a full line of products that perform much better than the equivalent AMD product.

I'm hoping the Polaris FirePros knock it out of the park, but I'm not holding my breath.

:cool:

big+lebowski.jpg
So, after all, compute on the Nvidia side is more magical than compute on the AMD side. Nice to know that Nvidia has magic.
 

flat five

macrumors 603
Feb 6, 2007
5,580
2,657
newyorkcity
Well, except a full line of products that perform much better than the equivalent AMD product.
except there isn't actually a 'full line of products' using either cuda or openCL right now.

these things are still in their infancy.. raytracing and other simulations still rely heavily on cpu processing.. very, very few programs actually use cuda or openCL to their fullest extent, and most software claiming cuda/openCL support is simply using them for acceleration (ie- very little benefit compared to what's theoretically (and even not-so-theoretically) possible with gpgpu)

like- most (all?) of the cuda vs openCL arguments around here are based on benchmarks and/or the amount of software claiming support for one or the other, but there's little to no relevance to real-world applications since hardly anyone is using it.. yet.

this landscape will probably (hopefully) be different in the coming years but as of now, the CUDA vs openCL arguments hold little to no practical value.
Any of you younger punks know what the teapot in the icon is a reference to?

https://en.wikipedia.org/wiki/Utah_teapot

;)

(edit) at least, that's what i thought it was referencing.. not saying what i think is right though
 

Bubba Satori

Suspended
Feb 15, 2008
4,726
3,756
B'ham
So, after all, compute on the Nvidia side is more magical than compute on the AMD side. Nice to know that Nvidia has magic.

Not magic. Think of it as awesome sauce.
Back in the days of Athlon and Opteron, it was AMD that had the awesome sauce for a while.
And Intel was stinking up the CPU world with NetBurst P4s and crappy Xeons.
except there isn't actually a 'full line of products' using either cuda or openCL right now.

these things are still in their infancy.. raytracing and other simulations still rely heavily on cpu processing.. very, very few programs actually use cuda or openCL to their fullest extent, and most software claiming cuda/openCL support is simply using them for acceleration (ie- very little benefit compared to what's theoretically (and even not-so-theoretically) possible with gpgpu)

like- most (all?) of the cuda vs openCL arguments around here are based on benchmarks and/or the amount of software claiming support for one or the other, but there's little to no relevance to real-world applications since hardly anyone is using it.. yet.

this landscape will probably (hopefully) be different in the coming years but as of now, the CUDA vs openCL arguments hold little to no practical value.

https://en.wikipedia.org/wiki/Utah_teapot

;)

(edit) at least, that's what i thought it was referencing.. not saying what i think is right though

You're right. Got a good chuckle out of the Utah teapot reference.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Not magic. Think of it as awesome sauce.
Back in the days of Athlon and Opteron it was AMD that had the awesome sauce for awhile.
And Intel was stinking up the cpu world with Netburst P4s and crap xeons.
What the hell are you talking about? "Compute" is just mathematical algorithms. Nothing more, nothing less. CUDA and OpenCL are APIs that enable GPGPU for professional applications: mathematical calculations. There is nothing that says Nvidia's mathematical calculations are better than AMD's. Whether CUDA is better or worse than OpenCL is not my point here. Stop thinking of compute as CUDA-only, because that is not true.
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
except there isn't actually a 'full line of products' using either cuda or openCL right now.

these things are still in their infancy.. raytracing and other simulations still rely heavily on cpu processing.. very, very few programs actually use cuda or openCL to their fullest extent, and most software claiming cuda/openCL support is simply using them for acceleration (ie- very little benefit compared to what's theoretically (and even not-so-theoretically) possible with gpgpu)

like- most (all?) of the cuda vs openCL arguments around here are based on benchmarks and/or the amount of software claiming support for one or the other, but there's little to no relevance to real-world applications since hardly anyone is using it.. yet.

this landscape will probably (hopefully) be different in the coming years but as of now, the CUDA vs openCL arguments hold little to no practical value.

https://en.wikipedia.org/wiki/Utah_teapot

;)

(edit) at least, that's what i thought it was referencing.. not saying what i think is right though

for reference: http://www.nvidia.com/object/gpu-applications.html
21 pages worth...
What the hell are you talking about? "Compute" is just mathematical algorithms. Nothing more, nothing less. CUDA and OpenCL are APIs that enable GPGPU for professional applications: mathematical calculations. There is nothing that says Nvidia's mathematical calculations are better than AMD's. Whether CUDA is better or worse than OpenCL is not my point here. Stop thinking of compute as CUDA-only, because that is not true.

No one has said anything of the sort... What I'm saying, at least, is that Nvidia is the top dog at present with CUDA. In six months the situation may change, but at present it is what it is.
 

flat five

macrumors 603
Feb 6, 2007
5,580
2,657
newyorkcity
You're right. Got a good chuckle out of the Utah teapot reference..
heh.
to tie the teapot into the cuda/openCL thing..

the rendering software i use (Indigo) is in the process of being re-written from a cpu-based renderer w/ gpu acceleration to one that is completely gpgpu (openCL)..

the developers have a progress thread on the forum and use a teapot model to show off various aspects (mainly material support) of the new software

http://www.indigorenderer.com/forum/viewtopic.php?f=7&t=13556


(i assume that thread is visible to the public without needing to be a forum member.. if not, apologies in advance)
 

Machines

macrumors 6502
Jan 23, 2015
426
89
Fox River Valley , Illinois
"Jules Urbach, chief executive of Los Angeles-based Otoy, said in an exclusive interview with GamesBeat that Nvidia’s CUDA language is superior and enables much richer graphics software. Hence, OpenCL hasn’t provided a true market alternative to CUDA. That’s why building the CUDA “cross compiler” was an important task. And Urbach said the Otoy research and development team was able to do it in 9 weeks."

That's what we've been saying to you for months... Nice of you to see the light at last.
In other news:

Here's the whole fascinating article:

http://venturebeat.com/2016/03/09/o...-the-best-graphics-software-across-platforms/

Now, the relevant questions are:

Will Otoy license this cross compiler for use in third-party apps, or will they keep it in house?

How stable will it be? Nobody loves glitches and artifacts.

CUDA apps coming to Metal...
 

dpny

macrumors 6502
Jan 5, 2013
273
109
On the other hand, there's a very significant chance that Apple will simply declare the MP6,1 to be the "death of the pickup truck", and drop the line - citing poor sales for a system that nobody asked for.

While I doubt Apple will drop the nMP soon, we all need to face a simple fact: the market for high-end workstation machines is steadily shrinking, and it will turn into a true niche market sooner than most of us want it to. As it stands the availability of cheap commodity components is on the borderline of making really fast machines almost disposable, as evidenced by an earlier post of mine in which a guy I worked with built a machine just for rendering, used it for one job and then promptly sold it.

Which is to say, I don't think Apple ever expected big sales from the nMP, simply because the days of big sales from desktops are over. As it stands I will probably never buy another desktop from Apple, simply because for my work (print and digital pre-press/production) the combination of faster CPUs and Adobe's complete lack of interest in making their apps multi-threaded means a fast i7, an SSD and enough RAM is more than enough to handle almost anything. I will still have a PC desktop for gaming, but, even there, the advances we're seeing in gaming laptops are pretty amazing.

I don't know if Apple will ever abandon their pro workflow, but I think they will transition it to other form factors which, obviously, will piss off some people. Given the enormous advances Apple is making in the performance of their mobile chips, god only knows what we'll be doing on our phones in ten years.

And, for people who can't see a post about an iPhone/iPad without foaming at the mouth about how they're toys: I remember sitting in a newspaper's composition room in 1987, surrounded by people banging away at $80,000 dedicated typesetting terminals, listening to one of them bitch about having to use that little toy computer in the corner to make an ad. She then went on to explain why that Mac would never be of any use to anyone. Seven years later all those people were out of a job and I was running the pre-press department for a commercial and financial printer using those little toy computers.

Rant/digression over. We now return you to our regularly scheduled bitching/speculating.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Here's the whole fascinating article:

http://venturebeat.com/2016/03/09/o...-the-best-graphics-software-across-platforms/

Now, the relevant questions are:

Will Otoy license this cross compiler for use in third-party apps, or will they keep it in house?

How stable will it be? Nobody loves glitches and artifacts.

CUDA apps coming to Metal...
As I wrote before, it looks like Otoy took the Boltzmann Initiative CUDA compiler from AMD and improved on it: http://gpuopen.com/compute-product/hip-convert-cuda-to-portable-c-code/
That is the source information.
All in all it looks very interesting. We know that MoltenVK is bringing Vulkan to OS X (the question is how well it will work), and now this. It all looks like they are simply extensions to the API, just like the extensions touted for Vulkan.
No one has said anything of the sort... What I'm saying, at least, is that Nvidia is the top dog at present with CUDA. In six months the situation may change, but at present it is what it is.
Let's hope that it explodes, because... it is the best thing for everyone. For every platform, and every user.

One of the things that makes me slightly apprehensive is how CUDA code will be executed on AMD hardware. We know that Nvidia GPUs depend on the CPU for proper operation - static scheduling, due to the lack of a hardware scheduler. AMD GPUs, on the other hand, have hardware schedulers (HWS) and can manage themselves within the compute boundaries set by the CPU. It will be interesting from a functional perspective.
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
As I wrote before, it looks like Otoy took the Boltzmann Initiative CUDA compiler from AMD and improved on it: http://gpuopen.com/compute-product/hip-convert-cuda-to-portable-c-code/
That is the source information.
All in all it looks very interesting. We know that MoltenVK is bringing Vulkan to OS X (the question is how well it will work), and now this. It all looks like they are simply extensions to the API, just like the extensions touted for Vulkan.

Let's hope that it explodes, because... it is the best thing for everyone. For every platform, and every user.

One of the things that makes me slightly apprehensive is how CUDA code will be executed on AMD hardware. We know that Nvidia GPUs depend on the CPU for proper operation - static scheduling, due to the lack of a hardware scheduler. AMD GPUs, on the other hand, have hardware schedulers (HWS) and can manage themselves within the compute boundaries set by the CPU. It will be interesting from a functional perspective.

For it to explode there would have to be some viable competition, which sadly AMD isn't. We can't even be sure that AMD will still be around in the near future if their new IP fizzles out like their present one. Press releases for upcoming products aren't a measure of assured success when the previous products missed the mark by a few miles.

You may have the best paper launch, but if your products bench lower and consume more power than the competition, then you are just digging your own grave.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
For it to explode there would have to be some viable competition, which sadly AMD isn't. We can't even be sure that AMD will still be around in the near future if their new IP fizzles out like their present one. Press releases for upcoming products aren't a measure of assured success when the previous products missed the mark by a few miles.

You may have the best paper launch, but if your products bench lower and consume more power than the competition, then you are just digging your own grave.
I genuinely suggest educating yourself...

CUDA on AMD architectures is just starting to take off; gaming benchmarks show that in every price range and every performance bracket AMD is better than Nvidia; and DirectX 12 is coming, in which every benchmark shows Nvidia demolished (like in Hitman, where the R9 390 is faster than the GTX 980, and the R9 390X almost ties a heavily OC'ed GTX 980 Ti).
The tide is turning for AMD very fast.
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
I genuinely suggest educating yourself...

CUDA on AMD architectures is just starting to take off; gaming benchmarks show that in every price range and every performance bracket AMD is better than Nvidia; and DirectX 12 is coming, in which every benchmark shows Nvidia demolished (like in Hitman, where the R9 390 is faster than the GTX 980, and the R9 390X almost ties a heavily OC'ed GTX 980 Ti).
The tide is turning for AMD very fast.

Not even close. Again you are using a single data point and trying to prove a trend with it. I can list other games that run way better on Nvidia. And DirectX 12 support isn't finalized yet; in fact, neither AMD nor Nvidia supports all of DirectX 12's functionality at present.

Maybe you should stop reading so much into AMD's marketing-department literature and see what is actually going on in the real world.
 
  • Like
Reactions: netkas

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Not even close. Again you are using a single data point and trying to prove a trend with it. I can list other games that run way better on Nvidia. And DirectX 12 support isn't finalized yet; in fact, neither AMD nor Nvidia supports all of DirectX 12's functionality at present.

Maybe you should stop reading so much into AMD's marketing-department literature and see what is actually going on in the real world.
If we look at techpowerup's review suite, every performance bracket is dominated by AMD. The Fury X is best in 1440p and in 4K over a larger number of games than the Titan X, though here I would call it a draw, because the differences are marginal.
The R9 390X is faster than the GTX 980 in 11 out of 15 games in 1440p and in 4K. The R9 390 is faster than the GTX 970 in 13 out of 15 games in 1440p, if I remember the numbers correctly. And those are DX11 games. The only place Nvidia benefits is GameWorse, and even that is not always the case. The R9 380, not even the X model, just the 380, is faster than the GTX 960 in 10 games in 1080p and is cheaper than the Nvidia GPU. The R9 380X is currently getting into the performance range of the GTX 970. All of the GPUs considered here are reference models only, both AMD and Nvidia. Yes, that is how much AMD drivers have improved over time. Every site currently shows that this is exactly the case.

Secondly, why do you accuse me of reading AMD marketing literature? Just because something doesn't fit your world view, do you have to dismiss it in the worst possible way?

P.S. Two DX12 games show exactly the same thing. That means it has to be something about the games, not about the Nvidia or AMD GPUs. Nothing to do with the architecture, only with the games.
 

t0mat0

macrumors 603
Aug 29, 2006
5,473
284
Home
What's the current support in OS X for multiple GPUs - is it limited to a maximum number by model (e.g. 2 for the nMP), or limited to no more than 2 overall? Could Apple allow more than 2 GPUs?

Just wondering about eGPUs - if you could attach an eGPU to a hypothetical new Mac Pro, could you have a chassis with more than one eGPU card in it, or would you be stuck with one? Could you attach more than one eGPU?

If you could use an eGPU, just one card, then I guess gamers would want the fastest compatible gaming card, and MP owners might want the fastest suitable workstation card. We have MFi but no Made for Mac program. It looks like AMD is positioning XConnect as a gaming feature for now.

Little bit more on XConnect now on the AMD website: http://www.amd.com/en-gb/innovations/software-technologies/technologies-gaming/xconnect

A big boon for laptop eGPUs - "no need to reboot the PC to connect or disconnect the Razer Core thanks to AMD XConnect™ technology." I guess you wouldn't be hot-swapping much on a MP, but it might help the MBP if it becomes available. And with AMD getting the early implementation, that could be a point in favor of AMD GPUs in Macs for the short while.
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
If we look at techpowerup's review suite, every performance bracket is dominated by AMD. The Fury X is best in 1440p and in 4K over a larger number of games than the Titan X, though here I would call it a draw, because the differences are marginal.
The R9 390X is faster than the GTX 980 in 11 out of 15 games in 1440p and in 4K. The R9 390 is faster than the GTX 970 in 13 out of 15 games in 1440p, if I remember the numbers correctly. And those are DX11 games. The only place Nvidia benefits is GameWorse, and even that is not always the case. The R9 380, not even the X model, just the 380, is faster than the GTX 960 in 10 games in 1080p and is cheaper than the Nvidia GPU. The R9 380X is currently getting into the performance range of the GTX 970. All of the GPUs considered here are reference models only, both AMD and Nvidia. Yes, that is how much AMD drivers have improved over time. Every site currently shows that this is exactly the case.

Secondly, why do you accuse me of reading AMD marketing literature? Just because something doesn't fit your world view, do you have to dismiss it in the worst possible way?

P.S. Two DX12 games show exactly the same thing. That means it has to be something about the games, not about the Nvidia or AMD GPUs. Nothing to do with the architecture, only with the games.

Again you are picking and choosing your stats to bolster your side. If I were dishonest I could also cite some benchmark from an Nvidia-friendly website to prove my point, but I leave such tactics to others.

You have a long history of reposting AMD marketing blurbs. Your posting history is available for all to see.

P.S.: Yeah, two games sponsored by AMD... And you're complaining about GameWorks... Lol
What's the current support in OS X for multiple GPUs - is it limited to a maximum number by model (e.g. 2 for the nMP), or limited to no more than 2 overall? Could Apple allow more than 2 GPUs?

Just wondering about eGPUs - if you could attach an eGPU to a hypothetical new Mac Pro, could you have a chassis with more than one eGPU card in it, or would you be stuck with one? Could you attach more than one eGPU?

If you could use an eGPU, just one card, then I guess gamers would want the fastest compatible gaming card, and MP owners might want the fastest suitable workstation card. We have MFi but no Made for Mac program.

Little bit more on XConnect now on the AMD website: http://www.amd.com/en-gb/innovations/software-technologies/technologies-gaming/xconnect

On the nMP the second card is used for OpenCL only at present.
eGPU is promising, but with the desk already filled with a TB HDD enclosure, and now another TB box on top of that, I wonder how those around here who bought the nMP for its design and aesthetics feel about having to put up with all those wires and boxes cluttering the desk...
 

t0mat0

macrumors 603
Aug 29, 2006
5,473
284
Home
Yeah, clutter and all, but with a $100 active TB3 cable, if it'd work with that, you could keep all that stuff away from the desk. Agreed that the nMP design is basically octopus-like: hook stuff up with cables rather than having internal space for drives, cards, etc.

Raw extra GPU power beyond the D700 might soothe the clutter angst.

An unlikely option - but what if Apple knew of this and planned the nMP accordingly? If they knew eGPU was coming, what if they had in mind a first nMP with 2x GPU + 1x CPU, and could then, in a new design, offer 2x CPU + 1x GPU with the option of an eGPU?
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Again you are picking and choosing your stats to bolster your side. If I were dishonest I could also cite some benchmark from an Nvidia-friendly website to prove my point, but I leave such tactics to others.

You have a long history of reposting AMD marketing blurbs. Your posting history is available for all to see.

P.S.: Yeah, two games sponsored by AMD... And you're complaining about GameWorks... Lol
It is like talking to a wall.

I give up. I'm not going to bother responding to you again.

P.S. Techpowerup is an Nvidia-biased site. How are you going to defend that now, if an Nvidia-biased site is showing an advantage for AMD cards? Doesn't matter. You will dismiss it either way.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
CUDA has much more market presence, and its debugger is about 4 years ahead of OpenCL's, and some of the fastest cards are Nvidia's (though sometimes AMD pulls ahead, as it will when it launches the Fury X2, at least until Nvidia releases its new Pascal core, and again for a while once AMD releases Polaris 11).

In my personal experience programming HPC solutions on CUDA and OpenCL, CUDA is better for programming, but OpenCL is better for end users.

My personal strategy is to build and debug on CUDA, then migrate the debugged app to OpenCL; the migration is trivial. On OpenCL you enjoy more hardware vendors and platforms, from GPUs and CPUs to FPGAs, and some mobile SoCs now support OpenCL as well, but OpenCL debugging is archaic, to say the least.
 

Machines

macrumors 6502
Jan 23, 2015
426
89
Fox River Valley , Illinois
  • Like
Reactions: t0mat0