Windows vs Linux,

That's kind of a silly comparison. Linux was aimed at the Unix market. And yes, it is quite dominant over Solaris, AIX, HP-UX, Digital Unix, etc.

In part because Linux was open, it was not limited to "winning the desktop" to succeed.

IOS vs Android,

Even more perplexing.... iOS has a smaller share of the market than Android. And at the OS heart of Android.... Linux. Yearly deployments of Linux kernels are clearly on track to pass Windows within several years.


DirectX vs OpenGL,

Ever hear of OpenGL ES? You know, the one embedded in about every rapidly growing mobile operating system out there?

Sure, if you want to limit the scope to legacy PC form-factor systems. But the actual deployed systems out there in use by users....


lol even Flash vs Html5.

Again, legacy PC form-factor systems, perhaps. But the rest of the PC space... surely you jest. Flash, the "king of the lab"... errr, not.


Open source's track record as a development platform is pretty shaky.

Open source's track record is methodical. Never mind that OpenCL isn't open source.

----------

It's strange; even now there are new CUDA-only GPU-accelerated utilities and renderers out there... If anything, a low-level, poorly documented open-sourced foundation is a massive hindrance, especially on small developers with limited resources starting out, and especially when a higher-level, fully documented alternative exists.

OpenCL is not open source. It is an open standard. Similar to how C or C++ is an open standard and CUDA is not.

But by all means, continue your fact-challenged fanboy rant.
 
I find that quite narrow-sighted. AMD uses OpenCL in their tech; that means that all the console developers will be using, tweaking and optimising their software and games to get the most from OpenCL for the next 8 years. That knowledge is transferable, and thus OpenCL suddenly gains a huge amount of industry support. Try not to think of this as the end product 'Game' but as lower-level knowledge, algorithms, documentation, bug fixes, optimisations etc etc. When a standard is open, everybody benefits from it, unlike CUDA which is closed, updates are closed, and you don't know what's going on with CUDA until Nvidia make a release.


If AMD managed to bag ALL the billion dollar console manufacturers simply because they were cheaper, then in that light Nvidia just lost a ton of business/opportunities/progression to their biggest competitor, wouldn't you agree?

I don't understand what you mean when you say 'Nvidia was not even considered for silicon'



Sorry for being ignorant again, but Apple competitor technology? Do you mean the change from the old Mac Pro to the new one? I would think that was more to squeeze out performance/efficiency from the new design rather than a "let's make it impossible to modify" standpoint. Although both can be true, and some of us are a little miffed why there are no Nvidia options. You can still get TB PCI-e enclosures. But yes, the GPU is custom. Maybe Nvidia will sell a Titan Mac Pro Edition at some point, as we don't know if you can just pop these custom cards out yet. News that the CPU and RAM can be upgraded is surfacing already, and some people are betting the PCI-e storage will be replaceable too. We can't be sure yet.

Anim

Apple, as founder of OpenCL, is firmly behind the OpenCL movement, and CUDA is its arch nemesis :)

When it comes to video games it's OpenGL/DirectX that does all the magic, not OpenCL. They are complementary technologies and they can live side by side, but until our computers become ultra stupid fast, OpenCL will not make a dent in the video game market. Put it this way: OpenGL is a rendering package while OpenCL is a computing package. OpenGL renders what you feed it, such as image-based textures, while OpenCL would do procedural textures.
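Just to make that concrete, here is roughly what a procedural texture looks like on the OpenCL side (only a rough sketch; the kernel and argument names are made up): each work-item computes its texel from its coordinates instead of sampling an image, and OpenGL would then display the resulting buffer as a texture.

[CODE]
// Sketch: fill a width x height RGBA buffer with a procedural checkerboard,
// one work-item per texel, computed from coordinates rather than sampled.
__kernel void checker_texture(__global uchar4 *out,
                              const int width,
                              const int height,
                              const int cell)   // checker cell size in texels
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x >= width || y >= height)
        return;

    // Alternate dark/light cells based purely on the texel coordinates.
    uchar v = (((x / cell) + (y / cell)) & 1) ? 255 : 0;
    out[y * width + x] = (uchar4)(v, v, v, (uchar)255);
}
[/CODE]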

When I said nVidia wasn't even considered, that was my assumption; it is certainly not fact, but Microsoft and Sony didn't choose AMD by accident. I also believe there was a mutual agreement between Sony and Microsoft on what to use for internals, for cheaper manufacturing costs on both sides. nVidia already did their gaming handheld, and with its Tegra tech I am sure MS and Sony don't want another player getting involved in home entertainment. By picking nVidia to do the internals for their consoles they would give nVidia a boost to get involved. That's how I would conduct business.
 
I also find the notion Apple will be using AMD cards in the MBPr and iMac in future iterations HILARIOUS.

From your warped perspective I'm sure you do.


AMD's laptop cards are so far behind Nvidia on power-to-heat it's not even funny.

Not really. AMD is in a similar position to the one Nvidia was in when Nvidia did a generation iteration more focused on boosting GPGPU performance than on power savings. That's when AMD took over, in the previous MBP/iMac design iteration to the one currently deployed. For the current line-up AMD focused on catching up GPGPU-wise, and on some OpenCL marks they are ahead while consuming more power.

It depends on what the focus is for the next process designs, probably rolling out for both in 2014 on TSMC's (and perhaps other fabs') new, smaller 20nm lithography.


They've basically given up and devoted all their energies into the combined APUs that basically compete with Intel Iris integrated graphics.

Oh, because getting a GPU to share die space and power with a CPU implementation means you do not have to put substantive effort into being more power efficient...... errrrr, not! If that is the focus, arguing that they aren't going to be power efficient is loopy. Having to share resources at the chip-package level leaves no room for not getting along well with others on limited resources.

There really isn't much of a big limitation there. Given a modular design (which all modern GPUs follow), simply adding more GPU modules into the space where the dropped x86 cores were soaking up die area leads relatively easily to more performance, with the same power-saving features already necessitated by the shared-die design. Deriving a discrete-GPU-only part from that starting point isn't that hard.

Sure, AMD could screw it up, but not because of the objectives. They just might not have sufficiently competent R&D staff to pull it off. Those objectives (computing spread out over both x86 and GPGPU cores) are not out of line with Apple's MBP and iMac (and Mac Pro) objectives at all.
 
i'll give you a cookie anyway.. there's a super sweet bakery nearby that i'm always hyping up to anybody that will listen

but the challenge was to quote a developer saying "i am continuing to work on & improve my cuda implementation on the mac platform".. not "i'm working on cuda for windows"

i'm internet friends with devin, one of the vray plugin writers.. i'll mail him today to see what he has to say about it.

I just went to a great bakery in NYC a few weeks ago. Wish I could remember the name of it…

Yea, I'd be very interested to hear what a V-ray dev's take is on it. Especially since they're working on a plugin for my main app MODO, sshhhh. As far as the Windows thing, V-ray and Octane are Mac, Windows, and Linux compatible. Did I misread the challenge? ;) Actually, I'm not aware of any Mac-only GPU renderers at all.

Who is speculating now?
I am speculating. You are speculating. We're all speculating, lol.
 
Apple, as founder of OpenCL, is firmly behind the OpenCL movement, and CUDA is its arch nemesis :)

I suspect Apple doesn't particularly care one way or another about CUDA. If Nvidia wants to sink additional effort into that, then fine; it is their choice. What Apple isn't going to put up with, though, is kneecapping, inhibiting, or not robustly implementing OpenCL.

The two only compete in part. As far as being a widely implemented standard goes, CUDA doesn't particularly compete at all.


When I said nVidia wasn't even considered, that was my assumption; it is certainly not fact, but Microsoft and Sony didn't choose AMD by accident. I also believe there was a mutual agreement between Sony and Microsoft on what to use for internals, for cheaper manufacturing costs on both sides. nVidia already did their gaming handheld, and with its Tegra tech I am sure MS and Sony don't want another player getting involved in home entertainment.

This is rather missing the point that you actually made earlier. One of the big drivers on these consoles is cost (selling subsidized consoles at a loss sucks). Combining the CPU and GPU and sharing RAM for graphics/compute saves costs. The major problem for Nvidia is that they don't have an x86 solution, and the idea of current ARM cores staying competitive for 2-3 years with current desktop gaming CPUs is comical.

Nvidia doesn't have an x86 CPU or chipset, so the only viable solution Nvidia could offer would involve a discrete GPU + VRAM + an x86 CPU (which would already have a redundant GPU if pulled from the basic mainstream designs) and a chipset. The AMD solution has a lower chip count and cost. Intel is still years behind the curve at being competitive, probably wouldn't do a custom design, and is behind the curve on costs even with their fewer-component solutions. Also, I think the console folks are hoping to leverage some of the low-level x86 code optimization toolchain that Windows PC gaming folks use.

AMD was willing to do a custom CPU+GPU SoC for the console vendors, and Nvidia and Intel probably were not. Then add a cheaper bill of materials for the consoles and it isn't hard to see how AMD came out on top.

Nvidia's gaming thing is a giggler for the console vendors. iOS and Android in general are more credible competitors from that end of the marketplace.


By picking nVidia to do the internals for their consoles they would give nVidia a boost to get involved. That's how I would conduct business.

Big boys in the tech market know all about coopetition. Separating Nvidia the parts vendor from Nvidia the system vendor wouldn't have been hard. It's the same way Sony sells PCs with Windows and yet competes with Microsoft in game consoles.

Right now Nvidia just isn't particularly good at higher-performance SoC solutions. They are only marginally good at the lower-power ones (just about all the flagship phones and tablets this year are not Tegra/Nvidia based). Retreat back into the standard PCI-e connector discrete GPU card space and they are a leader, but outside of that they have problems with competitors right now.
 
Apple, as founder of OpenCL, is firmly behind the OpenCL movement, and CUDA is its arch nemesis :)

I can't agree here - OpenCL and CUDA are not really competing APIs, simply because CUDA is locked to a specific hardware vendor. Does the popularity of CUDA hinder the adoption of OpenCL? I am not sure. I think it has more to do with OpenCL's past immaturity, bugs and lack of proper developer tools. Now, OpenCL has matured a lot, so I wouldn't even consider CUDA if I needed to write a GPGPU kernel. Note that I am NOT talking about some highly-specialized software designed to run on a specific mainframe.

When it comes to video games it's OpenGL/DirectX that does all the magic, not OpenCL. They are complementary technologies and they can live side by side, but until our computers become ultra stupid fast, OpenCL will not make a dent in the video game market.

OpenCL has enough use cases in game programming: physics, mesh generation, etc. Not really sure what you mean by making a dent in the video game market though.
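To illustrate one of those use cases (just a sketch with made-up names, not production code): a simple physics step maps very naturally onto an OpenCL kernel, with one work-item integrating one particle.

[CODE]
// Sketch of a per-particle physics step in OpenCL C: one work-item
// advances one particle with basic Euler integration under gravity.
__kernel void integrate_particles(__global float4 *pos,   // xyz position, w unused
                                   __global float4 *vel,   // xyz velocity, w unused
                                   const float dt,
                                   const int count)
{
    int i = get_global_id(0);
    if (i >= count)
        return;

    float4 g = (float4)(0.0f, -9.81f, 0.0f, 0.0f);  // gravity
    vel[i] += g * dt;            // update velocity
    pos[i] += vel[i] * dt;       // update position
}
[/CODE]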

When I said nVidia wasn't even considered, that was my assumption; it is certainly not fact, but Microsoft and Sony didn't choose AMD by accident. I also believe there was a mutual agreement between Sony and Microsoft on what to use for internals, for cheaper manufacturing costs on both sides.

The reason why AMD was chosen is simply their CPU/GPU fusion technology. It's perfect for consoles and reduces the R&D investment.
 
Apple, as founder of OpenCL, is firmly behind the OpenCL movement, and CUDA is its arch nemesis :)

i think apple wrote it to begin with but then gave up control to a non-profit organization.

but i think in its current form, apple doesn't really have much to gain (via direct money transfer from developers using openCL in their programs) other than maybe licensing fees if this image is used for sales purposes:

OpenCL_Logo.png


?

i'm not 100% on this but yeah, i don't think apple 'owns' openCL anymore.. i'll research it a little later unless someone here already knows/shares..
 
Apple is a Promoter (a type of member) at the Khronos Group, if that helps.

ah here we are, it's in the wiki:

OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed[4] with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008.[5] This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.

Edit:
Promoters - act as the "Board of Directors" to set the direction of the Group, with final specification ratification voting rights.

So they can have a say if they wish.

More info here: https://www.khronos.org/opencl/
 
You're aware of how small AMD's market share is in the DCC market, right?

You mean the sentences you snipped out immediately after the one you quoted?

I'm keenly aware of your intent to point at smaller submarkets and then highlight how the percentages are skewed there, but those are disconnected from the issue.

Developers who aim at extremely narrow markets are often limited on resources primarily because their customer base is so small. They are making choices partially driven by what they have to do to get by, as opposed to being aligned with where customers and system vendors are going.
Typically they engage a core group of customers where mutual groupthink is used to prop up the rationale as being the optimal path.

If Apple shifted from Cocoa to something new (a move similar to OS 9 -> Cocoa), they would be equally flummoxed, as they are with OpenCL's position in OS X contexts going forward. Not enough resources to adapt, so they circle the wagons around the legacy code.


Although technically yes the OpenCL addressable market is larger, since nVidia cards support it as well.

Pragmatically they don't since Nvidia has gone AWOL on new driver support. That will be a factor as to whether they get selected for future Macs. Software vendors and customers drinking gallons of CUDA kool-aid and ignoring that issue are missing the boat.
 
but i think in its current form, apple doesn't really have much to gain (via direct money transfer from developers using openCL in their programs)

Apple has gobs to gain monetarily. Two competing GPU vendors get them better component pricing. That both puts more money in their pockets and makes the Mac products more competitive (for the savings they pass along).


Apple jumping onto a bandwagon where only one of the GPU vendors can win will, long term, just get them higher costs and less competitive products.
That's why Apple tends to follow standards. When they don't, it tends to be their own de facto standard (usually because there wasn't a viable standard they could adapt to fit their design constraints).


OpenCL isn't on a super rapid evolutionary path, but most source code bases weren't going to follow closely if it were. OpenCL 2.0 is better. It takes a bit longer to get a group of folks to agree, but long term, if you engage a good-sized group of bright folks, you tend to get solid implementations out of the process.
 
OpenCL has enough use cases in game programming: physics, mesh generation, etc. Not really sure what you mean by making a dent in the video game market though.

OpenCL computations in games are done by CPU not GPU if I'm not mistaken.
 
OpenCL computations in games are done by CPU not GPU if I'm not mistaken.

Eh, what's the point of that? The CPU is dog slow compared to the GPU for parallel processing: 6/12 threads vs 2048 cores/streams. Just no point trying to do this on the CPU except for hellishly slow compatibility.
 
OpenCL computations in games are done by CPU not GPU if I'm not mistaken.

They are done on whatever device you run them on. Usually, it makes no sense to run them on the CPU (as Anim points out). I mean, if I want to run something on the CPU, I am currently better off writing optimised SIMD code.

Eh, what's the point of that? The CPU is dog slow compared to the GPU for parallel processing: 6/12 threads vs 2048 cores/streams. Just no point trying to do this on the CPU except for hellishly slow compatibility.

True, but CPUs are slowly getting there. Haswell SIMD units have many characteristics of a GPU (they pack quite a punch too) and with future AVX-512 CPUs stream computations on a CPU will be just as viable as on a mid-range GPU.
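For the curious, this is roughly what that CPU-side SIMD code looks like today with Haswell's AVX/FMA intrinsics (a minimal sketch; it assumes the array length is a multiple of 8 and skips the tail handling a real version would need):

[CODE]
#include <immintrin.h>

/* y[i] = a * x[i] + y[i], 8 floats per iteration using Haswell AVX + FMA.
   Assumes n is a multiple of 8; a real version would handle the remainder. */
void saxpy_avx(float a, const float *x, float *y, int n)
{
    __m256 va = _mm256_set1_ps(a);            /* broadcast the scalar      */
    for (int i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);   /* load 8 floats from x      */
        __m256 vy = _mm256_loadu_ps(y + i);   /* load 8 floats from y      */
        vy = _mm256_fmadd_ps(va, vx, vy);     /* fused multiply-add        */
        _mm256_storeu_ps(y + i, vy);          /* store 8 results back to y */
    }
}
[/CODE]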
 
They are done on whatever device you run them on. Usually, it makes no sense to run them on the CPU (as Anim points out). I mean, if I want to run something on the CPU, I am currently better off writing optimised SIMD code.



True, but CPUs are slowly getting there. Haswell SIMD units have many characteristics of a GPU (they pack quite a punch too) and with future AVX-512 CPUs stream computations on a CPU will be just as viable as on a mid-range GPU.

Interesting read on AVX. I guess, as CPUs still run at 3 to 4 times the clock speed of GPUs, this vector processing could be very big.

Seems to be coming in the Skylake architecture and could be here as early as 2015?

But I must confess, this is over my head on a technical level. As a layman's way of thinking, I wondered what Intel was going to do with GPUs eating into their territory. And there is evidence of that right here. We now have dual GPUs instead of dual CPUs. So is this the solution that Intel is hoping narrows the divide between GPU and CPU performance?

I myself have upgraded through 3 iterations of graphics cards and still have the original (2008) i7 920 CPU in the box. Incrementally it worked great, but there's no point continuing as the bottleneck is the CPU/motherboard now, and to be honest I dread replacing that; it's not as simple as just pulling a card and slotting a new one in.

http://software.intel.com/en-us/blogs/2013/avx-512-instructions

Anim
 
But I must confess, this is over my head on a technical level. As a layman's way of thinking, I wondered what Intel was going to do with GPUs eating into their territory. And there is evidence of that right here. We now have dual GPUs instead of dual CPUs. So is this the solution that Intel is hoping narrows the divide between GPU and CPU performance?

It's actually the other way around - Intel is eating into the GPU's territory. Their integrated GPUs are slowly but surely reaching the levels of dedicated graphics, and their CPUs are getting more and more GPU-like capabilities. In a few years' time, barely any laptop will have dedicated graphics (unless something unexpected happens in the industry). Of course, GPUs are still vastly better at parallel processing - and that's what the new Mac Pro is aimed at; hence the dual GPUs.
 
They are done on whatever device you run them on. Usually, it makes no sense to run them on the CPU (as Anim points out). I mean, if I want to run something on the CPU, I am currently better off writing optimised SIMD code.

the more important point (as i see it) is that openCL will run on cpus, nvidia gpus, intel gpus, amd gpus.. etc.
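and you can see that portability right in the host api.. something like this (rough sketch, error checking omitted) lists every openCL device on the machine, whether it's a cpu, an intel igpu, or an nvidia/amd card:

[CODE]
#include <stdio.h>
#include <CL/cl.h>   /* on OS X this would be: #include <OpenCL/opencl.h> */

/* List every OpenCL device from every platform/vendor on this machine. */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("platform %u: %s\n", p, name);   /* CPU or GPU, any vendor */
        }
    }
    return 0;
}
[/CODE]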
 
Yea, I'd be very interested to hear what a V-ray dev's take is on it.

i started to type up the mail to him but realized half way through it that my sole purpose of doing so was to settle an internet forum argument so i decided against..
dunno, i've developed a relationship with quite a few coders within the field of apps i use and i'd rather not burn any bridges for silly reasons..
devin does use macs (and writes the plugin on mac) so maybe i'll just invite him to this thread instead. ;)
 
i started to type up the mail to him but realized half way through it that my sole purpose of doing so was to settle an internet forum argument so i decided against..
Haha. Yes, it's good to keep it all in perspective.
 
Although Nvidia cards can run OpenCL, their implementations are suboptimal. I mean, even the Iris Pro beats a 750M in OpenCL performance, which says a lot.

Why would Nvidia threaten its own creation with a good OpenCL implementation?

Is the OpenCL implementation completely driver-based?
Could a 750M smoke an Iris Pro at OpenCL if NVIDIA chose to make it so, or is there a hardware limitation?

I'm a bit disappointed with the 750M performance for Adobe productivity when compared to the Iris Pro; hoping NVIDIA can tweak something.
 