
If Apple Inc. let you choose the "Home Team" brand of GPU for the nMP, which would it be?

  • AMD FirePro cards: 16 votes (18.2%)
  • Nvidia Quadro and GTX cards: 52 votes (59.1%)
  • Who cares, I just want it: 20 votes (22.7%)

  Total voters: 88
They aren't as openly hostile to it as they were with Flash. I doubt Apple kneecapped Nvidia on the Mac Pro for that purpose. Flash was basically an outcast. Nvidia is in the rest of the Mac lineup, in part because Nvidia reversed course on performance per watt and price, and also supports OpenCL (at least in some places). CUDA could be a problem, but it's not a major hurdle on OS X.

Long term, yes. The OS X system libraries are only going to use OpenCL. Apple is not telling developers to put any effort into CUDA. CUDA is likely never going to show up in iOS, whether developers want it or not, and Apple is going to have zero interest in solutions that can't possibly span those two platforms. They aren't pressing hard to expose OpenCL to general iOS developers yet, either.

That appears to be part of Nvidia's excuse.

Imagination Tech/PowerVR .... check (two years ago. I wonder who a major user of their GPUs is? Hmmmm) http://withimagination.imgtec.com/index.php/news/powervr-sgx-cores-get-opencl-conformance

Adreno .... check (the Adreno 320 has OpenCL support) https://developer.qualcomm.com/discover/chipsets-and-modems/adreno-gpu

Mali .... check http://malideveloper.arm.com/develop-for-mali/sdks/mali-opencl-sdk/

Nvidia .... apparently it doesn't matter on any of them.

"... "Today's mobile apps do not take advantage of OCL (OpenCL), CUDA or Advanced OGL (OpenGL), nor are these APIs exposed in any OS. Tegra 4's GPU is very powerful and dedicates its resources toward improving real end user experiences. ... "
http://www.brightsideofnews.com/new...cs-disappoint2c-nvidia-in-defensive-mode.aspx

Well gee, if it is not there uniformly, it is pretty obvious the OSes aren't going to expose it. Wonder if someone cares... [same article]

"... To put this answer in perspective, Nvidia - a company almost always known for innovation in the desktop and mobile computing space - does not consider that API's such as OpenCL and its own CUDA are important for ultra-efficient computing. This attitude already resulted in a substantial design win turn sour, as the company was thrown out of BMW Group, a year and a few quarters after it triumphantly pushed Intel out of BMW's structure.
... "

Vivante .... check, OpenCL and Android's RenderScript http://www.vivantecorp.com/index.php/en/technology/gpgpu.html

And yet, two days ago it was announced that Nvidia bought The Portland Group, the leading independent HPC compiler supplier and maker of OpenACC, CUDA Fortran, and CUDA-x86, among other things...

So what were you saying about nvidia not finding cuda important enough?
 

Nobody said Nvidia doesn't think CUDA is important (indeed, they probably see it as a lock-in strategy). We're saying Apple doesn't care about CUDA, and is to some degree probably hostile to it (being the primary backer of OpenCL).

Apple's response to CUDA was basically to create OpenCL and hand it to AMD.


No, I don't think the new Mac Pro has AMD because Apple is actively trying to block CUDA (and we've yet to see whether all GPU options will be AMD). I don't think CUDA factors into their decision making at all, and anyone trying to convince them to support CUDA better is simply going to be pointed at OpenCL.

Apple doesn't want to do anything that will lock them into one vendor.
 
Today, based on the fact that most GPU-accelerated software uses CUDA, Nvidia is the easy choice.

The fact is Apple doesn't like the current GPU landscape and is trying to change the future and move the market to OpenCL, something AMD is currently better at. On top of that, they probably got a great deal from AMD.

Is it a coincidence that Apple, the PS4, and the Xbox One all went AMD? I think AMD is more willing to play ball, and Nvidia got left out in the cold after thinking it was so awesome it could throw its weight around.
 

Video game consoles aren't really a measure of success... How many Cell-based PCs are available today? The Cell was used by the PS3 (and the Xbox 360 used a related PowerPC design), yet it never went anywhere.

And the game console market isn't exactly soaring either. Except for the FPS crowd, people, and more and more kids, are getting their video game fix on portable devices.
 
I don't pretend to understand the technical issues of GPUs.

What I have observed is that alliances shift and fortunes wax and wane. I wouldn't at all be surprised to see Nvidia in a MP in the future.
 

I agree 100%, and I'll be waiting for it.

I'm just a bit sad about Apple turning into Sony, trying to push a restrictive and proprietary standard on a platform that wasn't like that before.
 

I'm not saying it's a measure of success, but the PS3 and 360 represent 150 million units to put GPUs in. To me it indicates that Apple liked the deal AMD came to the table with better, as did Sony and MS. I'd expect heads to have rolled at Nvidia over not landing any of those hardware deals. 150 million units, plus the 100K units or whatever Apple represents, is not chump change.
 

Both the Xbox and the PS3 were sold below cost... Heads have rolled: Sony fired its president, and changes were also made at Microsoft. Nvidia wasn't excluded because AMD was better; they chose AMD because they wanted an APU/SoC solution. Both of these consoles are meant to be sold as media centers first and video game consoles second. This is clear from the effort both companies are putting into integrating online media streaming and interactive media watching. This isn't a new thing either, since a big portion of PS3s were sold to people who never even inserted a game disc, using it instead as a Blu-ray player and media extender. For such a task AMD is perfect and a good choice.
 
Always double source, so you don't become dependent on one supplier; that's the rule of the market.
 

So..? AMD and Nvidia didn't get paid for their graphics chips because people chose to use their PS3 as a Blu-ray player and their 360 as a media box?

They were sold at a loss by MS and Sony, not by the chip vendors.

Apple's choice to go AMD over Nvidia is likely a highly financially motivated one, with the OpenCL flag-waving a result of that choice whenever people say, "But all my apps use CUDA."
 

There are a lot of technical reasons they went with an APU. Both consoles use the shared memory structure of an APU-based design to their advantage.

There ARE performance advantages to moving to an APU, and both consoles lean on them heavily.

Saying that it was purely a financial decision would be... naive and misinformed. I doubt the APU they're using is actually cheaper than a dGPU anyway. The GDDR5 that the APU forces on the PS4 is not cheap.

I'm not sure of the specifics of the negotiations, but I'd bet one reason we don't see Nvidia in the next-gen consoles is that both MS and Sony didn't want to deal with two vendors (one for the CPU, one for the GPU, and the mess that entails), and there was interest in the performance gains you can get from a high-end APU.
 

Except that said enhancements have never really materialized. The AMD A-series PC APUs are subpar compared to what you can get from Intel, and the overhyped performance of the Cell never materialized either. I would say that both companies are more interested in fighting the homebrew and piracy aftermarket, which is made more difficult by using a non-standard PC architecture.
 
I would choose the manufacturer that makes the most sense from a business point of view... and that is exactly what Apple has done.

Yup. As a CEO or member of the board I would have done this as well. As a user, or someone concerned with delivering "the best" machine, I would of course have selected Nvidia and made it easy for them to get in the game, as you put it. I think that's what MVC is trying to say as well.
 
Except that said enhancements have never really materialized.

Huh?

The enhancements are pretty simple to figure out. Bolting the GPU directly onto the CPU gives both access to the same memory and avoids the latency of the PCI-E bus.

I did a paper on this in college. I had some CUDA code that ran faster on a 9400M than on a 9600M, and the reason was that the 9400M was closer to the CPU. Now, not ALL my CUDA code behaved that way, but the code that was more bandwidth-constrained than compute-constrained did. And that was years ago.

The reason this configuration hasn't matched dGPU solutions so far is that the GPU bolted onto the CPU is usually pretty low end. With the Xbox One and the PS4, the GPU they're using is much higher end, giving those boxes the benefits of both a faster GPU and a fast link to the CPU.

The AMD A-series PC APUs are subpar compared to what you can get from Intel

Yes, because Intel CPUs are faster. This has nothing to do with an APU design.

and the overhyped performance of the Cell never materialized either.

The Cell was never a CPU-plus-GPU APU. Why are we talking about it?

But as long as we are: the biggest problem with the Cell was that it was too hard to code for. The theoretical performance was there, but the processor was so non-standard that the tools just didn't exist. That is exactly why both the PS4 and the Xbox One use a standard PC architecture. Which leads me to your next comment...

I would say that both companies are more interested in fighting the homebrew and piracy aftermarket, which is made more difficult by using a non-standard PC architecture.

What the what what? Both the Xbox One and the PS4 are using x86 processors with integrated AMD GPUs. AMD already makes versions of these chips for PCs. How exactly is this a non-standard PC architecture? The entire point is that it IS a standard PC architecture. The Xbox One is even using DirectX, and the PS4 OpenGL.
 

You are right about them being x86, but they are based on Jaguar, which IS an underpowered netbook/tablet APU. The version used in the consoles is an 8-core variant, which doesn't mean all that much considering its low clock speed. Here is where it classifies itself:

PS4-GPU-Performance1.gif


Full article:

http://wccftech.com/playstation-4-specifications-analysis-similar-cost-pc/
 

So? They traded clock speed for core count. What's the point?

And the graph you posted is for the GPU, not the CPU (so it has nothing to do with the 8 cores or the low clock speed). It's exactly what I said it was: a high-mid-range GPU.
 

It's a tablet/netbook CPU with a soon-to-be-obsolete GPU... By the time they come out, you'll be able to get a better GPU for your PC for less money and play more demanding games...

In any case, we've veered way off topic...

If I were Tim Cook, I would still have made the cylinder, albeit a bigger one. It would have included two full x16 PCIe-compliant slots mounted on a pivot bracket so people could install their own cards, and the taller, wider volume would have made room for one or two CPUs. Four overlapping PCIe SSD sockets would be positioned between the two video card brackets.

It would be taller and wider, but it would be a workstation that all of us could love.
 

Yes. What you're saying has nothing to do with the topic. You're trying to draw conclusions about a technology by comparing a CPU targeted at the price point of a game console to a CPU targeted at the price point of a workstation.

A $200 CPU is slower than a $1000 CPU? Do tell...
 

Actually, for $200 you can get an i5-4570, which will beat any AMD APU including the A10-6800K. Still, you are defending AMD as if it were something personal to you. They make great low-power, low-performance, low-cost solutions. Nothing wrong with that, except that they've decided to concentrate on that market.

On the other hand, Nvidia is pushing in the other direction. Look at the high-end GPU/compute products and projects they released or announced this year and last. Everybody is talking about Tesla, Titan, or Quadro. There's not much going on around FirePro, except around here since the announcement of the nMP.

I go to CG and FX forums and read up on what people in that industry are using, and for the most part it's Nvidia products. CUDA is still king. OpenCL may get there one day, but people don't buy hardware for what it will do one day; they buy it for what it can do today.
 
And yet two days ago it was announce that nvidia bought the portland group the HPC leading independant compiler supplier and maker of OpenACC, Cuda fortran and Cuda x86 amongs other thing...

You do realize that the Portland Group toolchain makes the transition off Nvidia and onto Xeon Phi much smoother for folks with legacy CUDA code base problems?

The Portland Group is also doing work to port to and optimize for AMD APU setups.

The Portland Group is also doing work to bring their toolset to Xeon Phi. In the latest Top500 list, Xeon Phi went from almost nothing to 20% of accelerated systems in months:

"... A total of 54 systems on the list are using accelerator/co-processor technology, down from 62 in November 2012. Thirty-nine of these use NVIDIA chips, three use ATI Radeon, and eleven systems use Intel MIC technology (Xeon Phi). ..."
http://www.top500.org/blog/lists/2013/06/press-release/

versus the previous November:
"...A total of 62 systems on the list are using Accelerator/Co-Processor technology, including Titan and the Chinese Tianhe-1A system (No. 8), which use NVIDIA GPUs to accelerate computation and Stampede and six others, which are accelerated by the new Intel Xeon Phi processors. Six months ago, 58 systems used accelerators or co-processors. ..."


They are even bringing OpenCL to Android.

http://www.pgroup.com/lit/articles/insider/v5n1a4.htm


So what were you saying about nvidia not finding cuda important enough?

I didn't. You are the one postulating these narrow, almost singular focuses for these companies. They aren't that narrow. Nvidia isn't. AMD isn't. Intel isn't.

As pointed out above, CUDA-x86 is as much a threat to Nvidia as a benefit, depending on how Intel executes on Xeon Phi. If CUDA code doesn't need Nvidia hardware... it isn't going to bring Nvidia money, especially code running on x86 hardware that Nvidia has no license for. That hardware money goes to either AMD or Intel.

The legacy code base of high-end HPC x86 code is much taller and deeper than that of CUDA.


It is probably too little, too late if Nvidia is reversing course out of the proprietary tar pit they set up.
 

Yawn... And it never hit you that maybe Nvidia just pulled the rug out from under the competition by buying TPG?

Of course it didn't, because that would prove you wrong, something you seem physically incapable of admitting. CUDA, according to your own figures, represents three-quarters of the accelerator/co-processor market. The results of one quarter do not make a trend.

Stop trying to pass yourself off as something you ain't.
 

Indeed, but you know the world does not work like this. In this particular case, there must be a financial reason why they have chosen to discontinue the current Mac Pro form factor. It definitely was not a personal snub of the 150 MR users who want the current shape to stay.
 
I think it would have been a difficult choice. The best option would be to offer both high-end AMD and Nvidia solutions so that everyone is happy.
 
Another solution would be for Apple to drop both AMD and Nvidia and create its own GPU.
 