The 2007-8 MBP with the GeForce 8600M was also a widely known dud. Mine died; I got a free mobo replacement in 2010 when Apple extended the warranty on these models, and it died again after a year.
Actually, the MBP with the dying GPU was the '11 model with AMD graphics (the 2011 MBP 15/17); iMacs with Nvidia GPUs were the other recent Macs I remember having dying GPUs. On the MBPs the issue was related to thermal compound degradation (a 2011 MBP 15 was my last MBP, its GPU died very early, and I then got a 12" iPad Pro).
My Time Spy result with a modest 200 MHz OC on the CPU and GPU, using the basic free version of 3DMark that has no settings:
http://www.3dmark.com/3dm/13276967
Sadly the 2010 MBPs also had defective GPUs, Nvidia's GT 330M and GT 320M, which broke down over time. That one was more hush-hush; I never knew about the recall and missed the window on my 15" MBP.
For comparison, my 4,1-to-5,1 with dual X5677s and a Titan X. Impressive to see the 4 Skylake cores effectively matching my 8 cores. I can't WAIT to build a modern PC!!!
http://www.3dmark.com/3dm/13266198
SDAVE, everything you write is a myth from the distant past. Today the world is different: AMD's drivers are much better than they were before, and Nvidia's drivers are worse than they were before. At this moment the two are roughly equal in terms of driver quality.
P.S. Apple likes AMD because it is easy for them to write Metal drivers for it, whereas Nvidia's architecture is not documented in this area; Nvidia likes to keep control of its drivers.
Secondly, on the 28 nm process AMD hardware offers much higher compute power than Nvidia's, and is better suited to the purposes Apple targets (video editing, OpenCL, etc.).
Compute is just that: mathematical algorithms. The amount of power is defined by how many TFLOPS a GPU can push.
AMD only offers higher compute power in OpenCL, which is why Apple keeps going to them. But in CUDA, Nvidia's computational power is some of the best in the industry. Take Otoy's Octane renderer, for example: with CUDA the performance is astounding. It doesn't support OpenCL yet, though Otoy has an internal OpenCL version for testing purposes, and they state that CUDA just kills OpenCL on comparable cards.
Compute is just that: mathematical algorithms. The amount of power is defined by how many TFLOPS a GPU can push.
For example, the R9 390X will be very close in compute performance to the GTX 980 Ti, because both have similar compute power, around 6 TFLOPS. CUDA and OpenCL are just APIs that expose that power to applications. There is nothing in CUDA that makes a 6 TFLOPS Nvidia GPU faster than a 6 TFLOPS GPU from AMD.
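To make that concrete, here's a quick back-of-the-envelope sketch of where those headline numbers come from. The shader counts and clocks are the commonly quoted reference specs, so treat the results as approximate:

```python
# Theoretical FP32 throughput = 2 ops per shader per clock (FMA) x shaders x clock.
# Shader counts and clocks are the usual reference figures, so only approximate.
def fp32_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

cards = {
    "R9 390X (2816 SPs @ ~1.05 GHz)": fp32_tflops(2816, 1.05),              # ~5.9 TFLOPS
    "GTX 980 Ti (2816 cores @ ~1.08 GHz boost)": fp32_tflops(2816, 1.075),  # ~6.1 TFLOPS
}

for name, tflops in cards.items():
    print(f"{name}: ~{tflops:.1f} TFLOPS FP32")
```

On paper the two really are within a few percent of each other; the argument here is about how much of that you actually get to use.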
On the other hand, applications using CUDA are better optimised for Nvidia hardware, because CUDA is a proprietary Nvidia API that is not available anywhere else.
The other side of this is that developers are not always competent enough to see that an application ported from CUDA might suffer.
There is word in the industry about people testing this internally: because of the nature of CUDA, when you port an application designed for CUDA to OpenCL, you end up with lower performance on other GPUs (AMD GPUs suffer). This behaviour is not observed if the same application is designed for OpenCL from the beginning and then ported to CUDA (GPUs from different architectures perform at pretty much the same level). But that remains to be confirmed.
The rest of your post is simply overgeneralising on the matter.
Compute is just that: mathematical algorithms. The amount of power is defined by how many TFLOPS a GPU can push.
Remember, TFLOPS only refers to theoretical performance. I'll assume that by power you mean performance. Performance can only be determined by measuring how the GPU does on specific tasks. Sure, the 390X had very similar TFLOPS to the GTX 980 Ti, but in gaming and single precision compute tasks the 980 Ti performed much better than the 390X.
What is it, that Nvidia makes better GPUs than AMD?
It's not an opinion, it's a fact.
Is it really a fact? Or an opinion?
I've tried to highlight the risks of blindly accepting theoretical TFLOPS as deliverable performance, but to no effect.
Who cares how many TFLOPS the 2nd GPU in an MP6,1 could theoretically produce when, most of the time in most applications, it's sitting idle and wasting power?
Similarly, it's pointless to debate the ratio of FP32 to FP64 if your application doesn't need FP64. Nvidia knows this: it has Kepler and GP100 for FP64 tasks, and went with weak FP64 on Maxwell.
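For anyone wondering what that ratio does to the numbers, here's a small sketch. The FP32 figures and DP:SP ratios are the commonly cited ones, so treat the results as ballpark:

```python
# FP64 throughput is just FP32 throughput scaled by the architecture's DP:SP ratio.
# FP32 TFLOPS and ratios are the commonly cited figures, so only approximate.
chips = {
    # name: (fp32_tflops, dp_to_sp_ratio)
    "GK110 (Kepler, Titan/Tesla)":     (5.0, 1 / 3),
    "GM200 (Maxwell, 980 Ti/Titan X)": (6.1, 1 / 32),
    "GP100 (Pascal, Tesla P100)":      (10.6, 1 / 2),
}

for name, (fp32, ratio) in chips.items():
    print(f"{name}: ~{fp32:.1f} TFLOPS FP32 -> ~{fp32 * ratio:.2f} TFLOPS FP64")
```

That 1/32 rate on big Maxwell is exactly why it was never pitched at FP64 workloads.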
It does not matter which one is better. Neither is better than the other. CUDA does not make your GPU faster just because it's CUDA; it is heavily optimised for Nvidia hardware, and your devs don't have to do anything extra when they implement it in their applications. But the perception in the world is simple: Nvidia makes better GPUs. Even if they don't.
I'm not going to wade into the debate over whether OpenCL or CUDA is better. For what it's worth, my experience in academia has shown that CUDA has much broader support; even engineering classes are taught in how to program in CUDA. However, I get why Apple would stick to OpenCL and Metal: they are the only APIs Apple can run across all Macs (including those without discrete GPUs), and they don't tie Apple to a single GPU vendor.
Given Nvidia's lead in GPU efficiency over the last couple of generations, it's been disappointing that they haven't ended up in any Macs, especially when Apple only uses relatively low-power GPUs.
Nvidia does not make better GPUs. Fiji XT, made on the 28 nm process, is still faster than the GTX 1080 in compute applications that are open and multiplatform, which has been proven in this thread recently. Without understanding the bigger picture, your perception of the situation will be faulty, unfortunately.
Like I said, AMD is great in its own right for affordable GPUs and computational OpenCL operations.
I like AMD, and purchased an RX 480 (with plans to CrossFire), but after seeing the 1080/1070 performance I jumped on the 1080 because I need VR for dev purposes.
Forget about this benchmark as something meaningful for DX12. A guy from Futuremark came to the Anandtech forum and said that it is DX12 FL11_0, which means the most important parts of DX12, including asynchronous compute, are missing from the rendering pipeline.
I managed to break 6000 points with the CPU at 4500 MHz.
The CPU was unstable at 4600 MHz, and the GPU was unstable with a 250 MHz OC. You need water cooling for those settings.
Forget about this benchmark as something meaningful for DX12. A guy from Futuremark came to the Anandtech forum and said that it is DX12 FL11_0, which means the most important parts of DX12, including asynchronous compute, are missing from the rendering pipeline.
That's because Time Spy is NOT using asynchronous compute shaders, just simple preemption. Big difference.
It's a good stress test even if it uses a more backward-compatible feature set. However, there are plenty of front-page articles showing async on and off: gains are seen on Radeons and Pascal, while Maxwell shows nothing, or even a loss.
Pascal is even weaker at FP64.
I have said this before and I will repeat it: ask developers not to be bloody lazy! Make good applications that utilize 100% of the GPU's capabilities, like Final Cut Pro X, which is properly coded for OpenCL.
Nvidia does not make better GPUs. Fiji XT, made on the 28 nm process, is still faster in compute applications
In one paragraph of your post you state something, only to contradict it in another. This is naive. These low-level APIs like Metal, DirectX 12, and Vulkan give the programmer more control, but they also require more work. If you are a manager at a software company, you may very well decide it's not worth days, weeks, or months of optimization to get a small increase in performance, especially if that increase only applies to a subset of your users.
Additionally, just because you use these APIs does not automatically gain you more performance. The most recent Tomb Raider shipped with a DirectX 12 code path and saw no performance benefit on either AMD or Nvidia hardware.
Again, this depends on the application you are looking at. Even if we disregard CUDA, Nvidia is still faster in some single precision compute benchmarks. Look here, here, here and here. Fiji has a 1:16 DP:SP ratio, meaning it still gets beaten by Hawaii in DP tasks.
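To put rough numbers on that last point (FP32 figures and DP:SP ratios are the commonly quoted ones, so approximate):

```python
# Why Fiji loses to Hawaii in double precision despite much higher FP32 throughput:
# the DP:SP ratio matters more than the headline TFLOPS. Figures are approximate.
gpus = {
    # name: (fp32_tflops, dp_to_sp_ratio)
    "Fiji XT (Fury X)":  (8.6, 1 / 16),
    "Hawaii (R9 290X)":  (5.6, 1 / 8),
}

for name, (fp32, ratio) in gpus.items():
    print(f"{name}: ~{fp32:.1f} TFLOPS FP32, ~{fp32 * ratio:.2f} TFLOPS FP64")

# Fiji ends up around 0.54 TFLOPS FP64 vs Hawaii's ~0.70, even though Fiji has
# roughly 50% more single-precision throughput.
```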
Pascal is not weaker than Maxwell.
Look at the clock speed. Per clock, Pascal is down from Maxwell. Per clock, Maxwell was down from Kepler.
How come in OpenCL the GTX 980 was faster than the GTX 780 Ti, even though Maxwell had lower compute power?
So what? GPUs don't need to compete on performance per clock. Nvidia wanted to take advantage of the high clocks they could achieve with the 16 nm process. In every performance and efficiency metric, Pascal beats Maxwell hands down.
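Here's a rough sketch of what per-clock versus absolute throughput looks like for each generation's flagship. Shader counts and boost clocks are the reference figures, so treat the numbers as approximate:

```python
# Per-clock FP32 throughput (2 ops per shader per clock, FMA) vs absolute TFLOPS
# for each generation's flagship. Reference boost clocks, approximate.
flagships = {
    # name: (shaders, boost_clock_ghz)
    "GTX 780 Ti (Kepler)":  (2880, 0.93),
    "GTX 980 Ti (Maxwell)": (2816, 1.08),
    "GTX 1080 (Pascal)":    (2560, 1.73),
}

for name, (shaders, clock) in flagships.items():
    flops_per_clock = 2 * shaders                # shrinks generation over generation
    tflops = flops_per_clock * clock / 1000.0    # yet absolute throughput keeps rising
    print(f"{name}: {flops_per_clock} FLOPs/clock, ~{tflops:.1f} TFLOPS FP32")
```

Per clock the flagship numbers really do drop each generation, but the clock gains more than make up for it, which is the point above.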
How come in OpenCL the GTX 980 was faster than the GTX 780 Ti, even though Maxwell had lower compute power?