
Jackotai

macrumors newbie
Original poster
Nov 4, 2015
I am thinking it is because AMD is ready for DirectX 12, which is almost the same thing as Metal on the Apple side.
Recently I tested Ashes of the Singularity, the first game built fundamentally for DirectX 12, with my M395X. I found that my card gains around a 15% improvement under DirectX 12, which is almost 80% more frame rate compared with the GTX 970M and 10% below the 980M. We will likely see more performance gains under DirectX 12 from driver updates later. Therefore, I don't think Apple will shift to Nvidia in the coming update.
 
I am guessing it was a typo, but 80% more than a 970M???

I just fired up Ashes in DX12 on both my iMac M395X and an Alienware 17 R3 with a 970M. At 1080p high settings, the M395X is about 22% better than the 970M.
 
I'm thinking it was purely financial; a 10 percent performance difference isn't much of a reason to shift to a new GPU.
 
I am guessing it was a typo, but 80% more than a 970M???

I just fired up Ashes in DX12 on both my iMac M395X and an Alienware 17 R3 with a 970M. At 1080p high settings, the M395X is about 22% better than the 970M.
My test is based on the Crazy preset at 1080p (the 395X got 23.8 FPS while the 970M got 15.x FPS). Would you please test again to make sure I'm not wrong? :)
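For what it's worth, here is the quick math on those two numbers. The 15.5 FPS is just my assumed midpoint for "15.x", not something anyone measured:

Code:
# Percent advantage from the FPS figures reported in this thread.
# "15.x" is only given to one digit, so 15.5 is an assumed midpoint.
m395x_fps = 23.8    # R9 M395X, Crazy preset, 1080p
gtx970m_fps = 15.5  # GTX 970M, same settings (assumed value)

advantage = (m395x_fps / gtx970m_fps - 1) * 100
print(f"M395X over 970M: {advantage:.0f}%")  # roughly 54% at these settings

So at the Crazy preset the gap is roughly 54%, which is neither 80% nor 22%; the difference between our results looks like it comes down to the settings used.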
 
My test is based on the Crazy preset at 1080p (the 395X got 23.8 FPS while the 970M got 15.x FPS). Would you please test again to make sure I'm not wrong? :)

I am pretty sure it is partly due to AMD's current compute prowess (hardware and software OpenCL support). Nvidia has been pushing CUDA instead.
 
Money. AMD is in no position to pick its buyers carefully; they're ready to do anything and everything for money at this point. Because of this, I predict that the next GPUs in Apple hardware will also be from AMD.
 
 
I think it's because Nvidia's mobile GPUs don't support the iMac's 5K screen resolution, and Apple probably designed that new LCD timing controller with AMD. Just a guess.
Yes, I also read somewhere that current mobile Nvidia graphics cards do not support 5K displays. Something about the connector they were using to drive the display maxing out at 4K, I believe. That is probably the biggest reason why Apple went with AMD this time around, the second one probably being higher margins.

But hey, at least they updated the Boot Camp drivers!
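A rough way to see why a single cable would "max out at 4K" is to compare the raw pixel data rate against what one DisplayPort 1.2 link can carry. The DP 1.2 payload figure below is my own assumption, not something from this thread:

Code:
# Uncompressed pixel data rate vs. a single DisplayPort 1.2 link (assumed figures).
def stream_gbps(width, height, refresh_hz, bits_per_pixel=24):
    # Raw pixel data only; real links also need blanking overhead on top.
    return width * height * refresh_hz * bits_per_pixel / 1e9

DP12_PAYLOAD_GBPS = 17.28  # 4 lanes x 5.4 Gbps, minus 8b/10b encoding (assumed)

print(f"4K @ 60 Hz: {stream_gbps(3840, 2160, 60):.1f} Gbps")  # ~11.9 -> fits
print(f"5K @ 60 Hz: {stream_gbps(5120, 2880, 60):.1f} Gbps")  # ~21.2 -> does not fit

Which is presumably why Apple paired the 5K panel with a custom timing controller instead of a standard single-cable hookup.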
 
I think it's because Nvidia's mobile GPUs don't support the iMac's 5K screen resolution, and Apple probably designed that new LCD timing controller with AMD. Just a guess.
Excellent point, something that I really didn't even consider.
 
Yes, I also read somewhere that current mobile Nvidia graphics cards do not support 5K displays.

Hmm...maybe they should consider using a desktop GPU for a desktop computer?

What a concept!

lol

The desktop NVIDIA cards absolutely do 5K (I'm looking at one right now, driving a Dell 5K with a 980 Ti).
 
The reason Apple chooses AMD is that they made a good deal with them: they get discounted, custom-made GPUs.
 
AMD = dirt cheap, speedy, but high failure rate
Nvidia = expensive, slower, but low failure rate

It all depends on how you look at it. Apple may have gone with AMD just to keep the cost of the rMBP down, since the other components are more expensive.

Apple does flip-flop between Nvidia and AMD from time to time; perhaps now is simply AMD's turn.
 
In DirectX 12, all the bottlenecking of AMD GPUs has been lifted, and Nvidia cannot gain anything because it wasn't bottlenecked anywhere. The scheduling in DX11 was the problem, the serial nature of the API to be precise. In DX12, SIMD-versus-SIMD performance will be relatively equal for both vendors. What matters here is compute performance, and that reflects into games. Finally, a 6.1 TFLOPs GPU is as fast as a 6.1 TFLOPs GPU regardless of vendor (exactly what we see with the R9 390X and the reference GTX 980 Ti), and an 8.6 TFLOPs GPU is much faster than a 6.7 TFLOPs GPU (Fury X vs. Titan X).

Also, it is quite funny to see the R9 380X tie with the GTX 970. The only thing that made Nvidia cards better was proprietary software and the nature of the API that most games used. AMD hardware is at least two years ahead of Nvidia hardware; what was closing the gap was software: CUDA, Iray, drivers, GameWorks. Now that software has caught up, there is quite a gap between the last-generation architectures.

I genuinely suggest educating yourselves, guys, by reading hardware forums (AnandTech, for example). People have been going over all of this for about eight months.
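For anyone wondering where those TFLOPs figures come from, it is just shader cores x 2 FP32 operations per clock (one fused multiply-add) x clock speed. The core counts and clocks below are my own approximate numbers, not taken from this thread, so they only roughly reproduce the figures quoted above:

Code:
# Theoretical FP32 peak: shader cores x 2 ops/clock (FMA) x clock in GHz -> GFLOPs.
def fp32_tflops(shader_cores, clock_ghz):
    return shader_cores * 2 * clock_ghz / 1000  # convert GFLOPs to TFLOPs

cards = [
    ("R9 390X    (2816 SPs   @ ~1.05 GHz)", 2816, 1.05),       # ~5.9 TFLOPs
    ("GTX 980 Ti (2816 cores @ ~1.08 GHz boost)", 2816, 1.08),  # ~6.1 TFLOPs
    ("Fury X     (4096 SPs   @ ~1.05 GHz)", 4096, 1.05),        # ~8.6 TFLOPs
    ("Titan X    (3072 cores @ ~1.09 GHz boost)", 3072, 1.09),  # ~6.7 TFLOPs
]
for name, cores, clock in cards:
    print(f"{name}: {fp32_tflops(cores, clock):.1f} TFLOPs")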
 
Apple went with AMD because AMD offered a cheap deal. It was all about the $$$. They should have gone with Nvidia, IMO. Nvidia makes much more reliable graphics cards, and they run at cooler temperatures.

But Apple should allow people to choose between AMD and Nvidia. I even talked to the feedback team at Apple about this, and even they thought it was a good idea. That choice should be available across the whole product line, from iMacs to MacBook Pros.
 
Apple went with AMD because AMD offered a cheap deal. It was all about the $$$. They should have gone with Nvidia, IMO. Nvidia makes much more reliable graphics cards, and they run at cooler temperatures.

But Apple should allow people to choose between AMD and Nvidia. I even talked to the feedback team at Apple about this, and even they thought it was a good idea. That choice should be available across the whole product line, from iMacs to MacBook Pros.
Then explain to me: how can one 120 W GPU operate at lower temperatures than another 120 W GPU? How can one be less efficient than the other?

The GTX 980M and the R9 M395X both have exactly the same TDP rating.
 
Then explain to me: how can one 120 W GPU operate at lower temperatures than another 120 W GPU? How can one be less efficient than the other?

The GTX 980M and the R9 M395X both have exactly the same TDP rating.

I was thinking about arguing that one might run at a higher speed and require more voltage more often than the other, but I'll leave that to someone more versed in GPUs.

As for TDP, that's just how much heat the cooling has to deal with; if they are both designed around the same blueprint, the TDP won't be much different. 120 W is just how much power it needs to run at maximum. What matters more to Apple is which one runs on the least power: if you're staring at the desktop and one GPU draws 80 W while the other draws 75 W, the one that uses only 75 W will get you the most battery life.
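To put a toy number on that: the 99.5 Wh battery and the 80 W / 75 W draws below are hypothetical values, not measurements from anyone in this thread:

Code:
# Back-of-the-envelope battery life: capacity (Wh) / average draw (W) = hours.
BATTERY_WH = 99.5  # hypothetical notebook battery capacity

for name, watts in [("GPU A", 80.0), ("GPU B", 75.0)]:  # hypothetical draws
    print(f"{name} at {watts:.0f} W: {BATTERY_WH / watts:.2f} h")
# GPU B buys roughly five extra minutes here; the point is that average draw,
# not the shared TDP ceiling, is what decides battery life.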
 
Then explain to me: how can one 120 W GPU operate at lower temperatures than another 120 W GPU? How can one be less efficient than the other?

The GTX 980M and the R9 M395X both have exactly the same TDP rating.
The 980M is a much superior card. That's what should have been in the iMacs, as far as I am concerned.
 
The 980M is a much superior card. That's what should have been in the iMacs, as far as I am concerned.
By what measure? Compute: weaker. Graphics without API bottlenecks: weaker. Thermal envelope and power: the same.
I was thinking about arguing that one might run at a higher speed and require more voltage more often than the other, but I'll leave that to someone more versed in GPUs.

As for TDP, that's just how much heat the cooling has to deal with; if they are both designed around the same blueprint, the TDP won't be much different. 120 W is just how much power it needs to run at maximum. What matters more to Apple is which one runs on the least power: if you're staring at the desktop and one GPU draws 80 W while the other draws 75 W, the one that uses only 75 W will get you the most battery life.
The base clock for the 1536 CUDA cores of this Maxwell part is 1035 MHz, with a 1127 MHz boost state. TDP is rated at the base clock (similar to Turbo mode on Intel CPUs), so the card will boost for a very short time and then rapidly drop back down.

Both GPUs consume the same amount of power and produce the same amount of heat, because both are power gated. That's how TDP currently works.
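Using the figures in this post (1536 CUDA cores, 1035 MHz base, 1127 MHz boost), the boost state only buys a modest bump in theoretical throughput, and only while the power budget allows it. A quick sketch:

Code:
# Theoretical FP32 throughput at base vs. boost clock for the GTX 980M.
def fp32_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1e6  # 2 FP32 ops per core per clock (FMA)

print(f"GTX 980M @ 1035 MHz base : {fp32_tflops(1536, 1035):.2f} TFLOPs")  # ~3.18
print(f"GTX 980M @ 1127 MHz boost: {fp32_tflops(1536, 1127):.2f} TFLOPs")  # ~3.46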
 
By what measure? Compute: weaker. Graphics without API bottlenecks: weaker. Thermal envelope and power: the same.
By looking at benchmarks. The 980M is a better card.
 
By what measure? Compute: weaker. Graphics without API bottlenecks: weaker. Thermal envelope and power: the same.
The base clock for the 1536 CUDA cores of this Maxwell part is 1035 MHz, with a 1127 MHz boost state. TDP is rated at the base clock (similar to Turbo mode on Intel CPUs), so the card will boost for a very short time and then rapidly drop back down.

Both GPUs consume the same amount of power and produce the same amount of heat, because both are power gated. That's how TDP currently works.

Well, I guess all that's left is simply pricing and logistics.

This.
Ain't no two ways about it.

Yep, try to come up with something more constructive instead of something that all businesses do to make money.

Windows manufacturers just waste your time by throwing on a bunch of bloatware that helps them cut costs. Otherwise you end up paying the same price for an equivalent <insert manufacturer here> Signature series laptop.
 
By looking at benchmarks. The 980M is a better card.
No, it isn't. The GTX 980M is based on the desktop GTX 970, and according to DX12 benchmarks the R9 380X, which is exactly the same die as the R9 M395X, is equal to the GTX 970. The problem is that the GTX 980M is a cut-down version of the GTX 970, with lower clocks to keep it within its 120 W TDP. The R9 M395X has exactly the same number of GCN cores as the R9 380X, just a lower core clock: 909 MHz vs. 980 MHz. There is no way the GTX 980M can be faster than the R9 M395X with current drivers and in the current environment.

Also, compute is much lower on the GTX 980M: 3.1 TFLOPs vs. 3.7 TFLOPs for the R9 M395X. The only thing that makes the GTX 980M better is CUDA. Proprietary software. Nothing else.
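Those two TFLOPs figures fall straight out of the clocks quoted here. The 2048 GCN stream processors for the M395X (the full R9 380X configuration) is my assumption, not something stated above:

Code:
# Theoretical FP32 peaks from the clocks given in this post.
def fp32_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1e6  # 2 FP32 ops per core per clock (FMA)

print(f"GTX 980M (1536 cores @ 1035 MHz): {fp32_tflops(1536, 1035):.1f} TFLOPs")  # ~3.2
print(f"R9 M395X (2048 cores @  909 MHz): {fp32_tflops(2048, 909):.1f} TFLOPs")   # ~3.7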
 
No, it isn't. The GTX 980M is based on the desktop GTX 970, and according to DX12 benchmarks the R9 380X, which is exactly the same die as the R9 M395X, is equal to the GTX 970. The problem is that the GTX 980M is a cut-down version of the GTX 970, with lower clocks to keep it within its 120 W TDP. The R9 M395X has exactly the same number of GCN cores as the R9 380X, just a lower core clock: 909 MHz vs. 980 MHz. There is no way the GTX 980M can be faster than the R9 M395X with current drivers and in the current environment.

Also, compute is much lower on the GTX 980M: 3.1 TFLOPs vs. 3.7 TFLOPs for the R9 M395X. The only thing that makes the GTX 980M better is CUDA. Proprietary software. Nothing else.

Umm, you kind of argued against yourself: a lower clock means it's slower, but its boost is higher, which means it outperforms the 980 at boost, and with the core clock being 909 vs. 980 it requires less voltage. Apple will pick energy savings over performance every time.

Also, yes, Nvidia is proprietary, but so is Apple, so what's the point of that argument? Does the fact that Nvidia is proprietary make it that much more difficult for Apple? A driver is a driver is a driver; it's all about hardware requirements and which part uses the least power for equivalent performance in its tier.
 