Umm, you kind of argued against yourself: a lower clock means it's slower, but its boost is higher, which means it outperforms the 980 at boost; and with the core clock being 909 vs. 980, it requires less voltage. Apple will pick energy savings over performance every time.

Also, yes, Nvidia is proprietary, but so is Apple, so what's the point of that argument? The fact that Nvidia is proprietary makes it that much more difficult for Apple? A driver is a driver is a driver; it's all about hardware requirements and what draws the least power for the most equivalent performance in its sub-tier.
Because I'd rather have a whole proprietary platform than just one proprietary part that dictates everything to the rest of the platform.

Secondly, you did not understand. The R9 380X, a desktop GPU, has a 980 MHz core clock and 2048 GCN cores. The R9 M395X has 909 MHz and the same 2048 GCN cores. The dies are exactly the same; only the core clocks differ. The R9 380X is rated at 180 W if I remember correctly, while the R9 M395X has a 120 W TDP. It outperforms the GTX 980M even though the 980M has a higher boost clock; the difference in core clocks just isn't enough to close the gap in core count.
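For reference, the TFLOPS figures being quoted in this thread fall straight out of the usual peak-FP32 arithmetic (shader cores x clock x 2 ops per fused multiply-add). A minimal sketch, assuming the commonly published core counts and base clocks:

```python
# Rough peak-FP32 math: shader cores x clock (GHz) x 2 ops per FMA.
# Core counts and clocks below are the commonly published base specs.

def peak_tflops(cores, clock_ghz):
    """Theoretical single-precision peak, in TFLOPS."""
    return cores * clock_ghz * 2 / 1000

gpus = {
    "R9 380X (desktop)": (2048, 0.980),
    "R9 M395X (iMac)":   (2048, 0.909),
    "GTX 980M":          (1536, 1.038),  # 1536 CUDA cores at ~1038 MHz base
}

for name, (cores, clock) in gpus.items():
    print(f"{name}: {peak_tflops(cores, clock):.2f} TFLOPS")
# -> roughly 4.0, 3.7 and 3.2 TFLOPS respectively
```

That is why the drop from 980 MHz to 909 MHz only costs the M395X a few percent, while the 980M starts out with a shader array that is 25% smaller.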
 
I'd rather have NVidia myself, so I hear ya on that. I'm still thinking it has to do with the idle voltage usage.
 
They struck a better deal with AMD. On top of that, it was determined that the performance gap between the AMD chips and the equivalent Nvidia chips was not large enough to rule out going with the AMD option, and that the perceived performance decrease wouldn't hurt sales in any way that would show up as a significant drop in returns.
 
Also, compute is much lower on the GTX 980M: 3.1 TFLOPS vs. 3.7 TFLOPS for the R9 M395X. The only thing that makes the GTX 980M better is CUDA. Proprietary software. Nothing else.
Nvidia FLOPS aren't the same as AMD FLOPS. They are completely different.
 
Umm, you kind of argued against yourself: a lower clock means it's slower, but its boost is higher, which means it outperforms the 980 at boost; and with the core clock being 909 vs. 980, it requires less voltage. Apple will pick energy savings over performance every time.

Also, yes, Nvidia is proprietary, but so is Apple, so what's the point of that argument? The fact that Nvidia is proprietary makes it that much more difficult for Apple? A driver is a driver is a driver; it's all about hardware requirements and what draws the least power for the most equivalent performance in its sub-tier.

Lower clock does not automatically mean slower. How long is the pipeline? How many steps does it take to execute function X? That is like saying a two-year-old walks faster than Usain Bolt because she takes more steps per minute.

Nvidia's proprietary software belongs to Nvidia. Apple's proprietary software belongs to Apple. See which one is easier for Apple to use?

Playing the two against each other allows Apple to negotiate better deals and inspire competition. If Apple gave up on Nvidia for a couple of years, Nvidia might give up writing drivers for Apple at all. Then Apple would have to play ball with AMD. Ditto for going the other way.

AMD's process (GlobalFoundries 14nm) looks better than Nvidia's (TSMC 16nm) at the moment. For all that people have dogged AMD's manufacturing over the years, that really is cherry-picking the data. Intel has had a worse track record of delivering on time since 1992 or so. TSMC is having trouble getting good yields below 28nm, and it is dreadfully expensive. Really, making anything at this level is astonishing, and most people cannot comprehend the magnitude. Fifteen BILLION transistors, where a particle too small to see easily even with an electron microscope could kill the unit, with a failure rate under 10 per million units sold, is incredible. Think about that. And while some of those 15 billion transistors have back-ups, not all do.
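To put rough numbers on the yield argument, here is the textbook Poisson defect model as a minimal sketch; the defect densities and die area below are placeholder assumptions, not data for any actual TSMC or GlobalFoundries process:

```python
import math

# Classic Poisson yield model: yield ~ exp(-defect_density * die_area).
# Defect densities and die area are made-up ballpark values (assumptions).

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Expected fraction of dies with zero killer defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

die_area = 4.0  # cm^2, i.e. a roughly 400 mm^2 GPU die (assumption)
for d0 in (0.05, 0.1, 0.3):  # killer defects per cm^2 (assumptions)
    print(f"D0 = {d0}: yield ~ {poisson_yield(d0, die_area):.0%}")
# -> about 82%, 67% and 30% good dies
```

Even a modest rise in defect density wipes out a large share of big dies, which is why a single stray particle matters so much at this scale.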

The manufacturing process takes on the order of two months, and the sodium in a drop of sweat could scrap thousands of wafers.

Yes, it is about money, but there is a LOT that goes into that statement. It isn't just about who is selling cheap today, but about your plans for the next several years, including how sales will tank if you throw a crappy product into the market.
 
They are two different architectures that measure FLOPs differently.
I'm sorry, but... HAHAHAHAHAHAHAHA

The most ridiculous post I have ever seen on any technology forum. Compute is this: mathematical algorithms. And TFLOPS shows how many floating-point operations a GPU can execute per second. The higher the number, the more compute power a particular GPU can put out.

Defending Nvidia has reached a ridiculous state.
 
Laugh all you want.
How is this ridiculous? Look it up: each graphics manufacturer has a different architecture and design, and that's why they measure FLOPS differently. Nvidia FLOPS don't equal AMD FLOPS.

For example: the 7970 is rated at ~3.8 TFLOPS and the 680 at ~3.1 TFLOPS, yet in most benchmark comparisons I've seen (Tom's Hardware), the 680 still edges out the 7970.

And from what I've seen, Nvidia's FLOPS rating has been considered closer to "real-world performance", or more accurate.
 
You are arguing about things you do not understand, and yet you speak from an "expert" perspective. I genuinely suggest you educate yourself.

Because everything you post here makes people laugh. That is how it is ridiculous. You have no idea what compute is or what it does, yet you talk about it. Yes, the two architectures are different. That does not mean compute is different between the two vendors. Compute is just what it is: compute. Computation. Mathematical algorithms. The more TFLOPS a GPU has, the more operations it can push through per second and the faster it processes them. There is no philosophy here.

You prove me right when I said in another thread on this forum that Nvidia's reality distortion field and mindshare make people believe that a 4 TFLOPS GPU from Nvidia is much more powerful than a 4 TFLOPS GPU from ANY other vendor. There is absolutely nothing worse than making statements based on your preferences without scientific research, without understanding the architectures, without educating yourself.

Unless you look for things to confirm your bias. Then yes, Nvidia will always be better. For you.

Check the LuxMark 3.1 tests and compare GPUs; they are OpenCL benchmarks. Check the latest DX11 and DX12 benchmarks with the latest drivers, not benchmarks from 2012. The world has changed.
 
For example: the 7970 is rated at ~3.8 TFLOPS and the 680 at ~3.1 TFLOPS, yet in most benchmark comparisons I've seen (Tom's Hardware), the 680 still edges out the 7970.
TFLOPS is a unit of measure; it has to mean the same thing across different GPUs, otherwise it would be useless.

Also, just because a GPU is rated at a given speed doesn't mean a benchmark pushed it to its full potential. There can be other factors behind why one GPU is faster than another, especially in real-world usage.
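To illustrate the "other factors" point: a TFLOPS is the same unit on both sides, but the fraction of its peak a real workload actually reaches is not. A minimal sketch, using the rated numbers from the quote above and some purely hypothetical utilization figures:

```python
# Rated peaks from the quote above (~3.8 vs ~3.1 TFLOPS) paired with made-up
# utilization numbers, purely to illustrate how a lower-peak card can still
# win a benchmark if the workload keeps it busier.
hd7970_peak, gtx680_peak = 3.8, 3.1    # rated TFLOPS, from the post above
hd7970_util, gtx680_util = 0.65, 0.85  # hypothetical utilization, not measured

print(f"HD 7970 delivered: {hd7970_peak * hd7970_util:.2f} TFLOPS")
print(f"GTX 680 delivered: {gtx680_peak * gtx680_util:.2f} TFLOPS")
# -> 2.47 vs 2.64: same unit, different delivered throughput
```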
 
Ugh. This again. Pretty sure we have a thread a gazillion pages long that stretches back to when the Retina iMacs were first announced in 2014...

At this point I think we can summarize the AMD/Nvidia-in-the-iMac debate as follows:

AMD:
+ Better (OpenCL) compute performance
+ (Potentially) faster when unhindered by "legacy" APIs (OpenGL, DirectX 11), i.e. under Metal, DirectX 12, and Vulkan
+ (Probably) much cheaper for Apple (due to AMD's position in the marketplace)
- Throttling, fan noise, and potential reliability issues due to horrible thermals (108 Celsius in the 2014 models...)
- An architecture a little TOO ahead of its time (sorry, but no one really cares about your performance two years from now if you can't get it right today)
- Poor drivers for legacy APIs (OpenGL, DirectX 11)

Nvidia:
+ Superior OpenGL/DirectX 11 performance
+ CUDA
+ Far superior thermal design (based on past iMacs/Windows notebooks featuring the 970M/980M)
+ Better driver support
- Inferior OpenCL performance
- Performance gap in gaming/3D (possibly) eliminated or reduced when utilizing next-gen APIs (Metal/DirectX 12/Vulkan)
- (Most likely) expensive for Apple

In the end Apple likely chose AMD using the same formula we all do when making purchases: price/performance. AMD provided Apple with a customized product that offered superior OpenCL compute performance for significantly less money than Nvidia would have been willing to sell the 970M/980M for. While it's fun to speculate, I think most of the facts are available to anyone who's been paying attention.
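If you want that "formula" spelled out, here is a minimal sketch of the price/performance trade-off; the performance and cost figures are invented placeholders for illustration only, not anything Apple actually pays either vendor:

```python
# Purely illustrative price/performance comparison; the numbers are invented
# placeholders, not real contract pricing or benchmark data.
options = {
    "AMD option":    {"relative_perf": 0.90, "relative_cost": 1.0},
    "Nvidia option": {"relative_perf": 1.00, "relative_cost": 1.6},
}

for name, o in options.items():
    print(f"{name}: perf per cost unit = {o['relative_perf'] / o['relative_cost']:.2f}")
# -> 0.90 vs 0.62: a small performance gap plus a large cost gap
#    tips the ratio toward the cheaper part
```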

For the record, I certainly would have preferred a 980M (or even a 970M), but I understand, and have made peace with, why Apple made the decision it did.
 
In the LuxMark renderer I was under the impression that AMD GPUs are fully utilized, but they are not. LuxMark was ported from CUDA to OpenCL, so it uses 32-thread groupings that suit Nvidia's architecture. If it were fully optimized for GCN, it would fall apart completely on Nvidia GPUs (which is what happens in applications that are not optimized for Nvidia hardware, like Quantum Break, for example).
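For context on that claim: Nvidia hardware executes 32-wide warps while GCN executes 64-wide wavefronts, so a kernel tuned around 32-item groups under-fills AMD's SIMD lanes. A back-of-envelope sketch (the 32-item grouping is the assumption taken from the paragraph above):

```python
# Nvidia schedules 32-wide warps, GCN schedules 64-wide wavefronts.
# A kernel written around groups of 32 work-items fills a warp exactly
# but leaves half of a GCN wavefront's lanes masked off.
NVIDIA_WARP = 32
GCN_WAVEFRONT = 64

work_items_per_group = 32  # the CUDA-first grouping described above (assumption)

print("Nvidia lanes active:", work_items_per_group / NVIDIA_WARP)    # 1.0
print("GCN lanes active:   ", work_items_per_group / GCN_WAVEFRONT)  # 0.5
```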

If anyone asks me whether I am an AMD fanboy, I have one answer, as always: I am an Nvidia fan as a brand. I was mostly taken in by Nvidia's marketing, like 80% of the market today is. But I also love hardware, and I did my research. Properly. I educated myself. And most myths about AMD are just that: myths.

P.S. Do not say that AMD hardware is hot and Nvidia's is not. A 120 W Nvidia part (GTX 980M) would still overheat in the iMac just like the AMD GPU does. But that is a flaw of the iMac's design, not of the GPUs. 60 W of thermal power is 60 W; it is exactly the same regardless of brand. Pay attention to how much heat a GPU puts out in watts, not to the brand. Otherwise you end up repeating the same thing: a myth.

[Chart: qb_ultra.png, Quantum Break benchmark results at Ultra settings]

This is the point I have been making for the last 8 months on this and other forums. It is not an AMD-sponsored title. In this title a 4 TFLOPS GPU from AMD performs roughly the same as a 4.2 TFLOPS GPU from Nvidia: R9 380X vs. GTX 980. And 5.9 TFLOPS from AMD is roughly the same as 6.1 TFLOPS from Nvidia: the GTX 980 Ti.
 
Yep, try to come up with something more constructive than pointing out something that all businesses do to make money.

Windows manufacturers just waste your time by throwing on a bunch of bloatware that helps them cut costs. Otherwise you end up paying the same price for an equivalent <insert manufacturer here> Signature series laptop.

Not sure what other manufacturers have to do with Apple choosing AMD.
There is nothing to explain. It's very simple. They went with AMD to save money.
 
Let's face it. AMD is cheap and Apple wants a good profit margin for their overpriced Macs.
Remind me again where it is you bought your equally well-built, superior-specced all-in-one with a 5K screen at a lower price than the 5K iMac?

OK. You know what, I'll lower the bar for you. Remind me again where you can buy a computer with an equally nice 5K display for anywhere near the iMac's price?
 

5K is... kinda pointless in a world where we haven't even got 4K gaming and video streaming working properly yet. I could get a £600 5K monitor for my £700(ish) PC today if I wanted (Dell UP2715K), but what is the point? Why bother? Apple's 5K thing is nothing more than a gimmick in a world that isn't even fully prepared for 4K.

So let me ask you a question: where is all this 5K content hiding that makes a 5K iMac so damn desirable? And other than displaying stuff at 5K resolution, can the iMac play Fallout 4 at 5K or make heavy edits to a 5K video file? I already know the answer is no.
 
You can get a Dell UP2715K for £600 in the UK? New? Would you mind providing a link, because unless they sell it way cheaper in the UK than in the rest of the world, I call BS. The lowest price I could find in the US (probably the cheapest market for tech goods) is around $1,600, right around the cost of the baseline 5K iMac.

If you can't find a use for a 5K screen just because you can't game at native res, or edit native 5K video, then by all means save yourself some money and buy a 27" 4K display.

The point of 5K is to have a "native" Retina 1440p resolution (exactly 4x 1440p) for more screen space (vs. 4x 1080p for 4K), to be able to edit 4K video without having your content obscured by playback bars or tool windows, and to work with photos at near-native resolution. Just because you can't utilize (or perhaps afford?) 5K doesn't mean the rest of us can't.
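For anyone who wants to check the "exactly 4x" arithmetic, a quick sketch:

```python
# A 5K panel is four 1440p tiles, just as 4K/UHD is four 1080p tiles.
def pixels(width, height):
    return width * height

print(pixels(5120, 2880) / pixels(2560, 1440))  # 4.0  (5K vs 1440p)
print(pixels(3840, 2160) / pixels(1920, 1080))  # 4.0  (4K/UHD vs 1080p)
```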
 
I can see the Dell UP2715K for £699 in the UK on Amazon.
Here in the US it's $2,000. I just looked, and at the moment it's discounted to $1,899. So basically, I can buy a 5K 27" monitor from Dell for $1,899, or I can buy an iMac that comes with a 27" monitor.
 
You can get a Dell UP2715K for £600 in the UK? New? Would you mind providing a link, because unless they sell it way cheaper in the UK than in the rest of the world, I call BS.
Not BS. An extra £50 off what I estimated but this was literally the first result from a Google search. With time I'd be able to find a better price.

If you can't find a use for a 5K screen just because you can't game at native res, or edit native 5K video, then by all means save yourself some money and buy a 27" 4K display.
Pointless when the vast majority of the world's content is 1080p at best.

Just because you can't utilize (or perhaps afford?) 5K doesn't mean the rest of us can't.
I can afford it. But why would I want to waste my cash when I know no games will run at 5K and no content will be available for it either? Hell, the internet in my area struggles to stream 1080p. As it stands, Apple's 5K iMac is nothing but an expensive gimmick, not very useful for many people at the current time.

Wait, you didn't even answer my question. But it's OK; I already know the answer, as I said.
 