Yep. And that is the very reason why the RX 480 is faster in OpenCL Blender than the GTX 1060 using CUDA in the same application.

I don't understand the purpose of that comparison. Is it supposed to mean something that a "high end" AMD card is faster in one case than a near-bottom-of-the-range Nvidia card?

What differentiates 110W of thermal output from Nvidia from 110W of thermal output from AMD?

...Nvidia GPUs can do more work per unit of electricity consumed and heat produced.

A GTX 1080 in an iMac would heat up as much as the R9 M395X. Why? Because of the design of the iMac.

...but it would have higher performance, and so to get the same performance it would make less heat.

Look, if AMD can make their magical space alien plans produce a good GPU, fantastic. They haven't yet*, and if their vision of computing requires anyone else to change for it to work, it won't ever. It's pretty obvious the future of workstation level pro computing is whatever Intel and Nvidia decide it's going to be. The further Apple travels from that, the less relevant their products will be.

*regarding the 2016 MacBook Pro - that just seems like Nuts & Gum: a weak dGPU that eats the machine's tiny battery if you activate it while not plugged into mains power, and that delivers mediocre performance compared to even a low-end eGPU when you are.
 
There has been no point since 2012 in which AMD has offered consistently better performance across a wide variety of professional apps than Nvidia.

...on macOS. The "woeful" performance of Nvidia seems to be specific to Apple software.

These are absolutely ridiculous statements to make.

AMD graphics have no problem battling their Nvidia counterparts in GPU compute benchmarks.
If you're going to compare CUDA-optimised benchmarks to unoptimised OpenCL benchmarks to argue your point, then of course the Nvidia solutions are going to look great. But that's about as fair as using Mantle-optimised gaming benchmarks to make Nvidia graphics look bad.

And no; GK/M/P104's poor OpenCL performance is not due to macOS. I have no idea where this meme originated. OpenCL - and thus general GPU compute - performance is poor on those chips because Nvidia cuts out a huge chunk of FP64-capable silicon to make more room for FP32 and FP16 silicon. FP64 (double-precision floating point) is essential for compute and scientific computing, while FP32/FP16 is more important for gaming performance (in addition to things like pixel fill, triangulation, tessellation performance and so on).

Conversely, all AMD GPUs are equipped with [too many] FP64 ALUs. This obviously means they have to sacrifice FP32 and FP16 performance (less gaming performance), but it means that all of their GPUs can be converted into GPU compute accelerators quite easily and cheaply.
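
To make that FP32/FP64 split concrete, here's a minimal OpenCL C sketch (my own illustration, not from any real app): the same multiply-add written once in single and once in double precision. How fast the second kernel runs is governed entirely by how many FP64 ALUs the chip actually carries, which is exactly the ratio being argued about here; a host program would normally check for the cl_khr_fp64 extension before enqueueing it.

/* Minimal OpenCL C sketch: the same multiply-add in FP32 and FP64.
 * Kernel names and arithmetic are illustrative only. */

__kernel void madd_fp32(__global const float *a,
                        __global const float *b,
                        __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] * b[i] + a[i];    /* runs on the plentiful FP32 units */
}

/* Only compiles/runs on devices exposing the cl_khr_fp64 extension. */
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void madd_fp64(__global const double *a,
                        __global const double *b,
                        __global double *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] * b[i] + a[i];    /* runs on the (often scarce) FP64 units */
}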

Why do you think AMD GPUs are so popular for crypto-currency mining?

...aside from their "Pro" computer being built on gaming cards. You can't criticise Nvidia's options as being for "G4merz!" as if Apple is offering "workstation" hardware as an alternative. They're not. The choice is fast Nvidia gaming GPUs, or slow AMD gaming GPUs.

Except that the AMD GPUs are plenty fast for OpenCL compute benchmarks.

The only significant deficit AMD GPUs have is in gaming potential.

But again, this is Apple -- they do not care about gaming performance.

Yes, but they also don't win on actual getting work done benchmarks, either. That's why the 2013 has been such an unmitigated failure.

To your first point: not true. To your second point: well, that's up for debate.

If AMD GPUs were so horrible, then Apple would not have gone with AMD for the 2016 MacBook Pro. AMD GPUs have proven themselves to have fantastic perf/w when it comes to GPU compute (read: productivity), so I think we can reject the hypothesis that the FirePro GPUs were responsible for the MP6,1's low popularity.

I would bet that the MP6,1's challenges lay more in its high price, lack of upgradability, dependence on Thunderbolt 2, lack of CPU power, and high production costs.

Apple tried only supporting the technologies they want to support, making expensive targeted FCPX appliances, and telling everyone who didn't fit that narrow product to go buy a Windows workstation. That was the 2013 strategy. Unfortunately for Apple, Nvidia is better at making GPUs than Apple is at making Pro hardware and software.

Here are a few other things that aren't of critical importance to enough of the Pro market that you could fund development of a workstation by fixating on them:
  • Small desk footprint
  • Dead silence when running outside of a sound insulated cabinet
  • Low power draw
  • OpenCL
  • macOS
  • Final Cut

Wow... Just wow...

Here are a few things that are of critical importance to enough of the Pro market that you can fund development of a workstation by prioritising them:
  • Onboard, bulk data storage.
  • The ability to swap out and replace the GPU every 8-12 months, without replacing any other part of the machine.
  • Multiple Nvidia GPUs
  • CUDA
It turns out there aren't enough FCPX users to fund the development of a specialist workstation, and the rest of the pro app world is sufficiently Nvidia-based that they're not going to go all-in on OpenCL unless it's better on Nvidia hardware than CUDA is.

You clearly don't know Apple.

Apple has never supported another company's proprietary compute standard, and they never will. Apple is well known for sometimes supporting proprietary hardware while it's in its infancy (3.5" floppy disks, USB, Thunderbolt...), but Apple has always eventually tried to push open standards in both hardware and software.

The day they support CUDA is the day they support Flash.

You're suggesting Apple just let the machine sit idle for 4 (or 5, 6, 7?) years because...? Maybe the 100% thermal failure rate of the D700 indicates that there's something more to the story than "on paper this should fit into the thermal constraints".

100% thermal failure rate? C'mon man, now that's just hyperbole.

If you love CUDA and Nvidia so much, you know what you can do? Just buy a PC and install Windows on there. :) Problem solved. You've already stated that pros don't need macOS or OpenCL, so I honestly think you would be a whole lot happier if you just bought a GTX 1080 Ti and played your DX12, Nvidia GameWorks, Nvidia HairWorks-riddled games on your Nvidia G-Sync powered $999 TN-panel monitor.

I honestly sometimes scratch my head as to why some people around here fall for the Nvidia PR spin. I mean -- ok sure, I was being a bit facetious with my cheap shots above -- but I used to run Nvidia hardware too, and I have no issue recommending Nvidia to my friends who want the best gaming performance regardless of cost. But this notion that because Nvidia dominates gaming benchmarks in Windows/DX12, they must absolutely dominate everywhere else, is just silly.
 
I don't understand the purpose of that comparison. Is it supposed to mean something that a "high end" AMD card is faster in one case than a near-bottom-of-the-range Nvidia card?
The RX 480 is the direct competitor to the GTX 1060. I have no idea why you claim that the GTX 1060 is near the bottom end and the RX 480 is high-end for AMD.

...Nvidia GPUs can do more work per unit of electricity consumed and heat produced.



...but it would have higher performance, and so to get the same performance it would make less heat.
A more accurate statement is that both GPUs would have similar performance per watt - unless you're talking about gaming.

To show you the difference between the two GPU vendors, let's compare GPUs at similar die sizes and price points: the GTX 1060 using CUDA in Blender, and the RX 480 using OpenCL in the same application:

https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL
 
The RX 480 is the direct competitor to the GTX 1060. I have no idea why you claim that the GTX 1060 is near the bottom end and the RX 480 is high-end for AMD.

...because Nvidia has multiple tiers above the 1060. What's AMD's 1080 Ti or Titan Xp equivalent? Can you even buy a GPU from AMD that has the same performance as Nvidia's options? Apple has stated that they want the next Mac Pro to be about maximum throughput, so we can discount spurious ideas like "it's a weak GPU in absolute terms, but it's got great performance per watt".

The only thing that matters is what gives the most performance, because anything less than top of the range is going to be covered by iMacs and laptops going forward.
 
...because Nvidia has multiple tiers above the 1060. What's AMD's 1080 Ti or Titan Xp equivalent? Can you even buy a GPU from AMD that has the same performance as Nvidia's options? Apple has stated that they want the next Mac Pro to be about maximum throughput, so we can discount spurious ideas like "it's a weak GPU in absolute terms, but it's got great performance per watt".

The only thing that matters is what gives the most performance, because anything less than top of the range is going to be covered by iMacs and laptops going forward.
It actually comes out within a month, possibly on 5th June - coincidentally, the same day as Apple's WWDC conference. Will any Macs from the Pro ranges be announced? And if not, what difference does high-end vs low-end make?

Lastly: why are you guys so out of touch with hardware schedules and release dates? Not from one vendor, but from all of them - especially Apple.
 
...because Nvidia has multiple tiers above the 1060. What's AMD's 1080 Ti or Titan Xp equivalent? Can you even buy a GPU from AMD that has the same performance as Nvidia's options? Apple has stated that they want the next Mac Pro to be about maximum throughput, so we can discount spurious ideas like "it's a weak GPU in absolute terms, but it's got great performance per watt".

The only thing that matters is what gives the most performance, because anything less than top of the range is going to be covered by iMacs and laptops going forward.

*facepalm*
That's not how you compare GPU or CPU chips.

You need to compare chips on the basis of fabrication process, die size, transistor count, power draw, and -- most importantly -- cost.

Polaris 10 (RX 480/580) specification summary:
- 5.7 billion transistors;
- GlobalFoundries 14nm FinFET process;
- 232mm^2 die size;
- 256-bit wide GDDR5 memory controller; and
- 150W TDP (RX 480 configuration).

GP106 (GTX 1060):
- 4.4 billion transistors;
- TSMC 16nm FinFET process;
- 200mm^2 die size;
- 192-bit wide GDDR5 memory controller; and
- 120W TDP (GTX 1060 Founders Edition configuration).

GP104 (GTX 1070/1080):
- 7.2 billion transistors;
- TSMC 16nm FinFET process;
- 314mm^2 die size;
- 256-bit wide GDDR5X memory controller; and
- 180W TDP (GTX 1080 Founders Edition configuration).

Sources:
http://www.anandtech.com/show/10446/the-amd-radeon-rx-480-preview
https://www.techpowerup.com/gpudb/2862/geforce-gtx-1060-6-gb
https://www.techpowerup.com/gpudb/2839/geforce-gtx-1080

Polaris 10 and GP106 are comparative rivals. GP104 and GP102 (GTX 1080 Ti/Titan X) are rivals to AMD's outgoing Fiji (Fury X) GPU, which will soon be replaced by the upcoming Vega GPU.
 
*facepalm*
That's not how you compare GPU or CPU chips.

You need to compare chips on the basis of fabrication process, die size, transistor count, power draw, and -- most importantly -- cost.

Why would I care about any of those things, when looking for which GPU provides the best performance in the Apps I use?

Die Size - don't care.
Transistor count - don't care.
Power Draw - don't care (powering a GPU is a problem to be solved, nothing more).
Cost - don't care (any consumer GPU is within impulse purchase territory for a piece of studio equipment).

Performance - that matters.

I compare GPUs on what I can buy retail: how large, complex and fully rendered a 3D model and environment I can move within an OpenGL viewport in real time, without pausing to render when I make changes. Or how many 14-bit, 36-megapixel RAW images it can colour correct in how short an amount of time, or how quickly it can distort the geometry of 150 of those images to map them to the inside of a sphere.
 
If AMD GPUs were so horrible, then Apple would not have gone with AMD for the 2016 MacBook Pro.

AMD's component prices are significantly cheaper than Nvidia's, and the 2016 model was already suffering from margin issues due to the expense of the Touch Bar.


You clearly don't know Apple.

True, I've only been sitting in front of their products and developing content for their platforms for ~23 years, not as long as some. But I know them well enough to recognise that the current Apple is a lot closer to Sculley's and Spindler's Apple than a lot of people would like to believe. I know them well enough to know that Apple will always sow their own failures by prioritising near-term profitability over long-term ubiquity (which is why QuickTime & QTInteractive lost to Flash). I know that Apple will always fall victim to the Great Man of History narrative about themselves, and attribute their successes to "only Apple" cultural factors, thinking that wherever they skate attracts the puck to them. I know that Apple's worst instinct is to control the whole stack, because it has led repeatedly to a NIH myopia, and that synergies between products and vertical integration are just a way of isolating them from the effective competition they need to improve themselves.

but Apple has always eventually tried to push open standards in both hardware and software.

FaceTime, Messages, Metal...


100% thermal failure rate? C'mon man, now that's just hyperbole.

The thermal failures in the D700 are a flaw inherent to the design of the card. If you drive them hard enough, for long enough, they will fail.

But this notion that because Nvidia dominates gaming benchmarks in Windows/DX12 then they must absolutely dominate everywhere else is just silly.

If an app has CUDA and OpenCL versions, and the CUDA version with an Nvidia card produces more results within a set time frame than the OpenCL version with an AMD card, then that's the only benchmark that matters.
Are any of your apps on Apple platforms? Do they use CUDA, OpenCL, or soon Metal?

My main apps are cross platform, and use OpenGL.
 
If an app has CUDA and OpenCL versions, and the CUDA version with an Nvidia card produces more results within a set time frame than the OpenCL version with an AMD card, then that's the only benchmark that matters.
What if CUDA on Nvidia gets worse results than the OpenCL version of the same app on AMD? ;)

I genuinely suggest looking at some very well-optimized software: Final Cut Pro X, and right now Blender. Nobody can say that optimization for AMD is gimping performance on Nvidia when the Nvidia GPUs are using CUDA.

Start asking developers to optimize your software. AMD has done a great job lately with GPUOpen. There is no reason not to expect that when they come out with 13 TFLOPS GPUs (Vega 10...), the difference between them and the GTX Titan Xp, for example, will be the same as the difference between the RX 480 and the GTX 1060 in Blender.
My main apps are cross platform, and use OpenGL.
So why do we even talk about CUDA?

P.S. Your software is way outdated...
 
What if CUDA on Nvidia gets worse results than the OpenCL version of the same app on AMD? ;)

I genuinely suggest looking at some very well-optimized software: Final Cut Pro X, and right now Blender. Nobody can say that optimization for AMD is gimping performance on Nvidia when the Nvidia GPUs are using CUDA.

Start asking developers to optimize your software. AMD has done a great job lately with GPUOpen. There is no reason not to expect that when they come out with 13 TFLOPS GPUs (Vega 10...), the difference between them and the GTX Titan Xp, for example, will be the same as the difference between the RX 480 and the GTX 1060 in Blender.

So why do we even talk about CUDA?

P.S. Your software is way outdated...
I have developed some accelerators in OpenCL 1.x and now in CUDA 8. The same C code runs incredibly faster in CUDA than in OpenCL. I have a slave Xeon D-1541 motherboard with a GTX 1070; some processes on the tcMP using both GPUs in OpenCL take about 20 minutes to complete, while the same work on the Xeon machine takes barely 6 minutes - 5x faster - and it's still not fully optimized for CUDA.
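
For anyone who hasn't done this kind of port, here's roughly what "the same C code" looks like in practice - a generic sketch, not the actual accelerator code from the post above. The kernel body is plain C; between OpenCL 1.x and CUDA, essentially only the qualifiers and the thread-index expression change, which is why these ports tend to be quick:

/* Generic OpenCL 1.x kernel with the CUDA equivalents noted in comments.
 * Illustrative only - not the poster's accelerator code. */
__kernel void scale_add(__global const float *in,   /* CUDA: __global__ void scale_add(const float *in, ... */
                        __global float *out,
                        const float k,
                        const unsigned int n)
{
    size_t i = get_global_id(0);                    /* CUDA: blockIdx.x * blockDim.x + threadIdx.x */
    if (i < n)
        out[i] = k * in[i] + out[i];                /* arithmetic is identical in both versions */
}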
 
What if CUDA on Nvidia gets worse results than the OpenCL version of the same app on AMD? ;)

Then that person should probably buy an AMD card. All of which hinges on where the 2013 went wrong - mandating a specific GPU supplier, on the assumption that users would ditch apps that didn't reorient their software to Apple's hardware. The opposite happened.

I genuinely suggest looking at some very well-optimized software: Final Cut Pro X

Sorry, but I'm never going to build anything on an Apple-only software product again. Bitten hard enough by Aperture.

Start asking developers to optimize your software.

So why do we even talk about CUDA?

P.S. Your software is way outdated...

Well, you can say it's outdated, but OpenGL is alive and well on new products, and it isn't going anywhere. It's the only cross-platform 3D option now, and for most companies that make apps that are currently cross-platform, ditching the Mac is probably better business sense than investing in an Apple-only 3D solution.
 
What if CUDA on Nvidia gets worse results than the OpenCL version of the same app on AMD? ;)

I genuinely suggest looking at some very well-optimized software: Final Cut Pro X, and right now Blender. Nobody can say that optimization for AMD is gimping performance on Nvidia when the Nvidia GPUs are using CUDA.

Start asking developers to optimize your software. AMD has done a great job lately with GPUOpen. There is no reason not to expect that when they come out with 13 TFLOPS GPUs (Vega 10...), the difference between them and the GTX Titan Xp, for example, will be the same as the difference between the RX 480 and the GTX 1060 in Blender.

So why do we even talk about CUDA?

P.S. Your software is way outdated...

"FCPX very good optimized software" compared to what?:eek:

Who cares if the RX 480 is better than the GTX 1060... you're still comparing a Polonez to a Lada.
I only care how fast I can render a scene, and the fastest option is a render farm with big-ass Nvidias.
When AMD comes out with faster cards, then we will build a render farm with Vega, Sirius, Arcturus, whatever star name they use.
 
I don't know enough about this CPU thing - but can Apple (or a user) just plug an AMD into the Mac ecosystem and not skip a beat, other than swapping motherboards? Or will this be like the IBM chips to Intel chips transition, where (if Apple chooses to move to AMD) they have to write new software code for compatibility purposes? ...Or is there some sort of open-source standard that makes it hot-swappable?
 
...but can Apple (or a user) just plug an AMD into the Mac ecosystem and not skip a beat - other than swapping motherboards?...
What does Apple do when AMD goes bankrupt and closes down?

And note that AMD's IP licenses with Intel for x86/x64 have a poison pill. If AMD sells out - the x64 IP licenses are terminated. Only the ATI wing (RTG) has independent value - but not much value.
 
"FCPX very good optimized software" compared to what?:eek:

Who cares if the RX 480 is better than the GTX 1060... you're still comparing a Polonez to a Lada.
I only care how fast I can render a scene, and the fastest option is a render farm with big-ass Nvidias.
When AMD comes out with faster cards, then we will build a render farm with Vega, Sirius, Arcturus, whatever star name they use.

THIS! But a lot of people here do not understand this. Apple has an old relationship with the graphics industry - it's an old marriage that ended in divorce many years ago. But not everyone knows this, and it looks like Apple doesn't give a penny about it either. If they did, they would offer Nvidia as an option. Why? Because so many content-creation tools use CUDA for acceleration. I don't care about the story behind it; it's a fact and 'we' need to deal with it. If you work all day with rendering etc., you want the best setup, and the best setup is many cores + CUDA.

I switched 7 months ago and it was a great move. I was scared, but it worked out just fine. I keep an eye on the new Mac Pro, but I'll keep my money in my pocket for a while and update my monster PC for a few hundred bucks, and it will probably still kick ass vs a new Mac Pro.

Don't know why Apple is so anti-Nvidia; from my perspective they definitely do not care about their old user base (the creative sector).
 
What does Apple do when AMD goes bankrupt and closes down?

And note that AMD's IP licenses with Intel for x86/x64 have a poison pill. If AMD sells out - the x64 IP licenses are terminated. Only the ATI wing (RTG) has independent value - but not much value.
With upcoming contract announcements, I rather expect that AMD will not go bankrupt anytime soon.
 
AMD's component prices are significantly cheaper than Nvidia's, and the 2016 model was already suffering from margin issues due to the expense of the Touch Bar.

Apple could easily purchase the GTX 1050 or 950M en masse if they wanted an Nvidia solution at the same price as the RX 460.

True, I've only been sitting in front of their products and developing content for their platforms for ~23 years, not as long as some. But I know them well enough to recognise that the current Apple is a lot closer to Sculley's and Spindler's Apple than a lot of people would like to believe. I know them well enough to know that Apple will always sow their own failures by prioritising near-term profitability over long-term ubiquity (which is why QuickTime & QTInteractive lost to Flash). I know that Apple will always fall victim to the Great Man of History narrative about themselves, and attribute their successes to "only Apple" cultural factors, thinking that wherever they skate attracts the puck to them. I know that Apple's worst instinct is to control the whole stack, because it has led repeatedly to a NIH myopia, and that synergies between products and vertical integration are just a way of isolating them from the effective competition they need to improve themselves.

Sounds like you simply need to move onto the greener pastures of Windows powered by Nvidia graphics.

FaceTime, Messages, Metal...

...which are based on open, international industry standards.

As for Metal, it debuted on iOS before close-to-the-metal APIs were even a thing. There was no Vulkan (nor DX12) when Apple was preparing to roll out Metal.

Also, you have yet to address or accept that Apple encourages the adoption of OpenCL and (albeit an older standard) OpenGL across its platforms.

So again, please explain why you think Apple would want to even touch graphics solutions from Nvidia -- a company that has basically **** on every open, industry-standard solution over the past decade?


The thermal failures in the D700 are a flaw inherent to the design of the card. If you drive them hard enough, for long enough, they will fail.

Ridiculous. Please go and start a class action lawsuit if you think Apple is selling you defective hardware.

If an app has CUDA and OpenCL versions, and the CUDA version with an Nvidia card produces more results within a set time frame than the OpenCL version with an AMD card, then that's the only benchmark that matters.

...and if a benchmark which runs on OpenCL/Vulkan on both Nvidia and AMD shows the Nvidia solution at a significant disadvantage?
Why would I care about any of those things, when looking for which GPU provides the best performance in the Apps I use?

Die Size - don't care.
Transistor count - don't care.
Power Draw - don't care (powering a GPU is a problem to be solved, nothing more).
Cost - don't care (any consumer GPU is within impulse purchase territory for a piece of studio equipment).

Performance - that matters.

I compare GPUs on what I can buy retail: how large, complex and fully rendered a 3D model and environment I can move within an OpenGL viewport in real time, without pausing to render when I make changes. Or how many 14-bit, 36-megapixel RAW images it can colour correct in how short an amount of time, or how quickly it can distort the geometry of 150 of those images to map them to the inside of a sphere.

LOL! Well, if you think comparing a $15,000 Toyota to a $150,000 Porsche is an apt comparison, sure. But I'm pretty sure the Toyota engineers never intended to compete against the Porsche, and vice versa for the Porsche engineers.

Did you also have these criteria when Nvidia had no solution to the HD 5870, and when they showed up late to the game with the hot, underperforming, George Foreman grill GTX 480?

Look, I get it: if you have absolutely no regard for cost or performance efficiency, and just want the best balls-to-the-wall performance, then fine. But most people don't follow that line of thought.
 
...which are based on open, international industry standards.

As for Metal, it debuted on iOS before close-to-the-metal APIs were even a thing. There was no Vulkan (nor DX12) when Apple was preparing to roll out Metal.

Also, you have yet to address or accept that Apple encourages the adoption of OpenCL and (albeit an older standard) OpenGL across its platforms.

So again, please explain why you think Apple would want to even touch graphics solutions from Nvidia -- a company that has basically **** on every open, industry-standard solution over the past decade?
Metal is based on AMD Mantle.
 
Apple could easily purchase the GTX 1050 or 950M en masse if they wanted an Nvidia solution at the same price as the RX 460.

Nvidia parts are, from what I've heard, multiple times more expensive for non-standard form factors than equivalent-performance AMD parts.

...which are based on open, international industry standards.

So "based on" a standard while producing a completely closed, proprietary result is now "supporting" a standard?

Also, you have yet to address or accept that Apple encourages the adoption of OpenCL and (albeit an older standard) OpenGL across its platforms.

The current version of OpenGL is 4.5, the newest version Apple supports is 4.1 - which is from 2010.
The current version of OpenCL is 2.2, the newest version Apple supports is 1.2 - which is from 2011 (funnily enough, just like the GPUs in the "OpenCL Monster" 2013 Mac Pro).

Ceasing advancement 7 and 6 years ago - to you, that is "encouraging use"? To me, that screams "legacy product".
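
Anyone who wants to verify that OpenCL figure on their own Mac can ask the driver directly. A minimal host-side sketch (error handling omitted; the printed string depends entirely on the installed driver):

/* Query the first GPU device for the OpenCL version the platform exposes.
 * Per the point above, current macOS reports 1.2 here. Error handling omitted. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char dev_version[256], c_version[256];

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(dev_version), dev_version, NULL);
    clGetDeviceInfo(device, CL_DEVICE_OPENCL_C_VERSION, sizeof(c_version), c_version, NULL);

    printf("%s / %s\n", dev_version, c_version);
    return 0;
}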

So again, please explain why you think Apple would want to even touch graphics solutions from Nvidia -- a company that has basically **** on every open, industry-standard solution over the past decade?

Because the people who want a Mac Pro, whose needs cannot be met by iMacs and laptops, will leave - and have been leaving - the platform to get access to Nvidia cards.

Perhaps Apple will make Metal so good on Nvidia hardware that CUDA ceases to be a competitive threat, because that's what this is all about: avoiding the possibility that an app or hardware vendor with higher customer loyalty than Apple can start dictating terms - which is what Adobe and Microsoft did during the Rhapsody era, forcing the entire Carbon / Mac OS X strategy.

Apple tried the strategy of neutering the threat of CUDA for a few years by removing contemporary CUDA-supporting hardware, in the hope that would provide enough oxygen for OpenCL to thrive in the meantime. But as badly as Apple would like to commoditise GPUs as faceless and interchangeable behind their proprietary APIs, what actually happened is that Adobe, Blackmagic etc. commoditised macOS as faceless and interchangeable behind their application suites.


Ridiculous. Please go and start a class action lawsuit if you think Apple is selling you defective hardware.

Every 2013 Mac Pro has an effectively eternal GPU replacement warranty - the extended GPU replacement programme covers all machines sold far enough back that their AppleCare would have run out, and everything newer than its "official" end date won't be out of AppleCare until well after the replacement model is unveiled, so Apple can just do a machine swap for customer retention.

There was talk of class actions, prior to the extended GPU replacement programme.

...and if a benchmark which runs on OpenCL/Vulkan on both Nvidia and AMD shows the Nvidia solution at a significant disadvantage?

If there is a product in which no Nvidia GPU is capable of doing more work within a timeframe than an AMD card, then either buy an AMD card, or switch to a product that supports Nvidia hardware better. That's the point of a slotted workstation.

Look, I get it, if you have absolutely no regard for cost or performance efficiency, and just want the best balls to the walls performance, then fine. But most people don't follow that line of thought.

Yes, and for those people, Apple makes iMacs, the upcoming iMac Pro, and Macbooks.

"Balls to the wall" or as Schiller etc put it, "throughput optimised" is what Apple have been saying the next Mac Pro is going to be about, even if that means it's louder, hotter, and chews through more power.
 
What does Apple do when AMD goes bankrupt and closes down?

And note that AMD's IP licenses with Intel for x86/x64 have a poison pill. If AMD sells out - the x64 IP licenses are terminated. Only the ATI wing (RTG) has independent value - but not much value.

...buy AMD for a bargain-basement price? LOL
 