An article based on rumors... Got anything official? I know you love posting AMD press releases as gospel; maybe you could find some credible information instead of copy-and-pasting links to rumor sites. Also, the GDDR5X/Pascal stuff is a rumor too...
It is not an article based on rumors, but an analysis of what happened and what is happening.

It is quite funny now...
Foreign to you maybe, but it all seems foreign to you anyway...
koyoot, Intel Extreme Masters in Poland next month, going?
Hmmm, I am not aware that it is next month :D. Tempting, I have to say ;).
 
So basically Apple is pushing AMD because it supports Apple's OpenCL framework better, instead of giving us a good-performing GPU. Totally makes sense... in Apple's mind, I guess.
Isn't Apple dropping OpenCL for Metal? A Warthog just flew through any argument about OpenCL.

Don't Maxwell cards match or beat ATI GPUs at OpenCL? Or are they at least competitive (win some tests, but not by much; lose some tests, but not by much)? A Nimitz-class carrier just sailed through any argument about OpenCL.

ps: Let's compare shipping cards, not Vaporware 5000 cards.

pps: Actually, to stick with the topic, let's compare the Radeon cards in the MP6,1 with what works in the MP5,1.
 
I laugh at AMD's supposed GPU performance. In the top 10, AMD has only one card listed, and it's at number 9.

http://www.videocardbenchmark.net/high_end_gpus.html

Apple keeps stuffing outdated, overheating crap GPUs into their Pro machines; it makes no sense, except to Apple.

This is a terrible benchmark for judging compute-based tasks in OS X.

So basically Apple is pushing AMD because it supports Apple's OpenCL framework better, instead of giving us a good-performing GPU. Totally makes sense... in Apple's mind, I guess.

Apple doesn't want to restrict its compute workloads to requiring discrete GPUs with CUDA. Imagine if they did this: a 13" MacBook Pro wouldn't be able to run Final Cut Pro. Apple has a much different set of criteria for choosing GPUs than the rest of the PC industry. Namely, Apple doesn't care about game performance (and obviously DirectX), because so few games run on OS X. What they need is good OpenCL performance, because that is supported on Intel integrated graphics and every discrete GPU.
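That part is easy to see for yourself. A minimal sketch using pyopencl (assuming the package and at least one OpenCL runtime are installed; any vendor's driver will do) that lists every compute device the framework can target:

```python
# List every OpenCL device on this machine: Intel integrated GPU,
# AMD or Nvidia discrete GPU, even the CPU. One API, any vendor.
# Assumes pyopencl and at least one OpenCL driver are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name} "
              f"({device.global_mem_size // 2**20} MB)")
```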

Don't Maxwell cards match or beat ATI GPUs at OpenCL? Or are they at least competitive (win some tests, but not by much; lose some tests, but not by much)? A Nimitz-class carrier just sailed through any argument about OpenCL.

Maxwell cards do very well in compute tasks that don't require double-precision floating point. So, essentially, simpler compute tasks. That's why Nvidia marketed Maxwell so heavily toward applications like optical flow and less as a general compute product. They kept Kepler around as their general compute solution.

Even if Pascal is just a compute-focused Maxwell with improved memory bandwidth, it is a very competitive product. An Nvidia GTX 980 has essentially the same performance per watt as AMD's Nano, so with a node shrink Pascal will be very competitive. However, I don't think Apple will choose it, because performance still seems hit or miss on OS X with Maxwell-based graphics cards. AMD's graphics cards seem to need less tuning and driver support to get performance out of them, which is something Apple likes. This will probably be even more true once Metal gets wider support. That's besides the fact that AMD is willing to design custom form factors for Apple and give them a good deal.
 
It is not an article based on rumors, but an analysis of what happened and what is happening.

It is quite funny now...

Hmmm, I am not aware that it is next month :D. Tempting, I have to say ;).
The article says it's based on rumours...
 
Remember, just because clock speeds have stayed the same doesn't mean that performance has. Intel has been increasing IPC (instructions per clock, i.e. performance per clock) with each generation. Also, frequencies have been moving slowly up since Sandy Bridge. A dual-core 2.66 GHz MacBook Pro from 2010 will be much slower than a quad-core 2.2 GHz MacBook Pro from 2015, even in single-threaded tasks.
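To put rough numbers on that (the per-generation IPC gain below is an illustrative guess, not a measured figure):

```python
# Single-thread speed ~ IPC x clock. Assume ~10% IPC gain per
# generation across the 4 generations between those two machines;
# that figure is an illustrative guess, not a measurement.
ipc_gain_per_gen = 1.10
generations = 4

perf_2010 = 1.0 * 2.66                             # baseline IPC, 2.66 GHz
perf_2015 = ipc_gain_per_gen ** generations * 2.2  # compounded IPC, 2.2 GHz

print(f"2015 vs 2010, single-threaded: {perf_2015 / perf_2010:.2f}x")
# ~1.21x before counting Turbo Boost, which widens the gap further
```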

Right. I should mention my new top-end MacBook Pro blows the doors off my old top-end MacBook Pro even single-threaded, at a lower clock. Clock speed isn't a great measurement of anything. It'll jump up every so often (see: Skylake), but in general it's going to trend down.

Not true. Many tasks are still inherently single-threaded. For instance, many physics simulations are single-threaded because you can't compute the next timestep until you have computed the current one. You can't divide the workload up like you can when you are encoding video. Obviously multithreaded performance is important, but saying that single-threaded performance doesn't matter for workstations is silly.

For simulating a single body, it's going to be almost impossible to optimize that across multiple cores. Single-threaded performance isn't getting slower, but it's also going to slowly stop getting any faster.

Simulating multiple bodies is parallelizable. Not only could that be done across multiple cores, but there is some work on doing it in parallel on a GPU, especially around gaming.
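A toy sketch of the difference (pure illustration; real code would push the per-body work into numpy or a GPU kernel rather than Python threads):

```python
# Toy N-body step in 1-D. The outer time loop is inherently serial:
# step n+1 needs the state from step n, so cores can't skip ahead.
# The per-body force evaluation inside a step is independent work,
# which is what gets fanned out across cores or a GPU in practice.
from concurrent.futures import ThreadPoolExecutor

def force_on(i, positions):
    # placeholder "gravity": pull body i toward every other body
    return sum(positions[j] - positions[i]
               for j in range(len(positions)) if j != i)

def simulate(positions, velocities, dt=0.01, steps=100):
    with ThreadPoolExecutor() as pool:
        for _ in range(steps):  # serial: each step depends on the last
            forces = list(pool.map(lambda i: force_on(i, positions),
                                   range(len(positions))))  # parallel part
            velocities = [v + f * dt for v, f in zip(velocities, forces)]
            positions = [p + v * dt for p, v in zip(positions, velocities)]
    return positions

print(simulate([0.0, 1.0, 2.0], [0.0, 0.0, 0.0]))
```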
Maxwell cards do very well in compute tasks that don't require double-precision floating point. So, essentially, simpler compute tasks. That's why Nvidia marketed Maxwell so heavily toward applications like optical flow and less as a general compute product. They kept Kepler around as their general compute solution.

Nvidia basically optimized their cards to do well in games and not in pro apps (which require the higher precision). So everyone runs around with game benchmarks pretending those apply to pro apps (spoiler: they don't).

It's the kind of games Nvidia has played for a while with benchmarks. But Nvidia knows it's a good marketing strategy. It will start to fall apart, though, when games and higher-precision GPU computing start colliding, like they are in DirectX 12.
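The gap is easy to put numbers on. A quick sketch using commonly cited peak-throughput figures (treat them as ballpark, not gospel):

```python
# Theoretical peak FP64 throughput = FP32 peak x the architecture's
# FP64:FP32 ratio. Figures are the commonly cited ones; ballpark only.
cards = {
    # name:               (FP32 TFLOPS, FP64:FP32 ratio)
    "GTX 980 (Maxwell)":   (4.6, 1 / 32),   # consumer Maxwell: 1/32
    "GTX 780 (Kepler)":    (4.0, 1 / 24),   # consumer Kepler: 1/24
    "Tesla K40 (Kepler)":  (4.3, 1 / 3),    # compute Kepler: 1/3
    "R9 280X (Tahiti)":    (4.1, 1 / 4),    # GCN Tahiti: 1/4
}

for name, (fp32, ratio) in cards.items():
    print(f"{name:20s} FP32 {fp32:.1f} TFLOPS, FP64 ~{fp32 * ratio:.2f} TFLOPS")
# Maxwell's double-precision rate collapses to ~0.14 TFLOPS, which is
# why Kepler stayed on as Nvidia's general compute part.
```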
 
http://videocardz.com/58237/amds-project-f-is-232mm2-discrete-gpu-made-in-14lpp-process

So we now know the die size of the GPU from AMD: 232 mm2.

Ahem ;).

Yeah, that's an interesting size. It would probably have performance equivalent to Tonga. This might be the chip AMD is talking about when they say they'll bring VR to a cheaper price point. A slightly faster Tonga would be roughly equivalent to an Nvidia GTX 970, which is considered the baseline for the Rift.

Hopefully the rumors of 3 GPUs are true. AMD needs a smaller GPU for laptops and a bigger one to take the performance crown.
 
I was going to say maybe Apple would use the Fury Nano for the mid end and Polaris for the high end, but it doesn't look like that would happen if Apple goes Polaris.

I wonder, depending on performance, whether the Fury Nano would become the low end, with Polaris as the replacement for the D500 and D700.
 
If that were the case, we would be seeing them already. IMO, 4 GB of VRAM plus the cost of the GPUs rules Fury out of the Mac Pro.
 
If that were the case, we would be seeing them already. IMO, 4 GB of VRAM plus the cost of the GPUs rules Fury out of the Mac Pro.

Maybe. The existing D300s only have 2 GB of VRAM, though. If the low-end Polaris part is a mobile part and not a desktop part, it's hard to see where else Apple would source a D300 replacement. Mid-end Polaris doesn't seem like a good fit for a D300 replacement.

It would be a heck of a lineup if the Fury Nano were actually the low end. I think Polaris on the mid end is probably going to be faster than the Fury Nano.
 
Maybe. The existing D300s only have 2 GB of VRAM, though. If the low-end Polaris part is a mobile part and not a desktop part, it's hard to see where else Apple would source a D300 replacement. Mid-end Polaris doesn't seem like a good fit for a D300 replacement.

It would be a heck of a lineup if the Fury Nano were actually the low end. I think Polaris on the mid end is probably going to be faster than the Fury Nano.
On the graphics side or the compute side? ;)
I do not believe that, apart from FP64, there will be any progress in compute power for Polaris 11 compared to Fiji.
Would it be logical to use a $600 GPU in their lineup if a GPU priced under $250 offers similar or higher performance?

P.S. This is much more interesting. A few days ago, the only computers with increased lead times on build-to-order configs were those using mobile chips: the MacBooks and the Mac Mini. Currently, the ENTIRE Apple lineup has increased waiting times for build-to-order configs.

What is interesting on the Mac Pro front: 3 days ago the wait time was around 3-5 business days. Currently it's 7-10. Not implying anything, but it is an interesting observation.
 
On the graphics side or the compute side? ;)
I do not believe that, apart from FP64, there will be any progress in compute power for Polaris 11 compared to Fiji.

Mmmmmm, the process change is going to mean a considerable change in performance. Remember, the process size is being nearly cut in half, and that isn't going to go entirely to power savings, on both the Nvidia side and the AMD side. I wouldn't be surprised to see Polaris significantly outperform Fiji, and Pascal significantly outperform Maxwell.

If you're looking at 2x performance per watt on both sides, either power usage goes down by half or performance doubles. I'd expect both sides to land right in the middle and put performance at about 1.5x. Normally the change would be more gradual, but both Nvidia and AMD stalled on getting to a smaller process, so it's going to be a sudden jump.
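The arithmetic, spelled out (the 2x perf/watt figure is the vendors' claim, not a measurement; the 250W baseline is just a hypothetical card):

```python
# A 2x perf/watt gain can be spent on power, on performance, or split.
# The 2x figure is the vendors' marketing claim, not a measurement.
gain = 2.0
old_perf, old_watts = 1.0, 250.0   # hypothetical 250W card as baseline

print(f"Same perf:  {old_watts / gain:.0f} W")                     # 125 W
print(f"Same power: {gain:.1f}x performance")                      # 2.0x
print(f"Split:      1.5x perf at {1.5 * old_watts / gain:.0f} W")  # ~188 W
```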

Good time for GPU fans!
Isn't Apple dropping OpenCL for Metal? A Warthog just flew through any argument about OpenCL.

Oh yes, I'm SURE it was just an OpenCL issue, and Metal will fix everything.

Be right back... I have to go have a good laugh about Nvidia's Metal driver now. You... you go check out Nvidia's Metal driver and let me know how that goes. I'm sure it'll be so much better. If anything, I'm sure Nvidia has a quality Metal driver which will convince Apple to come back to them and not just further alienate them from Apple.

Oh boy that was a good chuckle.
 
Oh yes, I'm SURE it was just an OpenCL issue, and Metal will fix everything.
Be right back... I have to go have a good laugh about Nvidia's Metal driver now. You... you go check out Nvidia's Metal driver and let me know how that goes. I'm sure it'll be so much better. If anything, I'm sure Nvidia has a quality Metal driver which will convince Apple to come back to them and not just further alienate them from Apple.
Oh boy that was a good chuckle.

In your eagerness to be snarky with Aiden and take a sideswipe at Nvidia, I think you missed the actual MEANING of his post.

Apple was all "OpenGL for the Win!" until they fell laughably behind, at which point it became "OpenCL for the Win!", and we all know that they are still laughably behind there. So, instead of competing, they switched courts again.

Now it is "Metal for the Win!" and, as has been well demonstrated, they are making this switch before anything resembling drivers exists. They are ready for "Stinky Birds" on an iPad, but little else.

With Apple switching tacks at every opportunity, what developer in their right mind would try to keep up? Just write a goofy "Find your Car Keys" iPhone app and live with your 66%.

It probably doesn't help that Apple's "centerpiece" and "magnum opus" is sporting some 2011 GPUs that are multiple generations behind. What do you code for? The features supported by the ancient things they are shipping today? The less-ancient things they may ship some day in the future? The miraculous "Vapourware 5000" offerings that will apparently solve all of the world's problems with "Asynchronous cold fusion" COMING SOON? It is an embarrassing debacle and does not bode well for creative types going into the future. The fleet of Appologists telling us that our eyes are lying to us isn't helping anything.

The facts remain: the 2011 GPUs in the top-of-the-line $10K Mac are the items worthy of all that laughter you mentioned.
 
Mmmmmm, the process change is going to mean a considerable change in performance. Remember, the process size is being nearly cut in half, and that isn't going to go entirely to power savings, on both the Nvidia side and the AMD side. I wouldn't be surprised to see Polaris significantly outperform Fiji, and Pascal significantly outperform Maxwell.

If you're looking at 2x performance per watt on both sides, either power usage goes down by half or performance doubles. I'd expect both sides to land right in the middle and put performance at about 1.5x. Normally the change would be more gradual, but both Nvidia and AMD stalled on getting to a smaller process, so it's going to be a sudden jump.

Good time for GPU fans!
Yeah, pretty good times are coming for GPU fans ;).

About AMD, let's look at what AMD staff say about Polaris.

First things first: Efficiency.
OC3D said:
While talking to PCPER AMD's Joe Macri stated that they expect FinFET to bring a 50-60% drop in power consumption for the same performance or a 25-30% performance boost with the same power consumption
So we are looking at a die shrink that brings 50% better power consumption. A GPU that had a 250W TDP will have a 125W TDP, and that's the shrink by itself. But there is another bit of information...
OC3D said:
Staff from the Radeon Technology Group did admit that the bulk of the efficiency improvements that we will see with AMD's newest GPUs will come from the so-called "FinFET Advantage", with PCPER stating that it is "on the order of a 70/30 split".
So it is not only the die shrink that brings efficiency, but the architecture itself. We are looking at well over 50% better efficiency overall for these GPUs. Let's think about it for a second. A 200W R9 280X on TSMC's 28 nm, but with the new architecture, would draw around 170W at most. Without the shrink. After the shrink it would be 85W, if we take 50% lower power consumption and do not assume the best-case scenario.
Here is a link: http://www.overclock3d.net/articles/gpu_displays/amd_has_two_polaris_gpus_coming_this_year/1
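Spelling that arithmetic out (the 50-60% drop and the 70/30 split come from the quoted pieces; the rest is just multiplication):

```python
# Reproducing the back-of-the-envelope numbers above. The 50-60%
# power drop and the 70/30 FinFET/architecture split are from the
# quoted PCPER/OC3D material; everything else is arithmetic.
tdp_280x = 200.0      # R9 280X on 28 nm, watts
total_drop = 0.50     # conservative end of the 50-60% claim
arch_share = 0.30     # 30% of the gain credited to architecture

arch_only = tdp_280x * (1 - total_drop * arch_share)  # new arch on 28 nm
with_shrink = arch_only * (1 - total_drop)            # add the FinFET shrink

print(f"New architecture alone: ~{arch_only:.0f} W")    # ~170 W
print(f"Plus the 14 nm shrink:  ~{with_shrink:.0f} W")  # ~85 W
```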

Now, density: https://www.semiwiki.com/forum/content/3884-who-will-lead-10nm.html At the bottom there is a table which shows that the density ratio between TSMC 28 nm and GloFo/Samsung 14 nm FinFET is not 2x. It is 2.2x. So the same R9 280X, without advancements in architecture, would be roughly a 160mm2 die on 14 nm (around 55% smaller). And we do not know how the new architecture will affect die sizes; it can be optimized for the node from the ground up, and it looks like that is the case.
Let's get back for a second to the previous rumor about the die size of one of the GPUs from AMD: 232mm2. If Fiji were ported to 14 nm without any advancements in architecture, it would be roughly a 270mm2 die. Close. Coincidence? I may be wrong here, of course, but what better way to bring VR into much lower price/performance brackets than by bringing that kind of performance down here?
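The die-size scaling, as a sketch (the 28 nm die areas are the commonly cited figures; a real port scales worse than pure density math, so these numbers are optimistic):

```python
# Naive die-size scaling using the ~2.2x density gain from the
# SemiWiki table linked above. 28 nm die areas are commonly cited
# figures; real ports scale worse than this (I/O and analog blocks
# shrink poorly), so these numbers are optimistic.
density_gain = 2.2
dies_28nm_mm2 = {"Tahiti (R9 280X)": 352.0, "Fiji": 596.0}

for name, area in dies_28nm_mm2.items():
    print(f"{name}: {area:.0f} mm2 at 28 nm, ~{area / density_gain:.0f} mm2 at 14 nm")
# Tahiti lands near 160 mm2 and Fiji near 270 mm2: close enough to
# the rumored 232 mm2 once the new architecture saves some area.
```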

Third thing: Performance. If Mahigan from the Anandtech forums is right, it looks like we might be pretty surprised by the performance of the Polaris GPUs. http://forums.anandtech.com/showpost.php?p=38026437&postcount=357
http://forums.anandtech.com/showpost.php?p=38026062&postcount=355
I will not be surprised if my calculations and the best-case scenario hold true, and we are looking at Titan X performance at a fraction of the power consumption and cost.

Unfortunately, I cannot bring anything for Pascal, because... there are no rumors. There is no silicon, and we do not even know if the new architecture from Nvidia will bring improved asynchronous compute (a second engine) or even hardware scheduling. If the slides from Nvidia are true and Pascal is only Maxwell on 14 nm with FP64, the chances for both of those, which would make a GIGANTIC difference, are almost nil.
 