Status
Not open for further replies.

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I don't see Apple gaining much from this. They have been exclusively using AMD for a couple years now and haven't needed a formal announcement to do it. Why take away the option of switching back to Nvidia if AMD drops the ball?



What would a Zen/Polaris APU bring to the AppleTV that isn't served by Apple's own processors? Gaming console capabilities? That seems like something Apple doesn't care about.
VR. Gaming console capabilities. HTPC for the living room. Metal shows that they care about their platform, after all ;).
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
I don't see Apple gaining much from this. They have been exclusively using AMD for a couple years now and haven't needed a formal announcement to do it. Why take away the option of switching back to Nvidia if AMD drops the ball?



What would a Zen/Polaris APU bring to the AppleTV that isn't served by Apple's own processors? Gaming console capabilities? That seems like something Apple doesn't care about.



I bet we see Polaris 10/11 in the Mac Pro. The AMD Nano is gaming-oriented, lacking in VRAM, expensive, and likely too hot to be a good fit in the Mac Pro. Polaris 11 probably makes it into the MacBook Pro, unless Apple drops internal discrete GPUs entirely from its laptops.

I wouldn't rule out a dual Fury Nano in the nMP (it still fits the 110W-per-GPU TDP), but... would it be worth it?

Then again, Apple could drop AMD for Nvidia on some products, or all of them; it wouldn't be the first time.

What I don't see is Apple using anything x86 (a Zen APU) in something iOS-based; that will not happen.

Apple could end up using a Zen APU in the Retina iMac, and eventually a Zen CPU and Vega GPU in the Mac Pro, but that won't happen this year; maybe next year.

(Assuming the Zen performance figures are true, it could mean a big win for the Mac Pro over the Xeon, since it's much faster single-threaded.)

Anyway, nothing has been said about the Mac Pro; the only leaks concern Thunderbolt 3, and we don't know more. Oh, and Apple is collaborating on CUDA 8.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Remember this post?

http://forums.anandtech.com/showpost.php?p=38154078&postcount=13 Read this. A lot is falling in line perfectly, so far.
I've also got an answer for why a 2560-core GCN GPU with only a 256-bit memory bus will be 232 mm², not less: the compute units inside the core are bigger, due to the new power-gating and management architecture of the CUs.

The most informative post on this matter so far: http://forums.anandtech.com/showpost.php?p=38159428&postcount=155
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
I am not convinced we will see an AMD CPU in a Mac anytime soon. Sure, lots of nerds have been talking about how great Zen is going to be for the last two years, but it remains to be seen whether they can actually compete with Intel. It seems like even the most optimistic expect it to be on par with Haswell, which is to say it will be 2-3 generations behind a modern Intel chip. If that's the best AMD can offer, then I don't see Apple downgrading from an Intel CPU to an AMD one.

Let's assume a Zen/Polaris APU still can't match the performance per watt of an Iris Skylake CPU. What product would Apple offer Zen in? Not in any laptop, since it would sacrifice battery life, and in the desktops with discrete GPUs there is more than enough space to add a normal discrete GPU.

Sure, AMD may bring CPUs with more than 4 cores to consumers, but I have yet to see a case that most consumers benefit from more than 4 cores. I bet Apple sells way more machines with dual cores than quad cores.

Agreed; does Imagination have some good-looking girls on staff?

You are on point here. Apple is unlikely to be jumping on the desktop VR bandwagon anytime soon. Let Oculus and Valve go through the early growing pains. If there is a market there, then maybe Apple will enter it. Sure, VR is a cool technology, but I still haven't seen anything resembling a killer app. Remember, Apple wasn't the first out with an mp3 player, smartphone, tablet, or smartwatch; they just jumped in when they felt they could offer a significant improvement over the rest of the market. Also, you had better believe it will be a mobile-first effort, since if there is one thing we learned at the last Apple press event, it's that Apple still sees lots of growth in its mobile products and isn't afraid of trying to cannibalize the Macs to do it.
 

--AG--

macrumors member
Dec 20, 2012
36
14
Just ordered a Dell Precision 7910 with dual liquid-cooled 2687W v4 CPUs (24 physical cores in total @ 3 GHz), 32 GB RAM, a 512 GB SSD, and an Nvidia Quadro K2200 GPU. I also have an Nvidia Tesla K20 that I will plug in when it arrives. So it is a pure number-crunching machine with no big need for memory or storage. For the same price I would have had to settle for a 2.7 GHz 12-core with 16 GB RAM and dual D500 GPUs if buying a nMP. I will not leave OS X, however; in addition to the Precision I have an iMac at the office and a MBP at home and when traveling. I am, though, slightly annoyed that Apple cannot sell a decent workstation. I would have bought a classic MP with up-to-date hardware in a heartbeat (with realistic pricing).
 
Last edited:

fuchsdh

macrumors 68020
Jun 19, 2014
2,028
1,831
I wouldn't rule out a dual Fury Nano in the nMP (it still fits the 110W-per-GPU TDP), but... would it be worth it?

Then again, Apple could drop AMD for Nvidia on some products, or all of them; it wouldn't be the first time.

What I don't see is Apple using anything x86 (a Zen APU) in something iOS-based; that will not happen.

Apple could end up using a Zen APU in the Retina iMac, and eventually a Zen CPU and Vega GPU in the Mac Pro, but that won't happen this year; maybe next year.

(Assuming the Zen performance figures are true, it could mean a big win for the Mac Pro over the Xeon, since it's much faster single-threaded.)

Anyway, nothing has been said about the Mac Pro; the only leaks concern Thunderbolt 3, and we don't know more. Oh, and Apple is collaborating on CUDA 8.

What is the benefit of Apple switching to AMD processors? There's no evidence they're going to avoid the massive production bottlenecks Intel faces.
 

Bubba Satori

Suspended
Feb 15, 2008
4,726
3,756
B'ham
What is the benefit of Apple switching to AMD processors? There's no evidence they're going to avoid the massive production bottlenecks Intel faces.

What massive production bottlenecks is Intel facing? Source?
All their fabs are up and running.
They are getting more CPUs per wafer with the new smaller process.
So supply is up.
Chip demand has been down the last few years.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I will tell you this, though supposedly nobody will believe me. I wrote something similar 1.5 years ago on this forum.

AMD is currently working on its own CUDA-like API for professional compute applications. It will be based on Mantle v2. More importantly, it will be very similar to Metal. It will most probably be open source. That is the very reason AMD launched the Boltzmann Initiative and the CUDA compilers for OpenCL.
 

fuchsdh

macrumors 68020
Jun 19, 2014
2,028
1,831
What massive production bottlenecks is Intel facing? Source?
All their fabs are up and running.
They are getting more CPUs per wafer with the new smaller process.
So supply is up.
Chip demand has been down the last few years.
Production bottlenecks as in roadmaps and Moore's law breaking down.
 
  • Like
Reactions: JimmyPainter

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Nvidia has sent invites for the presentation of its new GPUs: a GTX X80 with 8 GB of GDDR5, 30% faster than the GTX 980 in the same thermal envelope.

The boost clock is over 1400 MHz for the core.
All in all: a 2048 CUDA core GPU with a 1400 MHz boost clock will be 30% faster than the GTX 980.

This is on a new node that provides 70% lower power consumption, or 30% higher performance at the same power target. And I was called an AMD fanboy when I said that Pascal would not be as efficient as Maxwell was.

What is also becoming apparent is that Pascal is only an improved Maxwell, at least in the lower-end models rather than the highest ones.
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
Nvidia has sent invites for the presentation of its new GPUs: a GTX X80 with 8 GB of GDDR5, 30% faster than the GTX 980 in the same thermal envelope.

The boost clock is over 1400 MHz for the core.
All in all: a 2048 CUDA core GPU with a 1400 MHz boost clock will be 30% faster than the GTX 980.

This is on a new node that provides 70% lower power consumption, or 30% higher performance at the same power target. And I was called an AMD fanboy when I said that Pascal would not be as efficient as Maxwell was.

What is also becoming apparent is that Pascal is only an improved Maxwell, at least in the lower-end models rather than the highest ones.

Did you read that in a blog or forum post from some anonymous guy.... again....
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Did you read that in a blog or forum post from some anonymous guy.... again....
There is nothing in the Pascal architecture that differentiates it from Maxwell, apart from splitting the 128 CUDA core SMMs into two halves of 64 cores each that share the same amount of cache. Nvidia did this to allow better asynchronous compute performance in their hardware. On the lower end, the GPUs that have 2048 CUDA cores, GDDR5, and a lower ROP count than the Maxwell GPUs will differ from Maxwell only through higher core clocks.

Everything has been debunked already, tuxon. Volta was supposed to come after Maxwell, but TSMC was not able to deliver the 20 nm process that was planned for Maxwell. That's why Nvidia took FP64 out of Maxwell, ported it to 28 nm, and sold it as the GTX 9XX series. But with that, they added another lineup between Kepler and Volta, based on the 16 nm process, added FP64 and FP16, and called it Pascal.

EVERYTHING falls in line perfectly. Only people who want to reject reality do not see it.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
There is nothing in the Pascal architecture that differentiates it from Maxwell, apart from splitting the 128 CUDA core SMMs into two halves of 64 cores each that share the same amount of cache. Nvidia did this to allow better asynchronous compute performance in their hardware. On the lower end, the GPUs that have 2048 CUDA cores, GDDR5, and a lower ROP count than the Maxwell GPUs will differ from Maxwell only through higher core clocks.

Everything has been debunked already, tuxon. Volta was supposed to come after Maxwell, but TSMC was not able to deliver the 20 nm process that was planned for Maxwell. That's why Nvidia took FP64 out of Maxwell, ported it to 28 nm, and sold it as the GTX 9XX series. But with that, they added another lineup between Kepler and Volta, based on the 16 nm process, added FP64 and FP16, and called it Pascal.

EVERYTHING falls in line perfectly. Only people who want to reject reality do not see it.
Yes, there is, and it's not trivial. Pascal's unified memory is radically new: it allows each SM to address the full address range inside the card(s), and the system's too. Maxwell only had a traditional "encapsulated" model where SMs were restricted in memory access, requiring special steps just to read data from another SMM; now it's direct. This means a lot for the CUDA programming model, and actually frees your code to do everything you can do on a conventional CPU.

To me this is a big deal on Pascal, and makes it worth choosing over AMD or Intel Xeon Phi.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Yes, there is, and it's not trivial. Pascal's unified memory is radically new: it allows each SM to address the full address range inside the card(s), and the system's too. Maxwell only had a traditional "encapsulated" model where SMs were restricted in memory access, requiring special steps just to read data from another SMM; now it's direct. This means a lot for the CUDA programming model, and actually frees your code to do everything you can do on a conventional CPU.

To me this is a big deal on Pascal, and makes it worth choosing over AMD or Intel Xeon Phi.
Unified memory is an HSA feature... And it is done through CUDA... For crying out loud, differentiate hardware features from software ones...
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
There is nothing in Pascal architecture that differentiates it from Maxwell, apart from...
And there's nothing in the Vega architecture that distinguishes it from a 3D Rage, apart from.... ;)

Almost all changes are incremental when comparing adjacent generations, but over time significant progress is made.

While we have no facts about mid-range Pascal chips, the GP100 has clearly blown away other high end compute chips. Even you said that ATI has nothing to compete with the GP100. ( https://forums.macrumors.com/posts/22780361/ )
 
  • Like
Reactions: Mago

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Not in the way it is on Pascal; check again, it's a main differentiator and nothing trivial to implement.
Do you know why? NVLink. And software. At the hardware level of the GPUs there is nothing that would stop Kepler or Maxwell from having the same feature, but NVLink and the CUDA software block it. Why? Marketing, to sell new GPUs. That is absolutely understandable. What is not understandable, however, is why people think Pascal is a new architecture, when it is only an improved Maxwell with gigantic boost clocks. Why does NVLink make it different? Nvidia's Reality Distortion Field ;).
And there's nothing in the Vega architecture that distinguishes it from a 3D Rage, apart from.... ;)
Almost all changes are incremental when comparing adjacent generations, but over time significant progress is made.
While we have no facts about mid-range Pascal chips, the GP100 has clearly blown away other high end compute chips. Even you said that ATI has nothing to compete with the GP100. ( https://forums.macrumors.com/posts/22780361/ )
Did I say that, or did it just confirm your bias? Polaris does not compete with GP100: a 232 mm² GPU vs a 600 mm² monster. It would be funny if it did compete, though...

The S9300 X2 competes with it in SP performance, and beats it: 13.9 TFLOPs in 300 W of TDP vs a 10.6 TFLOPs 300 W GPU. However, there is absolutely NOTHING yet on the market that can compete with GP100 in DP.
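As a quick sanity check on that SP comparison, here is a sketch of the perf-per-watt arithmetic, taking the quoted TFLOPs and TDP figures at face value:

```python
def gflops_per_watt(tflops, tdp_watts):
    """Theoretical throughput per watt, in GFLOPs/W."""
    return tflops * 1000.0 / tdp_watts

# FirePro S9300 X2: 13.9 TFLOPs SP at 300 W TDP
# GP100:            10.6 TFLOPs SP at 300 W TDP
print(round(gflops_per_watt(13.9, 300), 1))  # 46.3
print(round(gflops_per_watt(10.6, 300), 1))  # 35.3
```

At equal TDP, higher SP TFLOPs is higher SP efficiency by definition; the DP picture is the reverse, since the S9300 X2's FP64 rate is a small fraction of its FP32 rate.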
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
Do you know why? NVLink. And software. At the hardware level of the GPUs there is nothing that would stop Kepler or Maxwell from having the same feature, but NVLink and the CUDA software block it. Why? Marketing, to sell new GPUs. That is absolutely understandable. What is not understandable, however, is why people think Pascal is a new architecture, when it is only an improved Maxwell with gigantic boost clocks. Why does NVLink make it different? Nvidia Distortion Field ;).
Did I say that, or did it just confirm your bias? Polaris does not compete with GP100: a 232 mm² GPU vs a 600 mm² monster. It would be funny if it did compete, though...

The S9300 X2 competes with it in SP performance, and beats it: 13.9 TFLOPs in 300 W of TDP vs a 10.6 TFLOPs 300 W GPU. However, there is absolutely NOTHING yet on the market that can compete with GP100 in DP.
It's not only NVLink; you also need to figure out how to drive bits from one SM to another SM and to the system, and this requires silicon not present in the Maxwell architecture.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
It's not only NVLink; you also need to figure out how to drive bits from one SM to another SM and to the system, and this requires silicon not present in the Maxwell architecture.
Shared caches between SMs, and high-bandwidth memory controllers... There is nothing in the Pascal scheme that shows what you say.
[Image: gp100_block_diagram-1.png]

The SMs share L1 cache. The cores are connected to HBM2 memory controllers, which share everything between the cores. It is an improved Maxwell, and it will behave exactly like a fast, robust Maxwell, with slightly improved asynchronous compute capabilities.

Unified memory comes from software and NVLink.
 
Last edited:

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
Shared caches between SMs, and high-bandwidth memory controllers... There is nothing in the Pascal scheme that shows what you say.
[Image: gp100_block_diagram-1.png]

The SMs share L1 cache. The cores are connected to HBM2 memory controllers, which share everything between the cores. It is an improved Maxwell, and it will behave exactly like a fast, robust Maxwell, with slightly improved asynchronous compute capabilities.

Unified memory comes from software and NVLink.
Maxwell's unified memory is done in software; on Pascal it is managed in hardware. This is from anandtech.com:
[Image: AnandTech slide on Pascal unified memory]

http://www.anandtech.com/show/7900/nvidia-updates-gpu-roadmap-unveils-pascal-architecture-for-2016

There are more details on the Nvidia forums. This is actually a feature that enables a lot of algorithms to handle memory operations efficiently on large data arrays: you can now define an array spanning the whole 128 GB served by the 8 GPUs interconnected by NVLink, plus whatever you can provide through system RAM.
 
  • Like
Reactions: JimmyPainter

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
That is correct. And NOWHERE does it contradict what I have written. Unified memory came from NVLink and software. That's why Maxwell lost it when it was ported back to 28 nm from the failed 20 nm process. At the architecture level it is exactly the same. But NVLink allows the whole pool of memory across the GPUs to be seen as one big pool by the software, and that comes from... CUDA. There is nothing in the hardware of the chips, apart from HBM2 and the shared L1 cache.

Unified memory existed before, but it was restricted to the GPU's own memory pool. Now, because there is a direct connection to the GPU through NVLink, it is possible to expand beyond that limitation. There is no magic here ;).
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Nvidia has sent invites for the presentation of its new GPUs: a GTX X80 with 8 GB of GDDR5, 30% faster than the GTX 980 in the same thermal envelope.

And I was called an AMD fanboy when I said that Pascal would not be as efficient as Maxwell was.

These two things contradict each other. If Pascal is faster in the same thermal envelope, then by definition it is more efficient.

Maxwell was a huge step up in efficiency. If Pascal is a compute-oriented Maxwell with the improvements brought by a smaller node, then it will be a fantastic chip. AMD has a lot to worry about on the compute front, especially if they can't bring a bigger chip to market quickly.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
These two things contradict each other. If Pascal is faster in the same thermal envelope, then by definition it is more efficient.

Maxwell was a huge step up in efficiency. If Pascal is a compute-oriented Maxwell with the improvements brought by a smaller node, then it will be a fantastic chip. AMD has a lot to worry about on the compute front, especially if they can't bring a bigger chip to market quickly.
No, they do not. You're comparing a 300 mm² GPU to a 400 mm² GPU with the same thermal envelope, where the smaller one is 30% faster... on a node that brings a 70% power consumption reduction, or 30% higher performance. That's the scale of efficiency I meant.

P.S. A 2048 CUDA core GPU will be slower and less powerful than a 2560-core GCN4 GPU, and will consume more power than its AMD counterpart.

2560 GCN cores, clocked at 1152 MHz. 125W TDP. 5.9 TFLOPs. 232 mm2.
2048 CUDA cores with 1400 MHz. 165W TDP. 5.7 TFLOPs. 300 mm2.
And graphical behavior of GCN4 will be... surprising for most people here.
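For what it's worth, those TFLOPs figures follow from the usual peak-throughput formula, cores × clock × 2 FLOPs per clock (one fused multiply-add per cycle); a quick sketch using the rumored core counts and clocks above:

```python
def peak_tflops(cores, clock_mhz):
    """Theoretical peak single-precision throughput: cores x clock x 2 (FMA)."""
    return cores * clock_mhz * 1e6 * 2 / 1e12

print(round(peak_tflops(2560, 1152), 1))  # 5.9 (rumored 2560-core GCN4 part)
print(round(peak_tflops(2048, 1400), 1))  # 5.7 (rumored 2048-core Pascal part)
```

These are theoretical peaks only; sustained throughput depends on memory bandwidth and how well the scheduler keeps the cores fed.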
 