If we look at the performance tiers of the GPUs in the current Mac Pro, the configuration used the old mid-range and high-end parts:
D300 - Pitcairn - mid-range.
D500/D700 - Tahiti - high-end.

So we should expect something like:
Polaris 11 - D310
Vega 10 cut down - D510
Full Vega 10 - D710.
 
We won't see anything before early summer at WWDC. Intel is coming out with Broadwell-EP in early summer, and AMD is coming out with Polaris over the summer. At this point Apple might as well wait for them. What's another few months after two years without an update, especially if launching an updated version now meant we would be stuck with current-generation technologies for another two years?



Apple may get first dibs on AMD's new GPU, but don't expect them to have exclusive access. It's too high-profile a part for Apple to get all of them. I don't think this will be too much of a problem, though, since the Mac Pro probably doesn't sell in high enough quantities to require that sort of deal. Best case, the new Mac Pro hits shelves around the same time as the retail version of the graphics card.

If there is any reason for optimism, it's that AMD has been receiving engineering samples of its new Polaris architecture since last summer, with recent rumors pointing towards more production-ready samples in the last couple of months. It shouldn't be too long before we start getting launch-date rumors if they really are that close. It also wouldn't surprise me if we saw Polaris show up in the 15" MacBook Pro first, potentially before summer.

Dr. Su said the second half of '16 for Polaris; WWDC is in June, which dovetails nicely, especially if Apple can get early custom boards. Zen is late '16, so AMD is going to need something to keep investors and creditors happy, and a big splash about AMD being the sole GPU provider for all Macs might just tide the fidgety investors, and Wall Street in general, over until Zen. All IMHO, OFC. AMD has a tough road ahead, and I think they need a big name on their side; Apple would fill that bill nicely.
 
If we look at the performance tiers of the GPUs in the current Mac Pro, the configuration used the old mid-range and high-end parts:
D300 - Pitcairn - mid-range.
D500/D700 - Tahiti - high-end.

So we should expect something like:
Polaris 11 - D310
Vega 10 cut down - D510
Full Vega 10 - D710.

I think it's optimistic in the extreme to expect any new Mac Pro released this year to ship with Polaris or Vega. Apple has always been behind the curve with GPUs and I don't see any reason for this trend not to continue.

I'd expect any future Mac Pro GPUs to be based loosely on Fiji; no doubt Apple will brand them FirePro cards for some reason, despite them not having ECC. AMD still, to my knowledge, hasn't released a FirePro card based on Fiji, and with Polaris looming I doubt they will now.

The main problem, which should have been addressed ages ago, is the now laughable SSD performance of the current model: 40% of the speed of the iMac/MacBook Pro line. That, more than anything else, makes the current offering look substandard and frankly bizarre.

The base 13" MacBook Pro for £1k has an SSD that can trounce the £2.5k+ Mac Pro in storage benchmarks, and that's just embarrassing.
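As a back-of-the-envelope check of that 40% figure, here is a tiny sketch. The throughput numbers are assumed round figures for illustration, not measured benchmarks:

```python
# Assumed, illustrative sequential-read figures (MB/s) -- not measurements.
mac_pro_2013 = 800     # hypothetical 2013 Mac Pro SSD throughput
macbook_pro = 2000     # hypothetical newer MacBook Pro PCIe SSD throughput

ratio = mac_pro_2013 / macbook_pro
print(f"Mac Pro SSD at {ratio:.0%} of the laptop's speed")

# Reading a 50 GB project off disk shows why the gap matters in practice:
size_mb = 50 * 1024
print(f"Mac Pro: {size_mb / mac_pro_2013:.0f} s, MacBook Pro: {size_mb / macbook_pro:.0f} s")
```

With these placeholder numbers the desktop lands at exactly 40% of the laptop's throughput, matching the claim above; real benchmark figures would shift the ratio somewhat.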
 
I think it's optimistic in the extreme to expect any new Mac Pro released this year to ship with Polaris or Vega. Apple has always been behind the curve with GPUs and I don't see any reason for this trend not to continue.

I'd expect any future Mac Pro GPUs to be based loosely on Fiji; no doubt Apple will brand them FirePro cards for some reason, despite them not having ECC. AMD still, to my knowledge, hasn't released a FirePro card based on Fiji, and with Polaris looming I doubt they will now.

The main problem, which should have been addressed ages ago, is the now laughable SSD performance of the current model: 40% of the speed of the iMac/MacBook Pro line. That, more than anything else, makes the current offering look substandard and frankly bizarre.

The base 13" MacBook Pro for £1k has an SSD that can trounce the £2.5k+ Mac Pro in storage benchmarks, and that's just embarrassing.

If the computer were intended to be used with one drive I'd agree, but it's not. I'll take a faster scratch drive and faster remote storage over a faster boot drive any day. Once stuff is loaded on my MP at work, it's a couple of months before I reboot and need to load it again, but I bang on my scratch drive every day.
 
The only place Fiji fits is server cards. Tonga was recently released for that environment thanks to GPU virtualization, and Fiji is, in that respect, an advanced Tonga with hardware schedulers. But it is not, and never will be, a professional graphics chip.

Polaris/Vega GPUs will be in the FirePro WXXXXX range. There is absolutely no sense in Apple going for Fiji if much better GPUs, at a much lower purchase cost, are coming soon. Think about it: Polaris 11 is the R9 470X GPU, priced around $250 or less.
 
I think it's optimistic in the extreme to expect any new Mac Pro released this year to ship with Polaris or Vega. Apple has always been behind the curve with GPUs and I don't see any reason for this trend not to continue.

This is not quite true. The iMac was the only way to get Tonga XT for roughly a year. I understand where this argument is coming from: basically, that Apple selected Tahiti instead of Hawaii for the Mac Pro despite Hawaii being released before the Mac Pro shipped. I think this choice was made for a couple of reasons:

1. Once you limit Hawaii to the same thermal envelope as Tahiti, the performance difference is probably minimal.
2. Tahiti had much better double-precision performance than consumer versions of Hawaii. This gave Apple better compute performance per dollar than if AMD had charged Apple extra for the workstation variants of Hawaii with a different DP ratio.
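To make reason 2 concrete, here is a rough sketch. The single-precision figures and DP:SP ratios are ballpark assumptions (Tahiti-class FirePro parts kept a 1/4 double-precision rate, while consumer Hawaii cards were limited to 1/8), not official specifications:

```python
def dp_tflops(sp_tflops: float, dp_ratio: float) -> float:
    """Peak double-precision throughput from the single-precision peak and the DP:SP rate."""
    return sp_tflops * dp_ratio

# Ballpark peak numbers, for illustration only.
tahiti_dp = dp_tflops(sp_tflops=3.5, dp_ratio=1 / 4)  # D700-class Tahiti
hawaii_dp = dp_tflops(sp_tflops=5.6, dp_ratio=1 / 8)  # consumer Hawaii

# Despite the lower single-precision peak, Tahiti wins on double precision.
print(f"Tahiti: {tahiti_dp:.3f} TFLOPS DP, Hawaii: {hawaii_dp:.3f} TFLOPS DP")
```

Under these assumptions the older chip comes out ahead on compute, which is one plausible reading of why Apple stuck with Tahiti.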

I'd expect any future Mac Pro GPUs to be based loosely on Fiji; no doubt Apple will brand them FirePro cards for some reason, despite them not having ECC. AMD still, to my knowledge, hasn't released a FirePro card based on Fiji, and with Polaris looming I doubt they will now.

Apple won't use Fiji as the high-end option if it is still tied to HBM1. Despite being faster, it would be a tough sell to have the high-end GPU come with only 4 GB of VRAM after the D700 came with 6 GB. Take Fiji off the table and that leaves Apple with Hawaii or Polaris. It's tough to see them using Hawaii at this point, for the above reason, and there would have been no sense in waiting this long to update the Mac Pro.

That leaves us with Polaris, which is going to be AMD's spiritual successor to Tahiti: a compute-focused, mid-sized GPU with big gains in efficiency compared to the previous generation.

The main problem, which should have been addressed ages ago, is the now laughable SSD performance of the current model: 40% of the speed of the iMac/MacBook Pro line. That, more than anything else, makes the current offering look substandard and frankly bizarre.

The base 13" MacBook Pro for £1k has an SSD that can trounce the £2.5k+ Mac Pro in storage benchmarks, and that's just embarrassing.

Agreed. SSDs have gotten much faster since 2013. Why not silently swap in the SSDs they are using in MacBook Pros and iMacs?

So we should expect something like:
Polaris 11 - D310
Vega 10 cut down - D510
Full Vega 10 - D710.

Sounds like a good guess, although I might change it to:

D310: Polaris 10
D510: Polaris 11 cut down
D710: Polaris 11 full

I am not convinced we are getting three GPUs from AMD this year. I predict Polaris 10 is a ~100 mm^2 part and Polaris 11 is ~300-350 mm^2. I'm not sure what Vega is, but I don't think AMD has ever delivered three GPUs in a year, so I don't see this year being any different.

It also wouldn't surprise me to see Tonga as the D310 option. The only reason I didn't list it is that if they want to support DisplayPort 1.3 or some other new tech, they would need Polaris across the entire lineup.
 
We are. Two Polaris cards and one Vega. Polaris 10 is in the same market segment as the R9 250X/GTX 750 Ti.

P.S. Polaris 11 is a ~200 mm^2 die ;).
 
It's not just the supported cards; Apple's drivers are completely outdated. Kepler-based cards benefit enormously, performance-wise, from the latest web drivers, but Apple never bothered to include them in an OS X update.
Last time I checked, e.g. Metro: Redux was completely unplayable on stock Nvidia drivers (even a $3000 2013 iMac would produce a slideshow). Install the Nvidia Web Drivers and BOOM, everything is smooth, even on a $100 GTX 760.
Of course this applies to other games and serious applications as well.

I tried this exact thing the other day on a Kepler card. It benched and performed exactly the same with both drivers.

The Nvidia web drivers are sometimes newer than the OS X drivers, but that's usually because Nvidia misses Apple's code freeze and certification window.

(The Nvidia drivers are certainly moving forward on OS X too. Unless the underwear gnomes were the ones who wrote the Metal Nvidia drivers.)

You could think so, but that's not true. Apple supports the computers they sell (kinda, see above), not any upgrades you install, which might have some basic OS X support. They have lots of GPUs in their drivers that they never sold. Some run fine, others don't; none of them are officially supported. Simple as that.

Anything under the Made for Mac program is supported. Apple has the hardware on hand to test for new issues, and they've certified the hardware/software interaction. When you go through the Made for Mac program, Apple basically takes on the role of making sure your hardware isn't broken by future OS changes.

The device list for the Nvidia drivers hasn't changed because the list of "Made for Mac" Nvidia cards hasn't changed. MacVidCards doesn't submit his cards for certification, yet he wants Apple to take on the support burden of handling bugs against his cards. Apple has no idea what he's shipping or what his ROM changes are, and it's very likely his ROMs void the manufacturer's warranty anyway. So he wants Apple to support what is, to them, an unknown quantity.

Apple isn't blocking anyone from supporting non-Made for Mac cards; they're just not going to do it themselves. But they're perfectly happy to sign Nvidia's drivers if Nvidia wants to take on that support burden.

If MacVidCards wants to get drivers for his stuff into the OS, he needs to go through the MfM program. But that would probably mean working something out with the vendors of the hardware he's buying, and with Nvidia, since he's reselling other people's work and he's not the sole manufacturer of his product.

Apple has introduced (buggy) drivers for GCN 1.1 cards in Yosemite (HD 7790, R9 290X, R9 390X, all cards they never sold). Go ahead and file a support ticket because your card is crashing/artefacting.

You could, actually. Once Apple ships the driver, it's their job to handle bugs in it, or at least to pressure AMD into fixing them. But it would be a legitimate bug report.

And they actually ship certified hardware that runs on the GCN 1.1 architecture. As far as I know, there is no Apple or MfM Maxwell card.

Tell them that they're supposed to fix your issues because the presence of the drivers clearly shows that they want to support them. I would love to see their response. :D

You'll probably get a better response than if you filed a bug with a GTX 970.
 
This is not quite true. The iMac was the only way to get Tonga XT for roughly a year. I understand where this argument is coming from: basically, that Apple selected Tahiti instead of Hawaii for the Mac Pro despite Hawaii being released before the Mac Pro shipped.

I was speaking more of a broader view of Macs and their GPU offerings compared to currently available technology.

Apple is a massive Intel partner and has often released hardware with newly announced, or sometimes even unannounced, chips (CPUs, Thunderbolt, etc.). It's not unreasonable to assume they have similar connections with AMD and Nvidia (although I can imagine Nvidia isn't too happy about not being in a single currently shipping Apple product).

  • Mac Pro Late 2013 launched with non-reference AMD 7970s, a card from 2011.
  • Mac Pro Mid 2010 launched with the AMD (ATI) 5770 and 5870, two cards released in 2009.
  • Mac Pro Early 2009 launched with the Nvidia GT 120 (2009!) and the ATI 4870 (mid 2008).
  • Mac Pro Early 2008 launched with the ATI 2600 XT (2007), Nvidia 8800GT (2007) or the Quadro FX 5600 (2007).
Etc., etc. The point being, Apple was almost always consistently a year behind current GPU technology. It's great to imagine this changing, and I'd love a product based on a just-announced HBM2 chip, but given Apple's history in this arena, I find it unlikely.
 
The Nvidia web drivers are sometimes newer than the OS X drivers, but that's usually because Nvidia misses Apple's code freeze and certification window.


This is flat-out wrong, and I imagine you know that.

10.11.3 Web Driver

Bundle ID: com.nvidia.web.GeForceWeb
Get Info String: GeForceWeb 10.9.14 346.03.05f01

The one Apple includes in 10.11.3 TODAY

Bundle ID: com.apple.GeForce
Get Info String: GeForce 10.8.14 310.42.15f01

The Web Driver for 10.8.5 313.01.04f06

Current Shipping PC Driver

361.75

Are you implying that Nvidia has voluntarily kept the OS version generations older for an unknown reason?

I don't find your post credible in the least. Apple makes billions; they don't need a team of people spreading misinformed apologies. The facts stand for themselves.
 
I am waiting, MVC, for your answer: what hardware architectural features of Nvidia hardware are cutting edge? Pinpoint all of them.
 
I was speaking more of a broader view of Macs and their GPU offerings compared to currently available technology.

Apple is a massive Intel partner and has often released hardware with newly announced, or sometimes even unannounced, chips (CPUs, Thunderbolt, etc.). It's not unreasonable to assume they have similar connections with AMD and Nvidia (although I can imagine Nvidia isn't too happy about not being in a single currently shipping Apple product).

  • Mac Pro Late 2013 launched with non-reference AMD 7970s, a card from 2011.
  • Mac Pro Mid 2010 launched with the AMD (ATI) 5770 and 5870, two cards released in 2009.
  • Mac Pro Early 2009 launched with the Nvidia GT 120 (2009!) and the ATI 4870 (mid 2008).
  • Mac Pro Early 2008 launched with the ATI 2600 XT (2007), Nvidia 8800GT (2007) or the Quadro FX 5600 (2007).
Etc., etc. The point being, Apple was almost always consistently a year behind current GPU technology. It's great to imagine this changing, and I'd love a product based on a just-announced HBM2 chip, but given Apple's history in this arena, I find it unlikely.

You certainly make a valid point. I would say the difference between the new Mac Pro and the classic Mac Pro is that Apple has switched from tying itself to Intel's product cycle to AMD's, which makes sense since the Mac Pro moved from a 2 CPU + 1 GPU configuration to 1 CPU + 2 GPUs. That could be why Apple skipped Intel's Haswell. Given that AMD is not exactly doing well financially, I'm sure they are very eager to partner with Apple and build its custom form-factor GPUs.

So it wouldn't surprise me to see Apple launch a Mac Pro based on brand-new AMD GPUs. Something similar happened with the release of the 2014 Retina iMac, which came with a custom form-factor Tonga chip that had not been released yet (although the cut-down chip shipped a month earlier). An important difference, however, is that Tonga was a new midrange chip on a mature process, unlike Polaris, which will be a high-end chip on a new node. Thus Polaris will see more demand and be more supply-constrained at launch.
 
I am waiting, MVC, for your answer: what hardware architectural features of Nvidia hardware are cutting edge? Pinpoint all of them.

Apple makes thin, minimalist machines.

AMD makes hot-running, inefficient GPUs.

Nvidia has been the uncontested perf-per-watt king.

Using hot-running GPUs in these skinny machines results in things like the 100C "Hand Warmer Pro" iMac, and in the new nMP GPU recall over burned-out Tahiti cards.

There is no rational reason to use the inferior AMD chips in Apple's lines. We all know that; the "miracles of resynched compute" magic show doesn't change it.

Thanks for keeping the thread current with all the AMD press releases; good entertainment. I'm sure they've got cold fusion coming up soon. They'll need it.
 
This is flat-out wrong, and I imagine you know that.

10.11.3 Web Driver

Bundle ID: com.nvidia.web.GeForceWeb
Get Info String: GeForceWeb 10.9.14 346.03.05f01

The one Apple includes in 10.11.3 TODAY

Bundle ID: com.apple.GeForce
Get Info String: GeForce 10.8.14 310.42.15f01

The Web Driver for 10.8.5 313.01.04f06

Current Shipping PC Driver

361.75

Are you implying that Nvidia has voluntarily kept the OS version generations older for an unknown reason?

I don't find your post credible in the least. Apple makes billions; they don't need a team of people spreading misinformed apologies. The facts stand for themselves.

Like I said, if the Nvidia driver hasn't moved, how does Metal run on Nvidia hardware?

This ought to be some good mental gymnastics.

Or maybe we can start with the 10.11.2 update, which contains a bunch of changes to the Nvidia drivers.

Or, if the driver hasn't changed since 10.8.5, how did they add support for OpenGL 4.1 in 10.9? That would be good to address too.
 
Apple makes thin, minimalist machines.

AMD makes hot-running, inefficient GPUs.

Nvidia has been the uncontested perf-per-watt king.

Using hot-running GPUs in these skinny machines results in things like the 100C "Hand Warmer Pro" iMac, and in the new nMP GPU recall over burned-out Tahiti cards.

There is no rational reason to use the inferior AMD chips in Apple's lines. We all know that; the "miracles of resynched compute" magic show doesn't change it.

Thanks for keeping the thread current with all the AMD press releases; good entertainment. I'm sure they've got cold fusion coming up soon. They'll need it.
Right. So I asked you for architectural features at the hardware level, and you posted this as your answer. I think there is nothing to add at this point. Do you know what those features are, or do you have no clue what is inside Nvidia hardware, yet you trash everything Apple decides and AMD as a GPU company? Do you know anything about these architectures and how they work? Because so far it looks like someone caught you with your pants down.

I can provide a few links showing that AMD GPUs are not inefficient, but it would not change anything for you.

P.S. You don't know whether the GPU problems in the Mac Pro are because of overheating (not likely), yet you post it as fact. I'm guessing it suits your agenda.
P.S.2 We were not talking about temperatures on GPUs, but about architectures and the software that exposes them.

One last time: pinpoint the hardware, architectural features that make Nvidia GPUs cutting edge. Because everything that has surfaced on forums in the last few days shows that Nvidia's architecture, at the hardware level, is a few years behind AMD's. And that is said by GAME DEVELOPERS.
 
The recall on the recent Mac Pros is likely just the result of a poorly quality-controlled batch of product getting through QA somehow. Only a limited number of machines are affected, although the chips they're based on aren't exactly famed for their efficiency either.

The move to HBM makes a new AMD-based product much more appealing. I agree with your idea (Stacc) that the Mac Pro is now a machine ultimately designed around the GPU, not the CPU. AMD is in a tough spot financially; perhaps that gives Apple the leverage it needs to get these designs early, implement them in its own non-reference cards for the new Mac Pro form factor, and test them. AMD would naturally have had prototypes up and running far in advance of a press release announcing its roadmap, but how far back they'd have shared this information with Apple is anyone's guess.

Fingers crossed at WWDC there'll be some news. But then we've all been here before ;)
 
One last time: pinpoint the hardware, architectural features that make Nvidia GPUs cutting edge. Because everything that has surfaced on forums in the last few days shows that Nvidia's architecture, at the hardware level, is a few years behind AMD's. And that is said by GAME DEVELOPERS.

nVidia hardware is so far behind AMD that it jumps ahead of it in all the gaming benchmarks?
 
Yes, the fantastic AMD claims are REALLY stretching now.

"AMD makes the best hardware, but nobody notices."

"Everyone is running the tests wrong."

The scorching hot iMacs are...are...I guess they don't have anyone to blame for those. Make good hand warmers, though.

"Who do you believe, AMD or your lying eyes?"
 
nVidia hardware is so far behind AMD that it jumps ahead of it in all the gaming benchmarks?
Show me the benchmarks. So far the only benchmarks where Nvidia jumps ahead are GameWorks titles, and the high end, but that is changing. Currently the Fury X ties the GTX 980 Ti in TechPowerUp's review suite at 1440p, and is faster than the REFERENCE GTX 980 Ti in 11 of 15 games at 4K resolution. All other brackets are dominated by AMD, with better pricing as well.

https://forums.macrumors.com/threads/2016-nmp.1952250/page-8#post-22545804 Read it; the whole thread is extremely interesting. Also check the Beyond3D and Guru3D forums, in the threads about asynchronous compute, DX12, etc.

It has all been written already.

MVC does not know the answer to my questions, so I will answer them. At the hardware level, Nvidia GPUs have the GigaThread Engine, CUDA cores, cache, VRAM, and that's pretty much it. All of the "magic" and "efficiency" of Nvidia hardware comes from drivers and software. There is absolutely no hardware scheduling; Nvidia got rid of it, because the last time their hardware had it (Fermi), it became hot, inefficient, and unreliable. It also made everyone reliant on Nvidia's driver optimization, so they could control the life of their GPUs.

The GigaThread Engine feeds the cores, but without drivers it has absolutely no clue what to do with an application; the drivers do all the scheduling work. Now let's think of a world with low-level access to the hardware. What happens when your hardware must manage itself? Its performance tanks. What happens when you have to manage it asynchronously and your hardware has nothing useful on that side? Performance tanks. Every current benchmark of a DirectX 12 game shows that is the case on the Nvidia side. And we have to remember that Ashes does not have compute done in the engine yet; it will in the future. What will happen for Nvidia? Performance will tank. If any of you think Pascal will change this: no chance. It is basically Maxwell with FP64 and deep learning.

AMD went the other way. They have hardware schedulers (only Fiji and Tonga so far) and Asynchronous Compute Engines; the hardware adapts itself to the task within the borders of the engine. That is the whole point of low-level access. All you have in the drivers is the device name, the API driver, the system drivers, and that's it. No per-application optimization, nothing; all of it is done at the application level by developers. That is one of the reasons why Apple went with AMD: they will not need to fiddle with drivers for each application. They will provide an API, and that's it; developers get to do everything from the ground up. To open this up, AMD recently launched an initiative called GPUOpen.

If any of you think this is the wrong way, look at the similar relationships between Vulkan, SPIR-V, and OpenCL, and between Metal, OpenCL, OpenGL, and Swift. What is even funnier here is that Vulkan will include OpenCL in it.

Polaris will have a modified core, with a scheduler that can handle both DirectX 12 and 11. Here is an example of it: http://forums.anandtech.com/showpost.php?p=38011479&postcount=171
(All of this means that the core counts of AMD GPUs will not exactly reflect the past: a lower core count will bring higher performance than it did before.)
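The scheduling argument above can be sketched with a toy model. This is not a GPU simulator; the pass timings and the idle fraction are invented numbers, purely to illustrate why letting a hardware scheduler fill idle shader-core bubbles with an independent compute queue beats running the same work serially:

```python
# Toy model of asynchronous compute. All numbers are invented for illustration.
graphics_ms = [4.0, 3.0, 5.0]  # per-pass graphics work on the queue (assumed)
compute_ms = [2.0, 2.5, 1.5]   # independent compute jobs (assumed)
idle_fraction = 0.4            # assumed share of graphics time the cores sit idle

# Without async compute: graphics first, then compute, strictly serialized.
serial_total = sum(graphics_ms) + sum(compute_ms)

# With async compute: the hardware scheduler slots compute work into the idle
# bubbles of the graphics workload; only the overflow still serializes.
overlap_budget = sum(graphics_ms) * idle_fraction
overflow = max(0.0, sum(compute_ms) - overlap_budget)
async_total = sum(graphics_ms) + overflow

print(f"serial: {serial_total:.1f} ms, async: {async_total:.1f} ms")
```

With these made-up numbers the overlapped schedule finishes sooner; the benefit shrinks to zero as the idle fraction approaches zero, which is also why hardware without idle-bubble scheduling sees little gain from the same API feature.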

Again, I encourage everyone to read this thread and pay attention to the posts from Zlatan, who is a game developer, and from Mahigan. Also check the Beyond3D and Guru3D forums and the threads about DX12, scheduling, asynchronous compute, etc. Eye-opening. Link: http://forums.anandtech.com/showthread.php?t=2462951&page=3
 
MVC does not know the answer to my questions, so I will answer them.

I'm still waiting for him to get to my driver question. But you know what they say about leading a horse to water... I had a good chuckle at his comment about my credibility, though.
 
I'm still waiting for him to get to my driver question. But you know what they say about leading a horse to water... I had a good chuckle at his comment about my credibility, though.
...but where are these wonderful ATI graphics chips?

You can lead a horse to water, but if the chips are "sometime next year" he can't drink.

4 April 2016 - Save the date.
 