
Leaked details point to a June release for AMD's new Fiji XT GPU.

Lots of gfx stuff here is NVIDIA-related, but I'd like to know what we know about how Apple updates its AMD drivers.

What are my chances of being able to use a Fiji chip in my cMP soon after release? Driver packages are unified, but I don't know if Apple routinely updates its AMD drivers even when it isn't using the new hardware (yet).

I'm guessing I need at least one driver refresh after Fiji launches to make it work properly. If the most likely scenario is that we'll have to wait another few months, I'd probably rather just install a second R9 280X card now and use that setup until the new cards are confirmed working.
 
Lots of gfx stuff here is NVIDIA-related, but I'd like to know what we know about how Apple updates its AMD drivers.

And that would be because unlike AMD, NVIDIA makes web drivers available for the Mac. I would suppose that a lot of this has to do with the fact that CUDA has a strong following in professional creative apps.

What are my chances of being able to use a Fiji chip in my cMP soon after release? Driver packages are unified, but I don't know if Apple routinely updates its AMD drivers even when it isn't using the new hardware (yet).

You kinda answered your own question. AMD pawns all Mac driver development off on Apple, and for that reason it's really hard to say whether support for Fiji XT-based cards will be there or not. If the chip requires updated drivers, we likely won't see them until Apple decides to ship a product with a similar GPU.
 
And that would be because unlike AMD, NVIDIA makes web drivers available for the Mac. I would suppose that a lot of this has to do with the fact that CUDA has a strong following in professional creative apps.

I'm not sure it's that easy. CUDA made the breakthrough in GPU rendering, but OpenCL has caught up in many cases. And on the Apple platform I'd argue that OpenCL should be the first choice, if your apps of choice support it.

Apple apps like FCPX, Motion and Compressor are OpenCL-based, and DaVinci Resolve is at least as good in OpenCL as in CUDA. Many of the high-end 3D renderers like Maxwell and Arnold are CPU-based (Octane being a CUDA-based exception).
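(As an aside: if you want to check which OpenCL devices your Mac actually exposes before betting on either API, here's a minimal sketch. It assumes the third-party pyopencl package, which is not part of any of the apps above.)

```python
# List the OpenCL platforms and devices OS X exposes, e.g. to confirm
# a newly installed card is visible to OpenCL-based apps.
# Assumes: pip install pyopencl (third-party package).
import pyopencl as cl

for platform in cl.get_platforms():
    print(platform.name, platform.version)
    for device in platform.get_devices():
        print("  %s: %d compute units, %d MB global memory"
              % (device.name, device.max_compute_units,
                 device.global_mem_size // (1024 * 1024)))
```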

Since the 280X can be installed out of the box with no driver issues, that kind of puts it on par with (I would say ahead of) the 980, in light of all the driver issues the 980 has after every Apple update.

I don't really use the Adobe line of products (apart from Lightroom) more than I have to, so I don't know how OpenCL fares compared to CUDA there. I know they supported CUDA pretty early.

I don't do gaming so I exclude that aspect, but looking at Barefeats' tests, an old 290X pretty much ties with a new 980 GTX. The NVIDIA card has the upper hand in energy efficiency, of course.

Anyway, not bashing the NVIDIA cards. They are fine cards. I just feel that as Apple users it's in our interest to push OpenCL. I also think some people default to CUDA partly uninformed.

You kinda answered your own question. If the chip requires updated drivers, we likely won't see them until Apple decides to ship a product with a similar GPU.

I wanted to ask people who have been following gfx development on the Mac longer than I have whether Apple updates its AMD drivers semi-regularly even when it doesn't use the new hardware. Sometimes there are graphics optimisations in the updates, but those might be strictly for the installed hardware.

Oh, well... instead of lingering I might as well just put another 280X in there, or maybe a used 290X, and make a new decision once Fiji is confirmed up and running.
 
Since the 280X can be installed out of the box with no driver issues, that kind of puts it on par with (I would say ahead of) the 980, in light of all the driver issues the 980 has after every Apple update.

Anyway, not bashing the NVIDIA cards. They are fine cards. I just feel that as Apple users it's in our interest to push OpenCL. I also think some people default to CUDA partly uninformed.

There's nothing uninformed about choosing a CUDA card as a creative pro, if there are clear benefits to having CUDA. Both DaVinci Resolve and the Adobe suite have superior performance and feature sets when using CUDA over OpenCL. Their developers even confirm this. The decision between AMD and NVIDIA should rest on the card's intended usage. If you don't work in apps that clearly benefit from CUDA, you don't necessarily have to buy an NVIDIA card. OpenCL support is getting better and better, yes, and eventually it may actually reach parity with CUDA in apps that support both. But that isn't the case quite yet.

The fact remains that new NVIDIA cards have the highest chances of actually working in our old cMPs because NVIDIA continues to distribute web drivers on its own. With AMD, you're at the mercy of Apple adding an appropriate driver with an OS update. And that usually only happens when they have a reason to (e.g. they ship a Mac with a new AMD GPU).
 
I'm not looking to create an argument for argument's sake. I just want to clear up a few points below.

There's nothing uninformed about choosing a CUDA card as a creative pro, if there are clear benefits to having CUDA.

Yes, this is kind of stating the obvious, and I agree. What I said was "some" users and "partly" uninformed. I meant that I get the feeling that most rush for CUDA even before checking whether that design has the best support for the apps they're using.

Both DaVinci Resolve and the Adobe suite have superior performance and feature sets when using CUDA over OpenCL. Their developers even confirm this.

I'm sure that was true at some point, but things change. A comprehensive test at Liftgammagain.com led to the following chart, showing a 280X matching a 780 GTX overall:

[Image: chart comparing the 280X and the 780 GTX]


I'm not interested in benchmarks, only "real life performance". The 780 GTX is somewhat stronger on paper AND uses CUDA, but against the 280X it wins some and loses some. The above chart is enough for me to say that CUDA isn't inherently better.

The fact remains that new NVIDIA cards have the highest chances of actually working in our old cMPs because NVIDIA continues to distribute web drivers on its own.

With NVIDIA there is the chance of it working, if you find the driver (and keep it updated). With AMD it already works. The drivers are already included. I didn't have to do anything, before or after, installing my 280X (essentially a D700 with 3GB).
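(For anyone who wants to verify the same on their machine, here's a quick sketch that just wraps the stock system_profiler tool; nothing here is specific to my card.)

```python
# Print the GPU(s) OS X recognised, via the built-in system_profiler
# CLI. Useful for confirming a freshly installed card was picked up
# by the bundled drivers, without installing anything extra.
import subprocess

report = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
)
print(report.stdout)
```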

I think we largely agree on many points, but above you see some of the things I see differently. At the very least, I think the choice of GPU should be carefully considered.
 

It's no real secret that AMD has trouble keeping up with NVIDIA. While NVIDIA keeps winning awards for the speed and quality of its hardware, AMD is stuck in a rut doing some hocus pocus with GPU rebranding.
 
It's no real secret that AMD has trouble keeping up with NVIDIA. While NVIDIA keeps winning awards for the speed and quality of its hardware, AMD is stuck in a rut doing some hocus pocus with GPU rebranding.

Maybe they will be rebrands, but made on a 20 nm process.
http://forums.anandtech.com/showthread.php?t=2429277

Also, about Fiji: nobody in the industry has even preproduction mules so far, apart from the guys at ChipHell. Two possibilities:
1) The drivers are not mature enough, but when they are, it will be faster than the fastest version of the full GM200.
2) AMD wants to spring a gigantic surprise on JHH (the CEO of Nvidia), and Fiji, or AMD's biggest single GPU, will be much faster than GM200 regardless of whether the drivers are mature or not.
 
Lots of gfx stuff here is NVIDIA-related

Back in the day, it used to be the opposite. AMD cards were more easily flashed with Mac EFI (Nvidia cards required physical EEPROM replacement), and so there was a LOT more AMD discussion back then.

But Apple changed to self-init, which made flashing somewhat unnecessary. This enabled anyone to use any PC card with OS X driver support. Theoretically this should have just leveled the playing field a bit, but then Asgorath wrote the very wonderful but also completely Nvidia-focused PC card FAQ. I'm certain that FAQ guided many, many people to get Nvidia cards. I don't know why nobody has done the same by creating an AMD FAQ.

I am brand agnostic myself and will get whatever makes sense at the time. Right now Nvidia makes sense. Maxwell was a real improvement in speed and power efficiency: the top of the GTX line, the 980, uses 2x 6-pin power connectors, which is undeniably convenient for MP owners.

Hopefully Fiji XT is truly great. Competition is incredibly important for innovation, quality, and pricing, so it is important for everyone, even Nvidia users, that AMD does well.
 
I'm sure that was true at some point, but things change. A comprehensive test at Liftgammagain.com led to the following chart, showing a 280X matching a 780 GTX overall.
To prove the current situation on GPUs you use a chart from 2013?

That was just a few months after the 780 even started working in OS X; drivers were not mature at all. And it is well documented that the 980/Titan X are superior OpenCL machines compared to the 780 and its ilk from nearly 2 years ago.

I think we are all curious what the Fiji cards have to offer, but for a long time now there has been nothing even mildly competitive from the Red camp.

And BTW, there are upcoming drivers in OS X for AMD. You can see for yourself in any install of 10.10.3. Hence the R9 290 support, but there are more listed in there.

From the 8000 Controller kext:

0x45001002 0x46001002 0x66401002 0x66411002 0x66461002 0x66471002 0x66501002 0x66511002 0x665C1002 0x665D1002 0x67B01002

And from the 9000 Controller kext:

0x69201002 0x69211002 0x69301002 0x69381002 0x69391002 0x73001002

"1002" is AMD's vendor id, the other numbers are device ids.

Look them up here:

https://pci-ids.ucw.cz/read/PC/1002
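If you'd rather decode the IDs yourself, here's a minimal sketch; the kext path is my assumption based on a stock 10.10.3 install, so adjust it to whatever your system actually has:

```python
# Decode the PCI IDs an AMD controller kext claims to drive.
# Each IOPCIMatch entry is a 32-bit value: device ID in the high
# word, vendor ID (0x1002 for AMD) in the low word.
import plistlib

# Assumed location on a stock OS X 10.10.3 install; adjust as needed.
KEXT_PLIST = ("/System/Library/Extensions/AMD9000Controller.kext"
              "/Contents/Info.plist")

with open(KEXT_PLIST, "rb") as f:
    info = plistlib.load(f)

for name, personality in info.get("IOKitPersonalities", {}).items():
    match = personality.get("IOPCIMatch")
    if not match:
        continue
    print(name)
    for entry in match.split():
        value = int(entry, 16)
        device_id, vendor_id = value >> 16, value & 0xFFFF
        print("  device 0x%04x  vendor 0x%04x" % (device_id, vendor_id))
```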
 
To prove the current situation on GPUs you use a chart from 2013?

That was just a few months after the 780 even started working in OS X; drivers were not mature at all. And it is well documented that the 980/Titan X are superior OpenCL machines compared to the 780 and its ilk from nearly 2 years ago.

I don't intentionally use old tests. I use the best I can find where serious effort has gone into testing.

And just to be sure: the 780 in that test operates in CUDA mode, not OpenCL.

For modern-day (Jan/Feb 2015) OpenCL in Resolve, then, let's look at Barefeats.com:

A 7970 has about 50% better OpenCL performance (LuxMark) than a 780 GTX, whereas it gets beaten by the 980 GTX by 13%.

In Resolve, with CUDA for NVIDIA and OpenCL for AMD, this translates to:

7970 beating the 780 by 13-20%
980 beating the 7970 by 16-40%

If we look at the 290X (from fall 2013), it beats the 980 by 7% in OpenCL, but in Resolve, when the 980 runs CUDA against the 290X's OpenCL, the 980 is between 4-12% faster.

Be advised, though, that those figures translate to anywhere from a sub-frame to a 2 fps increase for the 980.
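(To put those percentages into frames per second, a quick back-of-the-envelope check; the 24 fps baseline is purely hypothetical, not a number from the tests above:)

```python
# What a 4-12% advantage means in fps at a hypothetical 24 fps
# Resolve playback baseline.
baseline_fps = 24.0
for advantage in (0.04, 0.12):
    print("+%d%% -> +%.1f fps" % (advantage * 100, baseline_fps * advantage))
```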

I think we are all curious what the Fiji cards have to offer, but for a long time now there has been nothing even mildly competitive from the Red camp.

I let my numbers above speak for themselves. The 980 is pretty new and will compete against Fiji XT; it's just sooner out of the gate. Prior to that, for the last 1.5 years or so, it seems AMD was faster (e.g. 780 vs 7970/280X)?

And BTW, there are upcoming drivers in OS X for AMD. You can see for yourself in any install of 10.10.3. Hence the R9 290 support, but there are more listed in there.

Perfect, thanks! This is the info I wanted to find. So it seems there might be support for Fiji even before Apple ever uses it. We'll see.

The numbers I post are what I can find. I welcome any similar tests that show superior NVIDIA performance in "professional" apps. Perhaps some After Effects benchmarks? Just no games or synthetic benchmarks, please.

Many of you seem convinced of NVIDIA's superiority—please show me your use cases and let me know what I'm missing. Thanks!
 
Please take a moment and read the small print at Barefeats.com.

He was testing a 290X that we modded. It ran at PCIE 2.0.

As far as I know there are 3 people on the planet with such a card.

So no other 290s are going to test that well.

And there are no boot screens on 290X.

And even if we introduce such a card, there are no frame buffers assigned to it; it will always show up as "AMD Radeon 8xxx" and may not play DRM material, etc. We can do the PCIE 2.0 mod now if requested.

And on another "Pro Nvidia" note, while our GTX970/980/Titan-X can do boot screens on a 4K SST display, there are no AMD cards for OS X that can. And this includes the nMP and its spate of renamed desktop cards.

So, if you want a modern GPU that can show boot screens on a standard 4K display for Mac, your ONLY choice is Nvidia.
 

Those are fair remarks that are good to keep in mind. Of course, it's also fair to mention some of the issues that the on-again, off-again support of the NVIDIA drivers has caused, including recently.

I use an off-the-shelf 280X and don't get boot screens (and don't care about that), but it has been 100% hassle free through all updates. I guess the idea of my card was price/performance; modding from a 3rd party would defeat that in my case.

These are practical trade-offs that each user has to make, I guess. Anyway, these options are good, and I'll be looking out for Fiji personally.
 
If you have an off-the-shelf 280X, you should pop the PCIE limit resistor off.

You too are stuck at PCIE 1.0 until you do so. In unpublished tests, Rob found it could make a pretty big difference in certain apps, not so much in others. But it's easy to do if you have good eyes and steady hands.
 

Yes... I have looked at it... I haven't felt sure that the card is actually hampered in real usage, though.

Too bad Rob has the data but decided not to publish it. I get that only a few would actually care, but since he has the data it would be nice to see it (in a footnote or similar). The lack of data so far has kept me from doing the mod. I would do it if I had a solid reason; aside from that, I always seek the road of least resistance when it comes to these things.
 