As usual, you don't know what you are talking about. Hope you'll admit it this time when we have Pascal running on Macs before Polaris.

Why not just post about stuff you actually know?

I'm sure you will. And the GTX 1080 performance will likely be about the same as Kepler, just as with Maxwell. SCSC's argument is pertinent.
 
I'm sure you will. And the GTX 1080 performance will likely be about the same as Kepler, just as with Maxwell. SCSC's argument is pertinent.

The clock speed is quite high, though, so it could be an improvement over Maxwell, but Pascal has an even newer, 4th-gen version of the delta color compression algorithms, which won't be supported by the web driver (stuck at 1st gen).

There are other new features that also won't be available in the current state of the web drivers, such as PureVideo H.265 decode, NVENC H.265 encode, GPU Boost 3.0, and Simultaneous Multi-Projection.

And then we still have OS X with an old OpenGL and a Metal API that is still not very useful. Apple might hold back on Vulkan support in favour of Metal.
 
Nvidia will be forced to lower their prices the minute AMD produces something competitive.

It's been a few years but you never know.

waiting-for-godot.jpg
 
It's funny reading complaints about Nvidia drivers.

Tried an R9 390X in a Mac Pro recently? It worked OK in 10.10, but in 10.11 the speed-step control is broken and your display ends up looking like an old TV with the vertical hold going out. And it runs at about 65-80% of 980 Ti OpenGL speed. A single DP port, so no 5K possible.

And then there's Fiji, out a year ago. About 2/3 of the driver files are there, so you can get a single display to work, but it's just a CPU-powered frame buffer: no 3D or OpenCL. Maybe someday?

So why are we complaining about the fact that Nvidia has kept current shipping cards working in OS X? Apple hasn't made it easy, but they keep trying. AMD is too busy writing "overclocker's dream" press releases and launching their paper tigers to bother with OS X drivers. At least they keep a few PR shills busy.
 
You just posted the pre-optimisation results, despite the dozens of very obvious posts on this forum about the August 2015 optimisation for Kepler.

These were the results from the optimised driver last August.

http://barefeats.com/gtx980ti.html

Kepler's OpenGL performance is now in the same ballpark as Maxwell's, despite the lower clock speed.

Compute is another thing. The 680 has fewer cores for that compared to the 980/Ti.

But that does mean the 970 could look quite bad compared to the 680 in OS X. That's the opposite of Maxwell's true performance on Windows.
 
Okay... How about we test things on the latest drivers available today running on the latest version of OS X? Again, this is OS X El Capitan 10.11.5 (15F34) with Nvidia web driver version 346.03.10f01.

Heaven: GTX 980 Extreme preset and 3200x1800 (the resolution I use regularly).

Heaven - GTX980 Extreme.png Heaven - GTX980 4K.png

Heaven: GTX 680 Extreme preset and 3200x1800 (the resolution I use regularly).

Heaven - GTX680 Extreme.png Heaven - GTX680 4K.png

Here, we see a 28% performance increase in the Extreme preset and a 39.5% increase at 3200x1800.


Valley: GTX 980 Extreme preset and 3200x1800 (the resolution I use regularly).

Valley - GTX980 Extreme.png Valley - GTX980 4K.png

Valley: GTX 680 Extreme preset and 3200x1800 (the resolution I use regularly).

Valley - GTX680 Extreme.png Valley - GTX680 4K.png

Here, we see a performance increase of just under 8% in the Extreme preset and a 30% increase at 3200x1800.

Everything was exactly the same for all tests. The only thing I did was swap out cards. No other apps were running during any of the tests. All were first runs. I didn't bother running the tests multiple times because I felt I had wasted enough time swapping video cards.
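
For anyone who wants to sanity-check those percentages: they are just (new - old) / old, computed from the FPS readings in the screenshots. A minimal sketch, with placeholder FPS values since the real figures are in the attached images:

```python
def pct_gain(old_fps: float, new_fps: float) -> float:
    """Relative improvement of new_fps over old_fps, as a percentage."""
    return (new_fps - old_fps) / old_fps * 100.0

# Placeholder numbers for illustration only -- the actual FPS values
# are in the attached Heaven/Valley screenshots above.
gtx680_fps, gtx980_fps = 12.0, 16.0
print(f"GTX 980 vs GTX 680: {pct_gain(gtx680_fps, gtx980_fps):.1f}% faster")
```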
 
You just posted the pre-optimisation results, despite the dozens of very obvious posts on this forum about the August 2015 optimisation for Kepler.

These were the results from the optimised driver last August.

http://barefeats.com/gtx980ti.html

Kepler's OpenGL performance is now in the same ballpark as Maxwell's, despite the lower clock speed.

Compute is another thing. The 680 has fewer cores for that compared to the 980/Ti.

But that does mean the 970 could look quite bad compared to the 680 in OS X. That's the opposite of Maxwell's true performance on Windows.

Do you simply not understand what it means to be CPU limited in a game? Or what a bottleneck is with respect to performance? You keep claiming that NVIDIA hasn't enabled the Maxwell (or Pascal) color compression; where's your evidence or proof that this is the case? Do you work for NVIDIA and have inside knowledge or something?
The clock speed is quite high, though, so it could be an improvement over Maxwell, but Pascal has an even newer, 4th-gen version of the delta color compression algorithms, which won't be supported by the web driver (stuck at 1st gen).

There are other new features that also won't be available in the current state of the web drivers, such as PureVideo H.265 decode, NVENC H.265 encode, GPU Boost 3.0, and Simultaneous Multi-Projection.

And then we still have OS X with an old OpenGL and a Metal API that is still not very useful. Apple might hold back on Vulkan support in favour of Metal.

Again, please provide evidence to support your claim that the Maxwell color compression isn't supported. Same with GPU Boost. Why do you think this isn't enabled? I would've thought that would be controlled by the VBIOS directly, for example.

Are you really blaming NVIDIA for the lack of h.265 encode/decode and Simultaneous Multi-Projection? Or async compute? NVIDIA does not control the OpenGL, OpenCL or Metal frameworks. They are limited to implementing whatever Apple exposes.
 
Looking at the last two sets, where you really pushed things to the max, those results are poor compared to the delta between the 680 and the 980 on Windows. An average 5 FPS difference? The Valley benchmark at 1600x900 was almost identical, with the same min and max FPS, within the margin of error. Why would anyone spend an extra 400 bucks for that?

In terms of average frames per second, you should be seeing a difference across the board like the chart below (on Windows, the gap has widened even further than this chart shows, with driver improvements over the last 18 months):
 

[Attachment: image.jpeg (comparison chart), 107.7 KB]
That doesn't address the rest of the results. 28%, 39.5%, and 30%.

6.4FPS increase is huge when you are only doing 15FPS. If you had $15 and gave away $6.40, you're giving away 42% of your money!!!

Look at the Heaven scores. 11.8 vs 19.5. 7.7FPS difference. If you had $11.80 and gave away $7.70, you'd be giving away 65% of your money!!!

Would people pay $400 to go from a slideshow-like, migraine-inducing 11.8FPS to an almost bearable 19.5? I believe they would.

When did this become about OS X vs Windows? OpenGL ≠ DirectX. We are not talking about that. That's an entirely different discussion. I want to know why you keep running around telling everyone that Maxwell cards perform like Kepler cards in OS X, where you got that info, and for evidence of it being true.

Where is the proof that there are no optimizations in the Nvidia web drivers for Maxwell cards?

Where is the evidence of Maxwell running in "compatibility mode"?

Where did you get your information about the color compression?

What ever happened to your claims of the Nvidia web drivers crashing Adobe apps?

How do you know the team working on OS X drivers has been reduced?


You untiringly, continuously feed everyone your personal conjectures as fact, and there are people who actually believe you. Please provide evidence to back up your statements; otherwise, I will just continue to believe they are nothing but acid-fueled hallucinations.
 
Could someone clarify what possible reasons there might be for a GTX 1080 not working in a cMP 5,1? Are there any?

I'm heading to the US for a week and want to pre-order a card (huge savings vs UK pricing), but won't have time to wait for anyone to test it first. I will be using it in a second cMP dedicated to Windows 10.

Any reason to be cautious regarding compatibility from a hardware ONLY perspective?
It should work fine in Windows with UEFI, right?
 
I tend to stay at least one generation behind on GFX cards because of the driver-development lag Mac OS X seems to have with newer kit. It seems to take much longer to get satisfactory results out of new hardware. Just my opinion, mind.
The last Hackintosh I built used a GTX 770, because I saw only tiny performance gains from the GTX 970 running Mavericks. Saved about £120 on that build.
I would recommend El Capitan users pick up cheap GTX 970s now, as the Win 10 crowd dumps them secondhand for shiny, new, expensive 1080s.

https://gfxbench.com/compare.jsp?be...ce+GTX+770&D2=NVIDIA+Gigabyte+GeForce+GTX+970

Valley.jpg
 
That doesn't address the rest of the results. 28%, 39.5%, and 30%.

6.4FPS increase is huge when you are only doing 15FPS. If you had $15 and gave away $6.40, you're giving away 42% of your money!!!

Look at the Heaven scores. 11.8 vs 19.5. 7.7FPS difference. If you had $11.80 and gave away $7.70, you'd be giving away 65% of your money!!!

Would people pay $400 to go from a slideshow-like, migraine-inducing 11.8FPS to an almost bearable 19.5? I believe they would.

When did this become about OS X vs Windows? OpenGL ≠ DirectX. We are not talking about that. That's an entirely different discussion. I want to know why you keep running around telling everyone that Maxwell cards perform like Kepler cards in OS X, where you got that info, and for evidence of it being true.

Where is the proof that there are no optimizations in the Nvidia web drivers for Maxwell cards?

Where is the evidence of Maxwell running in "compatibility mode"?

Where did you get your information about the color compression?

What ever happened to your claims of the Nvidia web drivers crashing Adobe apps?

How do you know the team working on OS X drivers has been reduced?


You untiringly, continuously feed everyone your personal conjectures as fact, and there are people who actually believe you. Please provide evidence to back up your statements; otherwise, I will just continue to believe they are nothing but acid-fueled hallucinations.

There are logical, sane people to have discussions with on this board.

Old SCSC isn't one of them.
 
That doesn't address the rest of the results. 28%, 39.5%, and 30%.

6.4FPS increase is huge when you are only doing 15FPS. If you had $15 and gave away $6.40, you're giving away 42% of your money!!!

Look at the Heaven scores. 11.8 vs 19.5. 7.7FPS difference. If you had $11.80 and gave away $7.70, you'd be giving away 65% of your money!!!

Would people pay $400 to go from a slideshow-like, migraine-inducing 11.8FPS to an almost bearable 19.5? I believe they would.

When did this become about OS X vs Windows? OpenGL ≠ DirectX. We are not talking about that. That's an entirely different discussion. I want to know why you keep running around telling everyone that Maxwell cards perform like Kepler cards in OS X, where you got that info, and for evidence of it being true.

Where is the proof that there are no optimizations in the Nvidia web drivers for Maxwell cards?

Where is the evidence of Maxwell running in "compatibility mode"?

Where did you get your information about the color compression?

What ever happened to your claims of the Nvidia web drivers crashing Adobe apps?

How do you know the team working on OS X drivers has been reduced?


You untiringly, continuously feed everyone your personal conjectures as fact, and there are people who actually believe you. Please provide evidence to back up your statements; otherwise, I will just continue to believe they are nothing but acid-fueled hallucinations.
Percentages can be deceiving compared with the real frame rates.

One of your benchmarks was identical between both cards with no significant difference outside margin of error.

And when you use red fonts, exclamation marks and personal attacks, you have lost me completely.

Nvidia themselves said only beta support arrived for Maxwell last August. Nothing since then: no official Maxwell vendors, nothing in the support documents about Maxwell.

Now I can understand why people report driver bugs on the other forums linked below. This forum is often hostile, often unhelpful, and people with questions are often pushed into purchasing expensive upgrades that are not well supported.

https://forums.geforce.com/default/...970-webdriver-346-03-05f01-under-osx-10-11-3/
https://forums.geforce.com/default/...hop-2015-1-2-with-osx-webdriver-346-03-05f01/
https://feedback.photoshop.com/phot...sh-on-a-mask-and-sometimes-with-healing-brush
https://feedback.photoshop.com/phot...p-2015-1-2-artifacts-appears-when-using-brush
https://discussions.apple.com/thread/7437317?start=0&tstart=0
https://forums.adobe.com/thread/2094893
 
Where did I attack you personally??? Is that the excuse you're going to use to avoid answering the questions?

Now, percentages are deceptive??? Is that the best you can do???

I am willing to bet you cold hard cash that I can run those tests a dozen times and the GTX 680 will not beat the GTX 980. This is not margin of error.

Where's the info on color compression???

Where's the info on compatibility mode???

Where's the info on the reduced OS X driver staff???


A few incidents of people crashing their apps is "margin of error". Unless it can be reproduced consistently, the crashes could be caused by a million other things.

Look, it's fine to tell people what you are hypothesizing or guessing, but please don't pass it off as fact!!!
 
He doesn't have any proof for his claims. As I posted before, any Maxwell card will perform perfectly fine when it's sitting in a powerful Hackintosh. The 980 Ti Unigine benchmark I posted (not sure if it was in this thread or another) was awesome, no worse than in Windows.
Personally I wouldn't care if this is delivered by a driver running in "compatibility mode" as long as the performance is fine. :)

Let's not forget that a) many OS X games are badly optimized and b) a Nehalem-based Mac (that's 5 generations back!) just isn't a good gaming machine in 2016.
 
He doesn't have any proof for his claims. As I posted before, any Maxwell card will perform perfectly fine when it's sitting in a powerful Hackintosh. The 980 Ti Unigine benchmark I posted (not sure if it was in this thread or another) was awesome, no worse than in Windows.
Personally I wouldn't care if this is delivered by a driver running in "compatibility mode" as long as the performance is fine. :)

Let's not forget that a) many OS X games are badly optimized and b) a Nehalem-based Mac (that's 5 generations back!) just isn't a good gaming machine in 2016.

I'll put in my 2 cents. I game only in Boot Camp. I tried to see if my old cMP (see my sig) is CPU limited in gaming, and it appears not; at least in Witcher 3, CPU usage does not go over 20% in Task Manager. However, I have to reduce the foliage effects to get okay frame rates, which I think means it's GPU limited. With Moore's law pretty much dead (or modified), I don't think CPU limitation is as important for gaming as it was in the past. The W3680 I have is an old CPU, but it's hardly breaking a sweat in this game! I want to get into 3D modelling as a hobbyist, so the 1080 might be overkill. Are some of you saying that on the Mac side the generation of GPU doesn't really matter that much for rendering, etc.? I find that hard to believe. But you have managed to confuse me quite well...
 
Maxwell performance about the same as Kepler?

Source: http://barefeats.com/gtx980.html

Pertinent argument? How so?

Your own posted data shows it clearly. Kepler actually has higher frame rates than Maxwell in several of those titles, which is just ridiculous. Maxwell's OGL support is clearly just a port of the Kepler driver.

Is it nice that NVIDIA provides any support? Absolutely; it's rather magnanimous, actually. Are things consistently reliable? Sure. There are some compute benefits, power savings, and perks of the better architecture and faster clock rates with the newer cards, but if you think you're getting the most out of an NVIDIA GPU under OS X these days, you're kidding yourself.
 
I'll put in my 2 cents. I game only in Boot Camp. I tried to see if my old cMP (see my sig) is CPU limited in gaming, and it appears not; at least in Witcher 3, CPU usage does not go over 20% in Task Manager. However, I have to reduce the foliage effects to get okay frame rates, which I think means it's GPU limited. With Moore's law pretty much dead (or modified), I don't think CPU limitation is as important for gaming as it was in the past. The W3680 I have is an old CPU, but it's hardly breaking a sweat in this game! I want to get into 3D modelling as a hobbyist, so the 1080 might be overkill. Are some of you saying that on the Mac side the generation of GPU doesn't really matter that much for rendering, etc.? I find that hard to believe. But you have managed to confuse me quite well...

I respect your opinion; however, if your CPU usage is stuck at around 17%, that may mean you are CPU limited, because the game can only utilise 2 threads.

In the very worst case, the game can only utilise 1 thread, which is 8.3% on the W3680. With some extra background demand from the OS itself, overall CPU usage may be just 10%, but that is still a sign that you may be CPU limited.

When we are talking about CPU limiting in gaming, 99% of the time we are talking about single-core performance, not multi-core performance. The W3680 has 12 threads available, so 1 thread equals only 8.3%. From memory, I have only seen 2 games utilise 50% of my W3690 in gaming. Most of them cannot use more than 20%, yet I can run the game from 1080p all the way up to 4K at more or less the same frame rate. That's a sign that quite a few games are CPU limited on my W3690.

However, this may be a very specific case with my dual 7950 CF setup. AMD's drivers are very bad; they may tax the CPU more and make my setup even more CPU limited, and I don't know if CF increases CPU overhead as well. Anyway, I can still enjoy almost all games at a stable 4K 30Hz (usually somewhere between high and max settings). My 4K TV is 1st-gen, no HDMI 2.0, so 30FPS max; not ideal, but very good graphics and very enjoyable. I mainly play offline games, so I don't need 120FPS to compete against other players (e.g. in shooting games). Most of the time I still play games on consoles, so 30FPS is actually very common for me. If required, I can always go back to 1080p 60Hz and use super resolution to render at 1440p, which gives me pretty good graphics and FPS. However, when I do this, I realise that even though I reduce the resolution, the FPS does not necessarily stay at 60FPS; sometimes it varies between 30 and 60FPS. The GPU is not at 100% at all, but the CPU stays in the same 17% range. Therefore, I am quite sure my 5,1 is actually CPU limited in quite a lot of games.

4K 30FPS is surely more demanding than 1080p 60FPS. In some games, disabling super resolution gives roughly the same result. So I am quite sure it's my W3690 limiting the FPS, not the GPU. Since Nvidia has better drivers, maybe they can make the system deliver better performance overall. However, considering that Maxwell is already so powerful, most likely the 5,1 with a GTX 1080 would still be CPU limited most of the time.
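
For anyone following the 8.3% / 17% arithmetic above, here is a minimal sketch of the idea; the overall-usage readings in the loop are illustrative assumptions, not measurements from my machine:

```python
LOGICAL_THREADS = 12  # W3680: 6 cores x 2 with Hyper-Threading

def implied_busy_threads(total_cpu_percent: float,
                         logical_threads: int = LOGICAL_THREADS) -> float:
    """Rough number of fully loaded threads behind an overall CPU % reading."""
    return total_cpu_percent / (100.0 / logical_threads)

for reading in (8.3, 17.0, 20.0, 50.0):
    print(f"{reading:5.1f}% overall ~= {implied_busy_threads(reading):.1f} saturated threads")
# ~17% overall is consistent with a game hammering only ~2 threads,
# i.e. a single-core bottleneck even though "total CPU" looks nearly idle.
```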
 
Let's not forget that a) many OS X games are badly optimized and b) a Nehalem-based Mac (that's 5 generations back!) just isn't a good gaming machine in 2016.

A.) Definitely true.
B.) Less so. Westmere Xeons have been well documented as excellent and now cheap gaming CPUs. Even Barefeats had this to say: "We mentioned in our previous gaming article that we have been toying with a Hackintosh in our lab. With the GTX 980 Ti installed, it was faster running Batman than the two Macs featured above under OS X. However, with Windows installed in a Boot Camp partition, both Macs beat the Hackintosh running Batman. Ditto for Tomb Raider. Moral: If you already own a Mac and want to boost game performance, you don't need to sell your Mac and buy a PC or build a Hackintosh. Just add Windows and stir."

Source: http://barefeats.com/winvosx.html

Granted, in heavily single-threaded titles a newer CPU will pull ahead, but I would say many if not most titles are GPU bound, and the trend is continuing in that direction. You typically lose a couple of frames to the PCIe restrictions on the cMP and a couple more to the dated CPU, but for a 2009-based machine to be *just* behind a modern build is fairly impressive.

@h9826790 Multi GPU has a significant CPU penalty. This is especially evident on the cMP because of the older architecture's limited (by today's standards) IPC. Add in the PCIe hit and the benefits of multi GPU quickly start to evaporate. Back when I had 680s in SLI, although I got a reasonable boost in performance, it was not scaling as well as it should have due to the above-mentioned reasons. A single GPU is fine because it does not saturate the card-to-card-to-processor interconnect. Multi GPU was a fun experiment for me, but I am all about one powerful card these days. Better driver support, less heat, less noise, better frame times.
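
To put a number on that scaling point, here is a rough sketch of how far a dual-GPU setup falls short of the ideal 2x once CPU/PCIe overhead bites; the FPS figures are hypothetical, not benchmarks from this thread:

```python
def sli_scaling_efficiency(single_gpu_fps: float, dual_gpu_fps: float) -> float:
    """Fraction of the ideal 2x speed-up a dual-GPU setup actually delivers."""
    return (dual_gpu_fps / single_gpu_fps) / 2.0

# Hypothetical numbers for illustration only:
single, dual = 60.0, 95.0
print(f"Speed-up: {dual / single:.2f}x, efficiency: {sli_scaling_efficiency(single, dual):.0%}")
# Anything well below 100% is the CPU/PCIe overhead described above.
```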
 
B.) Less so. Westmere Xeons have been well documented as excellent and now cheap gaming CPUs. Even Barefeats had this to say: "We mentioned in our previous gaming article that we have been toying with a Hackintosh in our lab. With the GTX 980 Ti installed, it was faster running Batman than the two Macs featured above under OS X. However, with Windows installed in a Boot Camp partition, both Macs beat the Hackintosh running Batman. Ditto for Tomb Raider. Moral: If you already own a Mac and want to boost game performance, you don't need to sell your Mac and buy a PC or build a Hackintosh. Just add Windows and stir."

Source: http://barefeats.com/winvosx.html
Quite funny that Barefeats wasn't able to put together a simple PC box that outperforms a 2009-ish workstation in Windows, but I won't comment on that. Anyway:

You're completely right that Windows drivers are a lot less CPU-demanding than their OS X equivalents! So, while a Nehalem CPU is just enough to drive a high-end GPU in Windows, it will be holding it back in OS X in OpenGL applications. This applies to all Nvidia cards (Maxwell & Kepler) and also AMD (although their drivers appear to be less CPU demanding, which I observed in my MP3,1, but that's another story).

My advantage as a Hackintosh-guy is that I can easily verify this by increasing the CPU clock speed and observing the resulting performance, and that's exactly what I did last year with my Nehalem Hackintosh:
In some games and benchmarks (e.g. CoH2, CS:GO, Cinebench), performance even increased linearly with the CPU clock in OS X, which makes the bottleneck quite obvious. Other benchmarks/games showed a smaller dependence on CPU clock rate, but still gained a few FPS.

Just an example: moving my R9 280 (which isn't high-end by any means) from my old P55 build (i5 clocked at >3GHz) to my new Skylake build gave me almost +50% performance in CoH2. In Windows the difference was a lot smaller but still present.
On Nvidia cards the difference would even be slightly bigger, because of the mentioned OS X driver CPU overhead.
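
A back-of-the-envelope sketch of that clock-scaling comparison, for anyone who wants to try it themselves; the FPS and clock numbers here are hypothetical placeholders, not my actual results:

```python
def fps_vs_clock_scaling(fps_low: float, fps_high: float,
                         clock_low_ghz: float, clock_high_ghz: float) -> float:
    """Ratio of the FPS gain to the CPU clock gain; ~1.0 means fully CPU-bound."""
    return (fps_high / fps_low) / (clock_high_ghz / clock_low_ghz)

# Hypothetical overclock of a Nehalem-era chip from 2.8 to 3.5 GHz:
ratio = fps_vs_clock_scaling(fps_low=40.0, fps_high=49.0,
                             clock_low_ghz=2.8, clock_high_ghz=3.5)
print(f"FPS scaled at {ratio:.0%} of the clock increase")
# ~100% -> essentially CPU-bound (the CoH2/CS:GO case above);
# much lower -> the GPU (or something else) is the limit.
```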

So to sum it up: I was talking about OS X Web Driver performance (which SCSC persistently claims to be disastrous on Maxwell cards). In my opinion, the reason for the performance issues is the cMP's single-core performance (and not any missing Maxwell features), and the Barefeats benchmark you linked backs me up:
Some games are much faster on a Hackintosh. We have been toying with one in our lab. With the GTX 980 Ti installed, it ran Batman: Arkham City 80% faster and Dirt 57% faster than the 2010 Mac Pro tower with the same GPU. However, Diablo III and Tomb Raider were only 2% and 4% faster respectively.
 
A.) Definitely true.
B.) Less so. Westmere Xeons have been well documented as excellent and now cheap gaming CPUs. Even Barefeats had this to say: "We mentioned in our previous gaming article that we have been toying with a Hackintosh in our lab. With the GTX 980 Ti installed, it was faster running Batman than the two Macs featured above under OS X. However, with Windows installed in a Boot Camp partition, both Macs beat the Hackintosh running Batman. Ditto for Tomb Raider. Moral: If you already own a Mac and want to boost game performance, you don't need to sell your Mac and buy a PC or build a Hackintosh. Just add Windows and stir."

Source: http://barefeats.com/winvosx.html

Granted, in heavily single-threaded titles a newer CPU will pull ahead, but I would say many if not most titles are GPU bound, and the trend is continuing in that direction. You typically lose a couple of frames to the PCIe restrictions on the cMP and a couple more to the dated CPU, but for a 2009-based machine to be *just* behind a modern build is fairly impressive.

@h9826790 Multi GPU has a significant CPU penalty. This is especially evident on the cMP because of the older architecture's limited (by today's standards) IPC. Add in the PCIe hit and the benefits of multi GPU quickly start to evaporate. Back when I had 680s in SLI, although I got a reasonable boost in performance, it was not scaling as well as it should have due to the above-mentioned reasons. A single GPU is fine because it does not saturate the card-to-card-to-processor interconnect. Multi GPU was a fun experiment for me, but I am all about one powerful card these days. Better driver support, less heat, less noise, better frame times.

Thanks for your info. Hopefully some new high-end cards will get good support from OS X (ideally with EFI), so that I can use a single new card and get the same or better performance in both FCPX and gaming. At the moment I'm staying with 2x 7950 mainly because they do so well in FCPX and are OK for gaming. If there is another good single card that can beat (or match) 2x 7950 in FCPX, I think I will jump to that even without EFI (assuming it has native support from OS X; my wife will kill me if I am out of town and the Mac only gives her a black screen :eek:).
 
Your own posted data shows it clearly. Kepler actually has higher frame rates than Maxwell in several of those titles, which is just ridiculous. Maxwell's OGL support is clearly just a port of the Kepler driver.

You mean Barefeats' data. My data is in post #83.

Are the inconsistent frame rates in those games the fault of the Nvidia web drivers, OpenGL, poor porting of Windows games, or poor programming in those games?

You said, "And the GTX 1080 performance will likely be about the same as Kepler, just as with Maxwell." If by that, you mean, the Kepler is 21.5% slower than Maxwell in TessMark, 49.4% slower in Ocean OpenCL, and 2.5x slower in BruceX, then I guess the GT 120 performs about the same as a GTX 285.
 