
mike641 · macrumors newbie · Original poster · Feb 15, 2017
Hi there,

I recently purchased an MSI GTX 980 Ti for my Mac Pro 4,1, which I flashed to 5,1. I have the most recent Nvidia web drivers and CUDA installed. I bought two 6-pin-to-8-pin adapters (https://tinyurl.com/huewm22) to supply power to the card from the motherboard, and it booted up just fine.

I tried playing a game (Rust) and had horrible FPS. I ran Cinebench R15 and got the same FPS as my old GTX 760, and Geekbench gave the same scores.

Does anyone see a problem with this setup? I'm not sure if I need an external PSU to get the full benefit of the card. Could it be underpowered? Any help would be greatly appreciated.

Extras:
I have attached a screenshot of iStat while the GPU (PCIe slot 1) is under load.
Cinebench: GTX 760 = 47 FPS, whereas 980 Ti = 45 FPS.

According to https://www.tonymacx86.com/threads/graphics-testing-benchmarking-chart.177227/ I should be getting 99 FPS.

As for the Nvidia drivers, I've just kept updating them since the GTX 760; I simply installed the driver plus CUDA and nothing else (not sure if the issue lies somewhere in there). I haven't used Clover for any setting modifications. I used the method below, and the 45 FPS result persists.
 

Attachments: aa.png, aaaaa.png, bbbb-1.png, bbbb.png
Your problem is quite clear. You upgraded the graphics card, but the bottleneck is the CPU; that's why there is no improvement.

In fact, Cinebench is a GPU benchmark well known to be limited by CPU single-thread performance. Almost any modern GPU will be bottlenecked by the cMP's CPU in this benchmark.

If you want to check your GPU's performance, try Unigine Heaven / Unigine Valley.
 

When running anything that would strain the GPU, the CPU isn't even at 50%. I will run Unigine Heaven now and see what the results are.
You don't think it's an issue with the motherboard power?
 

You are bottlenecked by your CPU's single-thread performance. You can be bottlenecked by the CPU even though the reported usage is just 12.5%: your CPU can handle 8 threads, but only one is being used, and that one is already working at 100% (1 of 8 threads = 12.5% overall).

No, it has nothing to do with power. If there were not enough power for the card, your cMP would shut itself down rather than limit the 980 Ti's performance.
 

So my only option here is to upgrade the CPU? I know I can upgrade to the 3.46 GHz 6-core Xeon (X5690) in this machine, but I'm not sure whether that would change performance.

Or do I need to do a complete new build?
 
If you don't need 6 cores, you can go for the cheap X5677.

For Cinebench and Geekbench, it's 100% guaranteed that you will see the difference.

For gaming, most likely yes. Anyway, for gaming, if your frame rate doesn't change with different settings, that usually means you are CPU-limited.
 

Is it a worthwhile upgrade, i.e. if I go with the X5677, would I be able to use the full potential of the 980 Ti?
 

Probably, since a jump from 2.66 to 3.46 GHz is pretty big. But it's impossible to tell without knowing what the bottleneck is. Monitor your CPU, GPU, and memory utilization while running the application that matters to you (Rust?). Once you've verified the bottleneck, it can hopefully be addressed.

Unfortunately, this may be ignoring the elephant in the room. Some applications are just poorly optimized. They will run equally badly on a Mac mini and a Mac Pro because the problem is the software, not the hardware. Windows games ported to OS X are notorious for this. I don't know anything at all about Rust, but if it is a game primarily meant for Windows, then probably nothing will help other than running it in Windows.
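For example, here is a minimal sketch of one way to log per-core CPU utilization while the game runs; it assumes Python 3 with the third-party psutil package installed (neither is mentioned in this thread), so treat it as an illustration rather than a required tool:

```python
# Sketch: sample per-core CPU usage while the game is running.
# Assumes Python 3 and psutil (pip install psutil); psutil is a
# third-party package, not part of the stock macOS install.
import psutil

for _ in range(10):
    # Blocks for 1 second, then reports each logical core's usage (%).
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print("per-core usage:", ["%.0f%%" % u for u in per_core])
    # One core pinned near 100% while the rest idle suggests a
    # single-thread CPU bottleneck rather than a GPU limit.
```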
 

Surprisingly, in Rust with high settings and everything, single-core CPU usage was about 75%.
In Unigine Heaven I scored 68 FPS average, and the CPU was barely working.
If you know of any other tests I can run, please let me know.

I'm debating whether to upgrade the CPU to the X5690 or to build a PC / potentially a Hackintosh.

Below I have posted in-game CPU status / RAM usage; I was getting about 22 FPS.
 

Attachments: aaa.png, aaaaa.png, aaaa.png
The issue is not your CPU, although faster never hurts. It's that you're gaming on OS X with beta-quality Nvidia drivers that haven't matured significantly since the Kepler generation.

If you would like a more realistic sense of what your hardware is actually capable of in the gaming department, you'll need to install Windows via Boot Camp and play in that environment.
 
The problem is the CPU. Most games / OpenGL benchmarks are bottlenecked by the single-thread performance of your CPU, since there's a single thread doing all the OpenGL calls. You won't see that in Activity Monitor, since many cores are idling and the "worker thread" continuously gets moved to different physical cores.
The CPU overhead in macOS drivers / APIs is comparably high (which affects all GPUs, not just Maxwell), so the performance hit from using an old CPU is more noticeable than on Windows.

If you had a Hackintosh, I'd say: go into your BIOS and overclock the CPU, and you'll see an almost linear increase in your graphics benchmarks. There's no way to do that on a genuine Mac Pro, though. Installing a faster CPU will obviously do the same. ;)

Also have a look here: https://forums.macrumors.com/threads/nvidia-geforce-gtx-980-on-5-1.2028834/page-2#post-24299426
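To see how a single pegged thread can hide in per-core graphs, here is a small illustrative sketch (my own, not from the thread; assumes Python 3 with psutil installed). It pins exactly one thread at 100% and then averages per-core usage over ten seconds:

```python
# Sketch: one thread spinning at 100% can look "spread out" across
# cores, because the scheduler migrates it between cores over time.
# Assumes Python 3 with psutil installed (pip install psutil).
import threading
import psutil

def spin():
    while True:
        pass  # burn exactly one hardware thread at 100%

threading.Thread(target=spin, daemon=True).start()

# Sample per-core usage once per second for 10 seconds, then average.
samples = [psutil.cpu_percent(interval=1.0, percpu=True) for _ in range(10)]
averages = [sum(core) / len(samples) for core in zip(*samples)]
# On an 8-thread CPU, the averages often show moderate load on several
# cores instead of 100% on one, hiding the single-thread bottleneck.
print(["%.0f%%" % u for u in averages])
```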
 
I would have to disagree. To me it sounds like the OP has a gaming card and is trying to game at a resolution of 1600p. While 1080p can be CPU-bound, almost all instances of 1440p or higher become GPU-bound. Moving from his 2.66 GHz to a 3.46 GHz within the same architecture will net him some gains, but those gains are marginal compared to the benefits of moving platforms.

Namely:

Not using an outdated OpenGL renderer.

Gaining access to DirectX.

Having modern drivers designed for his GPU. Keep in mind that at the moment his 980 Ti is essentially just a superclocked 780 to OS X.

In the words of Barefeats: "We mentioned in our previous gaming article that we have been toying with a Hackintosh in our lab. With the GTX 980 Ti installed, it was faster running Batman than the two Macs featured above under OS X. However, with Windows installed in a Boot Camp partition, both Macs beat the Hackintosh running Batman. Ditto for Tomb Raider. Moral: If you already own a Mac and want to boost game performance, you don't need to sell your Mac and buy a PC or build a Hackintosh. Just add Windows and stir."
 

It's easy to find out the answer.

Ask the OP to reduce the gaming resolution (only the resolution, nothing else). If the FPS keeps increasing as the resolution is lowered, then it's GPU-limited (whether it's a hardware or a software limit, at least we can confirm it's GPU-related, not CPU-related).
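As a toy sketch of that decision rule (my own illustration: made-up FPS numbers, hypothetical function name, and an arbitrary 10% threshold):

```python
# Toy sketch of the diagnostic above: compare FPS at native resolution
# vs a lower resolution with all other settings kept identical.
def likely_bottleneck(fps_native, fps_low_res, threshold=1.10):
    """Classify the bottleneck from two FPS readings (same settings)."""
    if fps_low_res > fps_native * threshold:
        return "GPU-bound: FPS scales with resolution"
    return "CPU-bound: FPS stays flat regardless of resolution"

print(likely_bottleneck(45.0, 46.0))  # flat -> CPU-bound
print(likely_bottleneck(45.0, 80.0))  # scales -> GPU-bound
```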
 
You won't see that in activity monitor, since many cores are idling and the "worker thread" continuously gets moved to different physical cores.

Is that just an Activity Monitor thing? Because I've noticed graphs in ARX that look exactly how I'd expect a demanding single-threaded process to look: a single core pegged at the top consistently, with the load not moving around to other cores. Example screenshot attached.
 

Cinebench is discussed here, as usual:

https://forums.macrumors.com/thread...out-nvidia-pc-non-efi-graphics-cards.1440150/

FAQ 19.

Regarding the comments above, it is absolutely possible (and likely) for an ancient CPU like the ones found in the cMPs to be the limiting factor when paired with a modern high-end GPU, even at 2560x1600. There is no evidence to suggest that the 980 Ti is being treated like a 680/780; the shader cores are very different, and the drivers (esp. the internal microcode compiler) must be aware of that for the GPU to function at all.

Issues like these have been discussed to death, but it really does just boil down to the fact that the Apple OpenGL framework is extremely inefficient compared with DirectX. Apple has attempted to improve this situation with Metal, which basically removes the enormous Apple OpenGL framework software from between the app and the GPU driver (i.e. massively reducing the CPU overhead). However, the adoption rate of Metal for gaming on macOS has been anemic at best.
 



So do you think it's a CPU bottleneck, and would you suggest I upgrade to a newer MP CPU (X5690), or would you do a new computer build if it were you?
I'm not sure whether, if I upgrade to the X5690, it will be outdated again in a year or so and I'll run into these issues again down the road.

It's a matter of whether I spend $200 on a CPU or $800 on a new build.

Is that just an Activity Monitor thing?

Yes, the pull-down menu with the blue CPU usage is iStat, which shows the same data as Activity Monitor, but in the menu bar.
 

About 3 or 4 years ago, I switched from a 2010 Mac Pro to a Hackintosh (built using similar components to the highest-end iMac at the time, plus a high-end GPU). It was staggering how much faster a Core i7 CPU was. So, my advice is to accept the fact that the cMP is just past its prime and move on.
 
My guess is CPU. I only have a GTX 980 (non-Ti), but here's Cinebench R15 on my Hackintosh with an i7-6700K clocked at 4.8 GHz. I had a few light processes running in the background when I ran this (screenshot attached).
 
Don't listen to people telling you to go ahead and spend $800 on a new build before you've tried every "free" thing you can. Create a Boot Camp partition and run Windows 10, download the Windows 980 Ti drivers there, and then run your tests. You will get much better results.
 
With each passing day, it gets increasingly difficult to hide the 4,1/5,1's age.


Sure, but this case is like a "zero upgrade" scenario, where the OP is using a CPU from 2009!

Many Macs from 2009 still work, but try to benchmark them and see what you get :)

As others have stated, it's a clear CPU bottleneck.

The same happened when we changed to Titans without upgrading the CPUs.
 