I read that article too. Although, judging by community comments, the benefit seems negligible or non-existent for more capable rigs. Have you seen any benefits on cMP machines? I'll try some benchmarking later.
 
I hate to bring up an old thread, but GTA V was on sale on Steam so I bought it. I was expecting good performance on my machine since it has an RX 580, but while it holds a solid 60 fps in some areas of the game (I have Vsync on), there are parts where it drops to about 30 fps, which has me stumped: graphics-wise the RX 580 should lock 60 fps pretty easily unless I have a bottleneck somewhere. Has anyone else with an RX 580 seen this kind of issue?
 

CPU single-thread bottleneck.

Reducing moving objects (e.g. people, cars) and view distance may help.
 
Which graphics settings are you using? The range of graphics settings is huge; GTA V will run on most decent PCs with the right ones. Also, if you can't lock v-sync, maybe you should disable it.
 

Oh yeah, I missed that. V-sync seems to be the key factor when it drops from 60 to 30.
 
The comments above are good advice. Keep in mind, the 580 is a midrange GPU from a few years ago and GTA V is an extremely demanding (and fun!) title, at least with all the graphics options turned up. I don't recall how the CPU vs GPU load breaks down in that game, but if you are playing at 1080p you are more likely to be CPU-bound than at higher resolutions. Vsync typically incurs a small performance penalty and can increase input lag, so unless your system can maintain 60 FPS even at the most demanding moments, you are usually better off without it. Vsync On eliminates tearing (where the GPU delivers more frames than the monitor's refresh rate, so parts of the screen show one frame while other parts still show a previous one), and it is also helpful for, say, indie titles where you don't want your GPU rendering 200 fps all the time when 30 or 60 will do.
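The drop straight from a solid 60 down to 30 reported earlier is exactly what double-buffered v-sync does: a frame that misses the 16.7 ms refresh deadline waits for the next refresh, so the effective frame time rounds up to a whole number of refresh intervals. A minimal sketch of that arithmetic (assumes double buffering; triple buffering and adaptive v-sync behave differently):

```python
import math

REFRESH_HZ = 60
REFRESH_INTERVAL = 1.0 / REFRESH_HZ  # ~16.7 ms

def vsynced_fps(render_ms):
    # With double-buffered v-sync, a frame that misses a refresh
    # deadline is held until the next refresh, so the effective frame
    # time rounds up to a whole number of refresh intervals.
    intervals = math.ceil((render_ms / 1000.0) / REFRESH_INTERVAL)
    return REFRESH_HZ / intervals

print(vsynced_fps(15.0))  # 60.0: the frame fits in one interval
print(vsynced_fps(18.0))  # 30.0: barely missing 16.7 ms halves the rate
```

So a GPU that is only slightly too slow in heavy areas will appear to collapse from 60 to 30 rather than degrade gradually.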
 
Thanks guys, I appreciate y'all's input. I've got the settings on High to Very High; GPU load is maxed, but CPU load hovers around 35-55% on all cores and threads. I've also been trying to disable hyperthreading for the game, since apparently that can cause performance problems as well, but no success so far.

As mentioned, I do have Vsync on, and turning it off does not increase the FPS in the game either.

I do plan on eventually upgrading the CPUs in my Mac Pro to X5690s, but I have a feeling it's not going to do much for the game itself.
 

35-55% across all cores doesn't mean you aren't CPU single-thread limited.

The OS scheduler balances the load across cores internally, so a single bottlenecking thread may show up as, e.g., 50% usage on core 1 plus 50% on core 2.

So it may look like the process (a game, in this case) utilises multiple cores quite well and isn't CPU-bottlenecked, but that's actually an illusion.
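To put numbers on that illusion (the figures below are made up for illustration): one fully saturated thread that the scheduler migrates evenly between two cores over a sampling window reads as 50% on each core, never 100% on either.

```python
# One fully-busy game thread doing 1.0 s of CPU work in a 1.0 s window,
# migrated by the scheduler so it spends half its time on each of two cores.
window = 1.0
busy_time = 1.0                       # the thread never idles: single-thread bound
time_on_core1 = 0.5 * busy_time       # scheduler puts half the work on core 1
time_on_core2 = busy_time - time_on_core1

core1_pct = 100 * time_on_core1 / window
core2_pct = 100 * time_on_core2 / window
print(core1_pct, core2_pct)           # 50.0 50.0 -- looks multithreaded, isn't
```

Per-thread CPU time (rather than per-core usage) is the better signal: if the game's busiest thread is consuming a full core's worth of time, you are single-thread bound no matter how flat the core graph looks.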
 
I never really used AMD cards, but with Nvidia you sometimes had to turn off vsync in the Nvidia Control Panel as well as in the in-game settings. Not sure if AMD's Catalyst drivers have a similar option, but it might be worth a look.
 

Same here; the game's Vsync setting can be overridden by the driver setting.

In general, it's better to turn off Vsync and simply limit the framerate to the monitor's max refresh rate. It doesn't always work, but most games will be able to stay at (or just below) the max refresh rate with proper settings, with no tearing. There is also no need for the GPU to render extra frames that are never displayed, which saves some power and lets it run cooler.
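A software frame cap of that kind is just a sleep up to a per-frame deadline. A rough sketch (the function and names here are illustrative, not taken from any particular tool; dedicated limiters hook the game's present call and are more precise than a sleep loop):

```python
import time

def run_capped(work, target_fps, n_frames):
    """Run work() n_frames times, pacing frames so they complete no
    faster than 1/target_fps apart -- a minimal software frame limiter."""
    frame_interval = 1.0 / target_fps
    next_deadline = time.perf_counter() + frame_interval
    for _ in range(n_frames):
        work()  # stand-in for rendering one frame
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # idle instead of rendering unseen frames
        next_deadline += frame_interval

start = time.perf_counter()
run_capped(lambda: None, 60, 6)   # 6 frames capped at 60 fps -> at least ~0.1 s
elapsed = time.perf_counter() - start
```

The key difference from Vsync is that the GPU idles during the sleep instead of stalling on a buffer swap, so there is no forced drop to 30 when a frame runs slightly long.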
 
@koyoot: I agree, we should start a Mac Pro gaming thread.

I've previously posted about my experiences overclocking the nMP in Boot Camp.

I am very impressed by the thermal headroom the platform has. I'm consistently running the latest games at "max" settings at 1440p.

For example: Dying Light: I'm playing that at 1440p at over 60FPS at almost max settings (shadows and view distance slightly reduced). This is a "capped" framerate. Without the cap, I'm seeing frame rates near 100!!

Metro and Metro Last Light Redux - similar results.

Dragon Age Inquisition: ultra settings at 1440p, a little below a consistent 60 FPS (high tessellation levels seem to bring the framerate down a bit; 60 FPS is easy with reduced tessellation).

And the great thing - this performance is all with v-sync enabled. I find it eliminates Crossfire-related screen tearing.

I recommend using a frame-rate limiter, such as DXtory, to keep a consistent 60 fps in conjunction with v-sync.

Now, the overclocking details. According to my research, the Radeon HD 7870, the closest comparator to the D500, runs at a 1000 MHz core clock in some base configurations. So I decided to overclock the D500s to 1000 MHz. Completely stable in everything but Dragon Age Inquisition (although that game has always had problems with Crossfire in DirectX mode; Mantle seems to be more stable). So I'm finding that a core clock of 950-975 MHz with a memory clock of 1400 MHz is probably the sweet spot for every game I've tested.

GPU temps? High 80s. The only time I've seen higher temps was when I ran Dragon Age Inquisition at 75 fps at 1080p (yes, at a 75 Hz vertical refresh rate) on ultra settings.

If anyone is interested I will repeat my comments about using MSI afterburner to achieve these clocks.

So, in answer to the OP's question: I have the same system as you, and I am really looking forward to GTA V, as long as the Crossfire compatibility is OK. Another tip: try the 14.12 Omega drivers. They are easy enough to install and give a big performance boost over the official AMD Boot Camp drivers. (Interesting observation: after installation the D500s are identified as "HD 7800 series", which shows the relationship between the two chips.)

Having said that, as Koyoot noted, the D500 has a wider memory bus than the HD7870XT, which probably helps with high-res performance. The lower core clock might be explained by Apple's desire to favour stability in these "workstation"-focused cards. However, given that in my experience they can easily be overclocked from a 725 MHz to a 1000 MHz core clock with very good stability and thermal performance, I would say they are very robust designs, which is what one would expect when paying for "workstation-grade" GPUs.
 
Very surprised to read what you've written here; it completely goes against conventional wisdom.
 