Anyway, a pre-i7 CPU is slower than a 1st-gen i7. My 3,1 vs 5,1 with the same GPU/drives: the 5,1 got much higher FPS in a mix of games.
Some games are almost entirely GPU dependent and some are almost entirely CPU dependent, so it depends on what you play.

All games run faster in Windows, and if you just want to play games on Windows, a used Windows box will be a lot cheaper.

I guess I can pull out Cinebench and compare my 3,1/5,1 if you want a single/multi core comparison.
 
OP, I would be happy to do the same with Cinebench or any other benchmark I have access to (Geekbench, Handbrake, etc). My 3,1 and 5,1 both have 2.8GHz processors: dual quad-core on the 3,1 and a single quad-core on the 5,1 (though it is hyperthreaded). Memory and video are different, but since you're focused on processor speed they shouldn't be much of a factor in CPU bound tasks.
 
Orph & pl1984, I'd be very interested in any CPU/graphics bench comparisons. Any real world gaming comparisons would be great too.

Obviously, I know that the raw processing power is much better and more efficient on a 3.33 or 3.46GHz 5,1 vs the 3.0GHz 2,1, and that the 5,1 will blow the 2,1 away in rendering times and such. I don't do much of that, and I have the time to spare if I ever do any Handbrake conversions, for instance.

However, the reason I'm so interested is that I can still get a smooth 30-40fps in many modern games at High settings under Windows with my current 2,1 + 7950 setup (GTA V, Sleeping Dogs Definitive, Witcher 3, Star Wars Battlefront I & II, just to name some intensive ones), which I think is pretty impressive given the age of the machine.

So, any comparisons would be great to see how big a role the CPU plays between the 2,1 and 5,1 in terms of gaming performance. As long as games stay in the 30-40fps range, they are playable for me. I'm just wondering what it will take, as I'm sure the inevitable unplayable game is on the horizon for the 2,1 that a 5,1 might still handle, even with the same graphics card.
 
I suspect you are GPU limited. I have the same GPU, and I also play most of those games on my 5,1 (W3690). At High settings, the GPU is almost always the limiting factor. I doubt you would see a big boost from moving to a 5,1 (if you are NOT going to upgrade the GPU).

Since you have those games on hand, you can do a very simple test.

Boot your 2,1 into Windows, open those games, then run them at the lowest settings AND resolution (Vsync OFF). If you still only see 30-40 FPS, you are CPU limited, and upgrading to a 5,1 (with a better CPU) will help. But if you see something way beyond 60 FPS, you are not CPU limited at your preferred settings (High, in this case); upgrading to a 5,1 won't help, and will still only give you 30-40 FPS, because that's what the 7950 can do.
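
To make that decision rule concrete, here's a minimal sketch of the test's logic in Python. The 1.1 tolerance and the 60 FPS threshold are my own illustrative assumptions, not numbers from this thread:

```python
def classify_bottleneck(fps_lowest: float, fps_high: float) -> str:
    """Interpret the low-settings test: fps_lowest is measured at the lowest
    settings/resolution with Vsync off, fps_high at your preferred settings."""
    # Assumed tolerance: if dropping settings barely helps, the CPU is the wall.
    if fps_lowest <= fps_high * 1.1:
        return "CPU limited - a faster CPU (e.g. a 5,1 upgrade) should help"
    if fps_lowest > 60:
        return "GPU limited - a CPU swap alone won't lift your High-settings FPS"
    return "mixed - both CPU and GPU are holding you back"

# Example: 38 FPS at High, 40 FPS at the lowest settings -> CPU limited.
print(classify_bottleneck(fps_lowest=40, fps_high=38))
```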
 
I don't have anything in the way of games so I would be unable to help you there.
 
h9826790 is about right on this one. For games, I am sure your bottleneck is the graphics card. AMD somehow never delivered good cards for gaming purposes. Go for a strong Nvidia card; I would go for a 1080 Ti.
 
Anything over the 7950 and the 3,1 will start to show its piss-poor single-core potential.

The 7950 is not a competitive card for games these days. It will "make it" at 1080p with some modern games (which is impressive), but it does not guarantee you 60 FPS or even close to it, regardless of which Mac Pro you have. The 7950 is pretty much top dawg for a 3,1.

I will state again: the GTX 1060 6GB, in Windows, on a 3,1 quad core 2.8GHz, will not manage 60FPS, or even remotely close, for most modern games. It is heavily bottlenecked by the CPU. On a 5,1? No problem-o.

Vermintide on a 3,1 w/ GTX 1060 in Windows will average you 28FPS @ 1080p. Probably a range of 20-50FPS, depending, but many times, sub Xbox One speeds.

TF2 maxed out (NO FSAA) is most shocking, with a range of 20-60FPS at a common MP map @ 1080p, in Windows. Playable, yes, but still "WTF" for such an old game...

The 3,1 just does not have what it takes these days to game at a "smooth" 60 FPS (which isn't even classified as "smooth" anymore). I know, because I have both.

If you're happy with 30-40 FPS, with dips into the 20s, then I guess you'd be happy with a 3,1. If you want a gaming rig that can play with the big dawg PCs, then a maxed out 5,1 is your only choice in 2018. It's amazing what slapping in a 1070 will do for a 3.46GHz 5,1. Slapping a GTX 1060 in my old 3,1, I was woefully disappointed. Watching my brother play Path of Exile on his 3,1 w/ 1060 was a little jaw-dropping with the frame drops. There. Is. Significant. Bottleneck.

Another comparison point: my 5,1 3.46GHz w/ 1070 RAVAGES my friend's FX8350 PC OC'd @ 4.0GHz w/ 1080 in Superposition by nearly 25%. A 2.8GHz Harpertown is not remotely close to an FX8350 in single core... AT STOCK SPEEDS... do the math. Cut your losses; do not game on a 3,1.
 
Forgive me if I'm derailing things a bit, but this feels like a question that's apropos (though it may be vague). I seem to recall hints in other threads that on the Windows side, GPU performance might be limited with non-flashed cards like a stock 1060... Is that correct, somewhat right, or completely off base? If it's even remotely right, what might the real world performance implications be? I have a 5,1 with a 1060 myself and I'm looking to slap a Windows drive in there to supplant my current aging gaming rig.
 
I wonder if a strong Nvidia GPU like a 1080 Ti in your 2,1 would do the trick. It really worked in my case; my 980 Ti card made a big difference.

The 3,1 and older just isn't fast enough to take advantage of a card that fast. You'll see some difference, but the slow system bus and memory bus will cap performance.

An Intel engineer once told me the Core 2 Quad (the 3,1 and older use the Xeon version of it) was horribly inefficient. The CPU had way more power than the bus and memory could ever use. It was a stopgap until they could sell the real i7 version.

And this same stuff will always cap the gaming and GPU performance you get on these machines.

People are tossing around synthetic benchmarks trying to argue that the 3,1 is nearly as fast. It's not. Trust me, I'm sitting with both a 5,1 and a 3,1 right now. Throw anything real world at a 5,1 and it will demolish a 3,1.
 
An unflashed Nvidia card is limited to 2.5GT/s in Windows; however, this should not cause any noticeable performance difference in the real world.

https://forums.macrumors.com/thread...mac-with-2-d700s.1732849/page-5#post-21722712

I tested my 7950 in a 5GT/s x4 slot, which provides only 50% of the bandwidth of a 2.5GT/s x16 slot, and it still performed roughly the same as in a 5GT/s x16 slot.

The 1060 is not that fast; a 2.5GT/s x16 slot should be more than enough for the card to perform properly.

In general, as long as you put the card in a x16 slot, there is no need to worry about 2.5GT/s vs 5GT/s. Some users claim they see a significant improvement, but I think that only happens in specific cases (e.g. running out of VRAM, which makes the workload transfer lots of data to/from the card). In my own tests, even up to my current 1080 Ti, performance is fine at 2.5GT/s; gaming performance is pretty much the same as in the reviews.
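
For reference, the bandwidth arithmetic behind those link speeds works out as below (PCIe Gen1 and Gen2 both use 8b/10b encoding, so 80% of the raw transfer rate is usable payload):

```python
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    # 8b/10b encoding: 80% of raw GT/s is payload; 8 bits per byte.
    return gt_per_s * 0.8 / 8 * lanes

print(pcie_gb_per_s(2.5, 16))  # 4.0 GB/s - unflashed card in Windows
print(pcie_gb_per_s(5.0, 16))  # 8.0 GB/s - full Gen2 x16 link
print(pcie_gb_per_s(5.0, 4))   # 2.0 GB/s - the x4 slot used in the 7950 test
```

That's why the 5GT/s x4 slot in the test above offers exactly 50% of the 2.5GT/s x16 bandwidth.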

This is the real world usage history from my 1080 Ti (Windows 10, 2.5GT/s x16).

Average usage while playing Rise of the Tomb Raider for about 2 hours (Max settings, 3840x1080, AA OFF, Vsync OFF). The bus interface load is just 25%. (This includes a little desktop environment time for screen capturing etc, so it's not entirely accurate, but a "good enough" reference.)
[screenshot: Game avg.JPG]

The same session, but the maximum recorded values. Bus interface load: 52%. (This should not be affected by the desktop environment time.)
[screenshot: game max.JPG]

HEVC hardware video decoding: 12%.
[screenshot: 4k video decoding.JPG]

H264 4K hardware encoding (via FFmpeg): 34%.
[screenshot: 4K Video conversion.JPG]

Two simultaneous H264 1080P hardware encodes: 45%.
[screenshot: dual hardware encode.JPG]

So far, for gaming, video processing, and the popular gaming/compute oriented benchmarks, nothing demands anything close to a 2.5GT/s x16 connection's limit.
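
The post doesn't show the exact encode command. As a hedged sketch, hardware H264 encoding on an Nvidia card can be driven through FFmpeg's h264_nvenc encoder; the file names here are hypothetical:

```python
import subprocess

# Hypothetical input/output names; h264_nvenc offloads H264 encoding to the GPU.
subprocess.run([
    "ffmpeg", "-i", "input_4k.mp4",
    "-c:v", "h264_nvenc",  # Nvidia hardware encoder (NVENC)
    "-preset", "fast",
    "output_4k.mp4",
], check=True)
```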
 
Thank you for this amazing, in-depth answer! I appreciate it very much—just what I was after!
 
That statement is rubbish. Just compare the single thread performance of the fastest 5,1 you could buy versus the fastest 3,1. The single core score of a 3.33GHz W3680 is 1518, i.e. only 11% faster than the 1365 of a 3.2GHz X5482:

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+W3680+@+3.33GHz
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+X5482+@+3.20GHz
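
The 11% figure is just the ratio of those two quoted single-thread scores:

```python
w3680, x5482 = 1518, 1365  # PassMark single-thread scores quoted above
print(f"{w3680 / x5482 - 1:.1%}")  # 11.2%
```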

Using only one synthetic source (which no one else uses) to show the single core difference between a 3,1 and 5,1 is rubbish. Check the difference using Geekbench, then report back. No, how about owning BOTH, then slapping in a 1060 and running a suite of game benchmarks for us, then reporting back. You won't see an "11% difference". The 5,1 is amply faster in single core vs the crippled 3,1.

Synchro3: great information regarding the ability of the 5,1 vs an i7 7700 @ 4K. I'm tired of talking about the slow 3,1; let's talk about something more useful. :) How much difference there is between a modern i7 and a 3.46GHz Westmere at 1440p is what I'm curious about. I'd expect "some"?
 
Since you think the 5,1 is so much more capable, would it make sense to put a shiny new Pascal Titan X into a hex core 3.46GHz rig? How much faster would games be compared to an old 980 Ti?
I ask because the same argument may hold for the 5,1; we classic Mac Pro lovers will hit a wall somewhere. The question is where. I would love to see a Star Wars Jedi Edition Titan X in my box, that's for sure. Would it make sense in our situation, or would it be nuts?
 
I now have a watercooled GTX 1080 Ti in the i7-7700K PC, so I can no longer compare directly with two equal graphics cards.

If I remember correctly, the difference between Mac Pro 5,1 and i7-7700K at 1440p was about 5%.

With a Mac Pro 5,1 you won't get 140 fps at 1080p. But 60 fps is never a problem.

Here's a post from MVC about the Mac Pro 3,1 bottlenecking: http://www.macvidcards.com/i-want-the-best-graphics-card-for-my-mac-pro-where-do-i-start.html
 
Thanks for the info, Synchro. That's about what I would expect: a very small difference @ higher resolutions.

Alex: the 5,1 has yet to "hit a wall" with GPUs, at least so far. As you can see, many people have put 1080 Tis in them. Comparing a 5,1 with a 1070 vs a 1080 vs a 1080 Ti shows significant jumps for each GPU.

Bottlenecking? Maybe very slightly, and that only depends on the resolution (perhaps some at 1080p), but hitting a wall? Not yet.
 
I was talking about 3,1 bottlenecking, not 5,1.

I quote MVC: "Please be aware that while the 3,1 Mac Pro has the same GPU compatibility as the 4,1 or 5,1, the older hardware of the 3,1 will result in some comparative bottlenecking. In general I recommend the GTX 770 as the high end for the 3,1 Mac Pro."
 
Thank you all for the replies, the thoughts, and the real world comparisons. I look forward to any further info that comes out of this thread as hopefully others with any version of a cMP will find some useful information/discussion.

For me, I think I'll stick with my 2,1 for now. I'll keep a lookout here and there for a good 5,1 deal and if anything really good comes up, I'll consider jumping on it. It also sounds like if I really want to see a real improvement on the gaming side, I'll need to consider a newer video card, preferably Nvidia.

I'll also keep an eye on what the OSX 10.14 requirements are. If the 5,1 is still supported, that will influence me going forward. I'm perfectly happy with my current setup of El Capitan, Mavericks, and Windows 7 Ultimate (and I have an awesome 2.7 mid-2012 antiglare 15" cMBP to run the latest version of everything if I really need to). However, it's always nice to know that a machine is still officially supported by the latest operating system and it's probably good to keep that in mind when upgrading to such old hardware, even though it's extremely capable. On the other hand, a drop in support may mean that 5,1 prices go down a bit and a hack might become available anyway, similar to how I'm running Mavericks and El Capitan.
 
To find out how the 5,1 reacts to different settings in real world gaming, and to help predict what will happen if the OP moves from his 2,1 to a 5,1 (with a CPU upgrade), I ran some tests in Windows 10 to find the bottleneck of each game. All tests ran at 1080P in windowed mode (which makes it easier for me to monitor resources and capture the screen); windowed mode should only affect GPU performance, not CPU performance anyway.
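
(The screenshots below come from a GPU monitoring tool. As an aside, on an Nvidia card one way to log the same utilisation figure while a game runs is to poll nvidia-smi; this is a rough sketch of that idea, not the tool used for these captures:)

```python
import subprocess, time

# Log GPU utilisation once per second while the game runs windowed.
for _ in range(120):  # ~2 minutes of samples
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # e.g. "97 %" suggests GPU limited at these settings
    time.sleep(1)
```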

So, here are the results.

1) Tomb Raider
Lowest settings - CPU limited
[screenshot: TR 1080P Low.JPG]

Ultimate settings - GPU limited
[screenshot: TR 1080P ultimate.JPG]

This game is clearly GPU limited at higher settings; as long as the GPU is not the limit, the CPU can easily push way beyond 120FPS. So this kind of older game should run very well on a 2,1.

2) Rise of the Tomb Raider
Lowest settings - CPU limited
[screenshot: RoTR low.JPG]

Ultimate settings - GPU limited
[screenshot: RoTR max.JPG]

Even though the max FPS varies a lot, the average difference is not that significant: only 24% overall. With this kind of newer game on a 2,1, CPU bottlenecking starts to kick in. From Geekbench 3 (single core), we know that the X5365 is about 40% slower than the W3690. Therefore, even in the CPU limited case, it should still deliver around 85FPS. Good enough for general gaming.
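
That estimate is simple proportional scaling of the CPU-limited frame rate by single-core score. A sketch, treating the ~140FPS CPU-limited average implied by the 85FPS figure as an assumption:

```python
def scaled_fps(fps_cpu_limited: float, slow_score: float, fast_score: float) -> float:
    # Assume CPU-limited FPS scales linearly with single-core benchmark score.
    return fps_cpu_limited * slow_score / fast_score

# X5365 ~40% slower than W3690 (Geekbench 3 single core, per the post).
print(scaled_fps(140, slow_score=0.6, fast_score=1.0))  # ~84 FPS on the 2,1
```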

3) Thief
Lowest settings - CPU limited
[screenshot: thief low.JPG]

Ultimate settings - also CPU limited
[screenshot: thief max.JPG]

The game is completely CPU limited. As you can see, performance is almost identical between the lowest and highest settings, and it can only just deliver 60FPS. So, if the OP uses a 2,1 to play this game, he will be stuck at around 40FPS. Not good at all, but playable.

4) Sleeping Dogs Definitive Edition (sorry that the capture is in Chinese, but it's easy to spot the min/avg/max FPS anyway)
Lowest settings - CPU limited
[screenshot: Sleepdog low.JPG]

Ultimate settings - GPU limited
[screenshot: sleeping dog max.JPG]

As long as the OP uses the right settings, with the same 40% performance hit assumption, his X5365 should be able to deliver about 85FPS on average. I played this game with my 7950 a few years ago; from memory, at 1080P it's impossible to reach an 85FPS average at high settings.

I also tested GTA V, Batman, and a few more games; most of them are GPU limited. But since they either have no summary available or are hard to screen capture properly, no captures are provided. GTA V is a bit like Rise of the Tomb Raider: FPS varies between ~110-180 if the GPU is not the limit. Batman AK performs roughly the same as Thief when GPU limited, but when reduced to the lowest settings it performs 50% better, so it's GPU limited, and the 2,1 may be stuck right at max ~60FPS regardless of GPU or settings.

In my own experience, the HD7950 is a very limiting GPU by today's standards. It can barely hit 60FPS at 1080P if you push for higher settings. If the OP wants a better gaming experience (average ~60FPS), the most limiting factor in his setup should be the GPU, not the CPU. His X5365 should be able to deliver 60FPS for most games. (Assuming there is no noticeable performance difference between Win 7 and Win 10.)
 
This is an AWESOME analysis and provides some great info! Thank you so much for taking the time to do this!
 
>,< it's one of those things where it's simpler to look at good PC gaming sites.
If game X uses 1 core then you may have problems, but if it can scale up you're better off.
Some games use more CPU, some more GPU; some games use 1 core, some 4 cores (or more? but all tend to be limited by 1 master thread even if they can use more).

FPS games tend to be light on CPU, hard on GPU.
Sims tend to be hard on CPU, light on GPU.
Poorly optimized games just hit walls, and some games like Battlefront, Overwatch, and Doom (the new one) just work well.

Then some settings hit the GPU: AA, resolution, lighting.
Some hit the CPU: shadows (at least in Total War games), unit numbers/bots.

But there is no denying that a new CPU will give you a better time; even just the W3680 in a PC mobo will OC to 4-4.5GHz (I think), which gives a large boost in speed.

Or for any of you Dwarf Fortress players, RAM speed seems to matter a lot if you want high FPS. I know my 5,1 can't keep 30fps in DF once I'm over 120 dwarves :D, not even sure if DF uses my GPU (it must, a tad? :p)
http://www.bay12forums.com/smf/index.php?topic=112603.0
PS: if you can wait until the new Mac Pro is out, there may be a price drop on the old 5,1, as most of the time they cost way too much on eBay compared to the same parts in a PC on eBay.
 
Thank you for that great benchmark! It really shows where we are in terms of CPU/GPU importance.
To be honest, I am surprised, because I really thought even the 5,1 wouldn't have so much future left. Very useful insight indeed; my humble thanks to you, h98! Thumbs up
 
Yeah, I had never done this kind of test either (I often push the GPU's limit, but had never tried the CPU's).

A simple fact: most reasonably optimised games can run above 100FPS on the 5,1 (with the proper upgrades). That means it still has a few years of life left (for gaming) if anything above 60FPS is good enough.

If anyone wants serious gaming (stably above 144FPS), they will need a good gaming PC to achieve that. No matter how you tweak the 5,1, it just can't get there.

Or if anyone just wants some very casual gaming (at or above 30FPS), then sure, the 5,1 can do it without any issue for the next few years.
 