Your two processors fall outside the window of comparison. A Westmere-based cMP will perform similarly to a Sandy Bridge processor at the same clock speed. The Core 2 Duo architecture is far older, going as far back as 65nm. Westmere was the last generation before Sandy Bridge, and both are 32nm.
 
I don't have any other similarly clocked CPUs to test here.

Edit: Found some old screenshots of mine...
Here's my old MacPro5,1 with 32nm Westmere 3.33GHz X5680s vs my 22nm Ivy Bridge 3.4GHz i7-3770 (non K).
[Attached: Geekbench screenshots for both systems]
 
I know this one isn't testing the GPU too much, but here are some comparisons of the GT120, GTX760 and 980ti, all on my system.


That 30k multi-core score with the Westmere CPU is impressive!
But I guess for single-core usage there isn't that much of an improvement over my quad 2.66, i.e. for gaming.

I think I will stick with my current MP for the time being, save up some $ and make a Hackintosh with the i7.
It will be sad to see the MP go; she's been a wonderful machine.
 

My point is that you can't judge CPU performance just by looking at GHz. Each generation brings improvements in IPC.

While the 30K multi-core score was impressive, the only app I had which took advantage of it was Handbrake. On the downside, having two of those monster X5680s in my tower generated a ton of heat and it sucked down electricity faster than Tony Montana sucked down coke.

Realistically, my current daily runner, which is an i7-6700K system, feels much snappier than my old 5,1 did. I sacrifice a little bit of top-end, multi-core performance (my i7 scores over 22K), but I don't notice it too much because I queue up jobs and walk away when they start.
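To put some rough numbers on the GHz-vs-IPC point, here's a minimal back-of-the-envelope sketch. The IPC factors are illustrative assumptions (Westmere as the 1.0 baseline), not benchmark results.

```python
# Back-of-the-envelope model: per-core throughput ~ relative IPC x clock (GHz).
# The IPC factors are rough illustrative assumptions (Westmere = 1.0 baseline),
# not measured values.
cpus = {
    "X5680 (Westmere, 3.33 GHz)":  {"ipc": 1.0, "ghz": 3.33},
    "i7-6700K (Skylake, 4.0 GHz)": {"ipc": 1.4, "ghz": 4.0},
}

base = cpus["X5680 (Westmere, 3.33 GHz)"]
for name, c in cpus.items():
    ratio = (c["ipc"] * c["ghz"]) / (base["ipc"] * base["ghz"])
    print(f"{name}: ~{ratio:.2f}x the per-core speed of the X5680")

# Multi-core is a different story: 12 Westmere cores still win on well-threaded
# jobs like Handbrake, which is how the 5,1 posts ~30K multi-core against ~22K
# for the quad-core i7-6700K.
```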
 
I don't have any other similarly clocked CPUs to test here.

Edit: Found some old screenshots of mine...
Here's my old MacPro5,1 with 32nm Westmere 3.33GHz X5680s vs my 22nm Ivy Bridge 3.4GHz i7-3770 (non K).
[Attached: Geekbench screenshots for both systems]

Looks about right. Once you offset the turbo clocks, you're pretty close to each other. And keep in mind, IPC differences will be far more apparent in benchmarks. Gaming really doesn't require all the bells and whistles. Look at AnandTech's clock-for-clock architecture comparison with discrete GPUs:
[Chart: AnandTech clock-for-clock gaming comparison with discrete GPUs]

Almost no benefit. The full read on IPC and gaming seems like it could be valuable in this discussion, and the information for gaming in particular follows on page 10: Comparing IPC
 
My point is that you can't judge CPU performance just by looking at GHz. Each generation brings improvements in IPC.

While the 30K multi-core score was impressive, the only app I had which took advantage of it was Handbrake. On the downside, having two of those monster X5680s in my tower generated a ton of heat and it sucked down electricity faster than Tony Montana sucked down coke.

Realistically, my current daily runner, which is an i7-6700K system, feels much snappier than my old 5,1 did. I sacrifice a little bit of top-end, multi-core performance (my i7 scores over 22K), but I don't notice it too much because I queue up jobs and walk away when they start.

Thank you for your input. Do you happen to have a list of the components you used in your Hackintosh build? I'm going over a variety of builds and seeing which one I think would be ideal for me.
I know tonymacx86 has a list of compatible hardware.
 
I know this one isn't testing the GPU too much, but here are some comparisons of the GT120, GTX760 and 980ti, all on my system.


That 30k multi-core score with the Westmere CPU is impressive!
But I guess for single-core usage there isn't that much of an improvement over my quad 2.66, i.e. for gaming.

I think I will stick with my current MP for the time being, save up some $ and make a Hackintosh with the i7.
It will be sad to see the MP go; she's been a wonderful machine.

Keep in mind that benchmark is CPU only and won't really tell you much about your GPU. The fastest single-core Geekbench you could expect from a 3.46GHz processor would be around 3000 points. Here's my score from the last time I ran it, for comparison; it's an 8-core, not a 12.

OS X 10.11 CPU: https://browser.geekbench.com/v4/cpu/7754
Single-Core: 3010 · Multi-Core: 16302
 
Thank you for your input. Do you happen to have a list of the components you used in your Hackintosh build? I'm going over a variety of builds and seeing which one I think would be ideal for me.
I know tonymacx86 has a list of compatible hardware.

In my signature, click on the Asus Maximus VIII Gene link.
Keep in mind that benchmark is CPU only and won't really tell you much about your GPU. The fastest single-core Geekbench you could expect from a 3.46GHz processor would be around 3000 points. Here's my score from the last time I ran it, for comparison; it's an 8-core, not a 12.

OS X 10.11 CPU: https://browser.geekbench.com/v4/cpu/7754
Single-Core: 3010 · Multi-Core: 16302

Geekbench 4 scores are very different from Geekbench 3 scores. Look at the crazy numbers from my Skylake system...
[Screenshot: Geekbench 4 results from the Skylake system]
 
I gave up reading halfway along.

It's worth installing Windows if you have the room/will. I play games on Windows most of the time, and you tend to get better FPS at higher graphics settings. It's sad but true: games are made for Windows and then ported to Mac, so we get the less optimized version (it's simply not worth the money/time to make the ports optimized to the same level, plus Apple dragging their feet).
(Is Win 10 still free? If it is, that's the cheapest option, so worth trying.)

If, like me, you use Steam, then it's really easy: you can have games on both OS X and Boot Camp (I play Total War games in OS X and Windows), which is really cool ^^ and GOG will be fine too.

FPS games tend to be GPU limited; RTS/sim games tend to be CPU limited.
So, say:
Far Cry wants GPU
StarCraft wants CPU (dang you single/dual-threaded games)

The 4-core CPUs are the best value for gaming and will get you so far, but even then you may still just want to boot into Windows to play any recent game, as it will just run faster/better.

I saw a massive jump in FPS moving from a 3,1 2.8GHz to a 5,1 2.8GHz, then a small (but nice) jump when I upgraded the CPU to 3.33GHz on the same GPU/games (mostly in Windows, as it's harder to track FPS in OS X).
 
Your two processors fall outside the window of comparison. A Westmere-based cMP will perform similarly to a Sandy Bridge processor at the same clock speed. The Core 2 Duo architecture is far older, going as far back as 65nm. Westmere was the last generation before Sandy Bridge, and both are 32nm.
Well, the step from Nehalem to Sandy Bridge was in fact one of the largest in Intel's "Core i" history: https://us.hardware.info/reviews/62...l-ivy-bridge-sandy-bridge-and-nehalem-results

The difference in clock-for-clock performance between Nehalem and Skylake / Kaby Lake is >40%. Combine that with the increased clock rates (the cMP maxes out at 3.46GHz, while SKL/KBL can easily do >4.5GHz) and you end up with almost twice the performance per core.
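A quick sanity check on that "almost twice" figure, using only the numbers above (the clock you actually reach will vary):

```python
# >40% clock-for-clock gain (Nehalem -> Skylake/Kaby Lake) combined with the
# higher clocks quoted above.
ipc_gain  = 1.40   # clock-for-clock advantage (>40%)
cmp_clock = 3.46   # GHz, the fastest the cMP goes
skl_clock = 4.5    # GHz, easily reachable on SKL/KBL

print(f"~{ipc_gain * (skl_clock / cmp_clock):.2f}x per-core performance")  # ~1.82x
```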

About a year ago I switched from my old Nehalem Hackintosh to a Skylake one, and it was like night and day. In OS X, even my old R9 280 (which is nowhere near as fast as the 980Ti) got held back by the Nehalem CPU; in many games the performance would increase almost linearly when increasing the CPU clock (CoH2, CS:GO, benchmarks like Unigine).

In Windows there was not such a big difference when using my R9 280, but something like the 980Ti can be bottlenecked by the CPU: https://www.reddit.com/r/nvidia/comments/4puyrg/gtx_1080_sandy_bridge_vs_skylake_tested/
Obviously it depends on the game whether there's any notable difference...
 
Well, the step from Nehalem to Sandy Bridge was in fact one of the largest in Intel's "Core i" history: https://us.hardware.info/reviews/62...l-ivy-bridge-sandy-bridge-and-nehalem-results

Everything you say is true, but keep in mind you're skipping over Westmere. Nehalem was 45nm; if you upgraded your processors or got a newer model cMP, you moved to Westmere, 32nm, same as Sandy Bridge. Sandy will still have a slight performance edge, but the gap is not as severe as when comparing to Nehalem.
 
Everything you say is true, but keep in mind you're skipping over Westmere. Nehalem was 45nm; if you upgraded your processors or got a newer model cMP, you moved to Westmere, 32nm, same as Sandy Bridge. Sandy will still have a slight performance edge, but the gap is not as severe as when comparing to Nehalem.

So, do you mean OP should upgrade his CPU because he is using the 45nm Nehalem 2.66 GHz CPU now? And should go for the 32nm Westmere?
 
I think my opinion in this thread is pretty clear. If the intent of the OP is to game, he should install Windows. He will likely achieve 60fps with his 980ti as is. If he continues to be underwhelmed with his performance at that point, he could consider a processor upgrade to further boost his potential. My system with its Maxwell Titan X is very similar, and I love the performance I get, especially in online shooters where many cores are extremely helpful in managing large multiplayer matches. Starcraft 2 is a perfect example of a CPU-limited game, as it does not tap more than one thread, but even so the cMP is more than up to the task.
 
I don't think there's ever been any question of DirectX's superiority over OpenGL. The only question was how to squeeze the most out of macOS. Let's hope more software uses Metal...
 
Haha I know, and they are almost the same. My comment was on the obvious performance penalty OpenGL incurs, regardless of the OS you're running it on, when compared to DX for gaming.



1080p is the notorious CPU-bound resolution; 1440p and higher really relieve a lot of the burden there. And yes, your argument that faster CPUs can produce more FPS is of course accurate, but I disagree that there is a 'pattern of CPU limiting' shown here, or that a dual-core i3 processor can realistically be compared to (at minimum) our quad Xeons and (at maximum) a 12-core 3.6GHz setup. The pattern of decline that I see in those graphs is largely due to the descending capability of the GPUs listed. The closest relative processor-wise on that list is the FX 8350, which has very similar single-threaded performance to the cMP but still suffers from lower IPC overall.



58.6 FPS is 60 FPS, dude. Very few people have 144+Hz monitors, and 45-60 FPS still remains the gold standard for fluid gaming. Keep in mind that rendering above your monitor's refresh rate results in tearing (bad) or the need for VSync, which incurs its own penalty on the GPU and increases input lag.



This is just wrong. Sure, the i3 is faster single-threaded, but it is a dual-core 3.4. It doesn't have the core count to handle the games, so the FPS suffer. A cMP with any reasonable processor will blow it out of the water gaming. From the Guru of 3D:

"What about actual IPC clock for clock performance then? Well, the Core i7 6700K runs at a 4.20 MHz turbo, the Core i7 7700K at 4.50. We can easily normalize that and downclocked the 7700K cores to 4200 MHz. Aside from a minor platform offset, as you can see, an Intel core is an Intel core. Once you clock them the same, they perform (roughly) the same. This actually has not changed in hugely massive steps ever since Sandy-Bridge in 2011. IPS perf did advance from Haswell to Skylake and now Kaby Lake, we are however talking roughly 10% there. cite

I think I have to emphasize again that a CPU CAN bottleneck some games, even some GPU-demanding games, but not ALL games. Picking a GPU-limited game and telling the others that the CPU won't affect anything is meaningless. We know about that. We're just saying the CPU can be the bottleneck, not that the CPU always is the bottleneck.

Also, it's very clear that the worst combination is a very strong GPU + a weak CPU. That's exactly what OP has. The 980Ti is stronger than any GPU in the chart I posted, and the W3520 is obviously weaker than most of the CPUs that were tested. Result: the CPU can be the bottleneck in some games, and can make the 980Ti unable to achieve 60FPS even at just 1080p.

Again, I have to emphasize that I aim to diagnose OP's case (strong GPU + weak CPU), not whether "a cMP can game decently".

And no problem, we can use the FX 8350 (to simulate the cMP's Xeon) vs the i3 4130.
[Chart: FX 8350 vs i3 4130 gaming benchmark]


Here is the fact: the FX 8350 does not always do better than the i3 4130 just because it has more cores. Of course it can do better in some games. However, what I want to point out is that more cores don't always work better. An overall stronger CPU, e.g. the W3520 (or even the X5690), can perform worse than an i3. There are plenty of games out there that are still limited by single-core performance. Again and again, what I want to say is that CPU speed CAN be the limiting factor, but not always, and more cores don't always do better. You keep saying more cores are important in multiplayer online FPS games. That's totally OK; however, it doesn't mean that more cores ALWAYS work better in gaming. And it still can't rule out that OP's case is CPU limited.

Also, the W3520 (OP's current CPU) is only a little bit better than the i3 4130 (in the chart) in multi-core performance (only 7%, actually). But in single-core performance, the i3 is 62% faster! Which means, for the games that can benefit from more cores, OP's computer can do (at most) 7% better, but for any game that cannot use more than 4 threads, the i3 will win, by up to 62%. So I don't think OP's Xeon is a good choice for gaming, and in most cases the crappy poor i3 can do better.
[Charts: W3520 vs i3 4130 multi-core and single-core comparison]


I think my opinion in this thread is pretty clear. If the intent of the OP is to game, he should install Windows. He will likely achieve 60fps with his 980ti as is. If he continues to be underwhelmed with his performance at that point, he could consider a processor upgrade to further boost his potential. My system with its Maxwell Titan X is very similar, and I love the performance I get, especially in online shooters where many cores are extremely helpful in managing large multiplayer matches. Starcraft 2 is a perfect example of a CPU-limited game, as it does not tap more than one thread, but even so the cMP is more than up to the task.

I hear you, and I am not trying to say we should not use Windows for gaming, or that the cMP cannot game, etc. The problem is that you don't hear us. We have no objection that gaming in Windows is the better choice and can improve gaming performance a lot. However, you insist OP's W3520 is not the problem, which may not be a correct statement.

The chart in post #43 clearly shows that, with a strong GPU, a weak CPU, e.g. the i3-4130 (similar to OP's W3520 in multi-core performance), can leave a game unable to run at 60FPS. In fact, OP's case is even worse: for games like Thief (not multi-core optimised), the FX 8350 does even worse (the FX 8350's single-core performance is only a few % better than the W3520's).

Therefore, it's not that hard to conclude that OP's W3520 can be the bottleneck if he wants to maintain 60FPS for gaming. I know 45FPS is still playable; however, who the hell in the world will buy a 980Ti and be happy to game at 45FPS at just 1080p?

Anyway, I 100% agree that OP should follow your plan, which is to install Windows first and find out if he's happy with that. If yes, no need to touch the CPU. The CPU can be one of the limiting factors, but it's not the most critical one in his case; installing Windows is more critical. Only if games still can't achieve a decent frame rate does the CPU obviously become the next item to look at.
 
The OP has a quad i7 at almost 3GHz and he is at 1600p resolution. First-gen Core i or not, it's still very capable. Your graphs and hypothetical conclusions do not match the real-world output, and you seem more set on proving a CPU-limiting scenario than listening to anyone who has chimed in saying it will run games just fine. Why push someone toward an upgrade path that is laborious, expensive, and quite likely unnecessary?
 
Everything you say is true, but keep in mind you're skipping over Westmere. Nehalem was 45nm; if you upgraded your processors or got a newer model cMP, you moved to Westmere, 32nm, same as Sandy Bridge. Sandy will still have a slight performance edge, but the gap is not as severe as when comparing to Nehalem.
I did that on purpose since Westmere is exactly the same thing as Nehalem, just on a smaller process (-> better power efficiency). IPC doesn't differ at all though.
 
The OP has a quad i7 at almost 3GHz and he is at 1600p resolution. First-gen Core i or not, it's still very capable. Your graphs and hypothetical conclusions do not match the real-world output, and you seem more set on proving a CPU-limiting scenario than listening to anyone who has chimed in saying it will run games just fine. Why push someone toward an upgrade path that is laborious, expensive, and quite likely unnecessary?

Of course I was focusing on the CPU-limiting scenario, because I wanted to prove that it IS POSSIBLE. Do you know what that means? Only one case is enough to prove that it's possible. And Thief is a game, the i3 is a CPU (close in performance to the W3520), and the GTX 980 is a gaming GPU, which is even weaker than the 980Ti. That's the real world. And what I proved is that the CPU can be the limiting factor, or that CPU speed can also be the limiting factor (even with lots of cores), and can leave the GPU unable to perform even at just 1080p. This is simple logic.

And did you read? I never pushed him to spend anything. I suggested that he was CPU limited because he was. OP was talking about Geekbench and Cinebench at the beginning, and both of those benchmarks are CPU limited. Then why should I suggest anything else?

Then in post #6, I did tell OP that, in the case of gaming, a faster CPU most likely can help. But I didn't ask him to spend lots of money on a new system; a cheap X5677 (yes, the X5677 is cheap) can 100% help on Cinebench and Geekbench.

In fact, in post #8, I simply told him the truth is that it "depends on his usage"; the X5677 won't always help to improve the 980Ti's performance.

In post #35, I further suggested OP should clearly tell us what he wants to achieve. Otherwise there is no way for us to give him the optimum solution.

And I did say your plan is right: first install Windows (BTW, Windows is MORE EXPENSIVE than an X5677 if OP doesn't own a copy yet), and then further decide if the CPU is still limiting his gaming. I NEVER EVER told OP to get any expensive upgrade before installing Windows for gaming.
 
Ran some tests today. Windows is up to 15% faster on OpenGL, and up to 30% faster on DirectX11.

macOS (OpenGL): [screenshot attached]

BootCamp Windows 8 (OpenGL): [screenshot attached]

BootCamp Windows 8 (DirectX11): [screenshot attached]
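The percentages are just the ratio of the two frame rates; for anyone who wants to redo the maths on their own runs, here's a tiny sketch (the FPS values are placeholders, not the numbers in my screenshots):

```python
# Speedup of Windows over macOS from two FPS readings.
# Placeholder FPS values for illustration only.
def speedup_pct(windows_fps: float, macos_fps: float) -> float:
    return (windows_fps / macos_fps - 1.0) * 100.0

print(f"OpenGL:     +{speedup_pct(69.0, 60.0):.0f}%")   # ~ +15%
print(f"DirectX 11: +{speedup_pct(78.0, 60.0):.0f}%")   # ~ +30%
```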
 