What are you talking about? We are not measuring GPU performance? Then what are you trying to say? What's the point of running the Valley benchmark and emphasizing that it's not CPU limited (anyway, I already proved that it is)?

Yes, we are measuring the card's performance, BUT in a cMP.

You said "I believe unflashed GPU's has a limitation. A 380X must have done much better. Here's what I get with a flashed 270X. Check minimum FPS please. Didn't overclock the card." It is not about GPU performance?

Yes, purely about GPU performance in a Mac Pro. After examining Valley benches all day, I have come to the conclusion that his score is not abnormal. The 380X is hitting the ceiling too.

You only focus on the min FPS and ignore everything else. I already showed you that min FPS is not reliable in the Valley benchmark, because it can drop very low during scene transitions. Guess what? I can get an even lower min FPS in Valley at the lowest settings on my cMP (same OS, same W3690, same RX580).

What does that mean? It means this particular measurement is NOT reliable.
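
To illustrate (a rough sketch with made-up frame times, not my actual Valley data): a single hitch at a scene cut dictates the raw minimum, while a percentile-style "1% low" barely moves.

```python
# Rough sketch with invented frame times: why raw min FPS is a fragile statistic.
frame_times_ms = [16.7] * 990 + [125.0] + [16.7] * 9  # one 125 ms hitch at a scene cut

def fps(ms):
    return 1000.0 / ms

min_fps = fps(max(frame_times_ms))  # the single hitch alone sets the minimum

# "1% low": average of the worst 1% of frames, a far more stable statistic
worst = sorted(frame_times_ms, reverse=True)[:len(frame_times_ms) // 100]
low_1pct_fps = fps(sum(worst) / len(worst))

print(f"min FPS:    {min_fps:.1f}")       # ~8.0, looks alarming
print(f"1% low FPS: {low_1pct_fps:.1f}")  # ~36.3, closer to the real experience
```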

If you can't get a higher min FPS after benching over and over, that's a sign of a problem.

Also, tell me, why do we need a playable frame rate for comparing performance? It's not gaming, just a benchmark. I already gave you an example: 6 FPS is 100% stronger than 3 FPS, what's wrong with that? You made an assumption that we need "playable" frame rates for benchmarking. Why? Where does that come from? What's the difference between "6 FPS vs 3 FPS" and "60 FPS vs 30 FPS"? Numbers are just numbers; 100% stronger is 100% stronger. Going to lower settings just makes the test more easily CPU limited (not necessarily for the whole run, but say 10% of the time is CPU limited; then the result is already unable to accurately show the difference in GPU performance).
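
Here is a toy model (hypothetical numbers, nothing measured here) of that last point: when part of the frame cost is a fixed CPU floor, lowering the settings compresses the measured gap between two GPUs.

```python
# Toy model: each frame costs max(GPU time, CPU time). The CPU side is the
# same on both cards; card B's GPU is exactly twice as fast as card A's.
CPU_MS = 8.0  # hypothetical CPU cost per frame, identical for both cards

def avg_fps(gpu_ms):
    return 1000.0 / max(gpu_ms, CPU_MS)

for setting, gpu_a_ms in [("high settings", 40.0), ("low settings", 10.0)]:
    gpu_b_ms = gpu_a_ms / 2  # true GPU gap: 2.0x
    ratio = avg_fps(gpu_b_ms) / avg_fps(gpu_a_ms)
    print(f"{setting}: measured gap {ratio:.2f}x (true GPU gap 2.00x)")
# high settings: 2.00x, the benchmark sees the real difference
# low settings:  1.25x, the CPU floor hides most of it
```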

It's just more realistic, more like the real world.

I already showed you the CPU usage in my last post; please show me yours.

I will be on holiday for two weeks after Friday. Will gladly do all the benches you asked for. I don't even have a cMP hooked up to a monitor right now.

EFI is completely irrelevant to GPU performance. The 1060 performs badly because the Nvidia web driver has much higher overhead than the AMD driver in macOS. Feel free to try that in Cinebench (you already know it's a 100% CPU-limited benchmark); you will observe the same result, again because it's CPU limited and the Nvidia driver has higher overhead.
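
To put rough numbers on that (a toy illustration with invented figures, not measurements): in a CPU-limited benchmark the driver overhead sits on the critical path, so a heavier driver costs FPS no matter how fast the GPU is.

```python
# Toy illustration with invented numbers: in a CPU-limited benchmark, driver
# overhead adds directly to the per-frame CPU cost.
CPU_WORK_MS = 6.0  # hypothetical CPU cost per frame, same for both drivers

def cpu_limited_fps(driver_overhead_ms):
    return 1000.0 / (CPU_WORK_MS + driver_overhead_ms)

print(f"lighter driver: {cpu_limited_fps(1.0):.0f} FPS")  # ~143
print(f"heavier driver: {cpu_limited_fps(4.0):.0f} FPS")  # 100
# neither number says anything about the GPU silicon itself
```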

This is new info to me. So, the nVidia driver sucks, but the info I posted is still true. A flashed 680 will shine, because it will use Apple's driver. True?


(BTW, what is a GTX 3GB? Where did the R9 390 come from?)

Tested these cards on the same PC. The 3 GB GTX 1060 is faster than an R9 390. Sold the R9 390.

If you want to prove that the Mac EFI can significantly affect performance, please do the following.

1) Install an AMD card with its original PC VBIOS
2) Open Activity Monitor (to make sure the CPU is not limiting during the benchmark; see the sketch after this list)
3) Make sure nothing else is open and the computer is basically idle
4) Run the Valley benchmark at the highest settings (windowed mode, so we can monitor CPU usage)
5) Re-run the Valley benchmark 2 more times to make sure the results are consistent
6) Flash that AMD card with the Mac EFI (the EFI ROM must be created from the original PC VBIOS)
7) Re-do steps 2-5 on the same cMP (with the same spec, of course)
8) Compare the results
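
For step 2, rather than just eyeballing Activity Monitor, a rough script can log the load for you. A minimal sketch, assuming the third-party psutil package (pip install psutil); the 180 s duration and the 90% threshold are my guesses, not hard rules:

```python
# Sample per-core CPU load while Valley loops, and flag near-saturated cores.
import time
import psutil

SAMPLE_SECONDS = 180  # roughly one pass of the Valley scene
peaks = []

end = time.time() + SAMPLE_SECONDS
while time.time() < end:
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # 1 s window
    peaks.append(max(per_core))  # busiest logical core this second

print(f"busiest single core peaked at {max(peaks):.0f}%")
if max(peaks) > 90:
    print("warning: one core is near saturation; the run may be CPU limited")
```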

I had an HD 7950, R9 280, and R9 380 before I moved to the 1080 Ti and RX 580. I can tell you there is absolutely no performance difference from just flashing the card. I didn't keep all the records, so I can't show you. But please prove me wrong. I am more than happy to learn, more than happy to be corrected. But I need some reliable evidence to show me that AMD GPU performance can be significantly improved by adding the Mac EFI.

I won't do it, not just because it's too much work, but also because I do believe you.
 

Maybe you missed something: I showed you my benchmarks, which were also done on the cMP. The only exception is the one from a Hackintosh, to prove that moving to a faster CPU system makes the same GPU do 100% better on max FPS and 50% better on average; therefore, that is a CPU-limited case. But all the other measurements are from my old cMP (a real cMP, not any Hackintosh).

BTW, why does your assumption get to be "realistic"? Am I not a part of the real world? I always tell myself to stay objective and not let my opinions become facts. And why is "we need a playable frame rate for benchmarking" a fact? I can only see that as your assumption, your opinion. No real logic behind it.

Oh, definitely. If a 380X cannot achieve 20-30 min FPS after multiple runs (at medium or even lower settings), but always stays at 8-9 FPS, then something is wrong.

No idea about the GTX 680: never owned one, never tested one, no data on hand, can't tell. But considering there are so many GTX 680 users here, most likely some of them can tell you if there is any noticeable performance difference before and after flashing.

P.S. Please enjoy your holiday. Only do this kind of thing if it's something you're interested in (and you have the time, of course) :D
 

PC to Mac is still apples to pears. The architecture is completely different. Everything is different. When I wrote "Valley is not CPU bound", I meant: take an i3-3320 and an i7-3770 and the results will be very, very similar. I did these benches myself. But when you play online games, the i7 will shine; a lot of calculations have to be done.

Simply because people need at least 60 FPS to play games fluently.

I had one. Sold it in a 2x X5670 Mac Pro. I guess the same logic applies here: it doesn't matter if it's flashed or not. If macOS has its driver, it will shine. I couldn't find the benches, but I remember: the 680 rocks.

LOL, thank you. I'm planning to be at home, lowering prices and selling some stuff, away from the rush of my daily work. I'll also gladly play with and tweak every single piece of hardware I have. I love these toys.
 

Huh? Valley is a GPU benchmark, not a game. I can understand why we want a playable frame rate in games, but this is just a benchmark, not a game. Also, 60 FPS is not a requirement for games; lots of people can play at 30 FPS without problems. Again, I can only see this as your opinion, not fact.

Haha, same here. I love to stay at home on my holidays and play around with my cMP.

Anyway, I think we'd better stop here. We've hijacked this thread for a while already :p
 
Good luck with the project; it will be an awesome Mac Pro. I still love these machines, it's still my go-to machine and it makes me smile. I have mine hooked up to an ASUS 34" PG348Q curved screen, 3440 x 1440, and just love it. I also have an iMac G4, 1.2 GHz with a 20" screen and an SSD, maxed out, on Snow Leopard; another one I would never part with.

I do hope the new Mac Pro is nothing like the trash can Mac. If they kept the Mac Pro 5,1 case, I would be getting one.

Thanks! I think that's what pushes everyone away from the 6,1.. no one likes them: lack of upgradability, ugly case. Don't know what Apple was thinking.
 
So I got the tower today..

I did the 5,1 flash

The App Store didn’t want me to log in to my Apple ID (said invalid password), even though the 2FA prompt was popping up on my phone.

So I bypassed the App Store and downloaded the HS patcher tool.

Downloaded the latest clean copy and closed out the tool.

While installing High Sierra, it asked me to do another firmware flash before it would let me update.

Firmware flashed

Now it’s upgrading to HS
 
You don’t need to use patches after the 4,1>5,1 upgrade; now it’s a fully supported Mac.
 
I used it to download the HS installer only.

The App Store didn’t allow me to log in to my Apple ID.

Probably has something to do with Mountain Lion.
 
Ah, OK! From now on you can just install vanilla macOS from the Mac App Store; no need for patches or hacks past the 5,1 upgrade that you already did.
 
Something is off with those RX 580 tests; it says 256 MB VRAM, while the Nvidia cards report the correct amount of VRAM. The 1080 sure is faster than the RX, but not 3x.. lol

Also wondering why you bought the single CPU tray? Looking forward to seeing this all come together nicely.
 

For a spare

Earlier today
 

It's alive! Picture from yesterday..

Still have to clean up the wiring and secure everything nice and neatly. (photo)
 
Is that your base or OC'd score?

The closest single-core score I can manage in macOS is 3239, on an X5677. My 990X comes in next with a single-core score of 3228 and a multi-core score of 15499 with 56 GB.

How does your dual-proc X5690 setup hold up? I'm seeing a drop in single-core and memory performance going to dual X5677s and 96 GB.

Ok, so here is the thing..

I ran back-to-back Geekbench tests at "stock speed" in macOS Mojave and in Windows 10.

At stock speed, macOS dominates Windows.

macOS (screenshot)

Windows 10 (screenshot)

Here come the "overclock" results, in Windows of course. (screenshot)

Here are the Cinebench scores:

Cinebench multi-core stock, Windows (screenshot)

Cinebench single-core stock (screenshot)

Cinebench multi-core "overclock" (screenshot)

Cinebench single-core "overclock" (screenshot)

Here are the Geekbench links for verification:

https://browser.geekbench.com/v4/cpu/9469210 (macOS Stock)

https://browser.geekbench.com/v4/cpu/9469295 (Windows 10 Stock)

https://browser.geekbench.com/v4/cpu/9469381 (Windows 10 Overclock)

https://valid.x86.fr/8rt3e6 (CPU-Z)

CPU-Z and Intel's own app report 4122.38 MHz.

Others I showed the results to say it's wrong and the reported clock speed is cosmetic.. but in Windows the scores in both CB and GB change and show a performance increase with the overclock.
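
One way to sanity-check the "cosmetic clock" claim (placeholder scores below, and assuming a W3690 at its stock 3.46 GHz; the 4122.38 MHz reading is from CPU-Z above): if the score ratio tracks the clock ratio, the overclock is doing real work.

```python
# Sanity check with placeholder scores: the stock clock and CPU-Z reading are
# from the post above; the benchmark scores below are invented for illustration.
stock_mhz, oc_mhz = 3460.0, 4122.38
stock_score, oc_score = 3200.0, 3700.0  # hypothetical single-core results

clock_ratio = oc_mhz / stock_mhz  # ~1.19x
score_ratio = oc_score / stock_score

print(f"clock ratio {clock_ratio:.2f}x, score ratio {score_ratio:.2f}x")
if score_ratio > 1.05:
    print("scores moved with the clock -> the overclock is doing real work")
else:
    print("scores flat -> the reported clock is probably cosmetic")
```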

CPU-Z "overclock" reading (screenshot)
 


Thanks for sharing. I was scratching my head over how you achieved the higher number; overclocking explains it. The 990X and W3690 are very similar in architecture, except the 990X lacks ECC support. The X5677 is an X5690 with 2 cores disabled.

Your default Windows score looks like Windows is using slower memory timings than macOS.

macOS with a 990X - 48 GB - no overclock (link) (screenshot)

macOS with an X5677 - 24 GB - no overclock (link) (screenshot)

2x X5677 - 96 GB - no overclock (link) (screenshot)
 

Do you have a Windows drive by chance? I'm very curious about the scores in Windows. :)

I failed the X5690 delid (my fault); that's why I'm not running the dual tray at the moment.
 
Sorry to hear about your X5690. I’m not running Windows yet, although installing a copy is on my bucket list. Sounds like a good project for tomorrow. I’m interested to see what Windows reports with Geekbench as well.

I made the jump to dual CPUs on a 5,1 on Monday of this week. Installing the heatsinks on the dual CPU tray is definitely more challenging than on a single CPU tray. Having to delid CPUs and install the heatsinks, risking chipping the corners etc., was something I wanted to avoid.

If I ever opt to upgrade one of my two 4,1s to dual CPUs, I’ll go for a 5,1 tray and spend the extra $100 for a 5,1 logic board, with a lot less stress.
 

New high score in Windows

https://browser.geekbench.com/v4/cpu/9478509
(screenshot)

Let's back up that bench.. with an even higher bench ;) https://browser.geekbench.com/v4/cpu/9478564

(screenshot)
 