
UCDHIUS (original poster):
So I purchased a dual-socket 4,1 cMP in base configuration to use as a project build.

2x 2.26 GHz processors, 14 GB RAM, and a 500GB HDD (case looks to be in near-mint condition).

$600 + $100 shipping (I live in Hawaii and have high shipping costs).

Upgrades I just purchased from eBay:

2x X5680s (6C/12T) for $100 + $17 shipping

ASUS ROG 1080Ti for $550 + $20 shipping

From Amazon:

Samsung 860 500GB SSD and an OWC 2.5-inch SSD sled for $135

3mm CPU wrench, thermal paste, and a dual mini 6-pin PCIe to 8-pin cable for $25

As it sits, I'm at roughly $1,500-1,600 total right now.
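For anyone tallying along, the line items above add up like this (a quick sketch in Python; prices and shipping exactly as listed in this post):

```python
# Build cost tally, straight from the figures in this post.
parts = {
    "4,1 tower + shipping":       600 + 100,
    "2x X5680 + shipping":        100 + 17,
    "ASUS ROG 1080Ti + shipping": 550 + 20,
    "Samsung 860 SSD + OWC sled": 135,
    "CPU wrench, paste, cable":   25,
}
print(sum(parts.values()))  # 1547 -> the "~$1,500-1,600" quoted above
```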

When everything starts coming in, I'll update this thread.

Pictures of the machine below

[four seller photos attached: s-l1600 (1).jpg through s-l1600 (4).jpg]
Are you going to delid the CPUs, or go for the washer technique and thermal pad option?

I'm going to try my hand at delidding them.

Also, the seller of the 1070Ti canceled on me, stating "I can't sell to people with less than 20 eBay purchases."

So I bought an ASUS 1080Ti for $550.
 
I used the vice method to delid. It pays to buy/use a quality vice - I ended up breaking mine - **** metal, crap quality.
 
Some additional thoughts:
- You don't really need a 2.5-inch sled for the SSD. You can use the spare SATA connection in the optical drive bay, where the drive can hang loose or be duct-taped to something. SSDs are essentially vibration-proof.
- I've mounted my SSD both on a PCIe adapter and in the drive bay, with essentially no performance difference. You may want to use PCIe to free up a drive bay for storage, or use a PCIe card capable of outperforming a SATA interface.
- You probably want to do the firmware upgrade to make the machine identify as a 5,1 and accept the latest macOS. Search MR for "netkas.org" and firmware. After doing the homework and getting prepared, it was really easy (a quick verification sketch follows below the list).
- Mac Pro (Early 2009) - tim.id.au <-- Handy!
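On that firmware bullet: one way to confirm where you stand before and after the 4,1-to-5,1 cross-flash is to read the model identifier and Boot ROM version out of system_profiler. A minimal sketch (the flash itself is done with the netkas firmware tool, not this script):

```python
import subprocess

# Read the two fields that show whether the cross-flash took:
# a stock machine reports "MacPro4,1"; after flashing you should see
# "MacPro5,1" with an MP51-series Boot ROM version.
hw = subprocess.run(
    ["system_profiler", "SPHardwareDataType"],
    capture_output=True, text=True, check=True,
).stdout
for line in hw.splitlines():
    if "Model Identifier" in line or "Boot ROM Version" in line:
        print(line.strip())
```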
 
^^^^The CPUs chosen by the OP will not work unless the 5,1 firmware is utilized. So, it's a must.

Lou
 
^^^^The CPUs chosen by the OP will not work unless the 5,1 firmware is utilized. So, it's a must.

Lou

Yes, I'll be upgrading to the 5,1 firmware ASAP.

Some additional thoughts:
- You don't really need a 2.5-inch sled for the SSD. You can use the spare SATA connection in the optical drive bay, where the drive can hang loose or be duct-taped to something. SSDs are essentially vibration-proof.
- Mac Pro (Early 2009) - tim.id.au <-- Handy!

I was thinking of doing that, but said screw it, I'll just buy the sled. It already shipped, so there's no turning back.

I used the vice method to delid. It pays to buy/use a quality vice - I ended up breaking mine - **** metal, crap quality.

I just noticed your build in your signature - it's awesome. I have a 380X in my current PC, so if I ever need it as a backup I'm good to go.

Do you by chance have any benchmark scores? I was deciding whether to go with X5680s or X5690s. I chose the X5680s because I thought the slight clock-speed bump wasn't worth the extra $100.
 
I have so much guff running on my 12-core cMP that I don't trust the benchmark scores, due to all the background processes. FWIW, I got 83,847 on Geekbench 4.1.1 the other day on my 12-core cMP. And on Valley I got this. Not particularly inspiring, but good enough for this part-time gaming hobbyist.
[Valley result screenshot]

I understand there's about a 5% difference between the X5680 and the X5690, as far as performance goes. I bought X5680s for both my single-CPU hexa-core cMP 4,1>5,1 (my son is now using this) and my dual-CPU 12-core cMP 4,1>5,1 because they were roughly 40% cheaper than the X5690s, and that was my primary decision point: $ over %. I think you chose right, even if it's just to make me feel better about my choice lol
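That "about 5%" figure lines up with the spec sheets: the two parts are identical 6C/12T Westmere dies, so the gap is essentially just base clock (a one-line check, clocks from Intel's published specs):

```python
# X5680 base 3.33 GHz vs X5690 base 3.46 GHz; same core, same cache,
# so the raw clock gap is essentially the whole performance story.
print(f"{(3.46 / 3.33 - 1) * 100:.1f}%")  # ~3.9%, i.e. the "about 5%" above
```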
I'm running my boot/apps on an Accelsior S PCIe SATA III adapter with a Samsung 850 EVO 500GB. User data is on a Fusion Drive consisting of an NVMe 960 EVO 500GB fused with a WD Black 2TB spinner. I do see some high R/W speeds at times on my boot drive, often over 300MB/s (loading some games, graphics software, etc.), which warrants realising the full potential of the 850 EVO SSD.

But for the most part, if I put the 850 EVO into one of the direct-connect bays (DCB) (SATA II), I'd gain a few seconds at boot (recognised immediately, versus a ~20 sec delay with the PCIe adapter), and there'd be only a very small chance (<5%?) I'd ever completely saturate the SATA II bus speed in the DCB. Not sure which way I'd go if I started again; however, I recently got a great deal on another Accelsior S adapter from MacSales.com, so I threw that in the boy's hex-core cMP, just because I love pushing data at the fastest possible speeds. If I was into cars I'd have a V24 with a supercharger, a handful of turbos and all that guff, in a Ford Fiesta lol.
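The saturation question is easy to put rough numbers on. A back-of-envelope sketch, assuming nominal link ceilings (~300 MB/s usable on SATA II, ~600 MB/s on SATA III) and the 850 EVO's rated ~540 MB/s sequential read; all figures approximate:

```python
# What the SATA link actually caps, nominal figures only.
DRIVE_SEQ_READ = 540  # MB/s, Samsung 850 EVO rated sequential read
for name, link_mbps in [("SATA II (drive bay)", 300),
                        ("SATA III (PCIe adapter)", 600)]:
    effective = min(DRIVE_SEQ_READ, link_mbps)
    load_5gb_s = 5 * 1024 / effective  # seconds to stream a 5 GB load
    print(f"{name}: ~{effective} MB/s effective, 5 GB in ~{load_5gb_s:.0f}s")
```

So the drive-bay route gives up roughly the difference between ~17 s and ~9 s on a big sequential load; everyday random I/O rarely approaches either ceiling, which matches the "very small chance of saturating SATA II" call above.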
 

Awesome, thanks for the info!
 
So I currently have everything on hand, ready to go. Still waiting on the tower.

2x X5680s (delidded, but need to test)

I picked up a spare single-CPU tray with a W3690 for a good price

ASUS ROG 1080Ti 11GB

HDPlex 360W SFF external PSU (for the GPU)

Dell 330W AC adapter

1x Samsung 500GB SSD (with OWC sled)

2x 1TB 7200RPM HDDs
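On why the external PSU and the Dell brick are in that list: the cMP's in-box PCIe power budget doesn't cover a 1080Ti on its own. A back-of-envelope sketch using commonly cited nominal ratings (treat them as nominal, not measured; the ROG Strix 1080Ti, for example, has two 8-pin sockets):

```python
# cMP PCIe power vs a GTX 1080Ti, commonly cited nominal ratings.
slot_w      = 75   # W from the PCIe slot itself
mini_6pin_w = 75   # W from each of the two mini 6-pin aux feeds
in_box      = slot_w + 2 * mini_6pin_w   # 225 W available in-box
gpu_tdp     = 250  # W, reference 1080Ti board power
print(f"{in_box} W in-box vs {gpu_tdp} W TDP")
# The dual mini 6-pin -> 8-pin cable pools 150 W for one 8-pin socket;
# the HDPlex/Dell brick feeds the second one.
```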
 
I believe unflashed GPUs have a limitation. A 380X should have done much better. Here's what I get with a flashed 270X. Check the minimum FPS, please. I didn't overclock the card.

Edit: Please try removing the GT 120. Maybe you'll get a higher score.

Edit-2: You're running in windowed mode. Maybe try fullscreen too.

[Valley screenshot: flashed 270X result]


 
I believe unflashed GPUs have a limitation. A 380X should have done much better. Here's what I get with a flashed 270X. Check the minimum FPS, please. [...]

That min FPS easily drops to the 8-9 FPS range during scene transitions (especially since he ran it in windowed mode). Not an accurate reference for comparing performance.
 
How about 61.1 vs 57.3 average FPS? If 61.1 is not a ceiling for an X5680 or the Mac Pro, I believe there's something wrong. Maybe insufficient power? Does the 380X use more than 2x 6-pin PCIe power? Did he use a 6-to-8-pin adapter?

If he can bench in fullscreen, it will help us. I'm not home; I can't test windowed mode for about another eight hours.

Edit: And min FPS says something important about CPU performance. I'm talking about PCs too. I bench on a lot of computers.

That min FPS easily drops to the 8-9 FPS range during scene transitions (especially since he ran it in windowed mode). Not an accurate reference for comparing performance.
 
How about 61.1 vs 57.3 average FPS? If 61.1 is not a ceiling for an X5680 or the Mac Pro, I believe there's something wrong. [...]

I know min FPS is a very important factor for gaming benchmarks, because it almost defines the gaming experience. However, what I want to point out is that the Valley benchmark often lets the FPS drop during scene transitions. Therefore, it's not a really good reference for measuring performance.

But I agree that the final score does mean something. However, not because the CPU is limited to 61 FPS, but because the CPU limits the max FPS. When the CPU stops the GPU from delivering more FPS (when the GPU could go further), that eventually lowers the overall average FPS (and the score).

This effect is very obvious when I benchmark my 1080Ti in the cMP.

On the cMP, its score is limited to ~4000, max 157 FPS.
[Valley screenshot: cMP result]

But on my Hackintosh, the same card can deliver 300 FPS, score 6300.
[Valley screenshot: Hackintosh result]

It's all about CPU single-thread performance.

Also, as you can see, the min FPS is worse on the Hackintosh; that's due to the FPS drop during transitions. Nothing to do with the system's performance.
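A toy model makes the argument concrete: if each frame costs roughly max(CPU frame time, GPU frame time), a slow CPU caps the frame rate no matter how fast the GPU is. The millisecond figures below are invented to reproduce the two maxes quoted above, not measured:

```python
# Frame rate under a CPU floor: the slower side of each frame sets the cap.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical per-frame costs chosen to match the numbers above:
print(f"{fps(6.4, 3.3):.0f} FPS")  # old Xeon + 1080Ti: ~156, CPU-bound
print(f"{fps(3.3, 3.3):.0f} FPS")  # fast CPU + same GPU: ~303, GPU-bound
```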
 
All correct, but I'm trying to compare apples to apples. A PC is a pear here.

I will go ahead and say min FPS really matters. It's a very important hint. Please observe your FPS while gaming or doing benches on your 8700K PC. I can test an i7-4790 and GTX 1060 when I'm home. I bet the min FPS won't be that low.

Edit: Umm, you're doing Extreme HD. I won't bet the min FPS will be that high on a 1060 at that setting. I always use the default profile when doing Valley benches; it's easier than having to remember any settings.

 
All correct, but I'm trying to compare apples to apples. A PC is a pear here. [...]

In fact, we need to use the Extreme HD preset to minimise the CPU bottleneck. Running at too low a setting will be too CPU-limiting, and we end up comparing CPU single-thread performance, not GPU performance.
 
I don't think so. There's a reason Valley defaults to High on Windows and Medium on macOS. To get playable frame rates, the majority of people must lower the settings; i.e. most Steam users are on a GTX 750 Ti as far as I remember, not a 980 Ti etc. Default settings give us a realistic idea of performance for most users. Valley is not CPU-bound. See my thread: a slow quad and a fast hexa-core score very close. You can also monitor CPU usage during the benchmark.

In fact, we need to use the Extreme HD preset to minimise the CPU bottleneck. Running at too low a setting will be too CPU-limiting, and we end up comparing CPU single-thread performance, not GPU performance.
 
Valley is not CPU bound

I believed that as well, until I got a 1080Ti; then I realised how CPU-bound the Valley benchmark is.

Also, there is no need to have playable frame rates to compare performance. We just need the numbers, the final result, not to enjoy the graphics. Both 3 FPS and 6 FPS are not "enjoyable", but clearly 6 FPS is 100% stronger; that's it.

Using settings as high as possible is the best way to avoid the CPU single-thread performance issue (which is very serious on a cMP).

There is no need to use any "default" setting to get a "realistic idea". We can use any preset to compare the performance difference between different systems (yes, it's the whole system, not just the GPU): Normal compares to Normal, Extreme compares to Extreme. That's it. Default or not doesn't really matter; there's no need to over-interpret why it defaults to Normal or High. We just need to know how to use the tool properly to extract the info we want. In the case of the cMP, the highest setting is best. Most newer Macs have a much faster CPU but a much slower GPU; for them, a lower setting won't make any difference, because the GPU is always the most limiting factor (for this benchmark). But we cMP users are in a very different position: we can have a slow CPU but a very strong GPU. Do you think Valley defaulting to the medium setting is particularly good for the cMP? Or for most other Macs?

If we only want to compare the GPU, using the highest preset is the best way to "extract" the GPU performance. It's still not 100% accurate, but it's much better than using a lower setting (which has a much higher chance of becoming CPU-limiting).

I already showed you in my previous post how Valley can be CPU-limiting. Same GPU, same OS, same API, different CPU, ~100% difference in max frame rate. The slower CPU clearly limited the frame rate to ~150 FPS.

And I can show you how bad Valley is at showing GPU performance at low-to-medium settings (on a cMP, so apples to apples now).

As you can see, two results. I got both today on my cMP: W3690, RX580, latest 10.13.6.

At the low setting, low resolution, windowed mode, the min, average, and max FPS are ALL lower than in the medium-setting run. Does that make sense?

[two Valley screenshots: low-setting run vs medium-setting run]

Also, the numbers are in roughly the same range. What does that mean? It means it's completely CPU-limiting. The benchmark is unable to measure the Apple-recommended Sapphire PULSE RX580 8GB's performance correctly at Valley's default medium setting at 1080p.

We cMP users are in a very different situation than normal Mac users. We need to look carefully at the facts and think about what they mean. Our CPUs are too old, and it is very easy for things to become CPU single-thread limited. We'd better do everything possible to eliminate that when we want to measure GPU performance.

You said "See my thread: a slow quad and a fast hexa-core score very close. You can also monitor CPU usage during the benchmark."

In your thread, if I read it correctly, for the same GPU (identified as a 7950 or 7xxx) at the same settings, you get a better result when you move to a faster CPU.

And CPU usage? No doubt: 100% on a single core at your recommended "default" setting, because it's CPU single-thread performance limited.
[screenshot: one core pegged at 100% during the run]
 
I believed that as well, until I got a 1080Ti; then I realised how CPU-bound the Valley benchmark is. [...]

We already know the cMP itself is a bottleneck for very strong GPUs. Anything above the GTX 960 / 280X level is overkill. We must see the platform's ability while benchmarking on a cMP, no? "How much of a bottleneck is the cMP for GPU X?" That's the question. We don't need to compare a 1080Ti to a Vega 64, for instance; TechPowerUp has great, in-depth benchmark data comparing an enormous number of GPUs.

I observed that the Valley benchmark shows a computer's general gaming performance. We can grab that information while benchmarking. Why ruin it?

Do you think Valley defaulting to the medium setting is particularly good for the cMP? Or for most other Macs?

Unigine set the default to medium for OpenGL on the Mac because achieving playable frame rates at higher settings is hard for any Mac.

I already showed you in my previous post how Valley can be CPU-limiting. Same GPU, same OS, same API, different CPU, ~100% difference in max frame rate. The slower CPU clearly limited the frame rate to ~150 FPS.

Please observe CPU usage while benchmarking. The CPU is not the only factor: PCIe speed, RAM speed, system bus speed, and the PSU all come into play. A 1080Ti is an overpowered beast for a cMP. Certainly overkill.

And I can show you how bad Valley is at showing GPU performance at low-to-medium settings (on a cMP, so apples to apples now). [...] At the low setting, low resolution, windowed mode, the min, average, and max FPS are ALL lower than in the medium-setting run. Does that make sense?

It doesn't make sense at all. I observed this on a 1,1 and a 3,1. Any card will get the same (or scarily close) OpenGL scores in Cinebench, and in War Thunder on Windows (not Boot Camp, a direct install).

Also, the numbers are in roughly the same range. What does that mean? It means it's completely CPU-limiting. The benchmark is unable to measure the Apple-recommended Sapphire PULSE RX580 8GB's performance correctly at Valley's default medium setting at 1080p.

That's why it's overkill, but don't forget the EFI factor. In my thread again, you can see a GTX 1060 getting very bad results. I think that's because of the lack of Apple EFI.

We cMP users are in a very different situation than normal Mac users. We need to look carefully at the facts and think about what they mean. Our CPUs are too old, and it is very easy for things to become CPU single-thread limited. We'd better do everything possible to eliminate that when we want to measure GPU performance.

Again, we don't have to measure the real GPU performance; there's always TechPowerUp. We need to know which cards can show their potential and which can't. We need to know whether those two 6-pin power feeds are enough for us. We mustn't put power-hungry beasts in cMPs.

In your thread, if I read it correctly, for the same GPU (identified as a 7950) at the same settings, you get a better result when you move to a faster CPU. And that's quite proportional to the CPU speed as well.

When I look at those AND your 1080Ti's results with the W3690 (X5690), I see two things clearly:

1- A ceiling on the cMP's max FPS, no matter what. Your 1080Ti is ultra-heavily bottlenecked.
2- Unflashed PC cards have a penalty, because a GTX 3 GB is faster than an R9 390 (I had both cards at the same time), but the results are clear: it loses on the cMP.

Edit: Please see this thread too:
https://forums.macrumors.com/threads/mac-performance-guide-valley-benchmark-surprise.1699703/
It's amusing.

Edit-2: When a 1080Ti or dual 7950s make sense:
http://barefeats.com/imac5K_vs_pros.html
Not for gaming.
 

You're focusing on benchmarks and forgetting the big picture.

Some cards are used for GPGPU, others as CUDA processors; other cards have 4K@60Hz support or HEVC decoding, and so on.

Looking only at raw benchmark numbers is short-sighted.

For people who don't need CUDA/boot screens/FV2, the best card for the cMP today is the RX 580. But for people who do need CUDA/boot screens/FV2, a flashed GTX 1080 is better. For other needs, other cards, and so on.
 
You're focusing on benchmarks and forgetting the big picture. [...]

This is purely a gaming-oriented look. I edited my previous post for other uses before reading your post. Under heavy workloads, stronger cards shine, depending on the APIs used.
 
We already know the cMP itself is a bottleneck for very strong GPUs. [...] Again, we don't have to measure the real GPU performance. [...] Unflashed PC cards have a penalty. [...]

What are you talking about? We are not measuring GPU performance? Then what do you want to say? What's the point of running the Valley benchmark while you emphasise it's not CPU-limiting (anyway, I already proved that it is)?

You said "I believe unflashed GPUs have a limitation. A 380X should have done much better. Here's what I get with a flashed 270X. Check the minimum FPS, please. I didn't overclock the card." Is that not about GPU performance?

You only focus on the min FPS but ignore everything else. And I already showed you that min FPS is not reliable in the Valley benchmark, because it can go very low during scene transitions. Guess what? I can get an even lower min FPS in Valley with the lowest setting on my cMP (same OS, same W3690, same RX580).
[Valley screenshot: lowest-setting run]

What does that mean? It means this particular measurement is NOT reliable.

If we are not using Valley to measure the difference in GPU performance, then what are we doing here?

Also, tell me why we need a playable frame rate for comparing performance. It's not gaming, just a benchmark. I already gave you an example: 6 FPS is 100% stronger than 3 FPS; what's wrong with that? You made an assumption that we need "playable" frame rates for benchmarking. Why? Where does that come from? What's the difference between "6 FPS vs 3 FPS" and "60 FPS vs 30 FPS"? Numbers are just numbers; 100% stronger is 100% stronger. Going to a lower setting just makes the test more easily CPU-limited (not necessarily for the whole run, but if, say, 10% of the time is CPU-limited, then the result is already unable to accurately show the difference in GPU performance).

I already showed you the CPU usage in my last post; please show me yours.

Everything makes sense: the lowest setting has roughly the same result as the medium setting because it's CPU-limiting.

EFI is completely irrelevant to GPU performance. The 1060 performs badly because the Nvidia web driver has much higher overhead than the AMD driver in macOS. Please feel free to try that in Cinebench (you already know it is a 100% CPU-limited benchmark); you will observe the same result, again because it's CPU-limiting and the Nvidia GPU has higher driver overhead.

NO, you didn't show anything about the Mac EFI having anything to do with performance on an AMD card. Absolutely zero proof so far. (BTW, what is a "GTX 3 GB"? Where did the R9 390 come from?)

If you want to prove that the Mac EFI can significantly affect performance, please do the following:

1) Install an AMD card with the original PC VBIOS
2) Open Activity Monitor, to make sure the CPU is not the limit during the benchmark (or script the check; see the sketch after this list)
3) Make sure nothing else is open and the computer is quite idle
4) Run the Valley benchmark at the highest setting (windowed mode, which allows us to monitor CPU usage)
5) Re-run the Valley benchmark 2 more times to make sure the results are consistent
6) Flash that AMD card with the Mac EFI (the EFI ROM must be created from the original PC VBIOS)
7) Redo steps 2-5 on the same cMP (with the same spec, of course)
8) Compare the results
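For step 2, a scripted alternative to eyeballing Activity Monitor: log per-core load while Valley runs, so a pegged single core is obvious in the output. A minimal sketch, assuming the third-party psutil package (pip install psutil):

```python
import psutil

# Sample per-core CPU load once a second for ~60 s while Valley runs.
# One core pinned near 100% while the rest idle means the run is
# CPU single-thread limited.
for _ in range(60):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(f"busiest core: {max(per_core):5.1f}%  all cores: {per_core}")
```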

I had an HD7950, R9 280, and R9 380 before I moved to the 1080Ti and RX580. I can tell you there is absolutely no performance difference from just flashing the card (even the resistor mod usually shows zero benefit as well). I didn't keep all the records, so I can't show you. But please prove me wrong. I am more than happy to learn, and more than happy to be corrected. But I need some reliable evidence showing that an AMD GPU's performance can be significantly improved by adding the Mac EFI.

Please, make it happen. Just focusing on the min FPS of one particular benchmark won't help solve this problem.
 
Good luck with the project; it will be an awesome Mac Pro. I still love these machines. Mine is still my go-to machine and makes me smile. I have it hooked up to an ASUS 34" PG348Q curved screen, 3440x1440, and just love it. I also have an iMac G4, 1.2GHz with a 20" screen and an SSD, maxed out on Snow Leopard. Another one I would never part with.

I do hope the new Mac Pro is nothing like the trash-can Mac. If they kept the Mac Pro 5,1 case, I would be getting one.
 