Well, I tried something out for the first time yesterday. It seems that no one has had the balls to overclock the nMP.

Well, I took mine (6-core, D500s) up to 950 MHz (core) and 1400 MHz (memory). (For reference, the core clock is 725 MHz stock.) This is on Windows 8.1 with the AMD Omega drivers, using MSI Afterburner with unofficial overclocking mode enabled ("without" PowerPlay) and ULPS disabled.

It didn't even break a sweat. Guess those "server grade" cards and the monster heat sink really do what they are meant to do. Maximum GPU temp after a couple of hours of gaming was 70 degrees Celsius!!! And this was on a hot summer's day! Clocks never dropped once, no throttling, no artifacts, no crashes. The fan didn't even ramp up that high, but I set it to 1900 rpm just in case. Everything ran completely smoothly, capped at 60 FPS at 1440p. Diablo 3 in CrossFire mode with Vsync enabled (AFR-friendly) runs at 4K, capped at 60 FPS (via DXtory).
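
For a back-of-the-envelope sense of what that core bump is worth on paper, here is a quick Python sketch. The stream-processor count (1526 per D500) and the 2-FLOPs-per-SP-per-clock figure are my assumptions based on published Tahiti-class specs, not numbers from the post, and real-world gains will be smaller than the theoretical ~31%.

```python
# Back-of-the-envelope throughput for one FirePro D500 before and after the overclock.
# Assumptions (not from the post): 1526 stream processors per GPU, 2 FP32 FLOPs
# per stream processor per clock (a fused multiply-add), as on other Tahiti-class parts.

STREAM_PROCESSORS = 1526
FLOPS_PER_SP_PER_CLOCK = 2

def fp32_tflops(core_mhz: float) -> float:
    """Theoretical single-precision throughput of one GPU, in TFLOPS."""
    return STREAM_PROCESSORS * FLOPS_PER_SP_PER_CLOCK * core_mhz * 1e6 / 1e12

stock_mhz, oc_mhz = 725, 950
print(f"Core clock bump: {(oc_mhz / stock_mhz - 1) * 100:.0f}%")        # ~31%
print(f"Stock:       {fp32_tflops(stock_mhz):.2f} TFLOPS per GPU")      # ~2.21
print(f"Overclocked: {fp32_tflops(oc_mhz):.2f} TFLOPS per GPU")         # ~2.90
```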
 

That's fascinating. Nice work. Too bad we don't have those kinds of tools in OS X. I wonder how much headroom the GPUs have in them :D
 
It is interesting to (re)visit this thread in 2019.

It shows what a failure the 6,1 was. Not only were they stuck with less-than-perfect GPUs at release, they were not even able to keep THOSE from failing at a high rate, let alone get people better ones in later models.
 
IMHO, the main drawback is the lack of second-CPU support. 16- and 24-core configurations would be great.
 
Would that have been enough to make up for the total lack of CUDA support and the mediocre AMD compute performance?

For those who don't need graphics power, a silent, small, 16-24-core machine with official Apple support would be a dream.
 
Why would you want dozens of CPU cores (which usually have slower per-core performance) when GPUs are used more and more for rendering and have better performance per dollar and per watt? All those cores won't help with most of the tasks the Mac Pro is typically used for these days.
 

You should tell both Intel and AMD that increasing CPU core count is not the way to go :)
 
Power10s in 2020.

That's maybe 2020, and more likely 2021. GlobalFoundries fumbled the process Power10 was supposed to roll out on. IBM is switching to a future Samsung process (one that is probably customized to IBM's design constraints).


https://www.anandtech.com/show/13740/ibm-samsungs-7nm-euv-power-z-cpu

While Samsung may get the 7nm EUV process they have been planning for the last several years working in 2019-20 (with more than just limited layers), they have work to do. And IBM is doing a placeholder 2019 Power9 version (in the article above).


Samsung is already doing limited-layer 7nm:

https://www.anandtech.com/show/1349...ction-of-chips-using-its-7nm-euv-process-tech

What IBM will probably be shooting for is an incrementally higher transistor density, which is going to take time. Especially so when IBM will also be asking for relatively large dies and will have a long, detailed beta evaluation process. That is substantially different from what Samsung had planned to do (which was much more focused on rolling out 'low power' increments in 2019-20).

https://www.anandtech.com/show/1279...ap-euvbased-7lpp-ready-for-2018-3-nm-incoming

Samsung and IBM can do this, but it is a change in plans for both. That typically does not lead to a shorter product rollout.


OpenPOWER systems are around, though. Folks who wanted to push their HPC CPU workload off to a compute farm could, and still can. The notion that the Mac Pro has to be an internal-box solution for many of those loads is a stretch. The utterly insatiable core-count market isn't where the 2006-12 Mac Pro was at either. It is unlikely that Apple is going to switch up and start chasing that now.
You should tell both Intel and AMD that increasing CPU core count is not the way to go :)
You mean the two of them, both of which have 'big' GPU projects in flight? They already know.

Their large x86 high-core-count products are aimed at least as much (if not more so) at large multi-user systems (virtualization, 'back end' systems, etc.) as at single-user, single-app computational workloads.
 
Why would you want dozens of CPU cores (which usually have slower per-core performance) when GPUs are used more and more for rendering and have better performance per dollar and per watt? All those cores won't help with most of the tasks the Mac Pro is typically used for these days.

And 640K is enough for everybody.....

I am glad you know my workflow better than I do.
 

That Bill Gates quote is a misquote. He was referring to the version of DOS at the time and not any future operating system. Only clueless noobs could imagine that Bill Gates couldn't fathom that new operating systems would be more resource-hungry.


You should tell both Intel and AMD that increasing CPU core count is not the way to go :)

For servers, those cores are needed. But if we are talking about workstations, GPU compute has surpassed anything the CPUs can do (with less energy consumption and at lower cost), so we no longer need so many CPU cores or dual CPUs in a workstation.

There are some apps, even some games, that can benefit from an 8-10+ core CPU, but in most cases the lower clock speed is a detriment.
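
To make the performance-per-watt and per-dollar argument concrete, here is a tiny Python sketch. The TFLOPS, wattage, and price figures are placeholder assumptions for illustration only, not specs or benchmarks of any real CPU or GPU; the point is only how the ratios fall out when a workload actually maps onto the GPU.

```python
# Illustrative only: rough FLOPS-per-watt and FLOPS-per-dollar arithmetic.
# The figures below are placeholder assumptions, not measurements of real hardware.

parts = {
    # name: (peak TFLOPS, watts, price in dollars) -- illustrative placeholders
    "many-core CPU":   (1.0, 165, 1800),
    "workstation GPU": (10.0, 250, 1000),
}

for name, (tflops, watts, price) in parts.items():
    gflops_per_watt = tflops * 1000 / watts
    gflops_per_dollar = tflops * 1000 / price
    print(f"{name:16s} {gflops_per_watt:6.1f} GFLOPS/W {gflops_per_dollar:6.1f} GFLOPS/$")
```

Of course, the comparison only holds for work that parallelizes onto the GPU in the first place; serial or branchy code sees none of that advantage.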
 
How much of this is SW related, though? If the OSes and SW took better advantage of multiple cores.......
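
That caveat is essentially Amdahl's law: the serial fraction of a job caps the speedup no matter how many cores you throw at it. A minimal sketch, with illustrative parallel fractions and the 8/16/24 core counts mentioned in the thread:

```python
# Amdahl's law: best-case speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):      # illustrative parallel fractions
    for n in (8, 16, 24):         # core counts discussed in this thread
        print(f"p={p:.2f}  cores={n:2d}  speedup={amdahl_speedup(p, n):5.2f}x")
```

Even at 90% parallel, 24 cores only buy you about a 7x speedup, which is why per-core clock speed still matters for most desktop workloads.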
 
That Bill Gates quote is a misquote. He was referring to the version of DOS at the time and not any future operating system. Only clueless noobs could imagine that Bill Gates couldn't fathom that new operating systems would be more resource-hungry.

For servers, those cores are needed. But if we are talking about workstations, GPU compute has surpassed anything the CPUs can do (with less energy consumption and at lower cost), so we no longer need so many CPU cores or dual CPUs in a workstation.

There are some apps, even some games, that can benefit from an 8-10+ core CPU, but in most cases the lower clock speed is a detriment.

In 2019, almost every application in my workflow (3D art) will use every cycle and every scrap of RAM I can throw at it - even the free software. GPU-based rendering, at the moment, is mostly CUDA-based, which kinda eliminates Macs (although the AMD ProRender engine is starting to mitigate this). Jumping willy-nilly between render engines is a waste of time; you have to relearn how each engine handles materials. Been there, done that.

As for 640K, I was in IT at that time - so please, stop trying to rewrite history.

And no, MS never did figure it out - look at how many memory extenders were on the market at the time and for how long they were needed. And then, joy, we had the EXACT same issue with Windows 3.0 (the GDI heap memory limitation [32K] - once it was filled, your system ground to a halt). Windows 3.1 doubled it to 64K, which meant that there were STILL memory issues that crippled one's ability to multitask on 386, 486, and Pentium chips.

This is why MS hired Dave Cutler and brought over all of his people from DEC's PRISM project (which became NT), just like how MS bought QDOS from Seattle Computer Products and renamed it MS-DOS (after switching the forward slash to a backslash).

Dealing with MS's inability to figure out memory management (for YEARS) is what led me to jump ship from Windows 3.0 to OS/2. Well, that and the fact that most software (at $500 an application) written for Windows 3.0 wouldn't run on Windows 3.1 (except for Microsoft software, fancy that).
 