Do you have a pic/drawing of what you have in mind? What kind of rack are you picturing?

Just an old cabinet rack. That tower looks like it could go in sideways; the MP will go in sideways too, and you'd be able to keep your 12" spacing. That'd keep all the cables routed and cut the noise back, along with keeping everything cleaner. I'd take a picture of ours at work, but I can't because it's in a SCIF. I think you know what I mean, though.
 
NAB Show 2015

I attended the show from April 13th to 16th. A local commented, "Las Vegas is either windy and cool or windy and warm or windy and hot." I've experienced all three almost every day here. But what has really blown me away is a High Density Compute Accelerator from One Stop Systems [ http://www.onestopsystems.com ]. This product is sold through http://www.maxexpansion.com . It's said that it can add 139.8 Tflops of computational power using 16 Nvidia K80 GPUs at 8.74 Tflops each. The system has:
1) Four PCIe x16 Gen3 Adapter Cards and Cables (that attach to a 3U rackmount chassis) and
2) Four Integrated Canisters - each with a front mounted intake fan to cool up to four PCIe x16 Gen3 GPUs per canister (4 canisters x 4 GPUs = 16 GPUs).

The system is powered by three 3,000 watt redundant PSUs.
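As a quick arithmetic check on the vendor's headline figure, the aggregate is just the per-GPU number times the GPU count (a sketch using only the figures quoted above):

```python
# Sanity-check the One Stop Systems claim: 16 K80s at 8.74 Tflops each.
GPUS_PER_CANISTER = 4
CANISTERS = 4
TFLOPS_PER_K80 = 8.74  # vendor-quoted figure per K80

total_gpus = CANISTERS * GPUS_PER_CANISTER   # 16
total_tflops = total_gpus * TFLOPS_PER_K80   # 139.84, i.e. the ~139.8 claim
print(total_gpus, round(total_tflops, 2))
```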

It can be loaded with up to 16 of the following:
1) Teslas;
2) Grid GPUs;
3) Intel Xeon Phi Coprocessors;
4) GTX GPUs and
5) AMD GPUs.

GPUs can be mixed to the same extent that they can be mixed in a single computer system if all four PCIe adapters are placed in one system; but it also appears that the GPUs can be mixed to an even greater extent, because the PCIe adapter cards appear to be connectable to separate host systems simultaneously.
 

They look amazingly efficient on space - very tidy.

How does this tie in with the limits you have found on the number of GPUs per system? Did the 8 GPU limit apply to Windows as well, or just OSX? I can't remember...
 
When choices were fewer, decisions were easier, but things ran slower. Now flip that 180.


It appears that the number of GPUs per system is mainly dictated by the following factors:

1) system IO space size,
2) other system bios features or their equivalent,
3) PCIe implementation of the particular motherboard,
4) the particular GPU and each GPU's IO space requirements,
5) the particular software application, and
6) the OS.

And don't forget that these factors may interact in non-obvious ways with one another.

Supplementation and examples:
Factors 1, 2, 3 and 4) Amfeltec [ "The motherboard limitation is for all general purpose motherboards. Some vendors like ASUS supports maximum 7 GPUs, some can support 8. All GPUs requesting IO space in the limited low 640K RAM. The motherboard BIOS allocated IO space first for the on motherboard peripheral and then the space that left can be allocated for GPUs. To be able support 7-8 GPUs on the general purpose motherboard sometimes requested disable extra peripherals to free up more IO space for GPUs. The server type motherboards like Super Micro (for example X9DRX+-F) can support 12-13 GPUs in dual CPU configuration. It is possible because Super Micro use on motherboard peripheral that doesn’t request IO space."[Emphasis added]] and other providers of PCIe expansion products caution that exceeding 7 to 8 GPUs per system is possible only with certain Supermicro systems, because only they have IO space sufficient to recognize GPU counts above 8 or 9. However, Tommes was able to get Octane to recognize only 7 of his 10 GPUs [ https://render.otoy.com/forum/viewtopic.php?f=23&t=44209 ], despite the fact that his Windows OS system running on his Supermicro X9DRX+-F recognized all ten of his GTX 780 TIs. But GTX 780 TIs might be a special-case GPU - see discussion of Factor 4, below.

Factors 2 and 5) A) OctaneRender's license [ see, e.g., https://render.otoy.com/shop/nuke_plugin.php ] states: " A maximum of 12 GPU's may be used. You will not attempt to circumvent the physical GPU or single machine license limit, including obfuscating or impairment of the direct communication between Octane and the physical GPUs, virtualization, shimming, custom BIOS etc.; [Emphasis added]" B) Redshift3d states: "Each instance of Redshift can currently use up to 8 GPUs concurrently. To take advantage of more than 8 GPUs on a single machine, you can launch multiple instances of Redshift each rendering a different job on a different subset of available GPUs."

Factor 3) My cMP2,1s can recognize and run stably only 4 GPUs (or 4 GPU processors = 2x GTX 590), which matches the number of PCIe slots in the system.

Factor 4) Running 8 GTX 780 Ti ACX SC OCs in my Tyan steals resources necessary to recognize all of the system RAM that I've installed, so less than the full amount of installed RAM is recognized. However, neither my 8 GTX Titans, nor my 4 Titan Zs (4x2 GPU processors), rob other system resources when installed in the same system, i.e., all of my system's RAM is recognized. Thus, I believe that certain GPUs consume more IO space than others [see discussion of Tommes's issue in Factors 1, 2, 3 and 4), above].

Moreover, running 16 GTX GPUs, even at a TDP of 250w each, consumes a lot of power, and depending on the application each of them could be consuming 300w or more: 16 * 250w = 4,000w; 16 * 300w = 4,800w. Very few PSU systems can handle that without special power/electrical precautions being employed. Thus, a single 3,000w PSU could sufficiently power only a portion of those GTX cards.
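The wattage arithmetic above can be sketched as follows; note that the ~20% PSU headroom figure is my own assumption for illustration, not something from the thread:

```python
# Power-budget sketch for the 16-GPU loadout discussed above.
GPU_COUNT = 16

def total_draw(watts_per_gpu, count=GPU_COUNT):
    """GPU draw alone; ignores host CPUs, fans and PSU inefficiency."""
    return watts_per_gpu * count

print(total_draw(250))   # 4000 W at a 250 W TDP
print(total_draw(300))   # 4800 W if each card peaks near 300 W

# Cards one 3,000 W PSU could feed at a 300 W peak, keeping ~20% headroom
# (the headroom figure is an assumption):
PSU_WATTS = 3000
print(int(PSU_WATTS * 0.8 // 300))
```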

Factor 6) As to the OS, I do not yet know what the max GPU limit of OSX is, if any. I do know that Mavericks 10.9.2 has a 32 CPU core limit. Moreover, all that I've read suggests that the best OSes for large numbers of GPUs per system are:
1) Linux,
2) Windows and
3) OSX
in that order.


OctaneRender provides a cure for at least some missing GPUs running under Windows:
"Issue 9. Windows and the Nvidia driver see all available GPU's, but OctaneRender™ does not.

There are occasions when using more than two video cards that Windows and the Nvidia driver properly register all cards, but OctaneRender™ does not see them. This can be addressed by updating the registry. This involves adjusting critical OS files, it is not supported by the OctaneRender™ Team.

1) Start the registry editor (Start button, type "regedit" and launch it.)

2) Navigate to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}

3) You will see keys for each video card starting with "0000" and then "0001", etc.

4) Under each of the keys identified in 3 for each video card, add two dword values:
DisplayLessPolicy
LimitVideoPresentSources
and set each value to 1

5) Once these have been added to each of the video cards, shut down Regedit and then reboot.

6) OctaneRender™ should now see all video cards.
" [ http://render.otoy.com/manuals/Standalone/?page_id=62 ]
I've had to resort to this technique to get both the system and Octane to recognize my eight GTX 780 TI ACX SC OCs in my Tyan. It worked successfully [ https://render.otoy.com/octanebench/summary_detail_item.php?systemID=8x+GTX+780+Ti ].
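For anyone who'd rather script the registry tweak quoted above than click through regedit, here's a sketch that only prints the equivalent `reg add` commands for review; nothing is written to the registry by this script, the subkey count is whatever your own system shows, and you apply the output at your own risk, then reboot:

```python
# Generate (not run) the `reg add` commands for the OctaneRender
# missing-GPU workaround quoted above.
KEY_BASE = (r"HKLM\SYSTEM\CurrentControlSet\Control\Class"
            r"\{4D36E968-E325-11CE-BFC1-08002BE10318}")
DWORDS = ("DisplayLessPolicy", "LimitVideoPresentSources")

def regedit_commands(card_count):
    """One DWORD=1 pair per video-card subkey 0000, 0001, ..."""
    cmds = []
    for i in range(card_count):
        subkey = "%s\\%04d" % (KEY_BASE, i)
        for name in DWORDS:
            cmds.append('reg add "%s" /v %s /t REG_DWORD /d 1 /f'
                        % (subkey, name))
    return cmds

for cmd in regedit_commands(8):   # e.g. eight GTX 780 Tis
    print(cmd)
```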


TheaRender provides a little more guidance:"- Configure Windows Watchdog (Windows only). Windows runs a service, called "watchdog" that monitors the graphic driver. If the driver does not respond within 2 seconds, it decides that there is a kind of instability so it terminates and restarts the driver process. The driver is the process responsible for handling all Presto commands to the GPU and - unfortunately - Presto is responsible for keeping the driver super-busy when there is a heavy rendering job.
You can actually configure watchdog service. And this is the recommendation to stay in the safe side, in all cases. In that situation, you can even set your device priorities to Highest (which means fastest Presto - at the expense of a less responsive graphic system).
Read here [ http://msdn.microsoft.com/en-us/library/windows/hardware/ff553890(v=vs.85).aspx ] about watchdog service.
" [ https://www.thearender.com/site/index.php/features/engines/presto-gpu-cpu.html ]
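On Windows, the watchdog TheaRender describes is the TDR (Timeout Detection and Recovery) mechanism from the MSDN page linked above; its 2-second default can be lengthened via the `TdrDelay` registry value. A minimal sketch that only builds the command string (applying it and rebooting is up to you):

```python
# Build (not run) a command to lengthen the Windows TDR watchdog timeout.
TDR_KEY = r"HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def tdr_delay_command(seconds):
    """Raise the driver-recovery timeout from its 2 s default."""
    return ('reg add "%s" /v TdrDelay /t REG_DWORD /d %d /f'
            % (TDR_KEY, seconds))

print(tdr_delay_command(10))
```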

Did the 8 GPU limit apply to Windows as well, or just OSX? I can't remember...

My current impression is that there are a number of factors at play when one tries to install more GPUs in a system than its number of factory-based PCIe slots. My current belief is that an 8 GPU limit is much too high (about twice the actual limit) for an Apple system (pre-2013) running OSX. I also do not know for certain whether an 8 GPU limit applies to the latest Windows OSes, but I am beginning to doubt that it's based on the OS; rather, it has more to do with the particular system, application and GPUs. I haven't seen any evidence of a single system running more than 8 GPUs on OctaneRender, FurryBall, TheaRender or Redshift3d. Moreover, I haven't been able to find any GPU maximums for either FurryBall (" Do I need FurryBall licence for each GPU? No, the license is per workstation - number of GPUs in computer is UNLIMITED") or TheaRender. Within the next few weeks, I intend to install Amfeltec (SKU-042-43) GPU oriented PCIe 4-way Splitters in my quad E5-4650 Supermicro systems. I'll keep you apprised of what I learn from that process because I hope to test, at a minimum, 16, 12 and 8 GPU processor configurations.

P.S. I opted for the Amfeltec splitters over the chassis because of the transfer speed and price of the splitters and the fact that I have some classic tower motherboard chassis that I can Dremel into GPU expansion service [ see "Waste Not - Want Not" post no. 1223, above.]
 
When the Titan X will be available (again) here I'll have 5 video cards to play with and I'll try if OSX can handle them or not.

BTW: I am still searching for a tool which shows me the activity of _all_ of my GPUs - do you know of such a tool (for OSX - and if the tool can show more information like temperature and so on, that'll be a nice bonus)?
 

The GPU processors themselves can affect how many a system can run stably. I was using 3 GTX 590s and only one of the GPU processors of the 3rd GTX 590 would come and go. I never could get both GPU processors of the 3rd GTX 590 to show simultaneously under OSX on my cMP2,1. I hope that you can get five of your GPU processors running stably. Please keep us posted on your success.

Regarding your query concerning a GPU monitoring tool that works under OSX, I haven't been keeping up with what, other than iStat Menus [ http://bjango.com/mac/istatmenus/ ] , is out there. If anyone has some information to share, please let us know.
 

I'll let you know as soon as I have all the cards here and running; that'll then be a mixed system of 2x GTX Titans, 1x GTX 980, 1x GTX Titan X and, just for checking whether 5 GPUs will be recognized, one Radeon "Dont_know_which_exactly_it_is".

For a first try this should do the job, if that works I'll go and get another GTX card.



Yes, iStat is what I have running - unfortunately it just shows two of the (at the moment) existing three cards. I've sent the developers of iStat a note/request asking if it is possible to see more, but I guess it is tooooo specific for them to be interested in implementing it (which I really understand... I guess there are not many out there with more than two cards in their systems).
 
Modding a mac

Can somebody send me a video tutorial link for this, please? I am having trouble understanding this. :confused::p
 
I got the Titan X - so my system is now running with 2x Titans, 1x GTX 980, 1x Titan X.

This system was now running rock solid for a few days, so I decided today to do the test and added a Radeon HD5770 to the system.
I am getting no startup chime and the system doesn't come up - I am running this system headless and have no screen which I can use with this Mac, so I don't know if anything is shown or if the system really is just "dead".
 
All fellow travelers are appreciated.


I don't and won't try to prevent any of my cousins from exceeding my expectations, because I wouldn't want that done to me. On the other hand, I try not to be shy about expressing my opinions, particularly those based on my past experience, because I do not like to waste my money and the listener may be as frugal as I am.

With that said, if your system boots and runs normally when you subtract the HD5770 from the mix, then your system runs just as I'd expect it to, nothing more/nothing less. Mixing Radeons and GTXs has always been problematic (even mixing certain different GTXs can do the same); so has exceeding the 4 GPU limit on a cMP. Some of what prevents most systems from exceeding a specific GPU count is the system's IO space, PCIe implementation and the mix of GPUs [ http://render.otoy.com/forum/viewtopic.php?f=40&t=43597 ]. With the added Radeon HD5770, you were challenging at least three of those factors. Moreover, your system had to deal not just with different GPUs, but with different drivers (AMD and Nvidia) also.

So if you end up being able to run only four Nvidia GPUs successfully on your Mac, you can consider yourself ahead of the game. If you have any free PCIe slots left, then you may be able to run some more non-GPU PCIe cards, but just remember that they too pose the same risks. If you find a way to run that AMD card along with the four Nvidia cards, please publish here how you conquered that challenge. Although some applications, like OctaneRender or Blender, may not care how many different GPUs are mixed in a particular system, if the cards prevent the system from booting or from seeing all of the GPUs (even if the system boots normally), then it matters not what OctaneRender or Blender can do if it is in opposition to the system's capabilities.
Some GPUs, like the GTX 780 TI ACX SC OCs, are IO space hogs on their own; some GPUs, like the lowly/cheap GT 640, have caused my Supermicro system to experience IO space issues when paired with more than two GTX 780 6Gs.

We're frontiersmen/women, using GPUs to do some of the things that were previously within the domain of the CPU. The more we push the boundaries of the GPU solution, the more we learn about best practices. So just before I learned of your post, immediately preceding, I had added the following to my thread entitled "Best Practices For Building A Multiple GPU System" on OctaneRender's forum: "NEVER PLAN TO PUT/BUY DIFFERENT CARDS IF YOU THINK ABOUT WATERCOOLING" [per Smicha]. ALSO, NEVER PLAN TO PUT/BUY DIFFERENT CARDS IF YOU THINK ABOUT CREATING A MASSIVELY PARALLEL PROCESSING SYSTEM, UNLESS YOU ARE SURE THAT THE MIX OF CARDS WILL NOT CAUSE A RESOURCE/IO SPACE ISSUE [ http://render.otoy.com/forum/viewtopic.php?f=40&t=43597&p=231838#p231838 ]. Those best practices are just that - best practices. You and anyone else are welcome to challenge any of them. So long as you detail how you met the challenge, you've helped to improve our store of knowledge in a replicable way.

Thanks for being a fellow traveler and sharing your experiences.
 
Well, the Radeon worked fine side-by-side with my GTX cards. I used it when I had just two GTX cards, the Radeon was for driving the (virtual) screen.

The only difference was: the Radeon was inside of my cMP while the GTXs were on the cluster board. Now I have the Titan X inside the cMP, and the other three GTXs and the Radeon are on the cluster. Maybe it's worth trying to swap the cards? I don't know if that'll change anything.

I am now at the point where this is getting more and more interesting, but I don't have the knowledge to get into the depths - my personal goal would be having as many GPUs as possible on a Mac, i.e., stretching out past the four GPU limit.

Watercooling, as you mention it, is the next step I will go and do - I am already looking for components, seeing that this is another way to spend a LOT of money on.

If I am not able to push past the 4 card limit, I think the next step will be: getting a new system and maxing it out, then getting them to work together. And I know my better half is thinking I am going mad, as this is not something I need for my life... It's just for... well, what? ;)
 

Sedor,
Thanks for the update.
My experience is similar to yours in this respect: after receiving and installing my Amfeltec splitters, I could run my GT 640 with one or two GTX 780s connected to the splitter, but as I increased the number of GTX 780s beyond 2 using the splitters, the GT 640 became the weakest link and began to prevent booting. So you can probably mix a couple of cards without much trouble, but when your goal is higher, sticking with the same cards makes the process easier. I just got some more GTX 780 6Gs. They're $350 (USD) EVGA refurbs from NewEgg. I'll keep you posted on how they work out in my Supermicro, using the Amfeltec splitters.
 
I am looking forward to your results with the Amfeltec splitters!

I now have two ideas and don't know if they are worth following:
- seeing what happens if I try to connect more than four cards to a new MacPro (in total, counting the internal cards)
- getting an Xserve and seeing if it can handle more cards; maybe it has more resources free?
 

This one is worth a try: "seeing what happens if I try to connect more than four cards to a new MacPro (in total, counting the internal cards)." But I'd just borrow the fifth GTX card if I didn't already own it. I'd pass on purchasing an Xserve.
 

Will do that test at the weekend, as I must first check whether my nMP is still correctly "hacked" for running the card(s).

Okay, so you don't think an Xserve will make a difference - it was just a thought, as I can't find any in-depth specifications which would help to decide where the max GPU limit really is hidden.
 
Dealing with GPU IO space requirements & what to look for in a mobo

Here's what Gordan79 [ http://forums.evga.com/Evga-SR2-with-8-GPUs-m2056924-p3.aspx ], who appears to be very knowledgeable on the issue of the IO space requirements of Nvidia GPUs (and the EVGA SR-2 motherboard), has to say on the subject (when discussing putting 10 and 14 GPUs on the SR-2):

"There should be no problem with the number of GPUs as long as you don't run out of PCIe I/O memory. In theory, the BIOS on the SR-2 should allow for up to 3GB of PCIe I/O memory, but it isn't particularly clever with how it allocates it. It doesn't get allocated anywhere nearly contiguously, and there are gaps it won't reuse, so in reality you are limited to a lot less than 3GB due to the crap BIOS. Also, as you may infer from the 3GB limit, it will only map I/O memory below the 32-bit limit, even though most GPUs advertise their required I/O memory as 64-bit capable.

A typical GeForce card requires:
1) 1x 128MB block
2) 1x 32MB block
3) 1x 16MB block
adding up to a total of 176MB. So in theory, you should be able to get up to 17 GeForce GPUs running on the SR-2. In practice due to the way the BIOS allocates the I/O memory you'll be lucky to get anywhere near that.

Tesla and Quadro cards are different in that they have much bigger I/O memory demands (you can modify the BIOS to adjust that on both GeForce and Tesla/Quadro cards), so obviously you'll get much fewer of those in without VBIOS modifications... . The only setting in the BIOS that is relevant to running many GPUs is the memory hole (which is settable to a maximum of 3GB). No other setting is relevant to running many devices with large I/O memory requirements. There is no way to influence where the BIOS maps the different I/O memory blocks, so it will either work or it won't, and if it doesn't, your only hope is to start modifying the soft straps on the VBIOS.

Those are documented here:
https://indefero.0x04.net/p/envytools/source/tree/ed2/hwdocs/pstraps.txt
The bits you would need to adjust are:
BAR 0: bits 17,18,19
BAR 1: bits 14,15,20,21,22
BAR 2: bit 23

To adjust them, dump the BIOS and look at the location appropriate to your GPU to get the current strap (remember to swap the byte order) and modify it accordingly. Flash the new strap to the GPU with nvflash --straps. Getting it wrong will in many cases result in a bricked GPU and you will need to unbrick it by booting it with the BIOS chip disabled, then re-flash it. If you are planning to experiment with this I highly recommend soldering a switch and a resistor (in series) across the VCC and GND for easy unbricking.
... .

IMO, given what you are spending on GPUs already, you would do well to invest into a decent motherboard first. Something with a UEFI firmware would be a good start.

Given you are trying to use up to 14 GPUs (7 slots with dual GPU cards), you will need at least 2688MB of PCI IOMEM area. Although the SR-2 can theoretically deliver up to 3GB, I would be very surprised if the BIOS' IOMEM mapping had an allocation occupancy good enough to give you enough. Also note that SR-2 uses a legacy BIOS, not an UEFI one, so all BARs, regardless of whether they are 64-bit capable, have to get mapped under the 4GB limit (hence the 3GB IOMEM limitation)." [Emphasis added]

Obviously, Gordan, as well as many others, is not a big fan of the SR-2's bios implementation.
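Gordan's per-card numbers can be turned into a quick budget check; this is the in-theory ceiling only, since, as he notes, BIOS fragmentation keeps the practical figure well below it:

```python
# Back-of-envelope check of Gordan's PCIe I/O memory figures quoted above.
BAR_BLOCKS_MB = (128, 32, 16)   # typical GeForce BAR requests
WINDOW_MB = 3 * 1024            # the SR-2's 3 GB 32-bit memory hole

per_card = sum(BAR_BLOCKS_MB)
print(per_card)                  # 176 MB per GeForce card
print(WINDOW_MB // per_card)     # 17 cards in theory, fewer in practice
```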
 
I'd love to know how much free PCIe I/O memory a MacPro has - maybe then I could be tempted into modifying my GPUs.

Yes, from a logical point of view I should go and get a decent motherboard, no question! But then there is this idea of getting the max out of my MacPro.

Anyway, now I have to save up some money and then decide which steps I will really take next - I've spent too much money this year on GPUs ;)

This week I'll get the components for my water cooling project...
 
Hi Tutor,

I have the opportunity to purchase a Dell PowerEdge C6100 Server for a very hefty discount. The unit is a cluster server of 4 nodes, each with the potential to run up to 12c/24t PER NODE. My question is:

I currently do a lot of video editing under OS X, much of it is evolving into 4K formatted video. My current workstation is powerful enough to edit the video, but it's taking a lot of CPU time. I'm wondering if there is a way to offload the video renders to this machine. Should I purchase it, would it be possible to use something like Grid Batch software to allow this powerhouse to work as a render slave for Premiere Pro (my video editing software of choice)? Estimated Geekbench puts the maximum uninhibited output at just under 75,000 Multi Core, but that would require all of the nodes to work at maximum capacity without a bottleneck. Quite a lot of theoretical horsepower.

Which OS would be best for this, on both the client and server end? Is this even possible?

Thanks in advance.

-N
 

Hello NotNice,

I hope that you're happy, healthy and prospering.

The equivalent Geekbench 3 performance of the Dell PowerEdge C6100 using 4x2xX5690s should be ~28,000 to 32,000 per node [ http://browser.primatelabs.com/geekbench3/546322 ], or between 112,000 and 128,000 multicore for all four modules fully loaded. However, Windows Server products yield some horribly low Geekbench scores, though that doesn't affect the performance of real world applications running on the server. Just run the fully loaded system using Linux Mint to get true performance.*/ Also, you can splurge and buy a disc.**/
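The per-node estimate scales to the whole-chassis range as follows, assuming (optimistically) linear scaling across the four nodes with no shared bottleneck:

```python
# Aggregate Geekbench 3 estimate for the four C6100 nodes discussed above.
NODES = 4
PER_NODE = (28_000, 32_000)   # estimated multicore range per node

chassis = tuple(NODES * score for score in PER_NODE)
print(chassis)   # (112000, 128000)
```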

I’d run the Dell using Windows Server 2012 or 2008 ***/, using Premiere Pro for Windows.

Are you running the X99 Workstation i7-5820K as a hackintosh or only as a Windows system? I’m not familiar enough with either Grid Batch or your setup (are you running OSX on the X99 or just the MP1,1?) to feel comfortable making a recommendation to use it at this stage. Nevertheless, I’d recommend that you PM AidenShaw regarding your using Grid Batch. Also, consider what your license costs will be. Add-on license cost - [ http://www.softwaremedia.com/micros...Da7lVfVXYNWOG72oQXntW3_mK2jZyF3lSLhoCUJTw_wcB ] vs/and/or standard license cost [ http://www.mychoicesoftware.com/pro...osoft-windows-server-2008-r2-standard-license ]. Aiden also probably has a better handle on what licenses you need to purchase for a blade/node system. I haven't been as fortunate, as you are now, to confront this question.


*/ http://www.linuxmint.com .

**/ https://www.osdisc.com/products/linux/linuxmint?q=&sort=pricehigh&view=60 .

***/ Dell recommended, “Microsoft® Windows Server® 2008 R2 Enterprise and HPC Server 2008 R2 x64.” [ http://www.dell.com/downloads/global/products/pedge/en/poweedge-c6100-spec-sheet-100810.pdf ] Windows server 2008 - end of life [ https://support.microsoft.com/en-us...&alpha=windows server 2008 R2&Filter=FilterNO ].
 

Thanks so much for the quick response.

The limit in the C6100 is that each processor has a TDP limit of 90W, so the X5690s that we're so fond of here won't work. I've got my eye on L5640s, which are 6 cores at 2.26GHz and have a rough 12 core Geekbench of around 18,500. The X99 Workstation runs both OS X and Windows, but I primarily use Premiere under OS X. Ignore that Mac Pro - it's sitting at my work and rarely sees use, let alone for something like this.

I'll read up on those. It's still research phase right now, but it's looking promising.

Thanks again.

-N
 
Hi Tutor, just wanted to reach out and say hi :). I hope things are going well with you. I wanted to find out (since I was considering building a new system soon), what would be the best on the market for:

1) Mobo (multiple CPU mobo is great if reliable for a Hackintosh)
2) 3 x GPU's
3) CPU (or CPU's depending on the mobo)

This would be for a setup for a Hackintosh with a case that will fit a HPTX (same size as an SR-2 mobo) that is reliable and would work. EVGA hasn't put out any new Dual (OC'd) CPU mobos and they don't have any plans yet for any kind of new releases. I wanted to do 3 x Titan X's, but again, don't know how compatible they are with a Hackintosh setup. I also hear that a new CPU will be coming out in January and wonder if I should wait until then. Let me know your thoughts my friend.

P.S. - I spoke with RampageDev and he says that the new OS (Mavericks or Yosemite) doesn't work with multiple monitor (MM) setups (like what I currently have setup with 3 x 30" 2560x1600 monitors). Do you know if the 980's or Titan X's work with Hackintoshes and MM's? Again, let me know what you can. Later… :cool:
 
Will do that test at the weekend, as I must check before if my nMP is still correctly "hacked" for running the card(s).

Okay, so you don't think an Xserve will make a difference. It was just a thought, as I can't find any in-depth specifications that would help decide where the real maximum GPU limit is hidden.

Hey guys, really happy to see you here working on something that parallels what I have been working on.

Sedor, I believe that you and I are the only ones posting about the nMP with eGPU at Techinferno. You had CUDA running on the nMP. I have been able to get a display recognized as the primary display via an Nvidia card on the nMP, and I think I am the only one to have done so. It required a special eEFI that allowed the boot screen to display on a GTX Titan Black instead of on the D300s.

The whole thing gets much more complicated when trying to get into Windows. I actually posted in your thread over there asking about Windows. The fascinating thing about the nMP is the incredible web of PCI-to-PCI bridges that connect things. And in every attempt I have made, with or without eEFI, I cannot get the nMP to work for display output or CUDA in Windows. I always end up with an "Error 12" ("Windows cannot find enough resources...", etc.).

I enlisted the help of Nando4 over there. He pointed me at a thread about getting eGPU working on a MacBook of some sort. That guy was able to connect things by booting into the EFI shell and then using MMIO commands to link things. I dug into this and did everything I could; the best I got was display output, but I still couldn't get the driver to load.

Anyhow, I think that we are fighting a similar battle. Some good reads include the TOLUD battles on the eGPU boards. The basic issue they fight is that all of the device addressing lands in 32-bit space, which doesn't leave enough room for the GPUs, so they compact some of it and switch to 36-bit addressing to gain space. There is a thread on this board from when people first started doing true EFI boots of Win 7 and Win 8; the issue there was that GPUs weren't working in Windows, the same Error 12 issue we hit with eGPU.
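A rough back-of-the-envelope illustration of why that 32-bit space runs out (all the sizes below are assumptions for illustration, not values measured on the nMP):

```python
# Sketch: device BARs (the memory windows each PCIe card requests) must
# fit below 4 GiB unless the firmware relocates them higher -- the
# "36-bit" trick mentioned above. With assumed sizes:
MIB = 1024 ** 2
mmio_window = 1024 * MIB   # assume ~1 GiB usable below 4 GiB for devices
bar_per_gpu = 256 * MIB    # assume a GTX-class card requests a 256 MiB BAR

max_gpus = mmio_window // bar_per_gpu
print(max_gpus)  # -> 4 with these assumed sizes, before other devices
                 #    (bridges, NICs, etc.) take their share
```

So even generous assumptions only leave room for a handful of GPUs, which is consistent with Windows throwing Error 12 once a few eGPUs are attached.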

Anyhow, it may be possible to use some of the methods mentioned on the eGPU boards (DSDT overrides, MMIO commands in the EFI shell, etc.).

I did an lspci using a GTX 680 that had been eEFI'd into running a display, and the output didn't match what those threads expected, however. I have been unable to get the nMP to see a 2nd eGPU, IIRC. I am able to test a variety of setups: I have an infinite number of GPUs and an infinite number of TB2 enclosures (OK, an exaggeration; I only have 12 of those).

But my attempts to get the 2014 Mini to do SLI failed horribly, partially because HyperSLI isn't being supported and the replacement is harder to get running, but also because I seem to run into issues with more than 2 or 3 eGPUs.

So, Sedor, I would be thrilled to know if you ever tried eGPU with nMP and:

1. Display output from eGPU in OS X.
2. Any function (CUDA or display output) in Windows.

EFI can help get a card "introduced" to the OS, but doing many at once doesn't seem to be possible. There is, however, someone much brighter than I who will shortly be joining the eGPU fray. He may find more useful answers than I have.

I have had some fun already: I was being told off over there by someone with connections at Netstor, the TB enclosure company. They (Netstor) flat-out swore that the 2014 Mini could only drive 2 external Nvidia GPUs. So I wired up 3 and posted a screenshot.

It's the little things.
 
Hi Tutor, just wanted to reach out and say hi :). I hope things are going well with you.

Hello PunkNugget,

Thanks immensely for visiting and for all that you have done to help open my mind to the advantages of CUDA. I hope that you are happy, healthy and prosperous.


PunkNugget said:
I wanted to find out (since I was considering building a new system soon), what would be the best on the market for:

1) Mobo (multiple CPU mobo is great if reliable for a Hackintosh)
2) 3 x GPU's
3) CPU (or CPU's depending on the mobo)

This would be for a setup for a Hackintosh with a case that will fit a HPTX (same size as an SR-2 mobo) that is reliable and would work. EVGA hasn't put out any new Dual (OC'd) CPU mobos and they don't have any plans yet for any kind of new releases. I wanted to do 3 x Titan X's, but again, don't know how compatible they are with a Hackintosh setup. I also hear that a new CPU will be coming out in January and wonder if I should wait until then. Let me know your thoughts my friend.


If I were going to purchase a dual-CPU/multi-GPU motherboard currently, my first choice would now be a Supermicro X10DAX-O Dual LGA2011/ Intel C612/ DDR4/ SATA3&USB3.0/ A&2GbE/ EATX server motherboard [ http://www.supermicro.com/manuals/motherboard/C612/MNL-1563.pdf ]. It can be purchased from Superbiiz for $405 [ http://www.superbiiz.com/detail.php?name=MB-X10DAX ]. It's a dual-socket Intel Xeon 26xx v3 board with 3x PCI-Express 3.0 x16 slots, 2x PCI-Express 3.0 x8 slots, and 1x PCI-Express 2.0 x8 slot (runs at x4), and it's overclockable (by only < 1.05% */). It got a 9.4 review score here: http://www.servethehome.com/supermicro-x10dax-review-hyperspeed-motherboard/ . If you were to use 3x x16-to-x16 and 2x x8-to-x8 riser cables (which can usually be found on eBay for about $5 to $10 each), then you could load the system with 5 double-wide GPUs. This motherboard supports Titan X GPUs. As to its suitability for a Hackintosh, I'd suggest that you do as I would do and consult RampageDev.

PunkNugget said:
P.S. - I spoke with RampageDev and he says that the new OS (Mavericks or Yosemite) doesn't work with multiple monitor (MM) setups (like what I currently have setup with 3 x 30" 2560x1600 monitors). Do you know if the 980's or Titan X's work with Hackintoshes and MM's? Again, let me know what you can. Later… :cool:

I don't have any Maxwell GPUs, so I can't tell you definitively whether the 980s and Titan Xs would work, but I do know that owners of 2009-2012 Mac Pros are running 980s and Titan Xs, so I see no reason why those GPUs wouldn't work with Yosemite. I don't recall exactly how the driver release dates match up with the release of those GPUs, but my fuzzy recollection is that Mavericks preceded the release of OS X drivers for them, so Mavericks may not support those GPUs.


*/ - The predecessor to the X10DAX-O (for Sandy and Ivy Bridge), the $630 Supermicro X9DAX-IF-O Dual LGA2011/ Intel C602/ DDR3/ SATA3&USB3.0/ A&2GbE/ Enhanced EATX server motherboard [ http://www.superbiiz.com/detail.php?name=MB-X9DAXF&c=CJ ], could be overclocked by the full 1.0755% allowable for post-Westmere CPUs; but given the other technological enhancements in the latest CPUs, the overclocking differential found in the earlier DAX is more than likely just a wash in terms of true performance.

P.S. While the Supermicro X10DAX-O supports up to two 18-core 26xx v3 CPUs, you might get the greatest performance in a Hackintosh environment by not exceeding 16 real cores per CPU, because my last experiments with Mavericks showed that it had a cap of 32 CPU cores (real cores/threads and/or hyper-cores/threads combined). I'm not sure whether this same limit applies to Yosemite; but if it does, then staying within the 32-core limit for all CPUs combined by using only real CPU cores (turning Hyper-Threading off) will yield the best CPU performance under OS X.
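If you want to verify how many cores a given OS X install actually exposes (and so whether you'd be bumping into that cap), a quick standard-library check is:

```python
# Report the number of logical CPUs the OS exposes to user space.
# Under a core-capped OS, this number -- not the hardware total --
# is what your applications can actually schedule onto.
import os

logical = os.cpu_count()
print(f"logical CPUs visible to the OS: {logical}")
```

On a machine with Hyper-Threading enabled this counts hyper-threads too, so a 2x 16-core box would report 64 logical CPUs, well past the 32 cap described above; with Hyper-Threading off it would report 32.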
 