If you need ten GPUs, I have a feeling that you should think about whether your needs are beyond what the Apple ecosystem can support.
If you want advice about Linux or Windows systems that support ten GPUs, just ask. (Although five Titan-X cards per system is as far as I've gone so far.)
Splitting the interface signal is the way to go with such high-end, internally connected arrays, and the next part of the project was to do exactly that. But hitting a brick wall so soon with three modern nVidia GPUs running a recent OS X version has dampened my spirits.
Now I am considering system configurations that would permit splitting a single host interface for the video rendering, freeing up the other host interfaces for other peripheral cards (an EFI UI graphics card, controller cards, etc.).
There should be more than enough bandwidth here in a cMP. In fact, on a Nehalem Mac Pro the available bandwidth should be:
PCIe slot 1 = 8 GB/s (x16 at 500 MB/s per lane)
PCIe slot 2 = 8 GB/s (x16 at 500 MB/s per lane)
PCIe slot 3 = 2 GB/s (x4 at 500 MB/s per lane)
PCIe slot 4 = 2 GB/s (x4 at 500 MB/s per lane)
Keep in mind that slots 3 and 4 share resources.
So, you see, there's a lot to work with here. The GPGPU array I have already built should, in theory, work fine connected to a single x16 PCIe slot with the appropriate splitting. We're talking four lanes of PCIe Rev 2 per graphics card.
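As a quick sanity check on the arithmetic above, here is a back-of-the-envelope sketch (the 500 MB/s figure is PCIe 2.0's approximate effective per-lane rate; the slot layout is the cMP's as listed):

```python
# Back-of-the-envelope PCIe 2.0 bandwidth math for the cMP slots above.
PCIE2_PER_LANE_GBPS = 0.5  # ~500 MB/s per lane, effective

slots = {1: 16, 2: 16, 3: 4, 4: 4}  # slot number -> lane count

for slot, lanes in slots.items():
    print(f"Slot {slot}: x{lanes} = {lanes * PCIE2_PER_LANE_GBPS:.0f} GB/s")

# Splitting one x16 slot four ways gives each GPU four lanes:
gpus = 4
lanes_per_gpu = 16 // gpus
bw_per_gpu = lanes_per_gpu * PCIE2_PER_LANE_GBPS
print(f"{gpus} GPUs on one x16 slot: x{lanes_per_gpu} each = {bw_per_gpu:.0f} GB/s per card")
```

So each card in a four-way split of slot 1 would see about 2 GB/s, the same as a card sitting alone in slot 3 or 4.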
For comparison: the nMP (cylinder) has just a 2.5 GB/s connection through a single TB2 port, which is how an eGPU expansion chassis would connect to that system.