Very nice, thanks for sharing! I would have thought that cards sandwiched together like that would be bad for heat, though.
All servers. Not a single monitor will be connected to any card.
And twenty cards means that I'm upgrading five servers.
View attachment 697606
These are Titan X (Pascal) cards in a server that I'm not upgrading. Note the 24 DIMM slots.
440 cores and 20 GPUs in 7U (InfiniBand version):
https://www.supermicro.nl/products/superblade/module/SBI-7128RG-F2.cfm
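That presumably breaks down as ten dual-socket GPU blades in the 7U enclosure. A rough sanity check (the 22-core CPUs and two GPUs per blade are my assumptions, not stated in the link):

    blades = 10              # blades per 7U SuperBlade enclosure (assumed)
    cores = blades * 2 * 22  # 2 CPUs per blade, 22 cores each = 440 (assumed CPU choice)
    gpus = blades * 2        # 2 GPUs per blade = 20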
@ about $5K per GPU.
I thought maybe your Ti support was unofficial and they would also work here.
Never seen them over 50°C under load. There's a lot of airflow through the cards from the chassis fans, and the inlet temperature is 15°C.
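If you want to watch the same numbers on your own cards, something like this does the job (a rough sketch that just polls nvidia-smi every few seconds; assumes the NVIDIA driver tools are installed):

    import subprocess, time

    # Poll nvidia-smi for per-GPU temperature and power draw.
    while True:
        out = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=index,name,temperature.gpu,power.draw",
            "--format=csv,noheader",
        ]).decode()
        print(out.strip())
        time.sleep(5)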
That chassis sucks air through the front-mounted GPUs and exhausts to the rear - it's designed for fanless GPUs.
I see front slots in the manual, but probably the GPU fans would not be effective enough, and you would not like to stand in the aisle.
GPU fans would blow the wrong way and fight the chassis fans. The GPUs have no fans.
I know the supported accelerators have no fans; normal GPUs do.
View attachment 699007
And I know there would be no intake from the front, but blades with front drives do not seem to provide much.
It would seem there's some intake from the sides, but it does not look like enough.
The manuals do not seem to mention cooling at all.
Maybe it can intake through the rear bays, if they are not all populated?
There are bays above and below the PSUs.
...full of InfiniBand switches, management controllers, network ports,...
I know; if you can do without the InfiniBand, the minimum should be 2 or 3 bays populated (a blade without IB costs less than $1K, while with IB it costs $2K+).
Aren't there "founder" cards with the power connector on the rear?Also note that the space is very tight around the GPUs - only "Founder's Edition" and blowerless form factor will fit. (And you need special low-profile right angle power plugs - there isn't enough space to plug a normal aux PCIe connector.)
I've seen statements that all of the "Founder's Edition" cards are identical - literally manufactured by Nvidia and put in Asus/EVGA/PNY/MSI... boxes. (At least for the 1080 Ti.)
Aren't current server GPUs made for specific thermal designs? For example, under 150W TDP, under 225W TDP, under 300W TDP?
P100: 300W
And is all of what you are talking about just standardizing this?
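Related: the board power limit those classes refer to can be read straight off a card. A minimal sketch, again assuming nvidia-smi is available (power.limit and power.max_limit are standard query fields):

    import subprocess

    # Print each GPU's name plus its current and maximum board power limit.
    print(subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=name,power.limit,power.max_limit",
        "--format=csv",
    ]).decode())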