The blade has an intake at the bottom of the front.

Another question is what would happen if you have both intakes and exhausts on the same side.
 
Exactly

7128RG-F2_angle.jpg


You see the vent openings for 20 Tesla cards.
 
I am not saying it would necessarily work, just that there are three additional vents and powerful fans.

I understand that the two at the rear probably can't work as intakes, because from the manual it seems the PSUs pull air from that area.
 
This thread reminds me of an argument I had with my ex. On hot days I wanted to have two house fans, one pushing and one pulling, for maximum flow of cool outside air.

She wanted both fans to blow inward, to have the cool outside air coming in. She thought that having one of the fans blow outward, against the cool air, was stupid.

Of course, anyone in a relationship knows this is a no-win scenario. Either (A) I make her happy, point both fans inward, but the house doesn't get cool, or (B) I demonstrate how much better it is with push-pull, the house gets cooler, but she's mad.
 
This thread reminds me of an argument I had with my ex. On hot days I wanted to have two house fans, one pushing and one pulling, for maximum flow of cool outside air.

She wanted both fans to blow inward, to have the cool outside air coming in. She thought that having one of the fans blow outward, against the cool air, was stupid.

Of course, anyone in a relationship knows this is a no-win scenario. Either (A) I make her happy, point both fans inward, but the house doesn't get cool, or (B) I demonstrate how much better it is with push-pull, the house gets cooler, but she's mad.
There's no intake fan. If you add a blower it would be perpendicular to the rear fans.

Where this falls apart is if the rear bay area flows directly into the PSUs; and even if it doesn't, the three small intake openings are probably not big enough.

Then there's the problem of the intakes being next to the exhausts.

After that there's the problem of stacking in a rack, and opposing racks.

Maybe the last problems would be solvable with a floor-ceiling push-pull room.
 
Best of show:

IMG_1596.jpg


Self-driving tractor - and of course it's a John Deere.


IMG_1599.jpg


Self-driving truck.


Honorable mention:

IMG_1598.jpg


eGPU
 
There's no intake fan. If you add a blower it would be perpendicular to the rear fans.

Where this falls apart is if the rear bay area flows directly into the PSUs; and even if it doesn't, the three small intake openings are probably not big enough.

Then there's the problem of the intakes being next to the exhausts.

After that there's the problem of stacking in a rack, and opposing racks.

Maybe the last problems would be solvable with a floor-ceiling push-pull room.
I think this concept is really asking for horizontal blades.
 
There's no intake fan. If you add a blower it would be perpendicular to the rear fans.

Where this falls apart is if the rear bay area flows directly into the PSUs; and even if it doesn't, the three small intake openings are probably not big enough.

Then there's the problem of the intakes being next to the exhausts.

After that there's the problem of stacking in a rack, and opposing racks.

Maybe the last problems would be solvable with a floor-ceiling push-pull room.

Did you mean to quote someone else? I went off topic and was talking about houses, not rack servers. ;)
 
Make a graphics card with a reversed fan.
How do you make a centrifugal sucker? Blowers are rather one-directional. Even if you reverse the spin, it still blows - but not as efficiently.

Get a card without a blower or fan. Problem solved. Or remove the blower from a blower card.
 
How do you make a centrifugal sucker? Blowers are rather one-directional. Even if you reverse the spin, it still blows - but not as efficiently.

Get a card without a blower or fan. Problem solved. Or remove the blower from a blower card.
I was thinking about a cross-flow fan.
 
I was thinking about a cross-flow fan.
The chassis is designed for fanless (and blowerless) cards. Go with the (air)flow.

I saw lots of 3U boxes at GPUtech today, with up to 16 dual-slot GPUs. (One is the "16 eGPU" image I posted a few back https://forums.macrumors.com/thread...rence-second-day.2043059/page-4#post-24564043.)

All fanless/blowerless. There was one 16-GPU 3U system that had an option for a 4U version - they added extra height so that GTX cards with top-mounted aux PCIe connectors could be used.
 
The chassis is designed for fanless (and blowerless) cards. Go with the (air)flow.
No. The point is whether the design also makes sense for desktops, so that the cards can be cheap.

Which could be the case: instead of sucking air from behind the drives, bring "fresh" air in from outside and exhaust it through the top of the card. For better effect, a normal enclosure would need a rotated, bottom-mounted PSU.
 
Aren't server GPUs currently made for specific thermal designs? For example, under 150 W TDP, under 225 W TDP, under 300 W TDP?

And all of what you are talking about is just standardizing this?
All GPUs are built for particular power constraints. (Not thermal constraints, although input power and output thermals are closely tied. The power supply determines the max supplied, but the system's thermal design may limit it as well.)

These power constraints are:
  • 75 watts - a PCIe x16 graphics slot can deliver a max of 75 watts to the GPU. GPUs without aux PCIe connections have to be under this limit. (Note that some systems do not supply 75 watts to all x16 slots - so beware.)
  • 150 watts - a six-pin connector (or 2x3 in some literature) supplies up to 75 watts. So a card with one six-pin connector can be up to 150 watts (75 from the PCIe slot plus 75 from the six-pin connector).
  • 225 watts - a card with dual six-pin connectors can use up to 225 watts (75 from the PCIe slot plus 2*75 from the six-pin connectors).
  • 225 watts - an eight-pin connector can provide 150 watts, so a card with one eight-pin connector can also do 225 (75 watts from the slot and 150 watts from the eight-pin).
  • 300 watts - an eight-pin connector plus a six-pin connector can do 300 watts (75 watts from the slot, 75 watts from the six-pin, plus 150 watts from the eight-pin).
In theory, other combinations could be possible - but these five options cover just about everything on the market. (HPE servers ship with ten-pin connectors for the GPU aux PCIe power leads. They ship an assortment of power leads - 10 -> 6, 10 -> dual 6, 10 -> 8, 10 -> 6+8, 10 -> triple 6. I've never seen a triple 6 - but I have twenty or so of those power leads in case one shows up.)
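
To make the arithmetic concrete, here is a minimal sketch of those budgets (just an illustration; the numbers come from the list above, the function itself is made up):

```python
# Power-budget arithmetic: x16 slot = 75 W, 6-pin = 75 W, 8-pin = 150 W.
def max_board_power(six_pin=0, eight_pin=0, slot_watts=75):
    """Upper bound on card power from the slot plus aux connectors."""
    return slot_watts + six_pin * 75 + eight_pin * 150

print(max_board_power())                        # 75 W  - slot only
print(max_board_power(six_pin=1))               # 150 W
print(max_board_power(six_pin=2))               # 225 W
print(max_board_power(eight_pin=1))             # 225 W
print(max_board_power(six_pin=1, eight_pin=1))  # 300 W - room for a 250 W 8+6 card
```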

My boatload of GTX 1080 Ti cards that I need to start installing are 250 watt cards, so they have 8+6 connections.

And, just what is a "server GPU"? I don't think that is really a category. There are "compute-only" GPUs without video output ports - but they're the same chips/memory/etc as a "graphics" GPU - they're just missing the circuitry and ports for DP/HDMI/DVI. I put "graphics" GPUs in my servers (but don't connect any monitors); other people put "compute-only" GPUs in their workstations (and only connect monitors to a "graphics" GPU in the system).
No. The point is whether the design also makes sense for desktops, so that the cards can be cheap.
You post a link to a server room rackmount chassis that's $50K before you add CPUs, memory and GPUs, and then you bring "cheap desktop" into the discussion?

And where'd you come up with the idea that these systems have "drives"?

I'm probably done with your tangent.
 
You post a link to a server room rackmount chassis that's $50K before you add CPUs, memory and GPUs, and then you bring "cheap desktop" into the discussion?

I'm probably done with your tangent.
Where did you see that the chassis plus 10 blades would be $50K??

There's a difference between buying 20x $5000 cards and 20x $700 cards.

And while you could fill it with 22-core CPUs, it would not always be necessary to go that far.

I did not say "cheap desktop". I said "cheap cards".

Changing topic, there are also cards with dual 8-pin connectors, like this one:

 
Where did you see that the chassis plus 10 blades would be $50K??
Educated guess - I've bought many millions of dollars of enterprise systems, and sometimes in budget meetings have to come up with quick estimates without any research.

Prove me wrong. Tell me what
  • empty ten slot chassis
  • four 3kw power supplies
  • management controller
  • InfiniBand switch
  • GbE switch
  • management software
  • 3 year support contract for management software and firmware updates (yes, you have to pay for firmware updates month to month)
  • ten empty GPU blades (dual socket, no CPU, no RAM, no GPU)
  • InfiniBand controllers on each blade
  • 3 year support contract for hardware maintenance (get the cheap next business day support)
would cost. (And of course, this assumes that you already have 10GbE and InfiniBand networking installed.)

Then add 20 CPUs, 20 GPUs, 10,240 GiB of RAM....

And then tie that into "cheap desktop". ;)

You're out of your league. Cut your losses and go post about the new Emojis.
You're out of your league. Cut your losses and go post about the new Emojis.

Just as a reference point, today at GPUtech I was talking to the people selling this eGPU box:

img_1598.jpg


About $20K without GPUs.

And it's an eGPU expander without CPUs, RAM, or networking.

(and if I'd posted the full resolution image - you'd see that it has three Tesla P100 cards and seventeen K80s)
 
Educated guess - I've bought many millions of dollars of enterprise systems, and sometimes in budget meetings have to come up with quick estimates without any research.

Prove me wrong. Tell me what
  • empty ten slot chassis
  • four 3kw power supplies
  • management controller
  • InfiniBand switch
  • GbE switch
  • management software
  • 3 year support contract for management software and firmware updates (yes, you have to pay for firmware updates month to month)
  • ten empty GPU blades (dual socket, no CPU, no RAM, no GPU)
  • InfiniBand controllers on each blade
  • 3 year support contract for hardware maintenance (get the cheap next business day support)
would cost.

Then add 20 CPUs, 20 GPUs, 10,240 GiB of RAM....

And then tie that into "cheap desktop". ;)

Chassis with 4x 3kw PSUs: $3900
10GbE switch: $3700
Controller: $600

Blade without InfiniBand: $900

Software, support: you will have to ask them.
 
Chassis with 4x 3kw PSUs: $3900
10GbE switch: $3700
Controller: $600

Blade without InfiniBand: $900

Software, support: you will have to ask them.
So, you're approaching half my guestimate without InfiniBand or support. Keep working.

And you're already at $17K even with those omissions. What's the price of the "cheap desktop" that you talked about?
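
For reference, the tally behind that figure (a rough back-of-the-envelope sketch, assuming ten of the $900 blades and none of the omitted items):

```python
# Sum of the quoted prices, assuming ten empty blades and nothing else.
chassis = 3900        # chassis with 4x 3 kW PSUs
switch_10gbe = 3700   # 10GbE switch
controller = 600      # management controller
blade = 900           # per empty blade, no InfiniBand
total = chassis + switch_10gbe + controller + 10 * blade
print(f"${total:,}")  # $17,200 - before InfiniBand, software, support, CPUs, RAM, GPUs
```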
 
So, you're approaching half my guestimate without InfiniBand or support. Keep working.
What half? That is about a third.

And I did not show you the InfiniBand, since I did not think you would actually pay for it.
 
Software, support: you will have to ask them.
You've already lost your argument about "cheap desktop" at $17k. We don't need to bother our enterprise account representatives to find out if the real cost is my guestimate of $50K or only $40K.
What half? That is about a third.
With a boatload of stuff missing. Add the boatload.
And I did not show you the InfiniBand, since I did not think you would actually pay for it.
I already have the InfiniBand switches - of course I would order it.
 
You've already lost your argument about "cheap desktop" at $17k. We don't need to bother our enterprise account representatives to find out if the real cost is my guestimate of $50K or only $40K.
You keep saying "cheap desktop". Where did I say that?

What is out of the question for most businesses is adding $100K of GPUs. You balked at it yourself.

And hyperscalers even have their own server designs built.
 
You keep saying "cheap desktop". Where did I say that?
https://forums.macrumors.com/thread...rence-second-day.2043059/page-4#post-24564759"

"No. The point is if the design also makes sense for desktops for the cards to be cheap."
What is out of the question for most businesses is adding $100K of GPUs. You balked at it yourself.

And hyperscalers even have their own server designs built.
And I am adding tens of thousands of dollars' worth of graphics cards - but we don't need ECC or FP64 - so I balk at spending $7000 per card when the $700 card is actually faster and better for us.

Who are these hyperscalers, and what relevance do they have? Is this "moving the goalposts"?
 