Got my cluster today and it is now installed. Right now my system is running an HD 5770 (inside my Mac Pro, for driving the monitor) plus the GTX 780 and 980 in the cluster; tomorrow I will add the 680 for testing purposes.

Question: Is there a tool (for OS X) which shows the GPU usage of _all_ cards in a system? iStat Menus shows in this case only stats for the 5770 and the 780 (the first two cards).
 

Please correct me if my assumption is wrong: my assumption is that you're going to use the three external GPUs for rendering of some sort. What specifically about your GPUs are you interested in measuring? iStat Menus is the only general-purpose OS X tool I'm aware of for measuring GPU usage generally on a Mac.

Keep in mind that Apple seems to have intended that pre-2013 Mac Pros house only one GPU in the vast majority of cases, and Apple allocated PCIe space and auxiliary PCIe power in such a way that a two-GPU system seems to have been the maximum almost anyone could conceive of being installed in a pre-2013 Mac Pro. The 2013 Mac Pro comes stock with only two GPU cards. So iStat's two-card limitation isn't surprising to me.

Of course, under Windows there are various options, depending on what specifically you're trying to measure, and you may be able to measure what you have in mind under Boot Camp. Additionally, there are application-specific measurement tools, such as those in Octane Render's Preferences (and in the render frame itself) for OS X, Windows and Linux systems. If you don't already have Octane Render or its demo version, you can download the demo here - http://render.otoy.com/downloads.php . Also, as part of Xcode, there's the OpenGL ES tool [ https://developer.apple.com/library...rammingGuide/ToolsOverview/ToolsOverview.html ] that measures OpenGL performance. Moreover, inside the CUDA developers' kit are CUDA profiling tools, such as VampirTrace and the PAPI CUDA Component, for measuring a GPU's CUDA performance.

In sum, letting me know specifically what you're using your GPUs for, what you're trying to measure and under what circumstance(s) may help me better help you get the specific hardware performance measures that you need.
 

That's right, I am using the external GPUs for rendering images.

Well, iStat doesn't seem to be limited to two cards; something else must be causing iStat to show only two cards on my cMP in this configuration: my nMP currently runs an external GPU too (via Thunderbolt, using an Akitio), and there all three cards are shown.

Anyway, the reason I want to see whether all GPUs are used when rendering: I know I can see it with Octane, but I want to see it in whatever application I am using. Why? Because sometimes the GTX 980 gets knocked off during rendering, and I want to monitor that.
 

Since you are using the external GPUs for rendering images, and because you want to see whether all GPUs are being used when rendering so you can monitor whether any of them is being knocked off, it might be more fruitful to explore which other rendering applications you use besides Octane. If you tell me what the other rendering applications are, I'll look to see what options exist for monitoring their useful/persistent presence under OS X.
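If a general monitor keeps missing cards, another crude cross-check is to ask the CUDA runtime itself what it can still see. Below is a minimal sketch (my own illustration, not a tool from Octane, iStat or Nvidia; it assumes the CUDA toolkit is installed and the file is built with nvcc, and the file name, 10-second interval and output format are arbitrary choices) that enumerates every CUDA device and polls its free VRAM, which is one blunt way to notice when a card such as the GTX 980 drops off mid-render:

```cpp
// gpu_watch.cu - minimal CUDA device poller (illustrative sketch, not a production monitor)
// Build (assumption): nvcc gpu_watch.cu -o gpu_watch
#include <cstdio>
#include <cuda_runtime.h>
#include <unistd.h>   // sleep()

int main() {
    for (;;) {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        std::printf("CUDA devices visible: %d\n", count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            cudaSetDevice(i);
            size_t freeB = 0, totalB = 0;
            // cudaMemGetInfo touches the device; if a card has fallen off the bus,
            // this is where you would expect an error to surface.
            if (cudaMemGetInfo(&freeB, &totalB) != cudaSuccess) {
                std::printf("  [%d] %s: NOT RESPONDING\n", i, prop.name);
                continue;
            }
            std::printf("  [%d] %s: %zu / %zu MB free\n", i, prop.name,
                        freeB / (1024 * 1024), totalB / (1024 * 1024));
        }
        sleep(10);   // poll every 10 seconds (arbitrary interval)
    }
}
```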
 
Why I use an assortment of GPU-related performance tools

I'm currently in the process of consolidating my GPUs. My current goal is to have most of my 57 GPU processors with at least 3G of VRAM in seven systems: two 4-GPU-processor systems, three 8-GPU-processor systems and two 12-GPU-processor systems [yes, I'm not ready to give up that dream of >8 GPUs in a system, not just yet anyway]. As for 20 of my GTX 590 1.5Gs (each with two GPU processors) and 4 of my GTX 480s, they'll hopefully enjoy being housed in six 4-GPU-card systems (five of which will really be 8-GPU-processor systems, because the 590s are 2-GPU-processor cards). This move is intended to help better control my application licensing fees (they're per system, not per card) and cut down on the number of systems that I have to monitor. In sum, my goal is to cut my total number of systems roughly in half.

I'll be running mainly Octane Render [ http://render.otoy.com/features.php ], FurryBall*/ and TheaRender [ https://www.thearender.com/site/index.php/features.html ] on the systems with > 3G VRAM. I intend to run Redshift 3D and TheaRender on the Fermi systems, i.e., those with < 2G VRAM (that includes my GTX 690). Redshift 3D already has out-of-core rendering**/ implemented. TheaRender is a hybrid renderer that can run on CPUs only, on GPUs only, or simultaneously on the CPUs and GPUs.

I had never installed iStat until Sedor mentioned it in his earlier post here; I thank him for the reference. While iStat looks like a useful utility, I personally also need utilities more closely tied to the persistent, proper functioning of my GPUs in the application at hand, since a GPU can be recognized by a general system utility such as iStat yet not be operating properly in the application at issue (such as intermittent down-clocking) in a way that iStat might not reveal. That's why I'll also use CUDA-Z, GPU-Z, EVGA's Precision X and MSI's Afterburner, along with the OS X and CUDA developer tools that I mentioned in my post above, as well as rendering applications' built-in utilities (like those in Octane), to help ensure that all is constantly well.

*/ FurryBall's "Two For The Price Of One" sale ends February 16, 2015. [ http://furryball.aaa-studio.eu ]

**/ " Out-of-Core Architecture
Redshift uses an out-of-core architecture for geometry and textures, allowing you to render massive scenes that would otherwise never fit in video memory.

A common problem with GPU renderers is that they are limited by the available VRAM on the video card – that is they can only render scenes where the geometry and/or textures fit entirely in video memory. This poses a problem for rendering large scenes with many millions of polygons and gigabytes of textures.

With Redshift, you can render scenes with tens of millions of polygons and a virtually unlimited number of textures with off-the-shelf hardware. " [ https://www.redshift3d.com/products/redshift ]

Octane is beginning to implement this feature with the recent release of Version 2.21.1, calling it "Out-of-core Textures." "Out-of-core textures allow you to use more textures than would fit in the graphics memory, by keeping them in the host memory. Of course, the data needs to be sent to the GPU while rendering, therefore slowing the rendering down. If you don't use out-of-core textures, rendering speed is not affected.

Out-of-core textures come also with another restriction: They must be stored in non-swappable memory which is of course limited. So when you use up all your host memory for out-of-core textures, bad things will happen, since the system can't make room for other processes. Since out-of-core memory is shared between GPUs, you can't turn on/off devices while using out-of-core textures.

You can enable and configure the out-of-core memory system via the application preferences. For net render slaves you can specify the out-of-core memory options during the installation of the daemon.") [ http://render.otoy.com/forum/viewtopic.php?f=33&t=44704&hilit=out+of+core ].
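To make the quoted description a bit more concrete, here is a rough sketch of the general technique it describes. This is emphatically not Otoy's or Redshift's code; it is only an illustration under the assumption that "out-of-core textures" means keeping texture data in page-locked ("non-swappable") host memory and streaming tiles to the GPU on demand, with the file name, buffer sizes and tile size invented for the example:

```cpp
// out_of_core_sketch.cu - illustrative only; a generic out-of-core texture idea,
// not any renderer's actual implementation.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t textureBytes = 512ull * 1024 * 1024;  // a 512 MB texture that may not fit in VRAM
    const size_t tileBytes    = 32ull  * 1024 * 1024;  // stream it in 32 MB tiles (arbitrary size)

    // Page-locked host allocation: this is the "non-swappable memory" the Octane
    // note warns about - the OS cannot page it out, so the supply is limited.
    unsigned char* hostTexture = nullptr;
    cudaHostAlloc((void**)&hostTexture, textureBytes, cudaHostAllocDefault);

    unsigned char* deviceTile = nullptr;
    cudaMalloc((void**)&deviceTile, tileBytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Walk the texture tile by tile; a real renderer would double-buffer and overlap
    // these copies with kernel work, which is why the quote says rendering slows
    // down but still works.
    for (size_t off = 0; off < textureBytes; off += tileBytes) {
        cudaMemcpyAsync(deviceTile, hostTexture + off, tileBytes,
                        cudaMemcpyHostToDevice, stream);
        // ... launch sampling/shading kernels that consume deviceTile here ...
    }
    cudaStreamSynchronize(stream);

    cudaFree(deviceTile);
    cudaFreeHost(hostTexture);
    cudaStreamDestroy(stream);
    return 0;
}
```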

Furthermore, Octane's FAQ page has been revised to state the following:

"OctaneRender 2.0 or older texture count limitations (Versions 2.01 and onward do not have these limitations) [Emphasis Added.] ... . [ http://render.otoy.com/faqs.php - click "Hardware and Software." ] Looks like someone forgot to reconcile the version numbers, however.
 
A Clearer Layout Of Otoy's Whole Kitchen - Mirroring Time and Space

The Sweetest Outcome From Baking: Eating Your Cake and Having It Too

Having your cake and then eating it too isn't as wonderful as is eating your cake and then still having it too.

Otoy’s recently revised homepage [ http://home.otoy.com ] gives a better, more coherent, full-picture view of what Otoy seems to be cooking up and how its ingredients combine. Otoy appears to be baking the sweet half of almost a baker’s dozen. If the final product from Otoy's six ingredients is as sweet as Otoy says each is (and/or projects it will be), those into 3D and video [ soon additionally supporting Photoshop, After Effects, NUKE, Houdini, MotionBuilder, Unreal Engine 4 and the Digital Molecular Matter engine - http://render.otoy.com/forum/viewtopic.php?f=7&t=41634 ] will be able to eat their cake from afar (Mars? maybe) and have it too.

I. Mirroring:

1) LightStage - Cutting-edge Facial Scanning Technology [ http://home.otoy.com/capture/lightstage/ ]. If one gets the facial capture absolutely true, then whole body and environmental capture should be a snap;

II. Time:

2) Octane Render - Fully Interactive, Real-time 3D Editing and Rendering, Supporting More Than 20 Applications Thru Plugins [ http://home.otoy.com/render/octane-render/ ];

3) Brigade - Real-time Rendering Engine for Video Gaming [ http://home.otoy.com/render/brigade/ ];

4a) Light Field - Real-time Holography [ http://home.otoy.com/render/light-fields/ ]

III. And Space:

4b) Light Field - Distant Holography [ http://home.otoy.com/render/light-fields/ ]

5) X.IO Cloud Service - Running Windows desktop applications in the cloud [ http://home.otoy.com/stream/xio/ ] and

6) ORBX - streaming platform video codec [ http://home.otoy.com/stream/orbx/ ].

P.S. What if Apple, Otoy and Nvidia became fully functioning partners, while still maintaining each of their independence?
 
Sometimes, older things are better things.

WolfPackAlphaCanisLupus1 has 8 new occupants, but they're a few of my somewhat older board(er)s. I decided to relocate eight of my GTX 780 Ti ACX SC OCs to my other Tyan server. The pics below verify, in part, what resulted.

I still haven't reached my goal of having a single system with a single-precision floating point peak performance of 56 TFLOPS. This server reaches only about 49.25 TFLOPS (vs. 46.31 TFLOPS for WolfPackAlphaCanisLupus0). WolfPackAlphaCanisLupus1 does, however, beat WolfPackAlphaCanisLupus0 in every test that I've thrown at those eight air-cooled GTX 780 Tis so far, besting the water-cooled 3x Titan Zs (each with two processors), 1x water-cooled Titan and 1x water-cooled Titan Black in WolfPackAlphaCanisLupus0.

Surprise! Surprise! Surprise! Well, maybe you aren't surprised, but it makes me smile like Gomer to see how those older, air-cooled Nvidia GPUs score on OpenCL tests: 10,100 for Luxmark's Room scene (vs. 8,348 for WolfPackAlphaCanisLupus0); 20,673 (vs. 20,371) for Luxmark's Sala scene; and 155,142 (vs. 125,194) for Luxmark's LuxBall scene. What's next? How about how these seniors perform in Octane Render and OpenGL.
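For anyone wanting to sanity-check those aggregate figures (which are teraflops, not petaflops), the usual back-of-envelope formula is CUDA cores x 2 FMA ops per clock x clock speed. A quick sketch under that assumption, using a roughly 1.07 GHz boost clock for the factory-overclocked 780 Tis (my estimate, not a measured value):

```cpp
// peak_flops.cpp - back-of-envelope check of the aggregate figures above.
// Assumed formula: peak SP GFLOPS ~= CUDA cores * 2 (one FMA per clock) * clock in GHz.
#include <cstdio>

double peakGflops(int cudaCores, double boostGhz) {
    return cudaCores * 2.0 * boostGhz;
}

int main() {
    // GTX 780 Ti: 2880 CUDA cores; ~1.07 GHz assumed boost on the overclocked cards
    double perCard = peakGflops(2880, 1.07);            // ~6,163 GFLOPS per card
    std::printf("1x GTX 780 Ti : %.0f GFLOPS (~%.1f TFLOPS)\n", perCard, perCard / 1000.0);
    std::printf("8x GTX 780 Ti : %.0f GFLOPS (~%.1f TFLOPS)\n",
                8 * perCard, 8 * perCard / 1000.0);     // ~49,300 GFLOPS, i.e. ~49 TFLOPS total
    return 0;
}
```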
 

Attachments

  • MyRig-WolfPackAlphaCanisLupus1sPetaFlops=49,253.47.PNG
  • WolfPackAlphaCanisLupus1OctanePreferences.PNG
  • MyRig1+SalaScore+LuxBall+Room+MSI.PNG
Words To The Upgraders - Electricity Strongly Desires To Enter The Ground

Anyone who uses an Amfeltec chassis or a supplementary power supply to power a GPU card or another supplementary-powered peripheral in his/her computer needs to make sure that the power sources share the same, fully functional grounding path. Thus, in the case of a supplementary power supply supporting an external chassis, one needs to make sure that the chassis itself is connected to a properly grounded wall outlet and that any supplementary-powered cards in the external chassis that draw power from points other than the PCIe connectors share the same ground path. Violating this caution might result in arcing and your having a crispy GPU and/or chassis.
 
Quilt work: Piecing Together Specs about Nvidia's Next GTX Top Dog

Within about a week and a half from today, we should get confirmation of details about Nvidia’s latest top-end GTX GPU - the Titan X. All of the pics that I’ve seen of it depict it as having the same size and form factor as the Titan and Titan Black - PCIe double-wide. According to Anandtech [ http://www.anandtech.com/show/9049/nvidia-announces-geforce-gtx-titan-x ], it’ll have 96 ROPs, a 384-bit memory bus, 12G of VRAM and 8 billion transistors, be based on the Maxwell GM200 architecture with one GPU processor, possibly be built on TSMC's 28nm manufacturing process, and launch at a very high price. Extremetech [ http://www.extremetech.com/extreme/...end-gtx-titan-x-leaves-out-almost-all-details ] adds that it’ll have either 192 or 256 (I believe 256 is more likely) texture mapping units (TMUs), a core count of 3,072 and maybe 24 SMM blocks. Hexus.net [ http://hexus.net/tech/news/graphics/79710-alleged-nvidia-gtx-titan-x-pricing-surfaces/ ] adds that the price will likely be about $1,349 (for the base air-cooled version). Tomshardware [ http://www.tomshardware.com/news/nvidia-geforce-gtx-titan-x,28694.html ] adds that it’ll have a green LED-lit “GeForce” label on its side. Legitreviews [ http://www.legitreviews.com/hands-nvidia-geforce-gtx-titan-x-12gb-video-card_159519 ] adds that it’s a PCIe double-wide card, appears to support 3- and 4-way SLI (having two SLI interconnects), has one 6-pin and one 8-pin power connector, has three DisplayPorts, one HDMI and one DVI connector, and will be dark gray or black with a black fan shroud. Guru3d [ http://www.guru3d.com/news-story/nvidia-geforce-gtx-titan-x-revealed.html ] adds that it’ll have 6 GPCs, a core clock of ~988 MHz, a memory clock of 1653/1753 MHz, a memory bandwidth of 317.4 GB/s and a single-precision floating point performance of 6.07/7.0 TFLOPS. See also - http://videocardz.com/54960/nvidia-geforce-gtx-titan-x-new-details .

If all of this pans out as indicated above, I have no regrets about having purchased 6 Titan Z Hydros last fall for about $150 to $250 additional each. The Titan Z (air-cooled) has, among other things:
GTX TITAN Z GPU Engine Specs:
  CUDA Cores: 5760
  Base Clock: 705 MHz
  Boost Clock: 876 MHz
  Texture Fill Rate: 338 GigaTexels/sec

GTX TITAN Z Memory Specs:
  Memory Clock: 7.0 Gbps
  Standard Memory Config: 12 GB
  Memory Interface: GDDR5
  Memory Interface Width: 768-bit (384-bit per GPU)
  Memory Bandwidth: 672 GB/sec

[ http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan-z/specifications ]. Moreover, the air-cooled Titan Z has:

“… the full 15 Streaming clusters, thus 2880 Shader Processing Units per GPU, enabled. That's 240 TMUs and 48 ROPs on a 384-bit memory interface of fast 6GB GDDR5 allocated per GPU. So you can double that up. But in a nutshell the card uses two 45 mm × 45 mm 2397-pin S-FCBGA GK110b GPUs with 2880 shader/stream/CUDA processors -- thus 5760 Shader processors. This will give the GeForce Titan Z a cool 8 TeraFLOPS of performance.” [ http://www.guru3d.com/articles-pages/nvidia-geforce-gtx-titan-z-review,1.html ] Moreover, the software that motivated and continues to motivate my main use of GPUs is Octane Render. The Maxwell GPUs, for whatever reason, have not yet shone more brightly there than the Keplers. As of now, a GTX 980 performs in Octane Render most closely to a reference-design (standard) GTX 780. I do wish, however, that my Titan Zs had 3x DisplayPorts, as well as 12G of VRAM for each of their two GPU processors.
 
Want To Use All Of Your GPUs In Separate Systems For Rendering On One System?

If you’re not timid, or are a renegade, or have network chops, then check out Renegatt Software’s [ http://www.renegatt.com/about.php ] GPUBOX. It separates your GPU devices from applications and operating systems so that your GPUs can be used on virtual machines, thereby allowing you to share multiple GPU devices. It looks like network rendering on steroids. In short, your favorite rendering application (if it’s supported) might be able, from one machine, to see and take advantage of all of your GPU rigs, regardless of their being housed in separate systems. Currently supported are Arion Render, BarraCUDA, Blender Cycles, CUDASW++, HOOMD-blue, Gromacs, John the Ripper, LAMMPS, mCUDA-MEME, Octane Render, V-Ray RT and WideLM [ http://www.renegatt.com/products.php ]. The software is being marketed through a free 20-day trial, a full license for 199 euros per GPU, and soon 59 euros per GPU for GPUBOX Artist for Octane or Blender. See GPUBox in action here - http://www.youtube.com/results?search_query=GPUbox - and here - http://www.renegatt.com/blog/ - rendering in, among others, Blender with 50 GPUs and in Octane with 18 GPUs. The company also says that it will introduce support for OpenCL, later also OpenGL, and support Mac clients.
 
Caution regarding post no. 1211

Otoy has revised their License to state:
"4) DELIVERY & USAGE
OTOY does not take responsibility for any time of delivery requirement you may have concerning the delivery of the Software to you.
The Software may only be obtained via downloads with usage instructions and documentation included in the download.
You agree that you may use the software supplied on licensed workstations for your own purposes (non–exclusive license). A maximum of 12 GPU's may be used. You will not attempt to circumvent the physical GPU or single machine license limit, including obfuscating or impairment of the direct communication between Octane and the physical GPUs, virtualization, shimming, custom BIOS etc.[Emphasis added.]

In sum, one must have an Octane license for each machine rendering with Octane. Twelve is the maximum number of physical GPUs that may be addressed by a single OctaneRender frame render, and each machine running a separate single-frame render may address up to 12 physical GPUs. For example, having seven Octane licenses, each on a separate system, allows one to run OctaneRender simultaneously on seven separate machines, each running its own single-frame render and each able to address only up to 12 physical GPUs.
Thus, it behooves one who is using OctaneRender to earn money to buy the fastest GPUs available.
 
New Benchmark Software for CUDA GPUs

Otoy has recently released OctaneBench [ http://render.otoy.com/octanebench/ ]. It uses the score of a GTX 980 as the baseline score of 100. I've tried OctaneBench on WolfPackAlphaCanisLupus1:

Best of my 3 scores running all 8 GTX 780 TI ACX SC OCs in WolfPackAlphaCanisLupus1 = 889.521069.
Average Score: 881.87
Median Score: 888.79
Maximum Score: 889.52
Minimum Score: 867.31

Best of my 2 scores running only 1 of 8 GTX 780 TI ACX SC OCs = 114.511415.
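One simple way to read those two results together is multi-GPU scaling efficiency. A small sketch of that arithmetic (my own framing, not part of OctaneBench):

```cpp
// octanebench_scaling.cpp - rough scaling check from the scores quoted above.
#include <cstdio>

int main() {
    const double eightCards = 889.52;   // best 8-GPU OctaneBench score above
    const double oneCard    = 114.51;   // best single-GPU score above
    double speedup    = eightCards / oneCard;     // ~7.77x from 8 cards
    double efficiency = speedup / 8.0 * 100.0;    // ~97% of perfectly linear scaling
    std::printf("speedup: %.2fx, scaling efficiency: %.0f%%\n", speedup, efficiency);
    return 0;
}
```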
 
News Release From GPU Conference re GTX Titan X

It has 8 billion transistors, 3,072 CUDA cores, 7 teraflops of single-precision throughput, and a 12GB framebuffer. It’s based on Maxwell and costs $999.

- See more at: http://blogs.nvidia.com/blog/2015/03/16/live-gtc/#sthash.0FmA1DHc.dpuf .

That means it's about 20% more powerful at GPU compute than one of the two GPU processors on a Titan Z. For the price, that's a somewhat good thing, but I do not see its price dropping as fast as the Titan Z's did (from $3,000 to $1,499). So I have no regrets about purchasing 6 Titan Zs late last fall, since I'm seeing about 6,000 gigaflops (6 teraflops) of single-precision throughput from each of their two GPU processors: 12,000 GFLOPS / $1,500 = 8 GFLOPS per dollar vs. 7,000 GFLOPS / $1,000 = 7 GFLOPS per dollar. However, I still wish that my Zs had a 12 GB framebuffer for each GPU processor.
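Spelled out, that value comparison is just GFLOPS per dollar; a tiny sketch using the rounded prices and throughput figures above:

```cpp
// flops_per_dollar.cpp - the value comparison from the paragraph above, spelled out.
#include <cstdio>

int main() {
    // Titan Z: ~6,000 GFLOPS per GPU processor x 2 processors, bought at ~$1,500
    double titanZ = (6000.0 * 2) / 1500.0;   // = 8 GFLOPS per dollar
    // Titan X: ~7,000 GFLOPS, one processor, at ~$1,000
    double titanX = 7000.0 / 1000.0;         // = 7 GFLOPS per dollar
    std::printf("Titan Z: %.1f GFLOPS/$   Titan X: %.1f GFLOPS/$\n", titanZ, titanX);
    return 0;
}
```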
 
The problem is getting a Titan Z at the moment; I was looking for a card here, but they are all out of stock, or you have to pay 2,500 euros.

This new card looks really interesting to me, as I currently do not own an Nvidia card and I am looking for a new one.
 

Things change quickly, and that includes my mind. I want 8 of those Xs for WolfPackAlphaCanisLupus0. Here's why I changed my mind:

http://www.techspot.com/review/977-nvidia-geforce-gtx-titan-x/page9.html .

Whereas my 8x GTX 780 Ti ACX SC OCs scored a high of 889.5 in OctaneBench [ http://render.otoy.com/octanebench/results.php ], I now believe that 8x GTX Titan Xs will score about 1,200 in OctaneBench, which would be truly impressive in and of itself; but when added to my network, consisting of nine 8-GPU-processor systems plus the remainder (almost all >4-GPU-processor systems), I'll have a render farm that can do the types of film projects I could only have dreamed of two years ago.
 
Sounds like a really nice card - more and more I want one too (or two... well, let's start with one and then grow the cluster :) ).

Hope to see some CUDA benchmarks soon.
 
What I like most about GPU GTC 2015

1) There are now at least 319 CUDA apps, and sales of Titan Zs, along with Xs, will continue.

2) The potential of Deep Learning
https://www.youtube.com/watch?feature=player_embedded&v=7E7qCs1q6JQ .

3) Octane Render Cloud
https://www.youtube.com/watch?feature=player_embedded&v=TzG6-ypxOWM .

4) Brigade Update
https://www.youtube.com/watch?feature=player_embedded&v=FbGm66DCWok

“To get the quality we showed at GTC you will need roughly 80 amazon GPUs, so it is not cheap or for everyone, and is cloud only.” [ per Goldorak - OctaneRender Team ]

5) Octane for MotionBuilder
https://www.youtube.com/watch?feature=player_embedded&v=NBl4CBG0M0U .

6) OctaneRender for Houdini
https://www.youtube.com/watch?feature=player_embedded&v=cRX0ULUogmQ .

7) OctaneVR
https://www.youtube.com/watch?feature=player_embedded&v=dGDxw1j9FGw .

8) Octane for Adobe AfterEffects
https://www.youtube.com/watch?feature=player_embedded&v=XsQbMG3ubhM .

Thanks to Rikk The Gaijin [ http://render.otoy.com/forum/memberlist.php?mode=viewprofile&u=15259 ] for bringing some of these resources to my attention.
 
One more Octane/OTOY Presentation at NVIDIA GPU Technology Conference 2015

Coming soon from Octane in V3: OpenCL support on CPUs, GPUs, FPGAs and ASICs; also Octane ORBX scene, Unreal Engine, Photoshop and ZBrush support. Coming soon from third-party developers: Octane plugins for Unity, Bryce, Hexagon 3D and Blackmagic Fusion 360. See the video below for more details.

https://www.youtube.com/watch?feature=player_embedded&v=mHKmqWwEGxQ

P.S. I said it before; so now I'm repeating myself (with a twist) - "I like having lots of PCIe slots filled with Nvidia CUDA cards with lots of fast cores." But, looks like I have to add: "... and lots of CPU sockets sporting lots of fast CPU cores, all running applications supported by Octane and other Otoy products."
 
Could we be close to the point that a high-end GPU card can be easily powered in a MP?

Can one Titan X perform stably and safely when powered only by power connectors rated at 225 (75 + 75 + 75) watts?

The new Titan X, with one 6-pin and one 8-pin power connector, has these specs:

  CUDA Cores: 3072
  Texture Units: 192
  ROPs: 96
  Core Clock: 1000 MHz
  Boost Clock: 1075 MHz
  Memory Clock: 7 GHz GDDR5
  Memory Bus Width: 384-bit
  VRAM: 12 GB
  FP64: 1/32 FP32
  TDP: 250 W
  GPU: GM200
  Architecture: Maxwell 2
  Transistor Count: 8B
  Manufacturing Process: TSMC 28nm
  Launch Date: 03/17/2015
  Launch Price: $999

[ http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review ]


The new Quadro M6000, with only one 8-pin power connector, has these specs:

  GPU: Maxwell
  Compute Units: 3072 CUDA cores
  Peak FP32 (SP) Performance: 7 TFLOPS
  Peak FP64 (DP) Performance: 219 GFLOPS
  Memory Size: 12 GB
  Memory Bus: 384-bit
  Memory Bandwidth: 317.4 GB/s
  ECC Support: Yes
  Max. Displays: 4
  4K2K Displays @ 30 Hz: 4
  4K2K Displays @ 60 Hz: 4
  Power Consumption (measured): 222 W graphics / 245 W torture

[ http://www.tomsitpro.com/articles/nvidia-quadro-m6000,2-898.html ]


What might happen if one ran a Titan X in, e.g., a Mac Pro, with just the power supplied by the PCIe slot in which the GPU is seated, powering the Titan X's 6- and 8-pin power connectors each from one of the two 6-pin PCIe cable connectors on the Mac Pro's motherboard (using one 6-pin to 8-pin adapter for the 8-pin socket on the GPU), or just connecting two 6-pin PCIe cables to a "Y" cable that is then inserted into the Titan X's 8-pin power connector? How delicate are the traces feeding the Mac Pro's 6-pin connectors - or, more specifically, can one of the Mac Pro's motherboard PCIe ports safely supply 95 watts? Has anyone powered an earlier Titan (also rated at 250 W) this way before? If so, over what period of time, and what has happened?
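For what it's worth, the arithmetic behind the question looks like this (nominal PCIe power ratings only; it says nothing about what the cMP's board traces or power supply can actually tolerate):

```cpp
// power_budget.cpp - the arithmetic behind the question above, using nominal PCIe
// ratings; not a statement about what a cMP can safely deliver.
#include <cstdio>

int main() {
    const int slotW      = 75;   // PCIe x16 slot, nominal
    const int sixPinW    = 75;   // one 6-pin auxiliary connector, nominal
    const int eightPinW  = 150;  // one 8-pin auxiliary connector, nominal
    const int titanXtdpW = 250;  // Titan X TDP per the spec table above

    int cardExpects = slotW + sixPinW + eightPinW;   // 300 W of nominal headroom by design
    int cMPprovides = slotW + sixPinW + sixPinW;      // 225 W if both cMP 6-pin leads are used
    std::printf("card expects up to %d W, cMP nominally offers %d W, TDP %d W (shortfall vs. TDP: %d W)\n",
                cardExpects, cMPprovides, titanXtdpW, titanXtdpW - cMPprovides);
    return 0;
}
```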

P.S. Anyone who has followed this thread closely probably knows that I'd personally go for having too much rather than less power; but not everyone acts like me, nor do I expect them to. This post is for those who have other priorities or just think different(ly).
 
Idea worth pursuing? Really Thinking Outside Of The Box [of the computer]

The Amfeltec cluster's transport is x1, but Amfeltec's GPU splitters [ ~$200 (USD) - http://amfeltec.com/products/flexibl...-gpu-oriented/ ] are x4; so how about this:

Just create an L-shaped GPU support rail that has an "L" lip with holes for screws to attach up to 4 GPU cards. Attach it to the motherboard side of the system and hang the GPUs off the back on the L-lip; connect the GPUs to the splitter cards [put the splitter card in the x8 or x16 slot]. Now you've got 4 x4 GPUs externally, spaced wide apart in the open air, blowing hot air upward, externally fed, with air blown at their base from the fan cooling the PCIe bay - and three internal PCIe slots left free for non-GPU cards (Mac Pros support only 4 GPU processors - they're four-slotted). The GPUs would be much cooler, too, and it's now easier to connect them to a powerful external PSU. Metal railing would be dirt cheap; the PSU and GPUs would cost what GPUs and a PSU cost. All are welcome to pick this up and run with it. Also, the bracket could be placed lower, given the design of Windows motherboards and case PCIe placement.

P.S. - If you have a Windows system that has adequate IO space to support up to 7 or 8 GPUs, just make the bracket longer, with enough connection points for however many GPUs your system will support, and get 2 splitter kits; of course, that will use two internal PCIe slots.
 

Attachments

  • MyNextMacProMod.png
  • MyNextWindowsMod.png
Waste Not - Want Not

Is what I heard as a child. Also, because I'm concerned about how waste affects our environment, I've decided to try to recycle a computer tower case that I've had since the mid-1990s and mod it into my first 8-GPU external cage. I'm cheap, also.

Mr. Dremel and I will be attending a mod party after daybreak and we'll be cutting up. You're all invited.

Six well-spaced double-wide GPUs can be hung on the long side, and one double-wide GPU can be hung on each of the short sides at the base of the other six. The first 3 pics show where I am before getting some sleep; the next 2 pics (the last one is actually a double pic within one) show what the end result should resemble. The areas that I've whited out are the parts that we'll be removing. To tie the GPUs into my systems, I'll be using the x4 Amfeltec GPU splitters that I mentioned in my last post.

I'll also secure a mat to the bottom of each cage to help prevent carpet lint from being sucked up into the GPUs or my external PSU.

After this project is completed, I'll flip the bottom half of the case upside-down and use it to build another 8-GPU external cage.

Finally, I'll recycle the case's cover to build yet another 8-GPU external cage. All three of these cages will be used with systems that I've built, and thus won't be far off the floor.

All of my cMac Pro external chassis must be taller, given the location of the PCIe bay and the 12-inch length of the splitter cables. Luckily, I have more old tall cases to recycle.

P.S. Amfeltec responded to my request for longer splitter cables by saying that they could make me up to a 20" cable, but that as the length exceeds 12" it becomes more likely that the GPUs might not work consistently. That's what motivated me to devise a GPU cage, so that I can get the GPU PCIe connectors as close as possible to the PCIe cards.

P.S.2 (not PlayStation) - I intend to place the splitter base card in my 4x E5-4650 Supermicros in the end PCIe slot nearest the case door, and cut a narrow rectangular hole through the door just large enough to get the PCIe female connectors through the hole and attached to the GPUs in the cage, which will sit up against the door.
 

Attachments

  • IMG_0055.png
  • IMG_0056.png
  • IMG_0057.png
  • IMG_0055A.png
  • IMG_0056A+7A.png