Even when the trail leads to a seeming dead end, the journey itself teaches.
I like that we can peacefully debate these facts without it devolving into a flame fest.
jas, I only intended to shed light. I love humanity; not the tools that we've created to get work done. I intended no flaming, no glowing, no heating and no smoking. Growing up in B-ham in the first half of the 60's and in the City of Angels in the second half of that decade just taught me a lot about the power of light and the harm that comes from fire. I was just pointing out that many of us have become Hackintoshers to one degree or another, even those who don't want to believe it or who have only taken the first steps down that path, out of a desire and/or felt need to keep their PCs, Mac Pros included, up to date.
Perhaps, but no upgrade also means no access to better drivers: video drivers, USB 3 drivers, etc. Love it or hate it, OS X's updates bring better drivers and better performance almost without question.
Love it or hate it, those who go through what is required to get OS X to run smoothly on their hardware of choice can be said to exhibit greater devotion to that OS than anyone else. I know that I shouldn't draw inflexible lines - never saying "never" is the wisest course - because what I say I'll never do is what I'll have to do next. The people who've had problems with their Hackintoshes splatting on updates/upgrades are mainly those who dabble and don't realize that it requires a commitment to know much more than most users about (1) their system(s) and other hardware, including the system being emulated, (2) the various OSes and their applications, and (3) what's in each upgrade/update and what it's likely to do once installed. I love Pacifist [ http://www.charlessoft.com ]. If an update or upgrade has some special driver that I need, I'll just pacify myself by adding that particular driver. My point was that an avowed Hackintosher should not have a casual update/upgrade mentality. A side benefit of understanding what commitments you have to make to be an effective Hackintosher, and actually keeping those commitments, is that it forces you to continue learning.
I can't speak for 5050's intentions, only my own. My interest in the Netstor solution didn't include Titans. Instead, I considered the idea of having multiple PC-based GTX 680s in the device once the prices start coming down. Adobe's Premiere Pro Next (the next version) will take advantage of multiple GPUs for exports once it's released in May. This intrigues me... a lot!
But, as you pointed out: the cost for the device is ridiculous. Way, way, WAY ridiculous. So it's not going to happen. At least, not on my Mac.
jas
I understand that the Titan played no role in the conception of the external PCI-e chassis and that there are many reasons why someone may want more PCI-e slots. When it comes to performance, those open slots are an inviting point of insertion because of the bandwidth advantage PCI-e offers. I also understand that various people have differing needs. I've been eyeing the external chassis for many years; in fact, long enough ago that if someone had then asked me, "Is CUDA good to have?" I would've responded, "It's not as good to have as fried Gulf Red Snapper."
Given that the price of the external chassis is ridiculous, it got me thinking last night: why not just come up with a way to connect two or more computers together, much like the external PCI-e chassis connects to its host, and get the same benefit? So I started doing some research.
Here is the most promising and intriguing lead that I found:
http://davidhunt.ie/wp/?p=232. There, David Hunt details how he got two computers connected through an InfiniBand network that he set up at his home, using:
"2 x Mellanox MHEA28-XTC infiniband HCA’s @ $34.99 + shippping = $113 (€85)
1 x 3m Molex SFF-8470 infiniband cable incl shipping = $29 (€22)
Total: $142 (€107)."
Importantly, David points out there that you don't need one of those mega-grand switch boxes to connect just two computers directly to each other with InfiniBand. So David taught me some things that I didn't already know.
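As a sanity check on my own understanding (this is my sketch, not anything from David's write-up): once the two HCAs are cabled back-to-back and a subnet manager such as opensm is running on one of the boxes, a tiny C program against the libibverbs API that comes with the OFED stack can confirm that the link has actually come up:

/* ib_linkcheck.c - list InfiniBand HCAs and report the state of port 1.
 * Build (assuming libibverbs from the OFED stack is installed):
 *   gcc -std=c99 ib_linkcheck.c -o ib_linkcheck -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No InfiniBand devices found.\n");
        return 1;
    }

    for (int i = 0; i < num_devices; ++i) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* ACTIVE means the link is up and configured by a subnet manager. */
            printf("%s: port 1 state = %s\n",
                   ibv_get_device_name(devs[i]),
                   port.state == IBV_PORT_ACTIVE ? "ACTIVE" :
                   port.state == IBV_PORT_INIT   ? "INIT (no subnet manager yet?)" :
                                                   "DOWN/other");
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}

If the port reports INIT instead of ACTIVE, the cable is fine but no subnet manager has configured the fabric yet; DOWN usually points to a cabling or firmware problem.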
Then, I googled Mellanox and went to their solutions page [
http://www.mellanox.com/page/solutions_overview?gclid=CPf_85_P6LYCFS8OOgodWg4AVA ], where I saw that they provide solutions for "High-performance compute clusters [that] require a high-performance interconnect technology providing high bandwidth, low latency and low CPU overhead resulting in high CPU utilization for the application’s compute operations."
To say the least, this began to intrigue me. So I then clicked on HPC under "Solutions Overview" and here's what, in relevant part, displayed:
High Performance Computing (HPC)
Overview
High-performance computing encompasses advanced computation over parallel processing, enabling faster execution of highly compute intensive tasks such as climate research, molecular modeling, physical simulations, cryptanalysis, geophysical modeling, automotive and aerospace design, financial modeling, data mining and more. High-performance simulations require the most efficient compute platforms. The execution time of a given simulation depends upon many factors such as the number of CPU/GPU cores and their utilization factor and the interconnect performance, efficiency and scalability. ... .
One of HPC’s strengths is the ability to achieve best sustained performance by driving the CPU/GPU performance towards its limits. ... .
By providing low-latency, high-bandwidth, high message rate, transport offload for extremely low CPU overhead, Remote Direct Memory Access (RDMA) and advanced communications offloads, Mellanox interconnect solutions are the most deployed high-speed interconnect for large-scale simulations, replacing proprietary or low-performance solutions. Mellanox's Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency and performance for HPC systems today and in the future. Mellanox Scalable HPC solutions are proven and certified for a large variety of market segments, clustering topologies and environments (Linux, Windows). ... .
High-performance computing enables a variety of markets and applications:
... .
EDA
EDA simulations often involve 3D modeling, fluid dynamics, and other compute-intensive processes that require high-performance computing (HPC) data center solutions.
... .
Media and Entertainment
To reduce production lags, today’s media data centers invest in high-performance HPC cluster technology, combining the power of hundreds or thousands of CPUs in the service of a highly complex rendering task.
... ." [Emphasis Added]
Being even more intrigued, I began to examine Mellanox's product lines [
http://www.mellanox.com/page/products_overview ], and in particular their InfiniBand/VPI cards [
http://www.mellanox.com/page/infiniband_cards_overview ] because I then also had another Safari window opened to this [
http://www.newegg.com/Product/Produ...tegory=27&Manufactory=13783&SpeTabStoreType=1 ] and I was eyeing that $556 Mellanox MHQH19B-XTR ConnectX 2 VPI - Network adapter 40Gbps PCI Express 2.0 x8 card [
http://www.newegg.com/Product/Product.aspx?Item=N82E16833736004 ] because of its relatively low price and the similarity of its specs to those of the NA250A. Then, when I clicked on the ConnectX-2 VPI link at the Mellanox site on this page [
http://www.mellanox.com/page/infiniband_cards_overview ], I was taken here: [
http://www.mellanox.com/page/products_dyn?product_family=61&mtag=connectx_2_vpi ]. There, I downloaded this: [
http://www.mellanox.com/related-docs/user_manuals/ConnectX 2_VPI_UserManual.pdf ]. Then I struck oil; well, not really oil, but something sticky, murky and obscure like oil. Section 4.4 of the user manual is subtitled "NVIDIA GPUDirect Support" and provides as follows:
4.4 NVIDIA GPUDirect Support
Utilizing the high computational power of the Graphics Processing Unit (GPU), the GPU-to-GPU method has proven valuable in various areas of science and technology. Mellanox ConnectX-2 based adapter card provides the required high throughput and low latency for GPU-to-GPU communications.
4.4.1 Hardware and Software Requirements
Software:
Operating Systems:
• [Red Hat Enterprise Linux] 5.4 2.6.18-164.el5 x86_64 or later
• Mellanox OFED with GPUDirect support
• NVIDIA Development Driver for Linux version 195.36.15 or later
Hardware:
• Mellanox ConnectX-2 adapter card
• NVIDIA Tesla series.
So for 2 x $556, plus the cabling cost, one can connect two computers, each with Nvidia Tesla cards, and the Tesla cards in both systems would be able to operate as if they were in the same computer. That sounded a lot less ridiculous than $2,448.
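To make "as if they were in the same computer" a little more concrete: inside a single box, CUDA already lets two cards shuttle data to each other over the PCI-e bus without a round trip through system RAM, via its peer-to-peer calls. As I understand it, what GPUDirect plus the ConnectX-2 cards is supposed to buy you is the moral equivalent of that across the InfiniBand link. Here's a minimal single-machine sketch in C against the CUDA runtime (my own illustration, not something from the Mellanox manual):

/* p2p_copy.c - copy a buffer from GPU 0 to GPU 1 over the PCI-e bus.
 * Build: gcc p2p_copy.c -o p2p_copy \
 *        -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 64 * 1024 * 1024;   /* 64 MB test buffer */
    void *buf0 = NULL, *buf1 = NULL;
    int can01 = 0, can10 = 0;

    /* Ask the driver whether the two cards can address each other. */
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        fprintf(stderr, "Peer access between GPU 0 and GPU 1 not available.\n");
        return 1;
    }

    /* Allocate a buffer on each card. */
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    /* Enable direct access in both directions. */
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    /* Direct GPU 0 -> GPU 1 copy; with peer access enabled it does not
     * stage through host RAM. */
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    printf("Copied %zu bytes GPU 0 -> GPU 1 peer-to-peer.\n", bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}

Whether the two-box version of this actually materializes obviously depends on the whole RHEL/Mellanox OFED/Tesla stack listed in section 4.4 above, and even the single-box version generally wants both cards hanging off the same PCI-e root complex.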
Then, my feeling of elation began to evaporate because it dawned on me that I had written in post #564 here:
https://forums.macrumors.com/threads/1333421/, about some similarities and differences between the Titan and the Tesla cards; here's the salient part:
... .
"In Titan, certain other Tesla card features have been disabled (i.e., the Titan drivers don't activate them).
RDMA for GPU Direct is a Tesla feature that enables a direct path of communication between the Tesla GPU and another or peer device on the PCI-E bus of your computer, without CPU intervention. Device drivers can enable this functionality with a wide range of hardware devices. For instance, the Tesla card can be allowed to communicate directly with your Mercury Accelsior card without getting your Xeon or i7 involved.
Titan does not support the RDMA for GPU Direct feature." [Emphasis Added]
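Just to pin down what "a direct path of communication ... without CPU intervention" looks like from the programming side, here's a rough C sketch of the RDMA-for-GPUDirect idea: allocate a buffer in the GPU's own memory and hand that pointer straight to the HCA for registration, so RDMA traffic can land in video memory with no host bounce buffer. This is my own illustration and assumes a GPUDirect-capable Mellanox OFED stack and a Tesla-class card, per section 4.4 of the ConnectX-2 manual; on a card whose driver doesn't expose the feature, the registration step simply fails.

/* gpu_mr.c - register GPU memory directly with the InfiniBand HCA.
 * Rough sketch; assumes a GPUDirect-capable Mellanox OFED stack and a
 * Tesla-class card, per section 4.4 of the ConnectX-2 manual.
 * Build: gcc -std=c99 gpu_mr.c -o gpu_mr -libverbs \
 *        -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void)
{
    const size_t bytes = 16 * 1024 * 1024;
    void *gpu_buf = NULL;

    /* Allocate the buffer on the GPU itself, not in host RAM. */
    if (cudaMalloc(&gpu_buf, bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "No HCA found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* With GPUDirect in place, the HCA can be handed the GPU pointer
     * directly, so RDMA reads/writes hit video memory without a CPU
     * bounce buffer. Without GPUDirect, this registration fails. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, bytes,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr)
        fprintf(stderr, "ibv_reg_mr on GPU memory failed (no GPUDirect?)\n");
    else
        printf("GPU buffer registered with the HCA: lkey=0x%x rkey=0x%x\n",
               mr->lkey, mr->rkey);

    if (mr) ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}

On a Titan, that registration call is exactly where you'd expect to get stopped, given the limitation quoted above.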
Well, at the least, I learned more than I knew before following this trail, and Mac Pro users with the higher-priced Tesla cards in two systems have an alternative connect solution, if they don't mind installing and working with Red Hat Enterprise Linux. Like I said earlier, "PC user nourishes Mac user who then nourishes Hackintosher who was a/is a PC user who then nourishes Mac user … ad infinitum." I don't ignore what I've learned, and hopefully I will continue to learn much more about Macs and PCs, and long may I own systems of both categories, as well as my Ataris and Commodores. But for as long as I am blessed to breathe, to me they shall be only inanimate tools, not deserving of any of my allegiance.
BTW - I just remembered some more vital pieces of information. gnif believes that the Titan has locked within it the capability to do everything that a Tesla K20X does [
http://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/ ] (and I tend to believe that he is right). Also, I know that the Titan is very tweakable and can be made to yield great performance. Might not the Tesla drivers that enable Nvidia GPUDirect/RDMA support be able to be installed for the Titan by causing the installation program to believe that the Titan is a Tesla card, i.e., by modifying the resistors on the back of the Titan card to change its PCI Device ID byte? To me this hack sounds a little like the Mac Pro 4.1 to 5.1 EFI hack, except that this approach involves modifying the hardware itself, which carries its own special risks. Then one could selectively install the driver at issue. That sounds like a job where Pacifist could help because it allows for selective installs. This might require giving the Titan back its own ID by restoring the resistors afterward. In other words, you wouldn't want the Titan to lose its own unique features in the process; you'd just add certain Tesla features that are now locked away. Thus, if the lack of Nvidia GPUDirect/RDMA support is due solely to a driver installation issue, then the Mellanox ConnectX-2 adapter card solution would be available to the owners of Titans housed in two separate computer systems, if they are willing to hack their video cards like gnif does.
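For anyone tempted to try it, one hedge: before and after any resistor surgery you'd want to see what the driver actually believes the card is. The CUDA runtime reports each device's marketing name and PCI location, so a little C sketch like this (again, just my own illustration) gives a quick before/after check:

/* whoami_gpu.c - print what the driver reports for each CUDA device.
 * Build: gcc -std=c99 whoami_gpu.c -o whoami_gpu \
 *        -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices reported by the driver.\n");
        return 1;
    }

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* prop.name is whatever the driver decides the card is --
         * "GeForce GTX TITAN", "Tesla K20X", etc. -- which is exactly
         * what a device-ID mod would be trying to influence. */
        printf("Device %d: %s (PCI %04x:%02x:%02x)\n",
               i, prop.name,
               prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}

It's a read-only query, so it's a safe way to confirm whether the mod "took" before going anywhere near the Mellanox side of things.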